---
abstract: 'In this paper, we begin by reviewing a number of mathematical challenges posed by the modelling of collective dynamics and self-organization. Then, we focus on two specific problems: first, the derivation of fluid equations from particle dynamics of collective motion and, second, the study of phase transitions and the stability of the associated equilibria.'
author:
- Pierre Degond
title: 'Mathematical models of collective dynamics and self-organization'
---
[**Acknowledgements:**]{} PD acknowledges support by the Engineering and Physical Sciences Research Council (EPSRC) under grants no. EP/M006883/1 and EP/P013651/1, by the Royal Society and the Wolfson Foundation through a Royal Society Wolfson Research Merit Award no. WM130048 and by the National Science Foundation (NSF) under grant no. RNMS11-07444 (KI-Net). PD is on leave from CNRS, Institut de Mathématiques de Toulouse, France. The works mentioned in this article have been carried out in collaboration with many people. I wish to acknowledge more particularly A. Frouvelle, J-G. Liu, S. Merino-Aceituno, S. Motsch and A. Trescases for their decisive contributions.
[**Data statement:** ]{} No new data were collected in the course of this research.
[**Conflict of interest:**]{} The author declares that he has no conflict of interest.
[**Key words:** ]{} Body attitude coordination; collective motion; Vicsek model; generalized collision invariant; rotation group; phase transitions; order parameter.
[**AMS Subject classification:** ]{} 35Q92, 82C22, 82C70, 92D50
Introduction {#sec:intro}
============
Fascinating examples of collective motion can be observed in nature, such as insect swarms [@Bazazi_etal_CurrBiol08; @Khuong_etal_ECAL11], bird flocks [@Lukeman_etal_PNAS10], fish schools [@Aoki_BullJapSocSciFish92; @Degond_Motsch_JSP08; @Degond_Motsch_JSP11; @Domeier_Colin_BullMarSci97; @Gautrais_etal_JMB09; @Gautrais_etal_PlosCB12], or in social phenomena, such as the spontaneous formation of lanes in pedestrian crowds [@Moussaid_etal_PlosCB12]. Similarly, at the microscopic scale, collective bacterial migration is frequently observed [@Czirok_etal_PRE96] and collective cell migration occurs during organism development [@Shraiman_PNAS05] or healing [@Poujade_etal_PNAS07]. Such systems of many autonomous agents locally interacting with each other are able to generate large-scale structures of sizes considerably exceeding the perception range of the agents. These large-scale structures are not directly encoded in the interaction rules between the individuals, which are usually fairly simple. They spontaneously emerge when a large number of individuals collectively interact [@Vicsek_Zafeiris_PhysRep12]. This is referred to as “emergence”.
Emergence is a sort of bifurcation, or phase transition. In physics, phase transitions are dramatic changes of the system state consecutive to very small changes of some parameters, such as the temperature. In self-organized systems, the role of temperature is played by the noise level associated to the random component of the motion of the agents. For instance, in road traffic, the presence of drivers with erratic behavior can induce the formation of stop-and-go waves leading to a transition from fluid to congested traffic. Here, an increase of temperature (the random behavior of some agents) leads to a sudden blockage of the system. This is an example of the so-called “freezing-by-heating” phenomenon [@Helbing_Farkas_Vicsek_PRL00] also observed in pedestrian crowds and a signature of the paradoxical and unconventional behavior of self-organized systems.
Another parameter which may induce phase transitions is the density of individuals. An increase of this density is very often associated with an increase of the order of the system [@Vicsek_etal_PRL95]. For instance, the spontaneous lane formation in pedestrian crowds only appears when the density is high enough. This increase of order with the density is another paradoxical phenomenon in marked contrast with what is observed in more classical physical systems where an increase of density is generally associated with an increase of temperature, i.e. of disorder (this can be observed when pumping air into a bicycle tire: after using it, the pump core has heated up).
The passage between two different phases is called a critical state. In physical systems, critical states appear only for well-chosen ranges of parameters. For instance, at ambient pressure, liquid water passes to the gaseous state at the temperature of $100$ $^\circ$C. In self-organized systems, by contrast, critical states are extremely robust: they appear almost systematically, whatever the initial conditions of the system. In dynamical systems terms, the critical state is an attractor. The presence of critical states which are attractors of the dynamics is called “Self-Organized Criticality” [@Bak_etal_PRL87] and its study is important in physics.
We shall focus on models of collective dynamics and self-organization that provide a prediction from an initial state of the system. These are stated as Cauchy problems for appropriate systems of differential equations. The modelling of self-organization meets important scientific and societal challenges. There are environmental and societal stakes: for instance, better understanding the behavior of a gregarious species can lead to improved conservation policies; modelling human crowds improves the security, efficiency and profitability of public areas; understanding collective cell migration opens new paradigms in cancer treatment or regenerative medicine. There are also technological stakes: roboticists use social interaction mechanisms to steer fleets of robots or drones; architects study social insect nests to look for new sustainable architecture ideas.
Large systems of interacting agents (aka particles) are modelled at different levels of detail. The most detailed models are particle models (aka individual-based or agent-based models). They describe the position and state of any single agent (particle) of the system as it evolves in time through its interactions with the other agents and the environment. This leads to large coupled systems of ordinary or stochastic differential equations (see an example in [@Vicsek_etal_PRL95]). When the number of particles is large, these systems are computationally intensive as their cost increases polynomially with the number of particles. Additionally, their output is not directly exploitable, as we are only interested in statistical averages (e.g. the pressure in a gas); it requires some post-processing which can generate errors.
For this reason, continuum models are often preferred [@Toner_etal_PRE98]. They consist of partial differential equations for averaged quantities such as the mean density or mean velocity of the agents. However, in the literature, a rigorous and systematic link between particle and continuum models is rarely found. Yet, establishing such a link is important. Indeed, often, the microscopic behavior of the agents is not well-known and is the actual target. On the other hand, large-scale structures are more easily accessible to experiments and can be used to calibrate continuum models. But uncovering the underlying individual behavior requires the establishment of a rigorous correspondence between the two types of models. Our goal is precisely to provide methodologies to establish this correspondence.
To derive continuum models from particle models rigorously requires a coarse-graining methodology. There are two steps of coarsening. The first step consists of deriving a “kinetic model”, which provides the time evolution of the probability distribution of the agents in position and state spaces. The equation for this kinetic distribution can be derived from the particle model, however not in closed form unless one assumes a strong hypothesis named “propagation of chaos”, which means statistical independence between the particles. This hypothesis is generally false for a finite number of particles but is expected to become asymptotically valid as the particle number tends to infinity. To prove such a result is a very difficult task and until recently [@Gallagher_etal_EMS13; @Mischler_Mouhot_InvMath13], the only available result was due to Lanford for the Boltzmann model [@Lanford_76]. Kinetic models are differential or integro-differential equations posed on a large dimensional space such as the Boltzmann or Fokker-Planck equations.
The second step of coarsening consists of reducing the description of the system to a few macroscopic averages (or moments) such as the density or the mean velocity as functions of position and time. The resulting fluid models are systems of nonlinear partial differential equations such as the Euler or Navier-Stokes equations. Fluid models are derived by averaging out the state variable of kinetic models (such as the particle velocity) to only keep track of the spatio-temporal dependence. Here again, a closure assumption is needed, by which one postulates a known shape of the distribution function as a function of its fluid moments. It can be justified in the hydrodynamic regime when the kinetic phenomena precisely bring the distribution function close to the postulated one. Providing a rigorous framework to these approaches is the core subject of “kinetic theory”, whose birth can be traced back to Hilbert's statement of his 6th problem in his 1900 ICM address. Since then, kinetic theory has undergone impressive developments, with Fields Medals awarded to P.-L. Lions and C. Villani for works in this theory.
It is therefore appealing to apply kinetic theory methods to collective dynamics and self-organization. However, this has proved more delicate than anticipated and fascinating new mathematical questions have emerged from these difficulties. A first difficulty is that kinetic models may lose validity as propagation of chaos may simply fail. Indeed, self-organization supposes the build-up of correlations between the particles. It is not clear that these correlations disappear as the number of particles tends to infinity. We have indeed proved (with E. Carlen and B. Wennberg [@Carlen_etal_M3AS13]) in a simple collective dynamics model that propagation of chaos may break down at large temporal scales. Are there new models that can replace the defective kinetic equations when propagation of chaos breaks down? Some phenomenological answers have been proposed but, to the best of our knowledge, no mathematical theory is available yet.
A second difficulty arises at the passage between kinetic and fluid models. In classical physics, a fundamental concept is that of conservation law (such as mass, momentum or energy conservation). These conservation laws are satisfied at the particle level and so are transferred to the macroscopic scale and serve as cornerstones in the derivation of fluid equations. By contrast, biological or social systems are open systems which exchange momentum and energy with the outside world and have no reason to satisfy such conservation laws. This is a major difficulty, as acknowledged in Vicsek’s review [@Vicsek_Zafeiris_PhysRep12]. In a series of works initiated in [@Degond_Motsch_M3AS08], we have overcome this problem and shown that some weaker conservation laws which we named “generalized collision invariants (GCI)” prevail. They enabled us to derive fluid models showing new and intriguing properties. Their mathematical study is still mostly open. We will provide more details in Section \[sec:fluid\].
The third difficulty is linked to the ubiquity of phase transitions in self-organized systems. This puts a strong constraint on fluid models which must be able to correctly describe the various phases and their interfaces. Complex phenomena like hysteresis [@Couzin_etal_JTB02], which results from the presence of multiple stable equilibria and involves the time-history of the system, must also be correctly rendered. However, different phases are described by different types of fluid models. For instance, in symmetry-breaking phase transitions, the disordered phase is described by a parabolic equation while the ordered phase is described by a hyperbolic equation [@Degond_etal_JNonlinearSci13; @Degond_etal_arXiv:1304.2929]. At the critical state, these two phases co-exist and should be related by transmission conditions through phase boundaries. These transmission conditions are still unknown. More about phase transitions can be found in Section \[sec:phasetrans\] and references [@Barbaro_Degond_DCDSB13; @Frouvelle_Liu_SIMA12]. Convergence to swarming states for the Cucker-Smale model [@Cucker_Smale_IEEETransAutCont07] has been extensively studied in the mathematical literature [@Carrillo_etal_SIMA10; @Ha_Liu_CMS09; @Ha_Tadmor_KRM08; @Motsch_Tadmor_JSP11; @Shen_SIAP07], as well as for related models [@Chuang_etal_PhysicaD07].
We have used symmetry-breaking phase transitions in a surprising context: to design automated fertility tests for ovine sperm samples [@Creppy_etal_Interface16]. Other types of phase transition play important roles. One of them is the packing transition, which occurs when finite-size particles reach densities at which they are in contact with each other. This transition occurs for instance in cancer tumors [@Leroy_etal_BMB17], crowds [@Degond_Hua_JCP13; @Degond_etal_JCP11], road traffic [@Berthelin_etal_ARMA08], herds [@Degond_etal_JSP10] or tissue self-organization [@Peurichard_etal_JTB17]. Another example is the transition from a continuum to a network, which is at play for instance in the emergence of ant-trail networks [@Boissard_etal_JMB13; @Haskovec_etal_NonlinAnalTMA16]. For such systems, many challenges remain, such as the derivation of macroscopic models.
In the forthcoming sections, we will focus on two specific aspects: the derivation of fluid models in spite of the lack of conservations relations (Section \[sec:fluid\]) and the investigation of phase transitions (Section \[sec:phasetrans\]).
Derivation of fluid models {#sec:fluid}
==========================
The Vicsek model {#subsec:vicsek}
----------------
We start with the description of particle models of collective behavior. As an example, we introduce the Vicsek model [@Vicsek_etal_PRL95] (see related models in [@Bertin_etal_JPhysA09; @Degond_etal_DCDSB16; @Ginelli_etal_PRL10]). It considers systems of self-propelled particles moving with constant speed (here supposed equal to $1$ for notational simplicity) and interacting with their neighbors through local alignment. Such a model describes the dynamics of bird flocks and fish schools [@Vicsek_Zafeiris_PhysRep12]. It is written in the form of the following stochastic differential system: $$\begin{aligned}
&& \hspace{-1cm}
dX_i(t) = V_i(t) dt, \label{eq:vicsek1} \\
&& \hspace{-1cm}
dV_i(t) = P_{V_i(t)^\bot} \circ (F_i(t) \, dt + \sqrt{2 \, \tau} dB_t^i), \label{eq:vicsek2} \\
&& \hspace{-1cm}
F_i(t) = \nu \, U_i(t), \quad U_i(t) = \frac{J_i(t)}{|J_i(t)|}, \quad J_i(t) = \sum_{j \, | \, |X_j(t) - X_i(t)|\leq R} V_j(t). \label{eq:vicsek3}\end{aligned}$$ Here, $X_i(t) \in {\mathbb R}^d$ is the position of the $i$-th particle (with $i \in \{1, \ldots, N\}$) and $V_i(t) \in {\mathbb S}^{d-1}$ is its velocity direction. $B_t^i$ are standard independent Brownian motions in ${\mathbb R}^d$ describing idiosyncratic noise, i.e. noise specific to each agent, and $\sqrt{2 \, \tau}$ is a constant and uniform noise intensity. $F_i$ is the alignment force acting on the particles: it is proportional to the mean orientation $U_i(t) \in {\mathbb S}^{d-1}$ of the agents around agent $i$, with a constant and uniform multiplication factor $\nu$ encoding the alignment force intensity. $U_i(t)$ itself is obtained by normalizing the total momentum $J_i(t)$ of the agents belonging to a ball of radius $R$ centered at the position $X_i(t)$ of agent $i$. The normalization of $J_i(t)$ (i.e. its division by $|J_i(t)|$, where $|\cdot|$ denotes the Euclidean norm) only makes sense if $J_i(t) \not = 0$, which we assume here. The projection $P_{V_i(t)^\bot}$ onto $\{V_i(t)\}^\bot$ keeps $V_i(t)$ of unit norm and is a matrix given by $P_{V_i^\bot} = \mbox{Id} - V_i \otimes V_i$, where Id is the identity matrix of ${\mathbb R}^d$ and $\otimes$ denotes the tensor product. The Stochastic Differential Equation (\[eq:vicsek2\]) is understood in the Stratonovich sense, hence the symbol $\circ$, so that the noise term provides a Brownian motion on the sphere ${\mathbb S}^{d-1}$ [@Hsu_AMS02]. Eq. (\[eq:vicsek2\]) models two antagonistic effects acting on the particles: the alignment force (the first term), which has a focusing effect, and the noise (the second term), which has a defocusing effect. The original model proposed in [@Vicsek_etal_PRL95] is a time-discretized variant of this model.
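To make the particle dynamics concrete, the following sketch performs one step of a time-discretized variant of (\[eq:vicsek1\])-(\[eq:vicsek3\]) in dimension $d=3$: an Euler-Maruyama step followed by a renormalization of the velocities onto the sphere. The discretization and all parameter values are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

# One step of a time-discretized variant of (eq:vicsek1)-(eq:vicsek3).
# N, R, nu, tau, dt are arbitrary illustrative values.
rng = np.random.default_rng(0)
N, d, R, nu, tau, dt = 200, 3, 0.5, 1.0, 0.2, 0.01

X = rng.uniform(0.0, 5.0, (N, d))                      # positions X_i
V = rng.normal(size=(N, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)          # unit velocity directions V_i

def step(X, V):
    # local momenta J_i and mean orientations U_i, cf. (eq:vicsek3)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    J = (dist <= R).astype(float) @ V
    U = J / np.linalg.norm(J, axis=1, keepdims=True)   # assumes J_i != 0
    # alignment force plus noise, projected on {V_i}^perp, cf. (eq:vicsek2)
    dB = np.sqrt(dt) * rng.normal(size=V.shape)
    incr = nu * U * dt + np.sqrt(2.0 * tau) * dB
    incr -= np.sum(incr * V, axis=1, keepdims=True) * V
    V_new = V + incr
    V_new /= np.linalg.norm(V_new, axis=1, keepdims=True)  # crude projection back onto the sphere
    return X + V_new * dt, V_new                            # cf. (eq:vicsek1)

X, V = step(X, V)
```

Iterating this step and monitoring the mean orientation gives a quick feel for the competition between the focusing and defocusing effects described above.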
Next, we present the kinetic model corresponding to this discrete model. It is written: $$\begin{aligned}
&& \hspace{-1cm}
\partial_t f + \nabla_x \cdot (vf) = \nabla_v \cdot \big( - (P_{v^\bot} \, F_f) \, f + \tau \, \nabla_v f \big), \label{eq:vicsek_KM1} \\
&& \hspace{-1cm}
F_f(x,t) = \nu \, U_f(x,t), \quad U_f(x,t) = \frac{J_f(x,t)}{|J_f(x,t)|}, \label{eq:vicsek_KM2} \\
&& \hspace{-1cm}
J_f(x,t) = \int_{|y-x| \leq R} \int_{{\mathbb S}^{d-1}} f(y,w,t) \, w \, dw \, dy, \label{eq:vicsek_KM3} \end{aligned}$$ where $f=f(x,v,t)$ is the particle distribution function and is a function of the position $x \in {\mathbb R}^d$, velocity $v \in {\mathbb S}^{d-1}$ and time $t>0$, $\nabla_v$ stands for the nabla operator on the sphere ${\mathbb S}^{d-1}$ and $P_{v^\bot}$ is the projection operator on $\{v\}^\bot$. $f(x,v,t)$ represents the probability density of particles in the $(x,v)$ space. The left-hand side of (\[eq:vicsek\_KM1\]) describes motion of the particles in physical space with speed $v$, while the right-hand side models the contributions of the alignment force $F_f$ and of velocity diffusion (with diffusion coefficient $\tau$) induced by Brownian noise at the particle level. The construction of the force term follows the same principles as for the discrete model, with $F_f(x,t)$, $U_f(x,t)$, $J_f(x,t)$ replacing $F_i(t)$, $U_i(t)$, $J_i(t)$. The sum of the velocities over neighboring particles in the computation of the momentum (\[eq:vicsek3\]) is replaced by integrals of the velocity weighted by $f$, with spatial integration domain being the ball centered at $x$ and of radius $R$, and velocity integration domain being the whole sphere ${\mathbb S}^{d-1}$ (Eq. (\[eq:vicsek\_KM3\])). Analysis of this model can be found in [@Figalli_etal_arxiv15; @Gamba_Kang_ARMA16]. The passage from (\[eq:vicsek1\])-(\[eq:vicsek3\]) to (\[eq:vicsek\_KM1\])-(\[eq:vicsek\_KM2\]) is shown in [@Bolley_etal_AML11], in the variant where $J_i(t)$ is directly used in (\[eq:vicsek2\]) instead of $F_i(t)$. In the case presented here, the control of $J_i(t)$ away from zero presents additional difficulties which haven’t been solved yet.
The macroscopic equations describe a large spatio-temporal scale regime. This regime is modelled by a time and space rescaling in (\[eq:vicsek\_KM1\])-(\[eq:vicsek\_KM2\]) involving a small parameter $\varepsilon \ll 1$ describing the ratio between the micro and the macro scales, which leads to $$\begin{aligned}
&& \varepsilon \big( \partial_t f^\varepsilon + \nabla_x \cdot (vf^\varepsilon) \big) = \nabla_v \cdot \big( - (P_{v^\bot} \, F_{f^\varepsilon}) \, f^\varepsilon + \tau \, \nabla_v f^\varepsilon \big), \label{eq:vicsek_KM1_eps} \\
&& F_f (x,t) =\nu \, u_f(x,t), \quad u_f(x,t)=\frac{j_f(x,t)}{|j_f(x,t)|},\label{eq:vicsek_KM2_eps} \\
j_f(x,t) = \int_{{\mathbb S}^{d-1}} f(x,w,t) \, w \, dw. \label{eq:vicsek_KM3_eps} \end{aligned}$$ The scale change brings a factor $\varepsilon$ in front of the terms on the left-hand side of (\[eq:vicsek\_KM1\_eps\]) describing the motion of the particles in position space. It also localizes the integral describing the momentum of particles, which now only involves an integration with respect to the velocity $w$ of the distribution at the same location $x$ as the particle onto which the force applies (see Eq. (\[eq:vicsek\_KM2\_eps\])). This is due to the interaction radius $R$ being of order $\varepsilon$ in this regime. The expansion of $J_f$ in powers of $\varepsilon$ leads to (\[eq:vicsek\_KM2\_eps\]) up to terms of order $\varepsilon^2$, which are neglected here as they do not contribute to the final macroscopic model. The macroscopic model is obtained as the limit $\varepsilon \to 0$ of this perturbation problem.
Before stating the result, we introduce the “von Mises Fisher (VMF)” distribution of orientation $u$ and concentration parameter $\kappa$ where $u$ is an arbitrary vector in ${\mathbb S}^{d-1}$ and $\kappa \in [0,\infty)$. This distribution denoted by $M_{\kappa u}$ is such that for all $v \in {\mathbb S}^{d-1}$: $$M_{\kappa u} (v) = \frac{1}{Z} \exp \big( \kappa \, u \cdot v \big),
\label{eq:vmf}$$ where $u \cdot v$ is the Euclidean inner product of $u$ and $v$ and $Z$ is a normalization constant only depending on $\kappa$. In [@Degond_Motsch_M3AS08], we proved the following formal theorem:
If the solution $f^\varepsilon$ of (\[eq:vicsek\_KM1\_eps\]), (\[eq:vicsek\_KM2\_eps\]) has a limit $f^0$ when $\varepsilon \to 0$, it is given by $$f^0(x,v,t) = \rho(x,t) \, M_{\kappa u(x,t)} (v),
\label{eq:vicsek_equi}$$ where $\kappa = \nu/\tau$ and the pair $(\rho,u)$ satisfies the following “self-organized hydrodynamic” (SOH) model: $$\begin{aligned}
&& \partial_t \rho + c_1 \nabla_x \cdot (\rho u) = 0, \label{eq:soh1} \\
&& \rho \big( \partial_t u + c_2 (u \cdot \nabla_x u) \big) + \tau \, P_{u^\bot} \nabla_x \rho = 0 , \label{eq:soh2} \\
&& |u|=1, \label{eq:soh3}\end{aligned}$$ with the coefficients $c_1, c_2$ depending on $\nu$ and $\tau$ and $P_{u^\bot}$ being the projection onto $\{u\}^\bot$. \[thm\_SOH\]
The VMF distribution provides a way to extend the concept of Gaussian distribution to statistical distributions defined on the sphere. The orientation $u$ describes the mean orientation of the particles while $1/\kappa$ measures the dispersion of the particles around this mean. When $\kappa$ is close to zero, the VMF is close to a uniform distribution while when it is large, it is close to a Dirac delta at $u$. The theorem states that at large scales, the distribution function approaches a VMF distribution weighted by the local density $\rho$. However, both $\rho$ and the orientation $u$ of the VMF depend on position and time and they are determined by solving the SOH model.
The SOH model is akin to the compressible Euler equations of gas dynamics, but with some important differences. First, the mean orientation $u$ is constrained to lie on the sphere as (\[eq:soh3\]) shows. The presence of the projection $P_{u^\bot}$ in (\[eq:soh2\]) guarantees that it is the case as soon as the initial orientation $u|_{t=0}$ belongs to the sphere. The presence of $P_{u^\bot}$ makes the system belong to the class of non-conservative hyperbolic problems, which are notoriously difficult (we can show that the model is hyperbolic). Finally, the convection terms in the two equations are multiplied by different coefficients $c_1 \not = c_2$, while they are the same in standard gas dynamics. This is a signature of a non-Galilean invariant dynamics. Indeed, as the particles are supposed to move with speed $1$, there is a preferred frame in which this speed is measured. In any other Galilean frame this property will be lost. The mathematical properties of the SOH model are open, except for a local existence result in [@Degond_etal_MAA13]. A rigorous proof of Theorem \[thm\_SOH\] has been given in [@Jiang_etal_arxiv15].
To understand how Theorem \[thm\_SOH\] can be proved, we write (\[eq:vicsek\_KM1\_eps\]) as $$\begin{aligned}
&& \partial_t f^\varepsilon + \nabla_x \cdot (vf^\varepsilon) = \frac{1}{\varepsilon} Q(f^\varepsilon) \label{eq:vicsek_KM31_eps} \\
&& Q(f) = \nabla_v \cdot \big( - (P_{v^\bot} \, F_f) \, f + \tau \, \nabla_v f \big), \label{eq:vicsek_KM4_eps} \end{aligned}$$ with $F_f$ given by (\[eq:vicsek\_KM2\_eps\]), (\[eq:vicsek\_KM3\_eps\]). It is readily seen that $Q(f)$ can be written as $$\begin{aligned}
&& Q(f) = {\mathcal Q} (f;u_f),
\label{eq:QQ} \end{aligned}$$ where $u_f$ is the mean orientation associated with $f$ and is given by (\[eq:vicsek\_KM2\_eps\]) and where for any $u \in {\mathbb S}^{d-1}$, $$\begin{aligned}
&& {\mathcal Q} (f;u) (v) = \tau \, \nabla_v \cdot \Big( M_{\kappa u} (v) \nabla_v \big( \frac{f(v)}{M_{\kappa u} (v)} \big) \Big).
\label{eq:calQ} \end{aligned}$$ We note that for a given $u \in {\mathbb S}^{d-1}$, the operator ${\mathcal Q} (\cdot;u)$ is linear. However, this is not the linearization of $Q$ around $\rho M_{\kappa u}$ as extra terms coming from the variation of $u_f$ with respect to $f$ would appear.
By formally letting $\varepsilon \to 0$ in (\[eq:vicsek\_KM31\_eps\]), we get that $f^0$ is a solution of $Q(f^0)=0$. It is an easy matter to show that this implies the existence of two functions $\rho(x,t)$ and $u(x,t)$ with values in $[0,\infty)$ and ${\mathbb S}^{d-1}$ respectively such that (\[eq:vicsek\_equi\]) holds. Indeed, from (\[eq:calQ\]) and Green’s formula, we get $$\begin{aligned}
&& \hspace{-1cm}
\int {\mathcal Q} (f;u)(v) \, \frac{f(v)}{M_{\kappa u} (v)} \, dv = - \tau \int M_{\kappa u} (v) \Big| \nabla_v \big( \frac{f(v)}{M_{\kappa u} (v)} \big) \Big|^2 \, dv \leq 0.
\label{eq:entrop_calQ} \end{aligned}$$ Therefore, if ${\mathcal Q} (f;u)=0$, this implies that $\frac{f(v)}{M_{\kappa u} (v)}$ does not depend on $v$. The result follows easily.
To find the equations satisfied by $\rho$ and $u$, it is necessary to remove the $1/\varepsilon$ singularity in (\[eq:vicsek\_KM31\_eps\]), i.e. to project the equation on the slow manifold. In gas dynamics, this is done by using the conservations of mass, momentum and energy. Here, the model only enjoys conservation of mass, which is expressed by the fact that $$\begin{aligned}
%&& \hspace{-1cm}
\int Q(f) \, dv = 0, \quad \forall f.
\label{eq:mass_calQ} \end{aligned}$$ Hence, integrating (\[eq:vicsek\_KM31\_eps\]) with respect to $v$ and using (\[eq:mass\_calQ\]), we get that $$\begin{aligned}
&& \partial_t \rho_{f^\varepsilon} + \nabla_x \cdot j_{f^\varepsilon} = 0. \label{eq:mass_cons_1} \end{aligned}$$ Letting $\varepsilon \to 0$, with (\[eq:vicsek\_equi\]), we get $$\begin{aligned}
\rho_{f^\varepsilon} \to \rho, \quad j_{f^\varepsilon} \to j_{f^0} = c_1 \rho u, \label{eq:mass_cons_2} \end{aligned}$$ where $c_1$ is the so-called order parameter and is given by $$\begin{aligned}
c_1 = c_1(\kappa) = \int M_{\kappa u} (v) \, (v \cdot u) \, dv.
\label{eq:order_param} \end{aligned}$$ This leads to (\[eq:soh1\]).
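For concreteness, $c_1(\kappa)$ reduces to a one-dimensional integral in the polar angle $\theta$ (with $\cos \theta = u \cdot v$) and can be evaluated by quadrature. The sketch below is purely illustrative and checks the result against the closed form $\coth \kappa - 1/\kappa$, valid in dimension $d=3$.

```python
import numpy as np
from scipy.integrate import quad

def c1(kappa, d=3):
    # Order parameter (eq:order_param) written as a 1D integral in theta:
    # c_1 = int cos(t) e^{kappa cos t} sin^{d-2} t dt / int e^{kappa cos t} sin^{d-2} t dt.
    w = lambda t: np.exp(kappa * np.cos(t)) * np.sin(t) ** (d - 2)
    num, _ = quad(lambda t: np.cos(t) * w(t), 0.0, np.pi)
    den, _ = quad(w, 0.0, np.pi)
    return num / den

kappa = 2.0
print(c1(kappa))                              # quadrature value
print(1.0 / np.tanh(kappa) - 1.0 / kappa)     # closed form for d = 3: coth(kappa) - 1/kappa
```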
We need another equation to find $u$. In gas dynamics, this is done by using momentum conservation, which in this context would be expressed by $\int Q(f) \, v \, dv = 0$. However, this equation is not true and the lack of momentum conservation relates to the particles being self-propelled and therefore able to extract or release momentum from the underlying medium. Nevertheless, in [@Degond_Motsch_M3AS08], we showed that weaker forms of conservation (named generalized collision invariants or GCI) hold and provide the missing equation.
More precisely, we define
For a given orientation $u \in {\mathbb S}^{d-1}$, we define a GCI associated with $u$ as a function $\psi(v) $ such that $$\begin{aligned}
%&& \hspace{-1cm}
\int {\mathcal Q} (f;u)(v) \, \psi(v) \, dv = 0, \quad \forall f \, \mbox{ such that } \, P_{u^\bot} j_f =0.
\label{eq:vel_calQ} \end{aligned}$$ \[def:GCI\]
By restricting the set of $f$ to which the conservation property is required to apply, we enlarge the set of candidate GCI $\psi$. In [@Degond_Motsch_M3AS08] (see also [@Frouvelle_M3AS12]), we show the following theorem:
The set ${\mathcal C}_u$ of GCI associated to a given orientation $u$ is a linear vector space of dimension $d$ expressed as follows: $$\begin{aligned}
%&& \hspace{-1cm}
{\mathcal C}_u = \{ C + A \cdot P_{u^\bot} v \, h(u \cdot v) \, \, | \, \, C \in {\mathbb R}, \, \, A \in \{u\}^\bot \}.
\label{eq:GCI_space} \end{aligned}$$ Here, defining $\theta$ by $\cos \theta = u \cdot v$, $h$ is given by $$\begin{aligned}
%&& \hspace{-1cm}
h(\cos \theta) = \frac{g(\theta)}{\sin \theta}, \quad \theta \in (0,\pi),
\label{eq:def_h} \end{aligned}$$ with $g$ being the unique solution of the elliptic problem $$\begin{aligned}
%&& \hspace{-1cm}
- \frac{d}{d \theta} \Big( \sin^{d-2} \theta \, e^{\kappa \, \cos \theta} \, \frac{dg}{d \theta} \Big) + (d-2) \, \sin^{d-4} \theta \, e^{\kappa \, \cos \theta} \, g = \sin^{d-1} \theta \, e^{\kappa \, \cos \theta}
\label{eq:def_g} \end{aligned}$$ in the space $$\begin{aligned}
%&& \hspace{-1cm}
V = \{ g \, \, | \, \, (d-2) \, \sin^{\frac{d}{2}-2} \theta \, g \in L^2(0,\pi), \quad \sin^{\frac{d}{2}-1} \theta \, g \in H^1_0(0,\pi) \}.
\label{eq:def_V} \end{aligned}$$ We recall that $L^2(0,\pi)$ is the Lebesgue space of square-integrable functions on $(0,\pi)$ and $H^1_0(0,\pi)$ is the Sobolev space of functions which are in $L^2(0,\pi)$ and whose first order derivative is in $L^2(0,\pi)$ and which vanish at $0$ and $\pi$. \[thm:GCI\]
The GCI have the remarkable property that $$\begin{aligned}
%&& \hspace{-1cm}
\int Q(f) \, P_{u_f^\bot} v \, h(u_f \cdot v) \, dv = 0, \quad \forall f.
\label{eq:cancel} \end{aligned}$$ Indeed, $P_{u_f^\bot} v \, h(u_f \cdot v)$ is a GCI $\psi$ associated with $u_f$. Thus, using (\[eq:QQ\]), and the definition (\[eq:vel\_calQ\]) of GCI, we get $$\int Q(f) \, \psi(v) \, dv = \int {\mathcal Q}(f,u_f) \, \psi(v) \, dv =0,$$ as $P_{u_f^\bot} j_f = |j_f| P_{u_f^\bot} u_f =0$. Multiplying (\[eq:vicsek\_KM31\_eps\]) by $P_{u_{f^\varepsilon}^\bot} v \, h(u_{f^\varepsilon} \cdot v)$, applying (\[eq:cancel\]) with $f = f^\varepsilon$ to cancel the right-hand side of the resulting equation, letting $\varepsilon \to 0$ and using (\[eq:vicsek\_equi\]), we get: $$\begin{aligned}
%&& \hspace{-1cm}
P_{u^\bot} \, \int \big( \partial_t + v \cdot \nabla_x \big) (\rho \, M_{\kappa u}) \, h(u \cdot v) \, v \, dv = 0.
\label{eq:eq_u} \end{aligned}$$ After some computations, this equation gives rise to (\[eq:soh2\]), where the constant $c_2$ depends on a suitable moment of the function $h$.
The GCI concept has provided a rigorous way to coarse-grain a large class of kinetic models sharing similar structures [@Degond_etal_M3AS2016; @Degond_etal_DCDSB16; @Degond_Motsch_JSP11]. As an example, we now consider the model of [@Degond_etal_M3AS2016; @Degond_etal_MMS17] where self-propelled agents try to coordinate their full body attitude. This model is described in the next section.
A new model of full body attitude alignment {#subsec:body}
-------------------------------------------
The microscopic model considers $N$ agents with positions $X_i(t) \in {\mathbb R}^3$ and associated rotation matrices $A_i(t) \in \mbox{SO}(3)$ representing the rotation needed to map a fixed reference frame $(e_1, e_2, e_3)$ to the local frame $(A_i(t) \, e_1$, $A_i(t) \, e_2$, $A_i(t) \, e_3)$ attached to the body of agent $i$ at time $t$. As the particles are self-propelled, agent $i$ moves in the direction $A_i(t) \, e_1$ with unit speed. Agents try to coordinate their body attitude with those of their neighbors. Following these principles, the particle model is written: $$\begin{aligned}
&& \hspace{-1cm}
dX_i(t) = A_i(t) \, e_1 \, dt, \label{eq:body1} \\
&& \hspace{-1cm}
dA_i(t) = P_{T_{A_i(t)}} \circ (F_i(t) \, dt + 2 \, \sqrt{\tau} dB_t^i), \quad F_i(t) = \nu \, \Lambda_i(t), \label{eq:body2} \\
&& \hspace{-1cm}
\Lambda_i(t) = \mbox{PD}(G_i(t)), \quad G_i(t) = \sum_{j \, | \, |X_j(t) - X_i(t)|\leq R} A_j(t). \label{eq:body3}\end{aligned}$$ Here, $B_t^i$ are standard independent Brownian motions in the linear space of $3 \times 3$ matrices (in which $\mbox{SO}(3)$ is isometrically embedded) describing idiosyncratic noise, and $2 \, \sqrt{\tau}$ is the noise intensity. $F_i$ is the force that aligns the body attitude of agent $i$ to the mean body attitude of the neighbors defined by $\Lambda_i(t)$ with a force intensity $\nu$. $\Lambda_i(t)$ is obtained by normalizing the matrix $G_i(t)$ constructed as the sum of the rotation matrices of the neighbors in a ball of radius $R$ centered at the position $X_i(t)$ of agent $i$. The normalization is obtained by using the polar decomposition of matrices. We suppose that $G_i(t)$ is non-singular. Then there exists a unique rotation matrix $\mbox{PD}(G_i(t))$ and a unique symmetric matrix $S_i(t)$ such that $G_i(t) = \mbox{PD}(G_i(t)) \, S_i(t)$. The quantity $P_{T_{A_i(t)}}$ denotes the orthogonal projection onto the tangent space $T_{A_i(t)}$ to $\mbox{SO}(3)$ at $A_i(t)$ to guarantee that the dynamics maintains $A_i(t)$ on $\mbox{SO}(3)$. The Stochastic Differential Equation (\[eq:body2\]) is again understood in the Stratonovich sense, using the symbol $\circ$ to highlight this fact. As a consequence, the noise term provides a Brownian motion on $\mbox{SO}(3)$ as shown in [@Hsu_AMS02]. Note however that the noise intensity is $2 \, \sqrt{\tau}$ instead of $\sqrt{2 \tau}$ as before. This is because we endow $\mbox{SO}(3)$ with the inner product $A \cdot B = \frac{1}{2} \mbox{Tr} (A^T B)$, where Tr stands for the trace and the exponent $T$ for the matrix transpose, which corresponds to the standard metric on $3 \times 3$ matrices divided by $2$. With this convention, the noise intensity $2 \, \sqrt{\tau}$ yields exactly a diffusion coefficient equal to $\tau$ in the mean-field limit.
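To illustrate the normalization in (\[eq:body3\]), the rotation factor $\mbox{PD}(G_i)$ can be computed from the polar decomposition of the summed neighbour attitudes. The sketch below uses randomly generated rotations and scipy's polar routine; the data and the choice of routine are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import polar
from scipy.spatial.transform import Rotation

# Sketch of Lambda_i = PD(G_i) in (eq:body3): G_i sums the neighbours' rotation
# matrices and PD(G_i) is the rotation factor of the polar decomposition G_i = PD(G_i) S_i.
A_neighbours = Rotation.random(5, random_state=0).as_matrix()   # illustrative attitudes A_j
G = A_neighbours.sum(axis=0)

R, S = polar(G, side="right")     # G = R @ S with R orthogonal and S symmetric
Lambda = R                        # mean body attitude PD(G_i); a proper rotation when det(G) > 0
print(np.allclose(G, R @ S), np.linalg.det(R))
```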
The mean-field model now provides the evolution of the distribution function $f=f(x,A,t)$ which depends on the position $x \in {\mathbb R}^d$, rotation matrix $A \in \mbox{SO}(3)$ and time $t>0$. It is written $$\begin{aligned}
&& \hspace{-1cm}
\partial_t f + \nabla_x \cdot (A\, e_1 f) = \nabla_A \cdot \big( - (P_{T_A} \, F_f) \, f + \tau \, \nabla_A f \big), \label{eq:body_KM1} \\
&& \hspace{-1cm}
F_f(x,t) = \nu \, \Lambda_f(x,t), \quad \Lambda_f(x,t) = \mbox{PD} (G_f(x,t)), \label{eq:body_KM2} \\
&& \hspace{-1cm}
G_f(x,t) = \int_{|y-x| \leq R} \int_{\mbox{\footnotesize SO}(3)} f(y,B,t) \, B \, dB \, dy. \label{eq:body_KM3} \end{aligned}$$ Here, as pointed out before, $\nabla_A$ and $\nabla_A \cdot$ stand for the gradient and divergence operators on $\mbox{SO}(3)$ when endowed with the Riemannian structure induced by the inner product $A \cdot B = \frac{1}{2} \mbox{Tr} (A^T B)$ introduced above. The measure on $\mbox{SO}(3)$ is the Haar measure normalized to be a probability measure. The passage from (\[eq:body1\])-(\[eq:body3\]) to (\[eq:body\_KM1\])-(\[eq:body\_KM3\]) is open, but in a variant where $G_i$ is used in the expression of $F_i$ instead of $\Lambda_i$, the proof of [@Bolley_etal_AML11] is likely to extend rather straightforwardly. In the case presented here, the control of $G_i(t)$ away from the set of singular matrices presents additional challenges. To the best of our knowledge, the mathematical theory of this model is nonexistent.
A similar rescaling as in the previous section leads to the following perturbation problem (dropping terms of order $\varepsilon^2$): $$\begin{aligned}
&& \hspace{-1cm}
\varepsilon \big( \partial_t f^\varepsilon + \nabla_x \cdot (A\, e_1 \, f^\varepsilon) \big) = \nabla_A \cdot \big( - (P_{T_A} \, \, F_{f^\varepsilon}) \, f^\varepsilon + \tau \, \nabla_A f^\varepsilon \big), \label{eq:body_KM1_eps} \\
&& \hspace{-1cm}
F_f (x,t) =\nu \, \lambda_f(x,t), \quad \lambda_f(x,t)=\mbox{PD} (g_f(x,t)), \label{eq:body_KM2_eps} \\
&& \hspace{-1cm}
g_f(x,t) = \int_{\mbox{\footnotesize SO}(3)} f(x,B,t) \, B \, dB, \label{eq:body_KM3_eps} \end{aligned}$$ where we have denoted by $g_f$ the local modification of $G_f$ (involving only values of $f$ at location $x$) and $\lambda_f$ its associated polar decomposition. This model can be written: $$\begin{aligned}
&& \partial_t f^\varepsilon + \nabla_x \cdot (A\, e_1 \, f^\varepsilon) = \frac{1}{\varepsilon} Q(f^\varepsilon) \label{eq:body_KM31_eps} \\
&& Q(f) = \nabla_A \cdot \big( - (P_{T_A} \, \, F_f) \, f + \tau \, \nabla_A f \big)\label{eq:body_KM4_eps} \end{aligned}$$ with $F_f$ given by (\[eq:body\_KM2\_eps\]), (\[eq:body\_KM3\_eps\]). The von Mises distribution is now defined by $$M_{\kappa \Lambda} (A) = \frac{1}{Z} \exp \big( \kappa \, \Lambda \cdot A \big),
\label{eq:vmf_body}$$ where $\Lambda \cdot A$ is the matrix inner product of $\Lambda$ and $A$ defined above, $\kappa = \nu / \tau$ and $Z$ is a normalization constant only depending on $\kappa$. Then, $Q(f)$ can be written as $$\begin{aligned}
&& Q(f) = {\mathcal Q} (f;\lambda_f),
\label{eq:body_QQ} \end{aligned}$$ where $\lambda_f$ is given by (\[eq:body\_KM2\_eps\]) and $$\begin{aligned}
&& {\mathcal Q} (f;\lambda)(A) = \tau \, \nabla_A \cdot \Big( M_{\kappa \lambda} (A) \nabla_A \big( \frac{f(A)}{M_{\kappa \lambda} (A)} \big) \Big).
\label{eq:body_calQ} \end{aligned}$$ In the same way as before, as $\varepsilon \to 0$, $f^\varepsilon \to f^0$, where $f^0$ is a solution of $Q(f^0)=0$. This implies the existence of $\rho= \rho(x,t) \in [0,\infty)$ and $\lambda = \lambda(x,t) \in \mbox{SO}(3)$ such that $$f^0(x,A,t) = \rho(x,t) \, M_{\kappa \lambda(x,t)} (A).
\label{eq:vicsek_equi_body}$$ Now, we define the GCI as follows:
For a body orientation given by the rotation matrix $\lambda \in \mbox{SO}(3)$, we define a GCI associated with $\lambda$ as a function $\psi(A) $ such that $$\begin{aligned}
%&& \hspace{-1cm}
\int {\mathcal Q} (f;\lambda)(A) \, \psi(A) \, dA = 0, \quad \forall f \, \mbox{ such that } \, P_{T_A} g_f =0.
\label{eq:vel_calQ_body} \end{aligned}$$ \[def:GCI\_body\]
Up to now, the above body attitude alignment model could have been written in any dimension, i.e. for $A \in \mbox{SO}(d)$ for any dimension $d$. The following characterization of the set of GCI now requires the dimension $d$ to be equal to $3$. A characterization like this in the case of a general dimension $d$ is still an open problem.
The set ${\mathcal C}_\lambda$ of GCI associated to the body orientation given by the rotation matrix $\lambda \in \mbox{SO}(3)$ is a linear vector space of dimension $4$ expressed as follows: $$\begin{aligned}
%&& \hspace{-1cm}
{\mathcal C}_\lambda = \{ C + P \cdot (\lambda^T \, A) \, h(\lambda \cdot A) \, \, | \, \, C \in {\mathbb R}, \, \, P \in {\mathcal A} \},
\label{eq:GCI_space_boody} \end{aligned}$$ where ${\mathcal A}$ denotes the space of antisymmetric $3 \times 3$ matrices and where $h$: $(0,\pi) \to {\mathbb R}$ is the unique solution of $$\begin{aligned}
&& \hspace{-1.5cm}
- \frac{d}{d \theta} \Big( \sin^2 (\theta/2) \, m(\theta) \, \frac{d}{d \theta} \big(\sin \theta \, h(\theta) \big) \Big) + \frac{1}{2} \, \sin \theta \, m(\theta) \, h (\theta)\nonumber \\
&& \hspace{4.5cm}
= - \sin^2 (\theta/2) \, \sin \theta \, m(\theta),
\label{eq:def_h_body} \end{aligned}$$ in the space $$\begin{aligned}
&& \hspace{-1cm}
H = \{ h: \, (0,\pi) \to {\mathbb R} \, \, | \nonumber \\
&& \hspace{1cm}
\sin \theta \, h \in L^2(0,\pi), \, \, \sin (\theta/2) \, \frac{d}{d \theta} (\sin \theta \, h) \in L^2(0,\pi) \}.
\label{eq:def_H_body} \end{aligned}$$ Here, we have set $$m(\theta) = \frac{1}{Z} \, \exp \big( \kappa \, (\frac{1}{2} + \cos \theta ) \big),$$ where $Z$ is the normalization constant involved in (\[eq:vmf\_body\]). \[thm:GCI\_body\]
Using this expression of the GCI and the same methodology as in the previous section, in [@Degond_etal_M3AS2016], we have proved the following:
Suppose that the solution $f^\varepsilon$ of (\[eq:body\_KM1\_eps\]), (\[eq:body\_KM2\_eps\]) has a limit $f^0$ when $\varepsilon \to 0$. Then, $f^0$ is given by (\[eq:vicsek\_equi\_body\]) where $\kappa = \nu/\tau$ and the pair $(\rho,\lambda)$: $(x,t) \in {\mathbb R}^3 \times [0,\infty) \mapsto (\rho,\lambda)(x,t) \in [0,\infty) \times \mbox{SO}(3)$ satisfies the following “self-organized hydrodynamics for body attitude coordination” (SOHB) model: $$\begin{aligned}
&& \hspace{-1cm}
\partial_t \rho + c_1 \nabla_x \cdot (\rho \, \lambda e_1) = 0, \label{eq:soh1_body} \\
&& \hspace{-1cm}
\rho \big( \partial_t \lambda + c_2 (\lambda e_1 \cdot \nabla_x) \lambda \big) \nonumber \\
&& \hspace{0cm}
+ \Big[ (\lambda e_1) \times \big( c_3 \, \nabla_x \rho + c_4 \, \rho \, r_x(\lambda) \big) + c_4 \, \rho \, \delta_x(\lambda) \, \lambda e_1 \Big]_\times \, \lambda = 0 , \label{eq:soh2_body} \end{aligned}$$ with the coefficients $c_1$ to $c_4$ depending on $\nu$ and $\tau$. The quantities $r_x(\lambda)$ and $\delta_x(\lambda)$ are given by: $$\begin{aligned}
\delta_x(\lambda) = \mbox{Tr} \{ {\mathcal D}_x(\lambda) \}, \quad \big[ r_x(\lambda) \big]_\times = {\mathcal D}_x(\lambda) - {\mathcal D}_x(\lambda)^T,
\label{eq:delta_r} \end{aligned}$$ where ${\mathcal D}_x(\lambda)$ is the matrix defined, for any vector $w \in {\mathbb R}^3$, as follows: $$\begin{aligned}
(w \cdot \nabla_x) \lambda = [{\mathcal D}_x(\lambda) w]_\times \lambda.
\label{eq:def_Dx} \end{aligned}$$ Here and above, for a vector $w \in {\mathbb R}^3$, we denote by $[w]_\times$ the antisymmetric matrix defined for any vector $z \in {\mathbb R}^3$ by $$\begin{aligned}
[w]_\times z = w \times z,
\label{eq:[]_times} \end{aligned}$$ where $\times$ denotes the cross product of two vectors. \[thm\_SOH\_body\]
We note that (\[eq:def\_Dx\]) makes sense as $(w \cdot \nabla_x) \lambda$ belongs to the tangent space $T_\lambda$ of SO$(3)$ at $\lambda$ and $T_\lambda =\{ P \, \lambda \, | \, P \in {\mathcal A} \}$. So, there exists $u \in {\mathbb R}^3$ such that $(w \cdot \nabla_x) \lambda = [u]_\times \, \lambda$ and since $u$ depends linearly on $w$, there exists a matrix ${\mathcal D}_x(\lambda)$ such that $u = {\mathcal D}_x(\lambda) w$. The notation ${\mathcal D}_x(\lambda)$ recalls that the coefficients of this matrix are linear combinations of first order derivatives of $\lambda$. Using the exponential map, in the neighborhood of any point $x_0$, we can write (omitting the time-dependence) $ \lambda(x) = \exp \big( [b(x)]_\times \big) \lambda(x_0)$ where $b$ is a smooth function from a neighborhood of $x_0$ into ${\mathbb R}^3$. It is shown in [@Degond_etal_M3AS2016] that $$\delta_x(\lambda) (x_0) = (\nabla_x \cdot b)(x_0), \quad r_x(\lambda) (x_0) = (\nabla_x \times b) (x_0),$$ and thus, $\delta_x(\lambda)$ and $r_x(\lambda)$ can be interpreted as local “divergence” and “curl” of the matrix field $\lambda$. We note that (\[eq:soh2\_body\]) also makes sense. Indeed, the expression on the first line is a derivative of the rotation field $\lambda$ and should consequently belong to $T_{\lambda(x,t)}$. But the second line has precisely the required structure as it is the product of an antisymmetric matrix with $\lambda$. Eq. (\[eq:soh1\_body\]) is the continuity equation for the density of agents moving at bulk velocity $c_1 \, \lambda e_1$ so that $\lambda e_1$ describes the fluid direction of motion. Eq. (\[eq:soh2\_body\]) gives the evolution of $\lambda$. The first line describes transport at velocity $c_2 \, \lambda e_1$ and since $c_2 \not = c_1$, the transport of $\lambda$ occurs at a different speed from the transport of mass, as in the SOH model (\[eq:soh1\]), (\[eq:soh2\]). The second line describes how $\lambda$ evolves during its transport. The first term (proportional to $\nabla_x \rho$) is the action of the pressure gradient and has the effect of turning the direction of motion away from high density regions. The other two terms are specific to the body attitude alignment model and have no counterpart in the classical SOH model (\[eq:soh1\]), (\[eq:soh2\]). The expressions of the coefficients $c_2$ to $c_4$ involve moments of the function $h$ intervening in the expression of the GCI. The mathematical theory of the SOHB model is entirely open. We note that the above theory can be recast in the unitary quaternion framework, as done in [@Degond_etal_MMS17].
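To make the objects in (\[eq:delta\_r\]) and (\[eq:def\_Dx\]) concrete, the sketch below builds the hat map $w \mapsto [w]_\times$, extracts ${\mathcal D}_x(\lambda)$ by finite differences from a hypothetical smooth field $b$ with $\lambda(x) = \exp([b(x)]_\times)$ and $b(x_0)=0$, and recovers $\delta_x(\lambda) \approx \nabla_x \cdot b$ and $r_x(\lambda) \approx \nabla_x \times b$ at the base point. The field $b$ and all numerical choices are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    # [w]_x : antisymmetric matrix such that [w]_x z = w x z
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

# Hypothetical smooth field b vanishing at x0 = 0 and rotation field lambda(x) = exp([b(x)]_x).
b = lambda x: np.array([x[1] * x[2], 0.5 * x[0], x[1] - x[0]])
lam = lambda x: expm(hat(b(x)))

# Extract D_x(lambda) at x0 from (w . grad_x) lambda = [D_x(lambda) w]_x lambda, cf. (eq:def_Dx).
x0, eps = np.zeros(3), 1e-6
D = np.zeros((3, 3))
for j in range(3):
    w = np.eye(3)[j]
    dlam = (lam(x0 + eps * w) - lam(x0 - eps * w)) / (2 * eps)   # (w . grad_x) lambda
    P = dlam @ lam(x0).T                                          # antisymmetric matrix [D w]_x
    D[:, j] = [P[2, 1], P[0, 2], P[1, 0]]                         # invert the hat map

delta = np.trace(D)                                               # "divergence": div b at x0
r = [D[2, 1] - D[1, 2], D[0, 2] - D[2, 0], D[1, 0] - D[0, 1]]     # "curl": curl b at x0
print(delta, r)
```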
Phase transitions {#sec:phasetrans}
=================
A Vicsek model exhibiting multiple equilibria {#subsec:Vic_multiple}
---------------------------------------------
Now, we go back to the Vicsek model of Section \[subsec:vicsek\]. More precisely, we consider the kinetic model (\[eq:vicsek\_KM1\_eps\])-(\[eq:vicsek\_KM3\_eps\]) in the spatially homogeneous case (i.e. we drop all dependences and derivatives with respect to position $x$) and with $\varepsilon = 1$. However, we are interested in the case where the coefficients $\tau$ and $\nu$ are functions of $|j_f|$. More precisely, we consider the system $$\begin{aligned}
&& \hspace{-1.4cm}
\partial_t f (v,t) = Q(f) (v,t), \label{eq:vicsek_KM1_homo} \\
&& \hspace{-1.4cm}
Q(f) (v,t) = \nabla_v \cdot \big( - \nu(|j_f(t)|) \, (P_{v^\bot} \, u_f(t)) \, f(v,t) + \tau(|j_f(t)|) \, \nabla_v f (v,t) \big), \label{eq:vicsek_KM2_homo} \\
&& \hspace{-1.4cm}
u_f(t)=\frac{j_f(t)}{|j_f(t)|}, \quad j_f(t) = \int_{{\mathbb S}^{d-1}} f(w,t) \, w \, dw. \label{eq:vicsek_KM3_homo} \end{aligned}$$ For future usage, we introduce the function $ k(|j|) = \frac{\nu(|j|)}{\tau(|j|)}$, as well as $\Phi$ the primitive of $k$: $\Phi(r) = \int_0^r k(s) \, ds$. Introducing the free energy $$\begin{aligned}
&& \hspace{-1.4cm}
{\mathcal F}(f) = \int_{{\mathbb S}^{d-1}} f(v) \, \log f(v) \, dv - \Phi(|j_f|),
\label{eq:free_ener} \end{aligned}$$ we find the free energy dissipation inequality $$\begin{aligned}
&& \hspace{-1.4cm}
\frac{d}{dt} {\mathcal F}(f)(t) = - {\mathcal D}(f) (t),
\label{eq:free_ener_dissip_1} \\
&& \hspace{-1.4cm}
{\mathcal D}(f) (t) = \tau(|j_f(t)|) \int_{{\mathbb S}^{d-1}} f(v,t) \, \Big| \nabla_v \big( \log f(v,t) - k(|j_f(t)|) \, (v \cdot u_f(t) ) \big) \Big|^2 \, dv \, .
\label{eq:free_ener_dissip_2} \end{aligned}$$
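A short computation, sketched here and not quoted verbatim from the cited works, helps connect this free energy to the equilibria studied below. Restricting ${\mathcal F}$ to the von Mises family $f = \rho M_{\kappa' u}$ and using $\frac{d}{d\kappa'} \log Z(\kappa') = c_1(\kappa')$ and $|j_{\rho M_{\kappa' u}}| = \rho \, c_1(\kappa')$, with $c_1$ the order parameter (\[eq:order\_param\]), one finds $$\frac{d}{d\kappa'} \, {\mathcal F}(\rho M_{\kappa' u}) = \rho \, c_1'(\kappa') \, \big( \kappa' - k(\rho \, c_1(\kappa')) \big),$$ so that, since $c_1' > 0$, the critical points of this reduced free energy are exactly the solutions of the consistency condition (\[eq:fixed\_point\_kappa\]) derived below.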
In [@Degond_etal_arXiv:1304.2929] (see a special case in [@Degond_etal_JNonlinearSci13]), we first give the proof of the following
\[theorem-existence-uniqueness\] Given an initial finite nonnegative measure $f_0$ in the Sobolev space $H^s({\mathbb S}^{d-1})$, there exists a unique weak solution $f$ of (\[eq:vicsek\_KM1\_homo\])-(\[eq:vicsek\_KM3\_homo\]) such that $f(0)=f_0$. This solution is global in time. Moreover, $f\in C^1(\mathbb{R}^*_+,C^\infty({\mathbb S}^{d-1}))$, with $f(v,t)>0$ for all positive $t$. Furthermore, we have the following instantaneous regularity and uniform boundedness estimates (for $m\in\mathbb{N}$, the constant $C$ being independent of $f_0$): $$\|f(t)\|^2_{H^{s+m}}\leqslant C\left(1+\frac1{t^m}\right)\|f_0\|^2_{H^{s}}.$$ For these solutions, the density $\rho(t) = \int_{{\mathbb S}^{d-1}} f(v,t) \, dv$ is constant in time, i.e. $\rho(t) = \rho$, where $\rho = \int_{{\mathbb S}^{d-1}} f_0(v) \, dv$.
The equilibria, i.e. the solutions of $Q(f)=0$ are given by $\rho \, M_{\kappa u}$ where $\rho$ is the initial density as defined in Theorem \[theorem-existence-uniqueness\] and $M_{\kappa u}$ is still the von Mises Fisher distribution (\[eq:vmf\]) with arbitrary value of $u \in {\mathbb S}^{d-1}$. However, now the value of $\kappa$ is found by the resolution of a fixed-point equation (the consistency condition) $$\begin{aligned}
&& \hspace{-1.4cm}
\kappa = k(|j_{\rho M_{\kappa u}}|).
\label{eq:fixed_point_kappa} \end{aligned}$$ This equation can be recast by noting that $|j_{\rho M_{\kappa u}}| = \rho \, c_1(\kappa)$ where $c_1(\kappa)$ is the order parameter (\[eq:order\_param\]). Assuming that the function $k$: $|j| \in [0,\infty) \mapsto k(|j|) \in [0,\infty)$ is strictly increasing and surjective, we can define its inverse $\iota$: $\kappa \in [0, \infty) \mapsto \iota(\kappa) \in [0,\infty)$. This assumption may be seen as restrictive, but it is easy to remove it at the expense of more technicalities, which we want to avoid in this presentation. As by definition $\iota(k(|j|)) = |j|$, applying the function $\iota$ to (\[eq:fixed\_point\_kappa\]), we can recast it in $$\begin{aligned}
&& \hspace{-1.4cm}
\mbox{either } \quad \kappa = 0 \quad \mbox{ or } \quad \frac{\iota(\kappa)}{c_1(\kappa)} = \rho .
\label{eq:fixed_point_kappa_2} \end{aligned}$$ Note that for $\kappa = 0$, the von Mises distribution is the uniform distribution on the sphere. We will call the corresponding equilibrium the “isotropic equilibrium”. Any von Mises distribution with $\kappa > 0$ will be called a “non-isotropic equilibrium”. For a given $\kappa >0$, the von Mises equilibria $\rho \, M_{\kappa u}$ form a manifold diffeomorphically parametrized by $u \in {\mathbb S}^{d-1}$. Both $\iota$ and $c_1$ are increasing functions of $\kappa$ so the ratio $\frac{\iota(\kappa)}{c_1(\kappa)}$ has no defined monotonicity a priori. For a given $\rho$ the number of solutions $\kappa$ of (\[eq:fixed\_point\_kappa\_2\]) depends on the particular choice of the function $k$. However, we can state the following proposition:
\[prop-two-thresholds\] Let $\rho>0$. We define $$\begin{aligned}
\label{def-rho-c}
\rho_c=\lim_{\kappa\to 0} \frac{\iota(\kappa)}{c_1(\kappa)}, \quad \rho_*=\inf_{\kappa\in(0,\infty)} \frac{\iota(\kappa)}{c_1(\kappa)},\end{aligned}$$ where $\rho_c>0$ may be equal to $+\infty$. Then we have $\rho_c\geqslant\rho_*$, and
- If $\rho<\rho_*$, the only solution to (\[eq:fixed\_point\_kappa\_2\]) is $\kappa=0$ and the only equilibrium with total mass $\rho$ is the uniform distribution $f=\rho$.
- If $\rho>\rho_*$, there exists at least one positive solution $\kappa>0$ to (\[eq:fixed\_point\_kappa\_2\]). It corresponds to a family $\{\rho M_{\kappa u}, \, u \in {\mathbb S^{d-1}}\}$ of non-isotropic von Mises equilibria.
- The number of families of nonisotropic equilibria changes as $\rho$ crosses the threshold $\rho_c$. Under regularity and non-degeneracy hypotheses, in a neighborhood of $\rho_c$, this number is even when $\rho<\rho_c$ and odd when $\rho>\rho_c$.
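As a purely illustrative numerical example (the choice of $k$ below is hypothetical and not taken from the cited works), take $d=3$ and $k(|j|) = |j|$, so that $\iota(\kappa) = \kappa$ and $c_1(\kappa) = \coth\kappa - 1/\kappa$; then $\rho_* = \rho_c = 3$ and the non-isotropic branch $\kappa(\rho)$ of (\[eq:fixed\_point\_kappa\_2\]) can be computed with a root finder.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative resolution of (eq:fixed_point_kappa_2) for d = 3 with the
# hypothetical choice k(|j|) = |j|, hence iota(kappa) = kappa and
# c_1(kappa) = coth(kappa) - 1/kappa; here rho_* = rho_c = 3.
def c1(kappa):
    return 1.0 / np.tanh(kappa) - 1.0 / kappa

def ratio(kappa):                      # iota(kappa) / c_1(kappa)
    return kappa / c1(kappa)

rho_c = 3.0                            # lim_{kappa -> 0} kappa / c_1(kappa)

def kappa_of_rho(rho):
    # positive solution of ratio(kappa) = rho; it only exists for rho > rho_c here
    if rho <= rho_c:
        return 0.0
    return brentq(lambda k: ratio(k) - rho, 1e-3, 1e3)

for rho in [2.0, 3.5, 5.0, 10.0]:
    k = kappa_of_rho(rho)
    print(rho, k, c1(k) if k > 0 else 0.0)   # density, kappa, order parameter
```

In this particular example $\rho \mapsto \kappa(\rho)$ is continuous at $\rho_c$, i.e. the transition is second order; other choices of $k$ can produce $\rho_* < \rho_c$, multiple branches and hysteresis, as discussed in the references above.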
Now, the key question is the stability of these equilibria. A first general result can be established thanks to the LaSalle principle:
\[prop-lasalle-refined\] Let $f_0$ be a positive measure on the sphere ${\mathbb S}^{d-1}$, with mass $\rho$, and $f(t)$ the associated solution to (\[eq:vicsek\_KM1\_homo\]). If no open interval is included in the set $\{\kappa \in [0,\infty) \, | \, \rho \, c_1(\kappa)=\iota(\kappa)\}$, then there exists a solution $\kappa_\infty$ to (\[eq:fixed\_point\_kappa\_2\]) such that: $$\begin{gathered}
\label{eq-limJ}
\lim_{t\to\infty} |j_f(t)|=\rho \, c_1(\kappa_\infty)
\intertext{and}
\label{eq-limf}
\forall s\in\mathbb{R}, \lim_{t\to\infty}\,\|f(t)-\rho M_{\kappa_\infty u_f(t)}\|_{H^s}=0.\end{gathered}$$
In other words, under these conditions, the family of equilibria $\{ \rho M_{\kappa_\infty u} \, | \, u \in {\mathbb S}^{d-1} \}$ is an $\omega$-limit set of the trajectories of (\[eq:vicsek\_KM1\_homo\]). Now, we study separately the stability of the isotropic and non-isotropic equilibria.
Stability of the isotropic equilibria {#subsec:stab_iso}
-------------------------------------
For the isotropic equilibria, we have the following two propositions:
\[prop-unstability-uniform\] Let $f(t)$ be the solution to (\[eq:vicsek\_KM1\_homo\]) associated with initial condition $f_0$ of mass $\rho$. If $\rho>\rho_c$, and if $j_{f_0}\neq0$, then we cannot have $\kappa_\infty=0$ in Proposition \[prop-lasalle-refined\].
\[prop-stability-uniform\] Suppose that $\rho<\rho_c$. We define $$\lambda=(n-1)\tau_0(1-\frac{\rho}{\rho_c})>0.
\label{eq:decayrate_iso}$$ Let $f_0$ be an initial condition with mass $\rho$, and $f$ the corresponding solution to (\[eq:vicsek\_KM1\_homo\]). There exists $\delta>0$ independent of $f_0$ such that if $\|f_0-\rho\|_{H^s}<\delta$, then for all $t\geqslant0$ $$\|f(t)-\rho\|_{H^s}\leqslant\frac{\|f_0-\rho\|_{H^s}}{1-\frac{1}{\delta}\|f_0-\rho\|_{H^s}}e^{-\lambda t}.$$
Prop. \[prop-unstability-uniform\] implies the instability of the uniform equilibria for $\rho>\rho_c$ (provided the initial current $j_{f_0}$ does not vanish) as the $\omega$-limit set of the trajectories consists of non-isotropic equilibria. Prop. \[prop-stability-uniform\] shows the stability of the uniform equilibria for $\rho<\rho_c$ in any $H^s$ norm with exponential decay rate given by (\[eq:decayrate\_iso\]). We stress that these are fully nonlinear stability/instability results.
Stability of the non-isotropic equilibria {#subsec:stab_aniso}
-----------------------------------------
Let $\kappa>0$ and $\rho>0$ be such that $\kappa$ is a solution to (\[eq:fixed\_point\_kappa\_2\]). In addition to the hypotheses made so far on $k$, we assume that $k$ is differentiable, with its derivative $k'$ being itself Lipschitz. The following result shows that the stability or instability of the non-isotropic equilibria is determined by whether the function $\kappa \mapsto \frac{\iota(\kappa)}{c_1(\kappa)}$ is strictly increasing or decreasing.
\[prop-unstability-stability-anisotropic\] Let $\kappa>0$ and $\rho=\frac{\iota(\kappa)}{c_1(\kappa)}$. We denote by $\mathcal F_\kappa$ the value of $\mathcal F(\rho M_{\kappa u})$ (independent of $u\in {\mathbb S}^{d-1}$).
- Suppose $(\frac{\iota}{c_1})'(\kappa)<0$. Then any equilibrium of the form $\rho M_{\kappa u}$ is unstable, in the following sense: in any neighborhood of $\rho M_{\kappa u}$, there exists an initial condition $f_0$ such that $\mathcal F(f_0)<\mathcal F_\kappa$. Consequently, in that case, we cannot have $\kappa_\infty=\kappa$ in Proposition \[prop-lasalle-refined\].
- Suppose $(\frac{\iota}{c_1})'(\kappa)>0$. Then the family of equilibria $\{\rho M_{\kappa u}, u \in {\mathbb S}^{d-1}\}$ is stable, in the following sense: for all $K>0$ and $s>\frac{d-1}2$, there exist $\delta>0$ and $C>0$ such that for all $f_0$ with mass $\rho$ and with $\|f_0\|_{H^s}\leqslant K$, if $\|f_0-\rho M_{\kappa u}\|_{L^2}\leqslant\delta$ for some $u \in {\mathbb S}^{d-1}$, then for all $t\geqslant0$, we have $$\begin{gathered}
\mathcal F(f)\geqslant\mathcal F_\kappa,\\
\|f-\rho M_{\kappa u_f}\|_{L^2}\leqslant C\|f_0-\rho M_{\kappa u_{f_0}}\|_{L^2}.\end{gathered}$$
Note that the marginal case $(\frac{\iota}{c_1})'(\kappa)=0$ is not covered by the above theorem and is still an open problem. In the stable case, the following proposition provides the rate of decay to an element of the same family of equilibria:
\[thm-strong-stability-anisotropic\] Suppose $(\frac{\iota}{c_1})'(\kappa)>0$. Then, for all $s>\frac{d-1}2$, there exist constants $\delta>0$ and $C>0$ such that for any $f_0$ with mass $\rho$ satisfying $\|f_0-\rho M_{\kappa u}\|_{H^s}<\delta$ for some $u \in {\mathbb S}^{d-1}$, there exists $u_\infty \in {\mathbb S}^{d-1}$ such that, for all $t\geqslant0$, $$\|f-\rho M_{\kappa u_\infty}\|_{H^s}\leqslant C\|f_0-\rho M_{\kappa u}\|_{H^s} \, e^{-\lambda t},$$ where the rate $\lambda$ is given by $$\label{def-lambda}
\lambda=\frac{c_1(\kappa) \, \tau(\iota(\kappa))}{\iota'(\kappa)} \, \Lambda_\kappa \, \big(\frac{\iota}{c_1} \big)'(\kappa).$$ The constant $\Lambda_\kappa$ is the best constant for the following weighted Poincaré inequality (see the appendix of [@Degond_etal_JNonlinearSci13]): $$\label{poincare-lambda}
\langle|\nabla_v g|^2\rangle_{M}\geqslant\Lambda_\kappa\langle(g-\langle g\rangle_{M})^2\rangle_{M},$$ where we have written $\langle g \rangle_M$ for $\int_{{\mathbb S}^{d-1}}g(v)\,M_{\kappa u} (v) \, dv$.
Conclusion {#sec:conclusion}
==========
In this short overview, we have surveyed some of the mathematical questions posed by collective dynamics and self-organization. We have particularly focused on two specific problems: the derivation of macroscopic models and the study of phase transitions. There are of course many other fascinating challenges posed by self-organized systems. These have proven to be an inexhaustible source of problems for mathematicians and a drive for the invention of new mathematical concepts.
[99]{}
I. Aoki, A simulation study on the schooling mechanism in fish, Bulletin of the Japan Society of Scientific Fisheries, 48 (1982) 1081-1088.
P. Bak, C. Tang, K. Wiesenfeld, Self-organized criticality: an explanation of $1/f$ noise, Phys. Rev. Lett., 59 (1987) 381-384.
A. Barbaro, P. Degond, Phase transition and diffusion among socially interacting self-propelled agents, Discrete Contin. Dyn. Syst. Ser. B, to appear.
S. Bazazi et al, Collective Motion and Cannibalism in Locust Migratory Bands, Current Biology 18 (2008) 735-739.
F. Berthelin, P. Degond, M. Delitala, M. Rascle, A model for the formation and evolution of traffic jams, Arch. Rat. Mech. Anal., 187 (2008) 185-220.
E. Bertin, M. Droz and G. Grégoire, Hydrodynamic equations for self-propelled particles, J. Phys. A: Math. Theor. 42 (2009) 445001.
E. Boissard, P. Degond, S. Motsch, Trail formation based on directed pheromone deposition, J. Math. Biol., 66 (2013) 1267-1301.
F. Bolley, J. A. Cañizo, J. A. Carrillo, Mean-field limit for the stochastic Vicsek model, Appl. Math. Lett., 25 (2011) 339-343.
E. Carlen, P. Degond, and B Wennberg, Kinetic limits for pair-interaction driven master equations and biological swarm models, Math. Models Methods Appl. Sci., 23 (2013) 1339-1376.
J. A. Carrillo et al, Asymptotic Flocking Dynamics for the kinetic Cucker-Smale model, SIAM J. Math. Anal., 42 (2010) 218-236.
Y-L. Chuang et al, State transitions and the continuum limit for a 2D interacting, self-propelled particle system, Physica D, 232 (2007) 33-47.
I. D. Couzin et al, Collective Memory and Spatial Sorting in Animal Groups, J. theor. Biol., 218 (2002) 1-11.
A. Creppy et al, Symmetry-breaking phase-transitions in highly concentrated semen, Journal of the Royal Society Interface, 13 (2016), p. 20160575.
F. Cucker, S. Smale, Emergent behavior in flocks, IEEE Transactions on Automatic Control, 52 (2007) 852-862.
A. Cziròk, E. Ben-Jacob, I. Cohen, T. Vicsek, Formation of complex bacterial colonies via self-generated vortices, Phys. Rev. E, 54 (1996) 1791-1801.
P. Degond, A. Frouvelle, J-G. Liu, Macroscopic limits and phase transition in a system of self-propelled particles, J. Nonlinear Sci., 23 (2013) 427-456.
P. Degond, A. Frouvelle, J-G. Liu, Phase transitions, hysteresis, and hyperbolicity for self-organized alignment dynamics, Arch. Ration. Mech. Anal., 216 (2015), pp 63-115.
P. Degond, A. Frouvelle, S. Merino-Aceituno, A new flocking model through body attitude coordination, Math. Models Methods Appl. Sci. 27 (2017) 1005-1049.
P. Degond, A. Frouvelle, S. Merino-Aceituno, A. Trescases, Quaternions in collective dynamics, Multiscale Model. Simul., to appear. arXiv:1701.01166
P. Degond, J. Hua, Self-Organized Hydrodynamics with congestion and path formation in crowds, J. Comput. Phys., 237 (2013) 299-319.
P. Degond, J. Hua, L. Navoret, Numerical simulations of the Euler system with congestion constraint, J. Comput. Phys., 230 (2011) 8057-8088.
P. Degond et al, Hydrodynamic models of self-organized dynamics: derivation and existence theory, Methods Appl. Anal., 20 (2013) 089-114.
P. Degond, A. Manhart, H. Yu, A continuum model for nematic alignment of self-propelled particles, DCDS B, 22 (2017) 1295-1327.
P. Degond, S. Motsch, Continuum limit of self-driven particles with orientation interaction, Math. Models Methods Appl. Sci., 18 Suppl. (2008) 1193-1215.
P. Degond, S. Motsch, Large scale dynamics of the Persistent Turning Walker model of fish behavior, J. Stat. Phys., 131 (2008) 989-1021.
P. Degond, S. Motsch, A macroscopic model for a system of swarming agents using curvature control, J. Stat. Phys., 143 (2011) 685-714
P. Degond et al, Congestion in a macroscopic model of self-driven particles modeling gregariousness, J. Stat. Phys., 138 (2010) 85-125.
M. L. Domeier, P. L. Colin, Tropical reef fish spawning aggregations: defined and reviewed, Bulletin of Marine Science, 60 (1997) 698-726.
A. Figalli, M-J. Kang, J. Morales, Global well-posedness of the spatially homogeneous Kolmogorov-Vicsek model as a gradient flow, arXiv:1509.02599.
A. Frouvelle, A continuum model for alignment of self-propelled particles with anisotropy and density-dependent parameters, Math. Mod. Meth. Appl. Sci., 22 (2012) 1250011 (40 p.).
A. Frouvelle, J.-G. Liu, Dynamics in a kinetic model of oriented particles with phase transition, SIAM J. Math. Anal., 44 (2012) 791-826.
I. Gallagher, L. Saint-Raymond, B. Texier, From Newton to Boltzmann: hard spheres and short-range potentials. European math. soc., 2013.
I. M. Gamba, M-J. Kang, Global weak solution of the Kolmogorov-Fokker-Planck type equation with orientational interaction, Arch. Rat. Mech. Anal. 222 (2016) 317-342.
J. Gautrais et al, Analyzing fish movement as a persistent turning walker, J. Math. Biol., 58 (2009) 429-445.
J. Gautrais et al, Deciphering interactions in moving animal groups. Plos Comput. Biol., 8 (2012) e1002678.
F. Ginelli, F. Peruani, M. Bär, H. Chaté, Large-scale collective properties of self-propelled rods, Phys. Rev. Lett. 104 (2010) 184502.
S. -Y. Ha, J.-G. Liu, A simple proof of the Cucker-Smale flocking dynamics and mean-field limit, Commun. Math. Sci., 7 (2009) 297-325.
S.-Y. Ha, E. Tadmor, From particle to kinetic and hydrodynamic descriptions of flocking, Kinetic and Related Models, 1 (2008) 415-435.
J. Haskovec et al, Notes on a PDE system for biological network formation, Nonlinear Anal. 138 (2016) 127-155.
D. Helbing, I. J. Farkas, T. Vicsek, Freezing by heating in a driven mesoscopic system, Phys. Rev. Lett. 84 (2000) 1240.
E. P. Hsu, Stochastic Analysis on Manifolds, Graduate Series in Mathematics, American Mathematical Society, 2002.
N. Jiang, L. Xiong, T-F. Zhang, Hydrodynamic limits of the kinetic self-organized models, SIAM J. Math. Anal. 48 (2016) 3383-3411.
A. Khuong et al, A computational model of ant nest morphogenesis, in "Advances in Artificial Life, ECAL 2011, MIT Press, 2011, pp. 404-411.
O. E. Lanford, III. On a derivation of the Boltzmann equation. In Astérisque No. 40, Soc. Math. France, Paris, 1976, pp. 117-137.
M. Leroy-Lerêtre et al, Are tumor cell lineages solely shaped by mechanical forces ? Bull. Math. Biol., 79 (2017) 2356-2393.
R. Lukeman, Y.-X. Li, L. Edelstein-Keshet, Inferring individual rules from collective behavior, Proc. Natl. Acad. Sci. USA 107 (2010), 12576-12580.
S. Mischler, C. Mouhot, Kac’s Program in Kinetic Theory, Invent. Math., 193 (2013) 1-147,
S. Motsch, E. Tadmor, A new model for self-organized dynamics and its flocking behavior, J. Stat. Phys., 144 (2011) 923-947.
M. Moussaïd et al, Traffic Instabilities in Self-organized Pedestrian Crowds, PLoS Computational Biology, 8 (2012) e1002442
B. Perthame, F. Quiròs, J. L. Vàzquez, The Hele-Shaw asymptotics for mechanical models of tumor growth, Arch. Ration. Mech. Anal. 212 (2014) 93-127.
D. Peurichard et al, Simple mechanical cues could explain adipose tissue morphology, J. Theoret. Biol., 429 (2017), 61-81.
M. Poujade et al, Collective migration of an epithelial monolayer in response to a model wound, Proc. Natl. Acad. Sci. USA 104 (2007), 15988-15993.
B. I. Shraiman, Mechanical feedback as a possible regulator of tissue growth, Proc. Natl. Acad. Sci. USA 102 (2005), 3318-3323.
J. Shen, Cucker-Smale flocking under hierarchical leadership, SIAM J. Appl. Math., 58 (2007) 694-719.
J. Toner, Y. Tu, Flocks, herds, and schools: A quantitative theory of flocking, Phys. Rev. E 58 (1998), 4828.
T. Vicsek et al, Novel type of phase transition in a system of self-driven particles, Phys. Rev. Lett., 75 (1995) 1226-1229.
T. Vicsek, A. Zafeiris, Collective motion, Phys. Rep., 517 (2012) 71-140.
---
bibliography:
- 'precision1.bib'
---
[ **Selecting and Ranking Individualized Treatment Rules with Unmeasured Confounding**]{}
Introduction
============
A central statistical problem in precision medicine and health policy is to learn treatment rules that are tailored to the patient’s characteristics. There is now an exploding literature on individualized policy discovery; see @Precision_medicine for an up-to-date review. Although randomized experiments remain the gold standard for causal inference, there has been a growing interest in using observational data to draw causal conclusions and discover individualized treatment rules due to the increasing availability of electronic health records and other observational data sources [@moodie2012q; @athey2017efficient; @kallus2017recursive; @M_learning; @zhao2019efficient].
A common way to formulate the problem of individualized policy discovery is via the *value function*, which is the expected potential outcome under a treatment rule or regime. The optimal treatment rule is usually defined as the one that maximizes the value function. In the single-decision setting, the value function can be easily identified when the data come from a randomized experiment (as long as the probability of receiving treatment is never $0$ or $1$). When the data come from an observational study, the value function can still be identified under the assumption that all confounders are measured. This assumption can be further extended to the multiple-decision setting [@murphy2003optimal; @robins2004optimal]. In this paper we will focus our discussion on the single-decision setting but consider the possibility of unmeasured confounding.
With few exceptions, the vast majority of existing methods for treatment rule discovery from observational data are based on the no unmeasured confounding assumption. Typically, these methods first estimate the value function assuming no unmeasured confounding and then select the treatment rule that maximizes the estimated value function. However, it is common that a substantial fraction of the population appears to respond similarly to treatment and control. From a statistical perspective and if there is truly no unmeasured confounder, we should still attempt to estimate the treatment effect for individuals in this subpopulation and optimize the treatment rule accordingly. However, the optimal treatment decisions for these individuals are, intuitively, also the most sensitive to unmeasured confounding. It may only take a small amount of unmeasured confounding to change the sign of the estimated treatment effects for these individuals. From a policy perspective (especially when there is a cost constraint), learning the “optimal” treatment decision for these individuals from observational data seems likely to be error-prone.
Sensitivity analysis for individualized treatment rules {#sec:sens-analys-indiv}
-------------------------------------------------------
There is a long literature on studying the sensitivity of observational studies to unmeasured confounding, dating from @Cornfield1959. In short, such sensitivity analysis asks how much unmeasured confounding is needed to alter the causal conclusion of an observational study qualitatively. In this paper, we will study the sensitivity of individualized treatment rules to unmeasured confounding using a prominent model proposed by @Rosenbaum1987, where the odds ratio of receiving the treatment for any two individuals with the same observed covariates is bounded between $1/\Gamma$ and $\Gamma$ ($\Gamma \ge 1$; $\Gamma = 1$ corresponds to no unmeasured confounding). More specifically, we will consider selecting and ranking individualized treatment rules under Rosenbaum’s model for unmeasured confounding.
Our investigation is motivated by the impact of effect modification on the power of Rosenbaum’s sensitivity analysis that is studied by @Hsu2013effectmodification. A phenomenon found by @Hsu2013effectmodification is that subgroups with larger treatment effect may have larger design sensitivity. For example, suppose a subgroup A has larger treatment effect than a subgroup B based on observational data. Then, there may exist a $\Gamma > 1$ such that, when the sample sizes of both subgroups go to infinity, the probability of rejecting Fisher’s sharp null hypothesis under the $\Gamma$-sensitivity model goes to $1$ for subgroup A and $0$ for subgroup B. Therefore, to obtain causal conclusions that are most robust to unmeasured confounding, it may be more desirable to use a smaller subgroup with larger treatment effect than to use a larger subgroup with smaller treatment effect.
When comparing individualized treatment rules to a baseline, the above phenomenon suggests that a treatment rule with smaller value may be less sensitive to unmeasured confounding than a treatment rule with larger value. In other words, when there is unmeasured confounding, the “optimal” treatment rule might not be the one that maximizes the value function assuming no unmeasured confounding; in fact, there are usually many “optimal” treatment rules. This is because the value function in this case only defines a *partial order* on the set of individualized treatment rules, so two rules with different values assuming no unmeasured confounding may become indistinguishable under the $\Gamma$-sensitivity model when $\Gamma >
1$. Fundamentally, the reason is that the value function is only partially identified in Rosenbaum’s $\Gamma$-sensitivity model.
As an example, let’s use $r_2 \succ_{\Gamma} r_1$ (abbreviated as $r_2 \succ r_1$ if the value of $\Gamma$ is clear from the context) to denote that the value of rule $r_2$ is *always* greater than the value of $r_1$ when the unmeasured confounding satisfies the $\Gamma$-sensitivity model. Then, it is possible that
- Under $\Gamma = 1$, $r_2 \succ r_1 \succ r_0$ (so $r_2 \succ r_0$);
- Under some $\Gamma > 1$, $r_1 \succ r_0$ but $r_2 \not \succ r_0$.
This phenomenon occurs frequently in real data examples; see . Note that the relation $\succ$ is defined using the value function computed on the population instead of on a particular sample.
Because the value function only defines a partial order on the treatment rules, it is no longer well-defined to estimate *the* optimal treatment rule when there is unmeasured confounding. Instead, we aim to recover the partial ordering of a set of treatment rules or select a subset of rules that satisfy certain statistical properties. This problem is related to the problem of selecting and ranking subpopulations (as a post hoc analysis for randomized experiments) which has been extensively studied in statistics [@gupta1979multiple; @gibbons1999selecting]. Unfortunately, in problems considered by the existing literature, the subpopulations always have a *total order*. For example, a prototypical problem in that literature is to select a subset that contains the largest $\mu_i$ based on independent observations $Y_i \sim \mathrm{N}(\mu_i,1)$. It is evident that the methods developed there cannot be directly applied to the problem of comparing treatment rules which only bears a partial order. Nevertheless, we will borrow some definitions in that literature to define the goal of selecting and ranking individualized treatment rules.
Related work and our approach
-----------------------------
Existing methods for individualized policy discovery from observational data often take an *exploratory* stance. They typically aim to select the individualized treatment rule, often within an infinite-dimensional function class, that maximizes the estimated value function using outcome regression [@robins2004optimal; @qian_murphy2011], inverse-probability weighting [@Zhao_OWL; @kallus2017recursive], or doubly robust estimation [@dudik2014doubly; @athey2017efficient]. In order to estimate the value function, some parametric or semiparametric models are specified to model the outcome and/or the treatment selection process. To identify the value function, the vast majority of these approaches make the no unmeasured confounding assumption, which may be unrealistic in many applications. The only exception to our knowledge is @kallus2018confounding, in which the authors propose to maximize the minimum value of an individualized treatment rule when the unmeasured confounding satisfies a marginal sensitivity model [@tan2006distributional; @Zhao2017]. This is further extended to the estimation of conditional average treatment effect with unmeasured confounding in @kallus2018interval. Another related work is @yadlowsky2018bounds who consider semiparametric inference for the average treatment effect in Rosenbaum’s sensitivity model.
In this paper we take a different perspective. Our approach is based on a statistical test to compare the value of two individualized treatment rules when there is limited unmeasured confounding. Briefly speaking, we first match the treated and control observations by the observed covariates and then propose to use Rosenbaum’s sensitivity model to quantify the magnitude of unmeasured confounding after matching (the deviation of the matched observational study from a pairwise randomized experiment). At the core of our proposal is a randomization test introduced by [@fogarty2016studentized] to compare the value of two individualized treatment rules in Rosenbaum’ sensitivity model. Based on this test, we introduce a framework to rank and select treatment rules within a given finite collection and show that different statistical errors can be controlled with the appropriate multiple hypothesis testing methods.
In principle, our framework can be used with an arbitrary (finite) number of pre-specified treatment rules. In practice, it is more suitable for small-scale policy discovery with relatively few decision variables, where machine learning methods are not needed to discover complex patterns, or where such methods have already been employed in a preliminary study to suggest a few candidate rules. The design-based nature of our approach makes it particularly useful for *confirmatory* analyses, the importance of which is widely acknowledged in the policy discovery literature [e.g. @kallus2017recursive; @zhang2018interpretable; @Precision_medicine]. Methods proposed in this paper thus complement the existing literature on individualized treatment rules by providing a way to confirm the effectiveness of a treatment rule learned from observational data and assess its robustness to unmeasured confounding. When there are several competing treatment rules, our framework further helps the decision maker select or rank the treatment rules using flexible criteria.
The rest of the paper is organized as follows. In we introduce a real data example that will be used to illustrate the proposed methods. We then introduce some notations and discuss how to compare two treatment rules when there is unmeasured confounding. In we consider three questions about ranking and selecting among multiple treatment rules. We compare our proposal with some baseline procedures in a simulation study and apply our method to another application using data from the Health and Retirement Study. Finally, we conclude our paper with some brief discussion in .
Comparing treatment rules with unmeasured confounding {#sec:2}
=====================================================
Running example: Malaria in West Africa {#sec:running-example}
---------------------------------------
The Garki Project, conducted by the World Health Organization and the Government of Nigeria from 1969-1976, was an observational study that compared several strategies to control malaria. @Hsu2013effectmodification studied the effect modification for one of the malaria control strategies, namely spraying with an insecticide, propoxur, together with mass administration of a drug sulfalene-pyrimethamine at high frequency. The outcome is the difference between the frequency of Plasmodium falciparum in blood samples, that is, the frequency of a protozoan parasite that causes malaria, measured before and after the treatment. Using 1560 pairs of treated and control individuals matched by their age and gender, @Hsu2013effectmodification found that the treatment was much more beneficial for young children than for other individuals, if there is no unmeasured confounding.
More interestingly, they found that, despite the reduced sample size, the 447 pairs of young children exhibit a treatment effect that is far less sensitive to unmeasured confounding bias than the full sample of 1560 pairs. So from a policy perspective, it may be preferable to implement the treatment only for young children rather than the whole population. In the rest of this paper we will generalize this idea to selecting and ranking treatment rules. We will use the matched dataset in @Hsu2013effectmodification to illustrate the definitions and methodologies in the paper; see the original article for more information about the Garki Project dataset and the matched design. A different application concerning the effect of late retirement on health outcomes will be presented in near the end of this article.
Some notations and definitions {#sec:some-notat-defin}
------------------------------
We first introduce some notations in order to compare treatment rules when there is unmeasured confounding. Let ${X}$ denote all the pre-treatment covariates measured by the investigator. In the single-decision setting considered in this paper, an individualized treatment rule (or treatment regime) $d$ maps a vector of pre-treatment covariates $ X$ to the binary treatment decisions, $\{0, 1\}$ ($0$ indicates control and $1$ indicates treatment). In our running example, we shall consider six treatment rules, $r_0,r_1,\cdots,r_5$, where $r_i$ assigns treatment to the youngest $i
\times 20\%$ of the individuals. Specifically, the minimum, $20\%,
40\%, 60\%, 80\%$ quantiles, and maximum of age are $0 \,
(\text{newborn})$, $7$, $20$, $31$, $41$, and $73$ years old.
Let $Y$ be the outcome and $Y(0),Y(1)$ be the potential outcomes under control and treatment. The potential outcome under a treatment rule $d$ is defined, naturally, as $Y(d) = Y(0) 1_{\{d(X)=0\}} + Y(1)
1_{\{d(X)=1\}}$. A common way to compare treatment rules is through the value function, defined as the expected potential outcome under that rule, $V(d) = \mathbb{E}[Y(d)]$. The *value difference* of two treatment rules, $r_1$ and $r_2$, is thus $$\label{eq:value-diff}
\begin{split}
V(r_2) - V(r_1) &= \mathbb{E}[Y(r_2) - Y(r_1) \,|\, r_2 \neq
r_1] \cdot \mathbb{P}(r_2 \neq r_1) \\
&= \mathbb{E}[Y(1) - Y(0) \,|\, r_2 > r_1] \cdot \mathbb{P}(r_2 > r_1)
- \mathbb{E}[Y(1) - Y(0) \,|\, r_2 < r_1] \cdot \mathbb{P}(r_2 < r_1),
\end{split}$$ where for simplicity the event $r_1( X) \ne r_2( X)$ is abbreviated as $r_1 \ne r_2$ (similarly for $r_1 < r_2$ and $r_1 >
r_2$). Note that the event $r_2 > r_1$ is the same as $r_2 = 1,r_1 =
0$ because the treatment decision is binary. One of the terms on the right hand side of will become $0$ if the treatment rules are nested. In the malaria example, $r_0 \le r_1 \le \cdots \le
r_5$, so the value difference of the rules $r_1$ and $r_2$ can be written as $$V(r_2) - V(r_1) = \mathbb{E}[Y(1)-Y(0)\, |\, \text{Age} \in [7, 20)] \cdot
\mathbb{P}(\text{Age} \in [7, 20)).$$ In this case, testing the sign of $V(r_2) - V(r_1)$ is equivalent to testing the sign of the conditional average treatment effect, $\mathbb{E}[Y(1)- Y(0)\,|\,r_2 > r_1]$.
The definition of the value function depends on the potential outcomes. To identify the value function using observational data, it is standard to make the following assumptions [@Precision_medicine]:
1. Positivity: $\mathbb{P}(A=a \,|\, X = x) > 0$ for all $a$ and $ x$;
2. Consistency (SUTVA): $Y = Y(A)$;
3. Ignorability (no unmeasured confounding): $Y(a)\, {\rotatebox[origin=c]{90}{$\models$}}\, A \, | \, X$ for all $a$.
Under these conditions, it is straightforward to show that the value function is identified by [@qian_murphy2011] $$V(d) = \mathbb{E} \bigg[ \frac{Y I(A=d(X))}{\pi(A,X)} \bigg],$$ where $I$ is the indicator function of an event and $\pi(a,x) =
\mathbb{P}(A=a|X=x)$ is the propensity score.
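To make the estimator concrete, here is a minimal Python sketch of this inverse-probability-weighted value estimate. The function name, the logistic-regression propensity model, and the example rule are illustrative assumptions and not part of the original analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_value(X, A, Y, rule):
    """Plug-in estimate of V(d) = E[ Y 1{A = d(X)} / pi(A, X) ], with the
    propensity score pi estimated by logistic regression (an assumption)."""
    propensity = LogisticRegression().fit(X, A)
    p1 = propensity.predict_proba(X)[:, 1]        # estimated P(A = 1 | X)
    pi = np.where(A == 1, p1, 1.0 - p1)           # pi(A, X)
    d = rule(X)                                   # rule decisions in {0, 1}
    return np.mean(Y * (A == d) / pi)

# Illustrative rule in the spirit of the malaria example: treat if age < 7,
# assuming the first column of X is age.
r1 = lambda X: (X[:, 0] < 7).astype(int)
```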
The value function gives a natural and total order to the treatment rules. If the above identification assumptions hold, the value functions can be identified and thus this order can be consistently estimated as the sample size increases to infinity. In general, it is impossible to recover this order when there is unmeasured confounding. However, if the magnitude of unmeasured confounding is bounded according to a sensitivity model (a collection of distributions of the observed variables and unobserved potential outcomes), it is possible to partially identify the difference between the values of two treatment rules and thus obtain a partial order.
\[def: prec\_gamma\] Let $r_1$ and $r_2$ be two treatment rules that map a vector of pre-treatment covariates $X$ to a binary treatment decision $\{0,
1\}$, and $V(r_1)$, $V(r_2)$ their corresponding value functions. Given a sensitivity analysis model indexed by $\Gamma$, we say that the rule $r_1$ is dominated by $r_2$ with a margin $\delta$ if $V(r_2) - V(r_1) > \delta$ for all distributions in the sensitivity analysis model. We denote this relation as $r_1 \prec_{\Gamma,\delta} r_2$ and further abbreviate it as $r_1 \prec_\Gamma r_2$ if $\delta =
0$. We denote $r_1 \not \prec_{\Gamma} r_2$ if $r_1$ is not dominated by $r_2$ with margin $\delta = 0$.
Notice that the partial order should be defined in terms of the partially identified interval for $V(r_2) - V(r_1)$ instead of the partially identified intervals for $V(r_1)$ and $V(r_2)$. This is because the same distribution of the unobserved potential outcomes needs to be used when computing the partially identified interval for $V(r_2) - V(r_1)$, so it is not simply the difference between the partially identified intervals for the individual values (the easiest way to see this is to take $r_1 = r_2$). We thank an anonymous reviewer for pointing this out.
It is easy to see that $\prec_{\Gamma}$ is a strict partial order on the set of treatment rules because it satisfies irreflexivity (not $r_1
\prec_{\Gamma} r_1$), transitivity ($r_1 \prec_{\Gamma} r_2$ and $r_2 \prec_{\Gamma} r_3$ imply $r_1 \prec_{\Gamma} r_3$), and asymmetry ($r_1 \prec_{\Gamma}
r_2$ implies not $r_2 \prec_{\Gamma} r_1$). In Rosenbaum’s sensitivity model, to be introduced in the section below, $\Gamma = 1$ corresponds to no unmeasured confounding and thus the relationship $\prec_{\Gamma=1}$ is a total order.
Testing $r_1 \not \prec_\Gamma r_2$ using matched observational studies {#sec:test-r_1-prec_g}
-----------------------------------------------------------------------
With the goal of selecting and ranking treatment rules with unmeasured confounding in mind, in this section we consider the easier but essential task of comparing the value of two treatment rules, $r_1$ and $r_2$, under Rosenbaum’s sensitivity model. This test will then serve as the basic element of our procedures of selecting and ranking among multiple treatment rules below. We will first introduce the pair-matched design of an observational study and Rosenbaum’s sensitivity model, and then describe a studentized sensitivity analysis proposed by @fogarty2016studentized that tests Neyman’s null hypothesis of average treatment effect being zero under Rosenbaum’s sensitivity model. This test can be immediately extended to compare the value of treatment rules.
Suppose the observed data are $n$ pairs, $i = 1,2,...,n$, of two subjects $j = 1,2$. These $n$ pairs are matched for observed covariates $\bm X$ and within each pair, one subject is treated, denoted $A_{ij} = 1$, and the other control, denoted $A_{ij} = 0$, so that we have $\bm X_{i1} = \bm X_{i2}$ and $A_{i1} + A_{i2} = 1$ for all $i$. In a sensitivity analysis, we may fail to match on an unobserved confounder $U_{ij}$ and thus incur unmeasured confounding bias.
[@Rosenbaum1987; @Rosenbaum2002a] proposed a one-parameter sensitivity model. Let $\mathcal{F} =
\{(Y_{ij}(0),Y_{ij}(1), \allowbreak \bm X_{ij}, U_{ij}), i = 1,\dotsc,n,
j = 1,2\}$ be the collection of all measured or unmeasured variables other than the treatment assignment. Rosenbaum’s sensitivity model assumes that $\pi_i = P(A_{i1} = 1 | \mathcal{F})$ satisfies $$\frac{1}{1+\Gamma} \leq \pi_i \leq
\frac{\Gamma}{1+\Gamma},~i =
1,2,...,n.
\label{eqn: rosenbaum model}$$ When $\Gamma = 1$, this model asserts that $\pi_i = 1/2$ for all $i$ and thus every subject has equal probability to be assigned to treatment or control (i.e. no unmeasured confounding). In general, $\Gamma > 1$ controls the degree of departure from randomization. [@Rosenbaum2002a; @rosenbaum2011new] derived randomization inference based on signed score tests for Fisher’s sharp null hypothesis that $Y_{ij}(0) = Y_{ij}(1)$ for all $i,j$. The asymptotic properties of these randomization tests are studied in @rosenbaum2004design [@Rosenbaum2015] and @Zhao2018sens_value.
In the context of comparing individualized treatment rules, Fisher’s sharp null hypothesis is no longer suitable because we expect to have (and indeed are tasked to find) heterogeneous treatment effects. Recently, [@fogarty2016studentized] developed a valid studentized test for Neyman’s null hypothesis that the average treatment effect is equal to zero, $(2n)^{-1}\sum_{ij} [Y_{ij}(1) - Y_{ij}(0)] = 0$, under Rosenbaum’s sensitivity model. We briefly describe Fogarty’s test. Let $D_i$ denote the treated-minus-control difference in the $i^{th}$ matched pair, $D_i = (A_{i1} - A_{i2}) (Y_{i1} - Y_{i2})$. Fix the sensitivity parameter $\Gamma$ and define $$D_{i, \Gamma} = D_i - \left(\frac{\Gamma - 1}{\Gamma +1}
\right)|D_i|,~\overline{D}_{\Gamma} = \frac{1}{n}
\sum_{i=1}^n D_{i,\Gamma},~\text{and}~
\text{se}(\overline{D}_\Gamma)^2 = \frac{1}{n(n - 1)} \sum_{i = 1}^n (D_{i,
\Gamma} - \overline{D}_{\Gamma})^2.$$ @fogarty2016studentized showed that the one-sided student-$t$ test that rejects Neyman’s hypothesis when $$\frac{\overline{D}_\Gamma}{\text{se}(\overline{D}_\Gamma)}
> \Phi^{-1}(1 - \alpha)$$ is asymptotically valid with level $\alpha$ under Rosenbaum’s sensitivity model and mild regularity conditions. This test can be easily extended to test the null that the average treatment effect is no greater than $\delta$ by replacing $D_i$ with $D_i - \delta$. @fogarty2016studentized also provided a randomization-based reference distribution in addition to the large-sample normal approximation.
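The studentized statistic is straightforward to compute; the following Python sketch implements the large-sample version of the test for a given $\Gamma$ (the function name is ours, and the randomization-based reference distribution mentioned above is not implemented).

```python
import numpy as np
from scipy.stats import norm

def fogarty_test(D, gamma, delta=0.0):
    """One-sided studentized sensitivity analysis: test that the average
    treated-minus-control pair difference is at most delta, under the
    Gamma-sensitivity model, using the normal reference distribution."""
    D = np.asarray(D, dtype=float) - delta
    n = len(D)
    D_gamma = D - (gamma - 1.0) / (gamma + 1.0) * np.abs(D)
    mean = D_gamma.mean()
    se = np.sqrt(np.sum((D_gamma - mean) ** 2) / (n * (n - 1)))
    z = mean / se
    return z, 1.0 - norm.cdf(z)     # test statistic and one-sided p-value
```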
The above test for the average treatment effect can be readily extended to comparing treatment rules. Recall that equation implies the value difference of two rules $r_1$ and $r_2$ is a weighted difference of two conditional average treatment effects on the set $r_1 > r_2$ and $r_2 > r_1$. When the two rules are nested (without loss of generality assume $r_2 \ge r_1$), testing the null hypothesis that $r_1 \not \prec_{\Gamma} r_2$ is equivalent to testing a Neyman-type hypothesis $\mathbb{E}[Y(1) - Y(0) |
r_2 > r_1] \le 0$ under the $\Gamma$-sensitivity model. We can simply apply Fogarty’s test to the matched pairs (indexed by $i$) that satisfy $r_2(\bm X_{i1}) > r_1(\bm X_{i1})$. When the two rules are not nested, we can flip the sign of $D_i$ for those $i$ such that $r_2(\bm X_{i1}) <
r_1(\bm X_{i1})$ and then apply Fogarty’s test. In summary, to test the null hypothesis that $r_1 \not \prec_{\Gamma} r_2$, we can simply apply Fogarty’s test to $\{D_i \cdot [r_2(\bm X_{i1}) - r_1(\bm X_{i1})],~\text{for}~i~\text{such
that}~r_1(\bm X_{i1}) \ne r_2(\bm X_{i1})\}$. To test the hypothesis $r_1 \not
\prec_{\Gamma,\delta} r_2$, we can use Fogarty’s test for the average treatment effect no greater than $\delta \cdot (n / m)$ where $m =
\big|\{i:\,r_1(\bm X_{i1}) \ne r_2(\bm X_{i2})\}\big|$.
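A sketch of the resulting comparison of two rules, built on the `fogarty_test` function above, is given below; the rule arguments are assumed to return 0/1 decisions for each matched pair, and all names are illustrative.

```python
import numpy as np

def compare_rules(D, X1, r1, r2, gamma, alpha=0.05, delta=0.0):
    """Test the null hypothesis r1 'not dominated by' r2 (with margin delta)
    under the Gamma-sensitivity model, using only discordant pairs."""
    sign = r2(X1) - r1(X1)              # +1, 0 or -1 for each matched pair
    keep = sign != 0
    D_eff = D[keep] * sign[keep]        # flip the sign where r2 < r1
    m = keep.sum()
    # a margin of delta on the value scale becomes delta * n / m on the kept pairs
    z, pval = fogarty_test(D_eff, gamma, delta=delta * len(D) / m)
    return pval < alpha, pval
```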
Sensitivity value of treatment rule comparison
----------------------------------------------
A hallmark of Rosenbaum’s sensitivity analysis framework is its tipping-point analysis, and that extends to the comparison of treatment rules. When testing $r_1 \not \prec_\Gamma r_2$ with a series of $\Gamma$, there exists a smallest $\Gamma$ such that the null hypothesis cannot be rejected, that is, we are no longer confident that $r_1$ is dominated by $r_2$ in that $\Gamma$-sensitivity model. This tipping point is commonly referred to as the *sensitivity value* [@Zhao2018sens_value]. Formally, we define the sensitivity value for $r_1 \prec r_2$ as $$\begin{split}
\Gamma_{\alpha}^\ast(r_1 \prec r_2) = \inf\{\Gamma \geq 1~:
&~\text{The hypothesis}~V(r_1)
\ge V(r_2) \mbox{ cannot be rejected} \\
&\mbox{ at level $\alpha$ under the
$\Gamma$-sensitivity model} \}.
\end{split}$$ Let $r_0$ be the null treatment rule (for example, assigning control to the entire population). The sensitivity $\Gamma_{\alpha}^\ast(r_0 \prec r_1)$ is further abbreviated as $\Gamma_{\alpha}^\ast(r_1)$.
@Zhao2018sens_value studied the asymptotic properties of the sensitivity value when testing Fisher’s sharp null hypothesis using a class of signed score statistics. Below, we will give the asymptotic distribution of $\Gamma_{\alpha}^\ast(r_1 \prec r_2)$ using Fogarty’s test as described in the last section. The result will be stated in terms of a transformation of the sensitivity value, $$\kappa^\ast_{\alpha}(r_1 \prec r_2) = \frac{\Gamma_{\alpha}^\ast(r_1 \prec
r_2) - 1}{\Gamma_{\alpha}^\ast(r_1 \prec r_2) + 1}.$$ Note that $\Gamma^\ast = 1$ is transformed to $\kappa^\ast = 0$ and $0 \le \kappa^{\ast} < 1$.
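Numerically, the sensitivity value can be found by searching over $\Gamma$, since the studentized statistic is monotone in $\Gamma$; a bisection sketch built on the `fogarty_test` function above is shown below (the upper bound `gamma_max` is an arbitrary illustrative choice).

```python
def sensitivity_value(D, alpha=0.05, gamma_max=100.0, tol=1e-4):
    """Approximate the smallest Gamma at which the one-sided test no longer
    rejects, i.e. the sensitivity value Gamma*_alpha."""
    reject = lambda g: fogarty_test(D, g)[1] < alpha
    if not reject(1.0):
        return 1.0                 # not significant even without confounding
    if reject(gamma_max):
        return gamma_max           # still rejected at the end of the search range
    lo, hi = 1.0, gamma_max        # invariant: reject at lo, do not reject at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if reject(mid) else (lo, mid)
    return hi
```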
Assume the treatment rules are nested, $r_1(x) \le r_2(x)$, and let $\mathcal{I}$ be the set of indices $i$ where $r_1(\bm X_{i1}) <
r_2(\bm X_{i2})$. Assuming the moments of $|D_i|$ exist and $\mathbb{E}[D_i \mid r_1 < r_2] > 0$, then $$\label{eq:sen-value-asymp}
\begin{split}
\sqrt{|\mathcal{I}|}\left(\kappa_{\alpha}^\ast(r_1 \prec r_2) -
\frac{\mathbb{E}[D_i \mid r_1 < r_2]}{\mathbb{E}[|D_i| \mid
r_1 < r_2]}\right) \overset{d}{\to}
\text{N}\left(z_{\alpha} \mu,
~\sigma^2\right),~\text{as}~ |\mathcal{I}| \to \infty, \\
\end{split}$$ where $z_\alpha$ is the upper-$\alpha$ quantile of the standard normal distribution and the parameters $\mu$ and $\sigma^2$ depend on the distribution of $D_i$ (the expressions can be found in the Appendix). \[thm: asymp kappa\]
The proof of this proposition can be found in the Appendix. When the treatment rules are not nested, one can simply replace $D_i$ with $D_i
[r_2(\bm X_{i1}) - r_1(\bm X_{i1})]$ and the condition $r_1 < r_2$ with $r_1 \ne r_2$ in the proposition statement. The asymptotic distribution of $\Gamma^{\ast}_{\alpha}(r_1 \prec r_2)$ can then be obtained from that of $\kappa^{\ast}_{\alpha}(r_1 \prec r_2)$ by the delta method, and we omit further details.
The asymptotic distribution in is similar to the one obtained in @Zhao2018sens_value [Thm. 1]. When the treatment rules are nested $r_1 \le r_2$ and $|\mathcal{I}|
\to \infty$, the sensitivity value converges to a number that depends on the distribution of $D_i$, $$\Gamma_{\alpha}^{*}(r_1 \prec r_2) \overset{p}{\to}
\frac{\mathbb{E}[|D_i| \mid r_1 < r_2] + \mathbb{E}[D_i \mid r_1 <
r_2]}{\mathbb{E}[|D_i| \mid r_1 < r_2] -
\mathbb{E}[D_i \mid r_1 < r_2]}.$$ The limit on the right hand side is the *design sensitivity* [@rosenbaum2004design] of Fogarty’s test for comparing the treatment rules. As the sample size converges to infinity, the power of Fogarty’s test converges to $1$ at $\Gamma$ smaller than the design sensitivity and to $0$ at $\Gamma$ larger than the design sensitivity. The normal distribution in further approximates the finite-sample behavior of the sensitivity value and can be used to compute the power of a sensitivity analysis, by the fact that rejecting $r_1 \not \prec_{\Gamma} r_2$ at level $\alpha$ is equivalent to $\Gamma_{\alpha}^{*}(r_1 \prec r_2) \ge \Gamma$.
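As an illustration, a plug-in estimate of this design sensitivity can be computed from the sample moments of the pair differences in the discordant pairs; the following sketch does exactly that and nothing more.

```python
import numpy as np

def design_sensitivity(D):
    """Plug-in estimate of (E|D| + E[D]) / (E|D| - E[D]) from the pair
    differences of the discordant pairs (assumes E[D] > 0)."""
    D = np.asarray(D, dtype=float)
    return (np.abs(D).mean() + D.mean()) / (np.abs(D).mean() - D.mean())
```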
Selecting and ranking treatment rules {#sec:multiple}
=====================================
Next we consider the problem of comparing multiple treatment rules with unmeasured confounding. To this end, we need to define the goal and the statistical error we would like to control. A problem related to this is the selecting and ordering of multiple subpopulations [@gupta1979multiple; @gibbons1999selecting], for example, given $K$ independent measurements $Y_i \sim \mathrm{N}(\mu_i,1)$ where $\mu_i$ is some characteristic of the $i$-th subpopulation. When comparing $\mu_i$, there are many goals we can define. In fact, @gibbons1999selecting [p. 4] gave a list of 7 possible goals for ranking and selection of subpopulations and considered them in the rest of their book. We believe at least $3$ out of their $7$ goals have practically meaningful counterparts in comparing treatment rules. Given $K+1$ treatment rules, $\mathcal{R} =
\{r_0, r_1, \dotsc, r_K\}$, we may ask, in terms of their values,
1. What is the ordering of all the treatment rules?
2. Which treatment rule is the best?
3. Which treatment rule(s) are better than the null/control $r_0$?
In a randomized experiment or an observational study with no unmeasured confounding, it may be possible to obtain estimates of the value that are jointly asymptotically normal and then directly use the methods in @gibbons1999selecting. However, as discussed in , this no longer applies when there is unmeasured confounding because the value function may only be partially identified.
Defining the inferential goals
------------------------------
When there is unmeasured confounding, the three goals above need to be modified because the value function only defines a partial order among the treatment rules (). We make the following definitions:
In the $\Gamma$-sensitivity model, the *maximal rules* in $\mathcal{R}$ are the ones not dominated by any other rule, $$\mathcal{R}_{\text{max},\Gamma} = \{r_i\,:\,r_i \not \prec_{\Gamma}
r_j,~\forall j\}.$$ The *positive rules* are the ones which dominate the control and the *null rules* are the ones which don’t dominate the control, $$\mathcal{R}_{\text{pos},\Gamma} = \{r_i\,:\,r_0 \prec_{\Gamma} r_i
\},~\mathcal{R}_{\text{nul},\Gamma} = \mathcal{R} \setminus
\mathcal{R}_{\text{pos},\Gamma}.$$
The maximal set $\mathcal{R}_{\text{max},\Gamma}$ and the null set $\mathcal{R}_{\text{nul},\Gamma}$ are always non-empty (the latter is because $r_0 \in \mathcal{R}_{\text{nul},\Gamma}$), become larger as $\Gamma$ increases, and in general become the full set $\mathcal{R}$ as $\Gamma \to \infty$.
In the rest of this section, we will consider the following three statistical problems: for some pre-specified significance level $\alpha > 0$,
1. Can we give a set of ordered pairs of treatment rules, $\hat{\mathcal{O}}_{\Gamma} \subset
\{(r_i,r_j),\,i,j=0,\dotsc,K,\,i\ne j\}$, such that the probability that all the orderings are correct is at least $1 - \alpha$, that is, $\mathbb{P}(r_i \prec_{\Gamma}
r_j,\,\forall (r_i,r_j) \in \hat{\mathcal{O}}_{\Gamma}) \ge 1-\alpha$?
2. Can we construct a subset of treatment rules, $\hat{\mathcal{R}}_{\text{max},\Gamma}$, such that the probability that it contains all maximal rules is at least $1-\alpha$, that is, $\mathbb{P}(\mathcal{R}_{\text{max},\Gamma} \subseteq
\hat{\mathcal{R}}_{\text{max},\Gamma}) \ge 1-\alpha$?
3. Can we construct a subset of treatment rules, $\hat{\mathcal{R}}_{\text{pos},\Gamma}$, such that the probability that it does not cover any null rule is at least $1 - \alpha$, that is, $\mathbb{P}(\hat{\mathcal{R}}_{\text{pos},\Gamma} \cap
\mathcal{R}_{\text{nul},\Gamma} = \emptyset) \ge 1-\alpha$?
Next, we will propose strategies to achieve the above statistical goals based on the test of two treatment rules with unmeasured confounding described in .
Goal 1: Ordering the treatment rules {#subsec: ranking}
------------------------------------
To start with, let’s consider the first goal—ordering the treatment rules, as the statistical inference is more straightforward. It is the same as the multiple testing problem where we would like to control the family-wise error rate (FWER) for the collection of hypotheses, $\{H_{ij}\,:\,r_i \not \prec_{\Gamma}
r_j,\,i,j=0,\dotsc,K,\,i\ne j\}$. In principle, we can apply any multiple testing procedure that controls the FWER. A simple example is Bonferroni’s correction for all the $K(K-1)$ tests.
In sensitivity analysis problems, we can often greatly improve the statistical power by reducing the number of tests using a planning sample [@heller2009split; @zhao2018cross]. This is because Rosenbaum’s sensitivity analysis considers the worst case scenario and is generally conservative when $\Gamma > 1$. The planning sample can be further used to order the hypotheses so we can sequentially test them, for example, using a fixed sequence testing procedure [@koch1996statistical; @westfall2001optimally].
There are many possible ways to screen out, order, and then test the hypotheses. Here we demonstrate one possibility (a code sketch follows the list):
- Split the data into two parts. The first part is used for planning and the second part for testing.
- For every pair of treatment rules $(r_i,r_j)$, use the planning sample to estimate population parameters in the asymptotic distribution of the sensitivity value .
- Compute the approximate power of testing $H_{ij}:~r_i
\not \prec_{\Gamma} r_j$ in the testing sample using . Order the hypotheses by the estimated power, from highest to lowest.
- Sequentially test the ordered hypotheses using the testing sample at level $\alpha$, stopping as soon as one hypothesis fails to be rejected.
- Output a Hasse diagram of the treatment rules by using all the rejected hypotheses.
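Below is a minimal Python sketch of Steps 1-4, building on the `fogarty_test` function sketched earlier; the power approximation used for ordering is a simple normal approximation standing in for the one based on the asymptotic distribution of the sensitivity value, and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def approx_power(D_plan, gamma, m, alpha=0.05):
    """Crude normal approximation of the power of Fogarty's test on m pairs,
    using planning-sample moments of D_gamma (an illustrative stand-in)."""
    Dg = D_plan - (gamma - 1.0) / (gamma + 1.0) * np.abs(D_plan)
    return norm.cdf(np.sqrt(m) * Dg.mean() / Dg.std(ddof=1) - norm.ppf(1 - alpha))

def order_and_test(pairs, gamma, alpha=0.05, seed=0):
    """pairs: dict mapping (i, j) -> array of signed differences D * (r_j - r_i)
    restricted to matched pairs where r_i and r_j disagree."""
    rng = np.random.default_rng(seed)
    plan, test = {}, {}
    for key, D in pairs.items():                       # Step 1: sample splitting
        idx = rng.permutation(len(D))
        plan[key], test[key] = D[idx[:len(D) // 2]], D[idx[len(D) // 2:]]
    # Steps 2-3: order the hypotheses by approximate power on the planning half
    ordered = sorted(pairs, reverse=True,
                     key=lambda k: approx_power(plan[k], gamma, len(test[k])))
    rejected = []                                      # Step 4: fixed sequence testing
    for key in ordered:
        if fogarty_test(test[key], gamma)[1] < alpha:
            rejected.append(key)                       # conclude r_i is dominated by r_j
        else:
            break                                      # stop at the first non-rejection
    return rejected
```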
A Hasse diagram is an informative graph to represent a partial order (in our case, $\prec_{\Gamma}$). In this diagram, each treatment rule is represented by a vertex and an edge goes upward from rule $r_i$ to rule $r_j$ if $r_i \prec_{\Gamma} r_j$ and there exists no $r_k$ such that $r_i \prec_{\Gamma} r_k$ and $r_k \prec_{\Gamma} r_j$.
Due to transitivity of a partial order, an upward path from $r_i$ to $r_j$ in the Hasse diagram (for example, from $r_0$ to $r_3$ in Figure 1 when $\Gamma = 1.3$) indicates that $r_i \prec_{\Gamma} r_j$, even if we could not directly reject $r_i \not \prec_{\Gamma} r_j$ in Step 4. The next proposition shows that the above multiple testing procedure also controls the FWER for all the apparent and implied orders represented by the Hasse diagram.
Let $\hat{\mathcal{O}}_{\Gamma} \subset \{(r_i,r_j),\,i\ne j\}$ be a random set of ordered treatment rules obtained using the procedure above or any other multiple testing procedure. Let $$\hat{\mathcal{O}}_{\Gamma,\text{ext}} = \hat{\mathcal{O}}_{\Gamma}
\bigcup \{(r_i,r_j)\,:\,\exists\, k_1,\dotsc,k_m~\text{such that}~
(r_i,r_{k_1}),(r_{k_1},r_{k_2}),\dotsc,(r_{k_m},r_j)
\in \hat{\mathcal{O}}_{\Gamma}\}$$ be the extended set implied from the Hasse diagram. Then FWER with respect to $\hat{\mathcal{O}}_{\Gamma,\text{ext}}$ is the same as FWER with respect to $\hat{\mathcal{O}}_{\Gamma}$: $$\mathbb{P}(r_i \prec_{\Gamma}
r_j,\,\forall (r_i,r_j) \in \hat{\mathcal{O}}_{\Gamma}) =
\mathbb{P}(r_i \prec_{\Gamma}
r_j,\,\forall (r_i,r_j) \in \hat{\mathcal{O}}_{\Gamma,\text{ext}}).$$
We show the two events are equivalent. The $\subseteq$ direction is trivial. For $\supseteq$, notice that any false positive in $\hat{\mathcal{O}}_{\Gamma,\text{ext}}$, say $r_i
\prec_{\Gamma} r_j$ implies that there is at least one false positive along the path from $r_i$ to $r_j$, that is, there is at least one false positive among $r_i \prec_{\Gamma} r_{k_1},
r_{k_1}\prec_{\Gamma} r_{k_2},\dotsc,r_{k_m} \prec_{\Gamma}
r_j$, which are all in $\hat{\mathcal{O}}_{\Gamma}$. Thus, any false positive in $\hat{\mathcal{O}}_{\Gamma,\text{ext}}$ implies that there is also at least one false positive in $\hat{\mathcal{O}}_{\Gamma}$.
We illustrate the proposed method using the malaria dataset. We first use half of the data to estimate the population parameters in for each pair of treatment rules $(r_i, r_j)$. For every value of $\Gamma$, we use to compute the asymptotic power for the test of $H_{ij}:r_i \not \prec_{\Gamma} r_j$ using the other half of the data. We then order the hypotheses by the estimated power, from the highest to the lowest. In the malaria example, when $\Gamma = 1$, the order is $$H_{01}, H_{02}, H_{03}, H_{04}, H_{05}, H_{13}, H_{12}, H_{14}, H_{15}, H_{23}, \dotsc.$$ When $\Gamma = 2$, the order becomes $$H_{02}, H_{01}, H_{03}, H_{04}, H_{05}, H_{12}, H_{13}, H_{14}, H_{15}, H_{45}, \dotsc.$$ Finally we follow Steps 4 and 5 above. We obtained Hasse diagrams for a variety of $\Gamma$, which are shown in . As a baseline for comparison, shows the Hasse diagrams obtained by a simple Bonferroni adjustment for all $K(K-1) = 30$ hypotheses using all the data. Although only half of the data is used to test, ordering the hypotheses not only identified all the discoveries that the Bonferroni procedure identified, but also made one extra discovery when $\Gamma = 1.3$, $1.5$, $2.5$, $3.5$, and $4$, and two extra discoveries when $\Gamma = 1$, $1.8$, $2$, and $3$.
[Figures: Hasse diagrams of the treatment rules $r_0,\dotsc,r_5$ for several values of $\Gamma$, obtained by the power-ordered sequential procedure (fig: hasse malaria hybrid) and, as a baseline, by the Bonferroni adjustment.]
Goal 2: Selecting the best rules
--------------------------------
Next we consider constructing a set that covers all the maximal rules. Our proposal is based on the following observation: if the hypothesis $r_i \not \prec_{\Gamma} r_j$ can be rejected, then $r_i$ is unlikely a maximal rule. More precisely, because $r_i \in
\mathcal{R}_{\text{max},\Gamma}$ implies that $r_i \not \prec_{\Gamma}
r_j$ must be true, by the definition of the type I error of a hypothesis test, $$\mathbb{P}(r_i \not \prec_{\Gamma} r_j~\text{is rejected} \,|\,r_i \in
\mathcal{R}_{\text{max},\Gamma}) \le \alpha.$$ This suggests that we can derive a set of maximal rules from an estimated set of partial orders: $$\label{eq:est-max}
\hat{\mathcal{R}}_{\text{max}, \Gamma} = \{r_i\,:\,(r_i,r_j) \not \in
\hat{\mathcal{O}}_{\Gamma},~\forall j\}.$$ In other words, $\hat{\mathcal{R}}_{\text{max}, \Gamma}$ contains all the “leaves” in the Hasse diagram of $\hat{\mathcal{O}}_{\Gamma}$ (a leaf in the Hasse diagram is a vertex that has no edge going upward). For example, in Figure \[fig: hasse malaria hybrid\], the leaves are $\{r_3, r_4, r_5\}$ when $\Gamma = 1.0$ and $\{r_2, r_3, r_4, r_5\}$ when $\Gamma = 1.5$. Because $\big\{\mathcal{R}_{\text{max},\Gamma} \not \subseteq
\hat{\mathcal{R}}_{\text{max},\Gamma}\big\} = \big\{\exists\,
i \in \mathcal{R}_{\text{max},\Gamma}~\text{such that}~(r_i,r_j)\in
\hat{\mathcal{O}}_{\Gamma}~\text{for some}~j\big\}$, the estimated set of maximal rules satisfies $\mathbb{P}(\mathcal{R}_{\text{max},\Gamma} \not \subseteq
\hat{\mathcal{R}}_{\text{max},\Gamma}) \le \alpha$ as desired whenever $\hat{\mathcal{O}}_{\Gamma}$ strongly controls the FWER at level $\alpha$.
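In code, extracting the leaves from the estimated orderings is immediate; the sketch below takes the list of rejected ordered pairs $(i, j)$, meaning $r_i \prec_{\Gamma} r_j$ was concluded, as produced for instance by the `order_and_test` sketch above.

```python
def maximal_rules(num_rules, rejected_pairs):
    """Rules with no rejected hypothesis of the form r_i < r_j: the leaves of
    the Hasse diagram of the estimated partial order."""
    dominated = {i for (i, j) in rejected_pairs}
    return [i for i in range(num_rules) if i not in dominated]

# Example: with 6 rules and rejected pairs {(0, 1), (0, 2), (1, 3)}, the
# estimated maximal set is {r_2, r_3, r_4, r_5}.
```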
Equation suggests that only one hypothesis $r_i
\not \prec_{\Gamma} r_j$ needs to be rejected in order to exclude $r_i$ from $\hat{\mathcal{R}}_{\text{max}, \Gamma}$. This means that, when the purpose is to select the maximal rules, we do not need to test $r_i \not \prec_{\Gamma} r_j$ if another hypothesis $r_i \not
\prec_{\Gamma} r_k$ for some $k \ne j$ is already rejected. Therefore, we can modify the procedure of finding $\hat{\mathcal{O}}_{\Gamma}$ to further decrease the size of $\hat{\mathcal{R}}_{\text{max}, \Gamma}$ obtained from . For example, in the five-step procedure demonstrated in , we can further replace Step 3 by:
- After ordering the hypotheses in Step 3, remove any hypothesis $H_{ij}:\,r_i \prec_{\Gamma} r_j$ if there is already a hypothesis $H_{ik}$ appearing before $H_{ij}$ for some $k \ne j$.
Again we use the malaria example to illustrate the selection of the best treatment rules. As an example, when $\Gamma = 2.0$, Step 3’ reduced the original sequence of hypotheses to the following: $$H_{02}, H_{12}, H_{45}, H_{35}, H_{53}, H_{21}.$$ We used the hold-out sample to test the hypotheses sequentially at level $\alpha = 0.1$ and stopped at $H_{45}$. Therefore, a level $\alpha = 0.1$ confidence set of the set of maximal elements is $\{r_2, r_3, r_4, r_5\}$ when $\Gamma = 2$. Table \[tbl: malaria max set result\] lists the estimated maximal set $\hat{\mathcal{R}}_{\text{max}, \Gamma}$ for $\Gamma$ ranging from $1$ to $6$.
$\Gamma$ $\hat{\mathcal{R}}_{\text{max}, \Gamma}$ $\Gamma$ $\hat{\mathcal{R}}_{\text{max}, \Gamma}$
---------- ------------------------------------------ ---------- ------------------------------------------
1.0 $\{r_3, r_4, r_5\}$ 2.5 $\{r_2, r_3, r_4, r_5\}$
1.3 $\{r_3, r_4, r_5\}$ 3.0 $\{r_1, r_2, r_3, r_4, r_5\}$
1.5 $\{r_2, r_3, r_4, r_5\}$ 3.5 $\{r_1, r_2, r_3, r_4, r_5\}$
1.8 $\{r_2, r_3, r_4, r_5\}$ 4.0 $\{r_1, r_2, r_3, r_4, r_5\}$
2.0 $\{r_2, r_3, r_4, r_5\}$ 6.0 $\{r_0, r_1, r_2, r_3, r_4, r_5\}$
: $\hat{\mathcal{R}}_{\text{max}, \Gamma}$ for different choices of $\Gamma$.[]{data-label="tbl: malaria max set result"}
Goal 3: Selecting the positive rules
------------------------------------
Finally we consider how to select treatment rules that are better than a control rule. This can also be transformed to a multiple testing problem for the hypotheses $H_{0i}:\,r_0\not\prec_{\Gamma}r_i,~i=1,\dotsc,K$. Let $\hat{\mathcal{R}}_{\text{pos},\Gamma}$ be the collection of rules whose hypotheses are rejected by some multiple testing procedure. By definition of FWER, $\mathbb{P}(\hat{\mathcal{R}}_{\text{pos},\Gamma} \cap \mathcal{R}_{\text{nul},\Gamma} \ne \emptyset) \le \alpha$ if the multiple testing procedure strongly controls FWER at level $\alpha$. As an example, one can modify the procedure in to select the positive rules by only considering $H_{0i},~i=1,\dotsc,K$ in Step 3.
In practice, a small increase of the value function, though statistically significant, may not justify a policy change. In this case, it may be desirable to estimate the positive rules that dominate the control rule by margin $\delta$, $\mathcal{R}_{\text{pos},\Gamma,\delta} = \{r_i:\,r_0
\prec_{\Gamma,\delta} r_i\}$. To obtain an estimate of $\mathcal{R}_{\text{pos},\Gamma,\delta}$, one can further modify the procedure in by replacing the hypothesis $H_{0i}:\,r_0
\not \prec_{\Gamma} r_i$ with the stronger $r_0
\not\prec_{\Gamma,\delta} r_i$.
                 $\Gamma = 1$                    $\Gamma = 1.3$                  $\Gamma = 1.5$                  $\Gamma = 1.8$
  -------------- ------------------------------- ------------------------------- ------------------------------- -------------------------------
  $\delta = 0$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$
  $\delta = 1$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$
  $\delta = 2$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$
  $\delta = 4$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$
  $\delta = 6$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_2, r_3, r_4, r_5\}$        $\{r_2\}$

                 $\Gamma = 2.0$                  $\Gamma = 2.5$                  $\Gamma = 3.0$
  -------------- ------------------------------- ------------------------------- -------------------------------
  $\delta = 0$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$
  $\delta = 1$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3\}$
  $\delta = 2$   $\{r_1, r_2, r_3, r_4, r_5\}$   $\{r_1, r_2, r_3\}$             $\{r_1, r_2\}$
  $\delta = 4$   $\{r_1, r_2, r_3\}$             $\emptyset$                     $\emptyset$
  $\delta = 6$   $\emptyset$                     $\emptyset$                     $\emptyset$

                 $\Gamma = 3.5$                  $\Gamma = 4.0$                  $\Gamma = 6.0$
  -------------- ------------------------------- ------------------------------- -------------------------------
  $\delta = 0$   $\{r_1, r_2, r_3\}$             $\{r_1, r_2\}$                  $\emptyset$
  $\delta = 1$   $\{r_1, r_2\}$                  $\{r_1\}$                       $\emptyset$
  $\delta = 2$   $\emptyset$                     $\emptyset$                     $\emptyset$
  $\delta = 4$   $\emptyset$                     $\emptyset$                     $\emptyset$
  $\delta = 6$   $\emptyset$                     $\emptyset$                     $\emptyset$
: Estimated positive rules $\hat{\mathcal{R}}_{\text{pos},\Gamma, \delta}$ for different choices of $\Gamma$ and $\delta$. []{data-label="tbl: malaria null set result"}
We construct $\hat{\mathcal{R}}_{\text{pos},\Gamma, \delta}$ with various choices of $\Gamma$ and $\delta$ for the malaria example. In this case, $\delta$ measures the decrease in the number of Plasmodium falciparum parasites per milliliter of blood samples, averaged over the entire study sample. Table \[tbl: malaria null set result\] gives a summary of the results. As expected, the estimated set of positive rules becomes smaller as $\Gamma$ or $\delta$ increases. We observe that, although $r_1$ and $r_2$ (which assign treatment to those younger than $7$ and $20$ years, respectively) are unlikely to be the optimal rules if there is no unmeasured confounding (), they are more robust to unmeasured confounding than the others, dominating the control rule up to $\Gamma = 4.0$ ().
Simulations {#sec:simulations}
===========
We study and report the performance of three methods of selecting the positive rules $\mathcal{R}_{\text{pos},\Gamma, \delta}$ using numerical simulations in this section. Simulation results for selecting the maximal rules are reported in the Supplementary Materials. We constructed $5$ or $10$ cohorts of data where the treatment effect is constant within each cohort but different between the cohorts. After matching, the treated-minus-control difference in each cohort was normally distributed with mean
1. $\mu = (0.5, 0.25, 0.25, 0.15, 0.05)$,
2. $\mu = (0.5, 0.2, -1.0, 0.2, 0.5)$,
3. $\mu = (0.5, 0.5, 0.25, 0.25, 0.25, 0.25, 0.15, 0.15, 0.05,
0.05)$,
4. $\mu = (0.5, 0.3, 0.2, 0.0, -1.0, -1.0, 0.5, 0.5, 1.0, 1.0)$.
The size of each cohort was either $100$ or $250$.
Three methods of selecting positive rules were considered:
1. [**Bonferroni:**]{} The full data is used to test the hypotheses $H_{0i}:\,r_0 \not \prec_{\Gamma} r_i$ and the Bonferroni correction is used to adjust for multiple comparisons.
2. [**Ordering by power:**]{} This is the procedure described in using sample splitting and fixed sequence testing.
3. [**Ordering by value function:**]{} This is the same as above except that the hypotheses are ordered by their estimated value at $\Gamma = 1$.
For the second and third methods, we used either a half or a quarter of the matched pairs (randomly chosen) to order the hypotheses. Extra simulation results using different split proportions are reported in the Supplementary Materials. This simulation was replicated $1000$ times to report the power and the error rate of the methods. The power is defined as the average size of the estimated set of positive rules $\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and the error rate is $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$ with nominal level $0.05$.
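For completeness, here is a condensed sketch of a single simulation replicate for the first effect vector, showing only the Bonferroni variant; the unit variance of the pair differences, the nesting of the rules across cohorts (in analogy with the malaria example), and the reuse of the `fogarty_test` sketch are assumptions for illustration.

```python
import numpy as np

def one_replicate(mu=(0.5, 0.25, 0.25, 0.15, 0.05), n_per_cohort=100,
                  gamma=1.8, alpha=0.05, seed=0):
    """Generate pair differences cohort by cohort and select positive rules
    with a Bonferroni correction over the K hypotheses H_{0i}."""
    rng = np.random.default_rng(seed)
    K = len(mu)
    D = [rng.normal(m, 1.0, n_per_cohort) for m in mu]   # cohort k: N(mu_k, 1)
    selected = []
    for i in range(1, K + 1):
        D_i = np.concatenate(D[:i])     # pairs where r_i and r_0 disagree
        if fogarty_test(D_i, gamma)[1] < alpha / K:
            selected.append(i)          # r_i declared positive
    return selected
```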
The results of this simulation study are reported in . The error rate is controlled under the nominal level in most cases and is usually quite conservative. The conservativeness is not surprising because Rosenbaum’s sensitivity analysis is a worst-case analysis. In terms of power, the five methods being compared performed very similarly assuming no unmeasured confounding ($\Gamma =
1$). Bonferroni is still competitive at $\Gamma = 1.5$, but ordering the hypotheses by (the estimated) power, though reserving part of the sample for planning, can be much more powerful at larger values of $\Gamma$. For instance, when $\Gamma = 3.0$ in , the two power-based methods are more than twice as powerful as the Bonferroni method. We observe that using only a small planning sample ($25\%$) seems to work well in the simulations. This is not too surprising given our theoretical results, which suggest that only the first two moments of $D$ and $|D|$ are needed to estimate the sensitivity value asymptotically.
-- ------------- ------------------------ ------------------------ --------------------- ---------------- ---------------- --
$\Gamma = 1.0$ $\Gamma = 1.8$ $\Gamma = 2.0$ $\Gamma = 2.3$ $\Gamma = 3.0$
$\{r_1, \dotsc, r_5\}$ $\{r_1, \dotsc, r_4\}$ $\{r_1, r_2, r_3\}$ $\{r_1,r_2\}$ $\{r_1\}$
Bonferroni 5.00 / 0.00 2.54 / 0.01 1.60 / 0.03 0.72 / 0.02 0.11 / 0.00
Value (50%) 5.00 / 0.00 0.51 / 0.07 0.08 / 0.04 0.00 / 0.00 0.00 / 0.00
Power (50%) 5.00 / 0.00 2.30 / 0.07 1.46 / 0.07 0.74 / 0.04 0.20 / 0.00
Value (25%) 5.00 / 0.00 0.73 / 0.07 0.18 / 0.05 0.03 / 0.01 0.00 / 0.00
Power (25%) 5.00 / 0.00 2.64 / 0.07 1.69 / 0.06 0.85 / 0.05 0.21 / 0.00
Bonferroni 4.99 / 0.00 1.39 / 0.02 0.75 / 0.02 0.37 / 0.02 0.08 / 0.00
Value (50%) 4.80 / 0.00 0.49 / 0.07 0.16 / 0.04 0.04 / 0.02 0.00 / 0.00
Power (50%) 4.77 / 0.00 1.15 / 0.05 0.75 / 0.05 0.38 / 0.03 0.15 / 0.02
Value (25%) 4.99 / 0.00 0.61 / 0.06 0.24 / 0.03 0.12 / 0.03 0.01 / 0.00
Power (25%) 4.99 / 0.00 1.33 / 0.05 0.80 / 0.05 0.52 / 0.05 0.12 / 0.00
-- ------------- ------------------------ ------------------------ --------------------- ---------------- ---------------- --
: Power and error rate (separated by/) of three methods estimating $\mathcal{R}_{\text{pos},\Gamma}$. Power is defined as the size of $\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and error rate is defined as $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$. Treatment effect in the 5 cohorts is given by $\mu = (0.5, 0.25, 0.25, 0.15, 0.05)$. The true $\mathcal{R}_{\text{pos},\Gamma}$ is listed below each $\Gamma$ value. []{data-label="tbl: simu res mu_1; 5 cohorts"}
-- ------------- ------------------------ --------------------- ---------------- ---------------- ---------------- --
$\Gamma = 1.0$ $\Gamma = 1.5$ $\Gamma = 2.0$ $\Gamma = 2.5$ $\Gamma = 3.5$
$\{r_1,r_2,r_4, r_5\}$ $\{r_1, r_2, r_5\}$ $\{r_1,r_2\}$ $\{r_1\}$ $\emptyset$
Bonferroni 3.14 / 0.00 2.36 / 0.00 0.78 / 0.00 0.45 / 0.01 0.01 / 0.01
Value (50%) 3.21 / 0.00 1.78 / 0.00 0.06 / 0.02 0.00 / 0.00 0.00 / 0.00
Power (50%) 3.21 / 0.00 2.03 / 0.00 0.75 / 0.02 0.47 / 0.02 0.07 / 0.07
Value (25%) 3.25 / 0.00 2.21 / 0.00 0.07 / 0.03 0.00 / 0.00 0.00 / 0.00
Power (25%) 3.25 / 0.00 2.35 / 0.00 0.83 / 0.02 0.54 / 0.02 0.07 / 0.07
Bonferroni 3.02 / 0.00 1.19 / 0.00 0.37 / 0.00 0.21 / 0.01 0.02 / 0.02
Value (50%) 3.03 / 0.00 0.71 / 0.00 0.04 / 0.02 0.00 / 0.00 0.00 / 0.00
Power (50%) 3.02 / 0.00 0.93 / 0.00 0.43 / 0.01 0.29 / 0.04 0.08 / 0.08
Value (25%) 3.10 / 0.00 1.12 / 0.00 0.04 / 0.02 0.00 / 0.00 0.00 / 0.00
Power (25%) 3.11 / 0.00 1.32 / 0.00 0.42 / 0.01 0.31 / 0.03 0.07 / 0.07
-- ------------- ------------------------ --------------------- ---------------- ---------------- ---------------- --
: Power and error rate (separated by/) of three methods estimating $\mathcal{R}_{\text{pos},\Gamma}$. Power is defined as the size of $\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and error rate is defined as $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$. Treatment effect in the 5 cohorts is given by $\mu = (0.5, 0.2, -1.0, 0.2, 0.5)$. The true $\mathcal{R}_{\text{pos},\Gamma}$ is listed below each $\Gamma$ value. []{data-label="tbl: simu res mu_2; 5 cohorts"}
-- ------------- -------------------------- ----------------------- ------------------------- ---------------- ---------------- --
$\Gamma = 1.0$ $\Gamma = 1.8$ $\Gamma = 2.2$ $\Gamma = 3.0$ $\Gamma = 3.5$
$\{r_1,\dotsc, r_{10}\}$ $\{r_1,\dotsc, r_9\}$ $\{r_1, \dotsc, r_6 \}$ $\{r_1, r_2\}$ $\{r_1\}$
Bonferroni 10.00 / 0.00 6.80 / 0.01 2.41 / 0.00 0.20 / 0.00 0.02 / 0.01
Value (50%) 10.00 / 0.00 0.88 / 0.06 0.00 / 0.00 0.00 / 0.00 0.00 / 0.00
Power (50%) 10.00 / 0.00 6.30 / 0.05 2.34 / 0.03 0.44 / 0.02 0.11 / 0.05
Value (25%) 10.00 / 0.00 1.12 / 0.01 0.00 / 0.00 0.00 / 0.00 0.00 / 0.00
Power (25%) 10.00 / 0.00 7.06 / 0.06 2.73 / 0.02 0.42 / 0.02 0.10 / 0.05
Bonferroni 9.99 / 0.00 3.97 / 0.01 1.14 / 0.00 0.12 / 0.00 0.03 / 0.02
Value (50%) 9.95 / 0.00 0.76 / 0.05 0.03 / 0.01 0.00 / 0.00 0.00 / 0.00
Power (50%) 9.91 / 0.00 3.18 / 0.04 1.17 / 0.02 0.28 / 0.03 0.10 / 0.05
Value (25%) 9.95 / 0.00 1.06 / 0.04 0.06 / 0.01 0.00 / 0.00 0.00 / 0.00
Power (25%) 9.99 / 0.00 3.93 / 0.04 1.39 / 0.02 0.25 / 0.02 0.09 / 0.05
-- ------------- -------------------------- ----------------------- ------------------------- ---------------- ---------------- --
: Power and error rate (separated by /) of three methods estimating $\mathcal{R}_{\text{pos},\Gamma}$. Power is defined as the size of $\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and error rate is defined as $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$. Treatment effect in the 10 cohorts is given by $\mu = (0.5, 0.5, 0.25, 0.25, 0.25, 0.25, 0.15, 0.15, 0.05, 0.05)$. The true $\mathcal{R}_{\text{pos},\Gamma}$ is listed below each $\Gamma$ value. []{data-label="tbl: simu res mu_1; 10 cohorts"}
-- ------------- ------------------------------------ ----------------------- ---------------------- ---------------- ---------------- --
$\Gamma = 1.0$ $\Gamma = 1.5$ $\Gamma = 2.0$ $\Gamma = 2.5$ $\Gamma = 3.0$
$\{r_1,\dotsc, r_4, r_9, r_{10}\}$ $\{r_1,\dotsc, r_4\}$ $\{r_1, r_2, r_3 \}$ $\{r_1, r_2\}$ $\{r_1\}$
Bonferroni 5.98 / 0.00 3.51/0.00 1.55 / 0.00 0.37 / 0.00 0.07 / 0.00
Value (50%) 5.97 / 0.02 0.10 / 0.02 0.00 / 0.00 0.00 / 0.00 0.00 / 0.00
Power (50%) 6.00 / 0.02 3.43 / 0.02 1.53 / 0.01 0.56 / 0.02 0.21 / 0.01
Value (25%) 6.02 / 0.03 0.23 / 0.04 0.03 / 0.00 0.01 / 0.00 0.00 / 0.00
Power (25%) 6.02 / 0.02 3.67 / 0.04 1.84 / 0.01 0.66 / 0.01 0.22 / 0.01
Bonferroni 5.60 / 0.01 2.42 / 0.00 0.68 / 0.00 0.19 / 0.00 0.06 / 0.00
Value (50%) 5.24 / 0.03 0.22 / 0.03 0.04 / 0.00 0.01 / 0.00 0.00 / 0.00
Power (50%) 5.48 / 0.03 2.23 / 0.03 0.86 / 0.02 0.31 / 0.02 0.16 / 0.02
Value (25%) 5.58 / 0.02 0.71 / 0.03 0.16 / 0.00 0.04 / 0.00 0.01 / 0.00
Power (25%) 5.71 / 0.02 2.61 / 0.03 0.98 / 0.01 0.36 / 0.02 0.14 / 0.02
-- ------------- ------------------------------------ ----------------------- ---------------------- ---------------- ---------------- --
: Power and error rate (separated by /) of three methods estimating $\mathcal{R}_{\text{pos},\Gamma}$. Power is defined as the size of $\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and error rate is defined as $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$. Treatment effect in the 10 cohorts is given by $\mu = (0.5, 0.3, 0.2, 0.0, -1.0, -1.0, 0.5, 0.5, 1.0, 1.0)$. The true $\mathcal{R}_{\text{pos},\Gamma}$ is listed below each $\Gamma$ value. []{data-label="tbl: simu res mu_2; 10 cohorts"}
Application: The effect of late retirement on senior health {#sec:application}
===========================================================
Finally, we apply the proposed method to study the potentially heterogeneous effect of retirement timing on senior health. Many empirical studies have focused on the effect of retirement timing on the short-term and long-term health status of elderly people [@morrow2001productive; @alavinia2008unemployment; @borsch2006early]. One theory, known as the “psychosocial-materialist” approach, suggests that retiring late may have health benefits because work forms a key part of the identity of the elderly and provides financial, social, and psychological resources [@Calvo2012]. However, the health benefits of late retirement may differ across subpopulations [@Dave_2008_late_retirement; @Westerlund_late_retirement_2009].
We obtained the observational data from the Health and Retirement Study (HRS), an ongoing nationally representative survey of more than 30,000 adults older than 50 and their spouses in the United States. The HRS is sponsored by the National Institute on Aging; detailed information on the HRS and its design can be found in @sonnega_IJE_HRS. We use the RAND HRS Longitudinal File 2014 (V2), an easy-to-use dataset based on the HRS core data that consists of a follow-up study of $15,843$ elderly people [@RAND_HRS_data].
We defined the treatment as late retirement (retirement after $65$ years old and before $70$ years old) and asked how it impacted self-reported health status at the age of $70$ (coded by: 5 - extremely good, 4 - very good, 3 - good, 2 - fair, and 1 - poor). We included individuals who retired before $70$ and had complete measurements of the following confounders: year of birth, gender, education (years), race (white or not), occupation (1: executives and managers, 2: professional specialty, 3: sales and administration, 4: protection services and armed forces, 5: cleaning, building, food prep, and personal services, 6: production, construction, and operation), partnered, annual income, and smoking status. This left us with $1934$ treated subjects and $4831$ controls. Figure \[fig: age\_dist\] plots the distribution of retirement age in all samples and in the treatment group. The distribution of retirement age in the treatment group is right skewed, with a spike of people retiring shortly after $65$ years old. In the Supplementary Materials, we give a detailed account of data preprocessing and sample inclusion criteria.
Using optimal matching as implemented in the `optmatch` R package [@optmatch], we formed $1858$ matched pairs, matching exactly on year of birth, gender, occupation, and partnered status, and balancing race, years of education, and smoking status. Table \[tbl: balance table retirement\] summarizes the covariate balance after matching. After matching, the treated and control groups are well balanced: the standardized differences of all covariates are less than 0.1. Additionally, the propensity scores in the treated and control groups have good overlap before and after matching (see the Supplementary Materials).
Control Treated std.diff
------------------------------------------------------------------ --------- --------- ---------- --
Year of birth 1936.27 1936.27 0.00
Female 0.53 0.53 0.00
Non-hispanic white 0.77 0.75 -0.04
Education (yrs) 12.52 12.53 0.00
Occupation: cleaning, building, food prep, and personal services 0.10 0.10 0.00
Occupation: executives and managers 0.16 0.16 0.00
Occupation: production construction and operation occupations 0.28 0.28 0.00
Occupation: professional specialty 0.19 0.19 0.00
Occupation: protection services and armed forces 0.02 0.02 0.00
Occupation: sales and admin 0.25 0.25 0.00
Partnered 0.74 0.74 0.00
Smoke ever 0.63 0.59 -0.08
: Covariate balance after matching.[]{data-label="tbl: balance table retirement"}
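For readers who wish to set up a design of this type themselves, the sketch below illustrates one way the matching step described above could be coded with `optmatch`. It is only a rough outline under assumed column names (`late_ret`, `yob`, `female`, `occ`, `partnered`, `educ`, `white`, `income`, `smoke_ever`) for a hypothetical data frame `hrs`; the exact distance specification used in our analysis may differ.

```r
library(optmatch)

# Sketch with assumed column names; `late_ret` is the binary treatment
# (late retirement). The exact-matching variables are combined into a
# single blocking factor first.
hrs$block <- with(hrs, interaction(yob, female, occ, partnered, drop = TRUE))
dist <- match_on(late_ret ~ educ + white + income + smoke_ever,
                 data   = hrs,
                 within = exactMatch(late_ret ~ block, data = hrs))
pm <- pairmatch(dist, data = hrs)   # one control matched to each treated subject
summary(pm)                         # pairs formed, unmatched units, etc.
```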
We considered two potential effect modifiers, namely gender and occupation. More complicated treatment rules can in principle be considered within our framework, though having more treatment rules generally reduces the power of multiple testing. We grouped the $6$ occupations into $2$ broad categories: white collar jobs (executives and managers and professional specialties) and blue collar jobs (sales, administration, protection services, personal services, production, construction, and operation). There were $4$ subgroups defined by these two potential effect modifiers: male, white-collar workers ($G_1$), female, white-collar workers ($G_2$), male, blue-collar workers ($G_3$), and female, blue-collar workers ($G_4$). Thus, there were a total of $2^4 =
16$ different regimes formed out of these two effect modifiers. We gave decimal as well as binary codings to the $16$ groups: $r_0$ ($r_{0000}$) assigns control to everyone, $r_8\,(r_{1000}), r_4\,
(r_{0100}), r_2\,(r_{0010}), r_1\,(r_{0001})$ assign treatment to one of the $4$ subgroups, and so forth. We split the matched samples and used $1/4$ of them to plan the test carried out in the other $3/4$. Then we followed the procedures proposed in the previous sections to rank and select the treatment rules.
We also report the estimated Hasse diagram at $\Gamma = 1.2$; additional results can be found in the Supplementary Materials. The estimated maximal rules for various choices of $\Gamma$ and $\delta$ are reported in Table \[tbl: HRS max set result\], and the estimated positive rules are reported in the Supplementary Materials. According to Table \[tbl: HRS max set result\], the maximal rules under the no unmeasured confounding assumption are $r_{11} \, (r_{1011})$, which assigns late retirement to all but female, white-collar workers; $r_{13} \, (r_{1101})$, which assigns late retirement to all but male, blue-collar workers; and $r_{15} \, (r_{1111})$, which assigns treatment to everyone. When $\Gamma$ increases to $1.2$, $r_9 \, (r_{1001})$, which assigns treatment to male, white-collar workers and female, blue-collar workers, further enters the set of maximal rules. The estimated positive rules suggest that $r_9\,(r_{1001})$ and $r_1 \, (r_{0001})$, which assigns late retirement only to female, blue-collar workers, though not among the maximal rules at $\Gamma = 1$ in Table \[tbl: HRS max set result\], are the most robust to unmeasured confounding. This suggests that late retirement perhaps benefits female, blue-collar workers more than others.
$\Gamma$ $\hat{\mathcal{R}}_{\text{max}, \Gamma}$
---------- -------------------------------------------------------
1.0 $\{r_{11}, r_{13}, r_{15}\}$
1.2 $\{r_9, r_{11}, r_{13}, r_{15}\}$
1.35 $\{r_1, r_3, r_5, r_7, r_9, r_{11}, r_{13}, r_{15}\}$
: The effect of late retirement on health: $\hat{\mathcal{R}}_{\text{max}, \Gamma}$ for different choices of $\Gamma$.[]{data-label="tbl: HRS max set result"}
Does $\Gamma = 1.2$ represent a weak or strong unmeasured confounder? @Rosenbaum2009 proposed to *amplify* $\Gamma$ to a two-dimensional curve indexed by $(\Lambda, \Delta)$, where $\Lambda$ describes the relationship between the unmeasured confounder and the treatment assignment, and $\Delta$ describes the relationship between the unmeasured confounder and the outcome. For instance, $\Gamma = 1.2$ corresponds to an unmeasured confounder associated with a doubling of the odds of late retirement and a $75\%$ increase in the odds of better health status at the age of $70$ in each matched pair, i.e., $(\Lambda, \Delta) = (2.0, 1.75)$. @Hsu2013 further proposed to calibrate $(\Lambda, \Delta)$ values to coefficients of observed covariates; however, their method only works for binary outcomes and binary treatments. In the Supplementary Materials, we describe a calibration analysis that handles the ordinal self-reported health status in our application, which has $5$ levels.
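The amplification curve is also easy to trace numerically. The sketch below assumes the standard amplification identity $\Gamma = (\Lambda\Delta + 1)/(\Lambda + \Delta)$ of @Rosenbaum2009; `amplify` is a small helper written only for illustration, not part of any package.

```r
# Amplification of a single sensitivity parameter Gamma into (Lambda, Delta)
# pairs, assuming Gamma = (Lambda * Delta + 1) / (Lambda + Delta).
amplify <- function(gamma, lambda) {
  stopifnot(all(lambda > gamma))           # the curve is defined for Lambda > Gamma
  (gamma * lambda - 1) / (lambda - gamma)  # the Delta paired with each Lambda
}

amplify(1.2, 2.0)               # 1.75: the (2.0, 1.75) point discussed above
amplify(1.2, c(1.5, 2, 3, 5))   # several points on the Gamma = 1.2 curve
```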
We follow @Hsu2013 and use a plot to summarize the calibration analysis. In Figure \[fig: calibration plotb\], the blue curve represents @Rosenbaum2009’s two-dimensional amplification of $\Gamma = 1.2$ indexed by $(\Lambda, \Delta)$. The estimated coefficients of observed covariates are represented by black dots (after taking an exponential so they are comparable to $(\Lambda,
\Delta)$). We followed the suggestion in [@Gelman2008] and standardized all the non-binary covariates to have mean $0$ and standard deviation $0.5$, so the coefficient of each binary variable can be interpreted directly and the coefficient of each continuous/ordinal variable can be interpreted as the effect of a 2-SD increase in the covariate value, which roughly corresponds to flipping a binary variable from $0$ to $1$. Note that all coefficients are under the $\Gamma = 1.2$ curve. In fact, $\Gamma = 1.2$ roughly corresponds to a moderately strong binary unobserved covariate whose effects on late retirement and health status are comparable to those of a binary covariate $U$ constructed from smoking and education (red star in Figure \[fig: calibration plotb\]).
Discussion {#sec:discussion}
==========
In this paper we proposed a general framework to compare, select, and rank treatment rules when there is a limited degree of unmeasured confounding and illustrated the proposed methods by two real data examples. A central message is that the best treatment rule (with the largest estimated value) assuming no unmeasured confounding is often not the most robust to unmeasured confounding. This may have important policy implications when individualized treatment rules are learned from observational data.
Because the value function only defines a partial order on the treatment rules when there is unmeasured confounding, there is a multitude of statistical questions one can ask about selecting and ranking the treatment rules. We have considered three questions that we believe are most relevant to policy research, but there are many other questions (such as in @gibbons1999selecting) one could ask.
In principle, our framework can be used with an arbitrary number of prespecified individualized treatment rules. However, to maintain a good statistical power in the multiple testing, the prespecified treatment rules should not be too many. This limitation makes our method most suitable as a confirmatory analysis to complement machine learning algorithms for individualized treatment rule discovery. Alternatively, if the number of decision variables is relatively low due to economic or practical reasons, our method is also reasonably powered for treatment rule discovery.
Acknowledgement {#acknowledgement .unnumbered}
===============
JW received funding from the Population Research Training Grant (NIH T32 HD007242) awarded to the Population Studies Center at the University of Pennsylvania by the NIH’s Eunice Kennedy Shriver National Institute of Child Health and Human Development.
Appendix: Proof of the asymptotic normality of the sensitivity value
===================
To simplify the notation, suppose $r_1(\bm X_i) < r_2(\bm X_i)$ for all $1 \le i \le I$. Let $$D_{i, \Gamma} = D_i - \left(\frac{\Gamma - 1}{\Gamma + 1}\right)|D_{i}|, \quad \overline{D} = \frac{1}{I}\sum_{i = 1}^{I} D_i, \quad \overline{|D|} = \frac{1}{I}\sum_{i = 1}^{I} |D_i|,$$ $$\overline{D}_\Gamma = (1/I)\sum_{i = 1}^{I} D_{i, \Gamma} = \overline{D} - \left(\frac{\Gamma - 1}{\Gamma + 1}\right) \overline{|D|},$$ and $$se(\overline{D}_\Gamma)^2 = \frac{1}{I^2} \sum_{i = 1}^{I} (D_{i, \Gamma} - \overline{D}_\Gamma)^2.$$
When $\mathbb{E}[D_i] > 0$, $\Gamma^\ast(r_1, r_2)$ is obtained by solving the equation below in $\Gamma$: $$\frac{\overline{D}_\Gamma}{se(\overline{D}_\Gamma)} = \Phi^{-1}(1 - \alpha).
\label{eqn: sens value}$$ Square both sides of the equation above and plug in the expressions for $\overline{D}_\Gamma$ and $se(\overline{D}_\Gamma)^2$. Let $\kappa = (\Gamma - 1)/(\Gamma + 1)$ and $z_\alpha = \Phi^{-1}(1 - \alpha)$, and denote $$s^2_{D} = \frac{1}{I}\sum_{i = 1}^{I} (D_i - \overline{D})^2,\quad s^2_{|D|} = \frac{1}{I}\sum_{i = 1}^{I} (|D_i| - \overline{|D|})^2,\quad s_{D, |D|} = \frac{1}{I}\sum_{i = 1}^{I} (D_i - \overline{D})(|D_i| - \overline{|D|}).$$
One can show that the sensitivity value $\Gamma^\ast(r_1, r_2)$ corresponds to the $\kappa^\ast$ that solves the following quadratic equation: $$\left(\overline{|D|}^2 - \frac{1}{I} s^2_{|D|}z_\alpha^2 \right) \kappa^2 - 2\left(\overline{D}\overline{|D|} - \frac{1}{I} s_{D, |D|} z_\alpha^2\right)\kappa + \overline{D}^2 - \frac{1}{I} s^2_{D}z_\alpha^2 = 0.$$
Specifically, we have $$\kappa^\ast = \frac{{\overline{D}\overline{|D|} - \frac{1}{I} s_{D, |D|} z_\alpha^2} \pm \sqrt{\Delta} }{\overline{|D|}^2 - \frac{1}{I} s^2_{|D|}z_\alpha^2},
\label{eqn: quadratic root}$$ where $\Delta = (\overline{D}\overline{|D|} - \frac{1}{I} s_{D, |D|} z_\alpha^2)^2 - (\overline{|D|}^2 - \frac{1}{I} s^2_{|D|}z_\alpha^2)(\overline{D}^2 - \frac{1}{I} s^2_{D}z_\alpha^2)$.
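For completeness, the closed-form root above is easy to evaluate directly. The R sketch below uses hypothetical helper and variable names and assumes a one-sided, level-$\alpha$ analysis with $\overline{D} > 0$ and rejection at $\Gamma = 1$; it takes the root consistent with the asymptotic expansion derived below and maps $\kappa^\ast$ back to $\Gamma^\ast$.

```r
# Sketch: closed-form sensitivity value from paired quantities D_1, ..., D_I,
# following the quadratic in kappa = (Gamma - 1) / (Gamma + 1) derived above.
sensitivity_value <- function(D, alpha = 0.05) {
  I   <- length(D)
  z2  <- qnorm(1 - alpha)^2
  Db  <- mean(D); Ab <- mean(abs(D))
  s2D <- mean((D - Db)^2)
  s2A <- mean((abs(D) - Ab)^2)
  sDA <- mean((D - Db) * (abs(D) - Ab))
  a <- Ab^2    - s2A * z2 / I
  b <- Db * Ab - sDA * z2 / I
  c <- Db^2    - s2D * z2 / I
  kappa <- (b - sqrt(b^2 - a * c)) / a   # root consistent with the expansion below
  (1 + kappa) / (1 - kappa)              # back-transform kappa* to Gamma*
}
```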
Note $$\sqrt{\Delta} = z_\alpha \sqrt{\frac{1}{I} \left(s^2_{|D|}\cdot \overline{D}^2 + s^2_{D}\cdot \overline{|D|}^2 -2 \overline{D}\overline{|D|} s_{D, |D|}\right) + \frac{1}{I^2}z_\alpha^2 \left(s_{D, |D|}^2 - s^2_{|D|}\cdot s^2_{D}\right)}.$$ Let us denote $A = \mathbb{E}[D]\cdot\mathbb{E}[|D|]$, $B = -z_\alpha \sqrt{\sigma^2_{|D|}\cdot \mathbb{E}[D]^2 + \sigma^2_{D}\cdot \mathbb{E}[|D|]^2 -2 \mathbb{E}[D]\mathbb{E}[|D|] \sigma_{D, |D|}}$, $C = \mathbb{E}[|D|]^2$, $R_1 = \sqrt{I}(\overline{D}\overline{|D|} - A)$, $R_2 = \sqrt{I}(\overline{|D|}^2 - C)$.
We have $$\kappa^\ast = \frac{A + \frac{1}{\sqrt{I}}R_1 + \frac{1}{\sqrt{I}} B}{C + \frac{1}{\sqrt{I}} R_2} + o_p\left(\frac{1}{\sqrt{I}}\right) = \frac{(A + \frac{1}{\sqrt{I}}R_1 + \frac{1}{\sqrt{I}} B)\cdot(1 - \frac{1}{\sqrt{I}}\frac{R_2}{C})}{C} + o_p\left(\frac{1}{\sqrt{I}}\right).$$
Scaling both sides by $\sqrt{I}$ and rearranging the terms, we have $$\sqrt{I}\left(\kappa^\ast - \frac{A}{C}\right) = \frac{B}{C} + \frac{1}{C}R_1 - \frac{A}{C^2} R_2 + o_p(1).$$
Moreover, let $\phi: \mathbb{R}^2 \to \mathbb{R}^2$ be given by $\phi(x, y) = (xy, y^2)$. By the delta method, $$\sqrt{I}\begin{pmatrix}
\overline{D} - \mathbb{E}[D] \\
\overline{|D|} - \mathbb{E}[|D|]
\end{pmatrix} \sim N(0, \Sigma),
\quad \text{implies} \quad
\begin{pmatrix}
R_1 \\
R_2
\end{pmatrix} = \sqrt{I}\begin{pmatrix}
\overline{D}\overline{|D|} - \mathbb{E}[D]\cdot\mathbb{E}[|D|] \\
\overline{|D|}^2 - \mathbb{E}[|D|]^2
\end{pmatrix}\sim N(0, \phi'\Sigma (\phi')^T),$$ where $\Sigma = \begin{pmatrix}
\text{Var}[D], & \text{Cov}(D, |D|)\\
\text{Cov}(D, |D|), & \text{Var}[|D|]
\end{pmatrix}$ and $\phi' = \begin{pmatrix}
\mathbb{E}[|D|], &\mathbb{E}[D] \\
0, &2\mathbb{E}[|D|]
\end{pmatrix}$.
Plugging in the expressions for $A$, $B$, and $C$ and computing the variance-covariance matrix of $(1/C, -A/C^2)(R_1, R_2)^T$, we obtain $$\sqrt{I}\left(\kappa^\ast - \frac{\mathbb{E}[D]}{\mathbb{E}[|D|]}\right) \sim N(z_\alpha\mu, ~\sigma^2),$$ where $$\mu = -\frac{\sqrt{\sigma^2_{|D|}\cdot \mathbb{E}[D]^2 + \sigma^2_{D}\cdot \mathbb{E}[|D|]^2 -2 \mathbb{E}[D]\mathbb{E}[|D|] \sigma_{D, |D|}}}{\mathbb{E}[|D|]^2},$$ and $$\sigma^2 = \frac{\text{Var}[D] \mathbb{E}^2[|D|] - \text{Var}[|D|] \mathbb{E}^2[D] - 2\mathbb{E}[D]\mathbb{E}[|D|]\text{Cov}(D, |D|) + 2\mathbb{E}^2[D]\text{Var}[|D|]}{\mathbb{E}^4[|D|]}.$$
The Supplementary Materials contain additional appendices about matching in observational studies and further simulation and real data results.
---
abstract: |
If $M$ is an atomic monoid and $x$ is a nonzero non-unit element of $M$, then the set of lengths $\mathsf{L}(x)$ of $x$ is the set of all possible lengths of factorizations of $x$, where the length of a factorization is the number of irreducible factors (counting repetitions). In a recent paper, F. Gotti and C. O’Neil studied the sets of elasticities $\mathcal{R}(P) := \{\sup \mathsf{L}(x)/\inf \mathsf{L}(x) : x \in P\}$ of Puiseux monoids $P$. Here we take this study a step further and explore the local $k$-elasticities of the same class of monoids. We find conditions under which Puiseux monoids have all their local elasticities finite as well as conditions under which they have infinite local $k$-elasticities for sufficiently large $k$. Finally, we focus our study of the $k$-elasticities on the class of primary Puiseux monoids, proving that they have finite local $k$-elasticities if either they are boundedly generated and do not have any stable atoms or if they do not contain $0$ as a limit point.\
address: |
Department of Mathematics\
University of Florida\
Gainesville, FL 32611
author:
- Marly Gotti
title: 'On the local k-elasticities of Puiseux monoids'
---
Introduction {#sec:introduction}
============
Rings of integers of algebraic number fields are not necessarily factorial. We can use the class group of a ring of integers $R$ to measure to which extent elements in $R$ fail to have a unique factorization. In [@lC60], L. Carlitz characterized the half-factorial rings of integers in terms of their class groups. A friendly survey illustrating this characterization when $R = \mathbb{Z}[\sqrt{-5}]$ is provided in [@CGG17]. After the publication of Carlitz’s result, many authors attempted to characterize the class group of a general ring of integers in terms of further arithmetical properties describing the non-uniqueness of factorizations in such a ring (Rush [@dR83] was the first to give a complete characterization).
More generally, the algebraic invariants of several non-factorial Noetherian domains can be used to understand how far such domains are from being factorial. Because many of the factorization-related questions on integral domains are independent of the ring additive structure, in the last few decades the study of the phenomenon of non-unique factorization has been extended to the setting of atomic monoids. The monograph [@GH06b] by A. Geroldinger and F. Halter-Koch has significantly influenced the shape of the modern non-unique factorization theory, which considers not only integral domains but also atomic monoids.
The purpose of modern non-unique factorization theory is to measure how far an atomic monoid is from being factorial (or half-factorial). To carry out this measurement, we focus attention on several factorization invariants, which include the system of sets of lengths, the union of sets of lengths (first introduced in [@CS98]), and the elasticity. Roughly speaking, the elasticity of an atomic monoid $M$ is given by $$\rho(M) = \sup \{\rho(x) : x \in M\},$$ where $\rho(x)$ is the quotient of the maximum possible length of factorizations of $x$ by the minimum possible length of factorizations of $x$. Our aim in this paper is to study an $\mathbb{N}$-parametrized local version of the elasticity of Puiseux monoids, which are, up to isomorphism, the additive submonoids of $(\mathbb{Q},+)$ that are not groups.
This paper is organized as follows. In Section \[sec:preliminaries\], we review notation and introduce most of the concepts we shall be using later. In Section \[sec:general case\], we begin our exploration of the local elasticities of atomic Puiseux monoids. In particular, we find conditions under which Puiseux monoids have all their local elasticities finite as well as conditions under which they have infinite local $k$-elasticities for sufficiently large $k$. Lastly, in Section \[sec:primary case\], we target the class of primary Puiseux monoids, proving that they have finite local elasticities if they do not contain stable atoms or if they do not contain $0$ as a limit point.
Definitions & Notations {#sec:preliminaries}
=======================
Throughout this paper, we let $\mathbb{N}$ denote the set of positive integers, and we set $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$. For each subset $A$ of $\mathbb{Q}$, we let $A^\bullet$ denote $A \setminus \{0\}$. Also, for all $q \in \mathbb{Q}$ such that $q > 0$, we let $\mathsf{n}(q)$ and $\mathsf{d}(q)$ denote the unique pair of relatively prime positive integers satisfying that $q = \mathsf{n}(q)/\mathsf{d}(q)$. In this case, we call $\mathsf{n}(q)$ and $\mathsf{d}(q)$ the *numerator* and *denominator* of $q$, respectively. Moreover, for a subset $A$ of positive rationals, we set $$\mathsf{n}(A) := \{\mathsf{n}(a) : a \in A \} \ \text{ and } \ \mathsf{d}(A) := \{\mathsf{d}(a) : a \in A\}. \vspace{2pt}$$ Although most of the definitions given in this section make sense in a much broader context, for the sake of simplicity we will present them in a particular setting which is enough for the treatment of Puiseux monoids. Every monoid here is tacitly assumed to be commutative, cancellative, and reduced (i.e., only the identity is a unit). As we shall be working in a commutative environment, unless otherwise specified we will use additive notation. Let $M$ be a monoid.
An element $a \in M \setminus \{0\}$ is called an *atom* (i.e., *irreducible*) provided that for all $x,y \in M$ the fact that $a = x+y$ implies that either $x=0$ or $y=0$. Let $\mathcal{A}(M)$ denote the set of all atoms of $M$.
If $A$ is a subset of $M$, then the minimal submonoid of $M$ containing $A$ is denoted by $\langle A \rangle$. If $M = \langle A \rangle$, then we say that $M$ is *generated* by $A$ or that $A$ is a *generating set* of $M$. The monoid $M$ is called *finitely generated* provided that it contains a finite generating set.
If $M = \langle \mathcal{A}(M) \rangle$, then we call the monoid $M$ atomic.
Clearly, every generating set of an atomic monoid $M$ contains $\mathcal{A}(M)$. In addition, it is not hard to prove that $M$ is atomic if and only if it contains exactly one minimal generating set, namely $\mathcal{A}(M)$; see, for instance, [@GH06b Proposition 1.1.7].
A *Puiseux monoid* is an additive submonoid of $(\mathbb{Q}, +)$ consisting of nonnegative rationals.
As we mentioned in the introduction, each additive submonoid of $(\mathbb{Q},+)$ that is not a group is isomorphic to a Puiseux monoid [@rG84 Theorem 2.9]. Puiseux monoids have a fascinating atomic structure. Some of them contain no atoms at all, as is the case for $\langle 1/2^n : n \in \mathbb{N} \rangle$, while others have sets of atoms that are dense in the nonnegative real line [@GGP16 Theorem 3.5]. The atomicity of members of the family of Puiseux monoids has only recently been studied (see [@fG17a] and [@GG17]).
There are three classes of Puiseux monoids we shall be studying, namely the classes of bounded, strongly bounded, and primary Puiseux monoids. Let $P$ be a Puiseux monoid. We say that $P$ is *bounded* (respectively, *strongly bounded*) if it can be generated by a set $A$ of rational numbers such that $A$ is bounded (respectively, $\mathsf{n}(A)$ is bounded). In addition, $P$ is [*primary*]{} if it can be generated by a subset of positive rationals whose denominators are pairwise distinct prime numbers.
Bounded and strongly bounded Puiseux monoids are not necessarily atomic; see, for example, $\langle 1/2^n : n \in \mathbb{N} \rangle$. However, it is not hard to verify that primary monoids are always atomic. Indeed, it was proved in [@GG17] that every submonoid of a primary Puiseux monoid is atomic. The class of atomic Puiseux monoids is plentiful as the following theorem indicates.
[@GG17 Theorem 3.10]\[theo:sufficient condition for atomicity\] Let $P$ be a Puiseux monoid. If $0$ is not a limit point of $P$, then $P$ is atomic.
Given a set $S$, it is not hard to verify that the formal sums of elements of $S$ (up to permutation) form a monoid, which is called the *free commutative monoid* on $S$. For an atomic monoid $M$, we let $\mathsf{Z}(M)$ denote the free commutative monoid on $\mathcal{A}(M)$. The elements of $\mathsf{Z}(M)$ have the form $a_1 + \dots + a_n$ for some $a_1, \dots, a_n \in \mathcal{A}(M)$ and are called *factorizations*. It follows immediately that the function $\phi \colon \mathsf{Z}(M) \to M$ sending the formal sum $a_1 + \dots + a_n$ to the corresponding sum $a_1 + \dots + a_n$ in $M$ is a monoid homomorphism.
The homomorphism $\phi$ given above is called the *factorization homomorphism* of $M$.
If $x \in M$, then the *set of factorizations* of $x$, denoted by $\mathsf{Z}_M(x)$, is defined to be the preimage of $x$ by $\phi$, i.e., $$\mathsf{Z}_M(x) = \phi^{-1}(x) \subseteq \mathsf{Z}(M).$$ It follows that $M$ is atomic if and only if $\mathsf{Z}_M(x)$ is nonempty for all $x \in M$. If $z = a_1 + \dots + a_n$ for some $a_1, \dots, a_n$, then $n$ is called the *length* of $z$ and is denoted by $|z|$. For $x \in M$, the *set of lengths* of $x$ is the set $$\mathsf{L}_M(x) := \{|z| : z \in \mathsf{Z}_M(x)\}.$$ We write $\mathsf{Z}(x)$ and $\mathsf{L}(x)$ for the respective sets $\mathsf{Z}_M(x)$ and $\mathsf{L}_M(x)$ when there is no risk of ambiguity. In addition, the collection of sets $$\mathcal{L}(M) := \{ \mathsf{L}(x): x \in M \}$$ is called the *system of sets of lengths* of $M$. Systems of sets of lengths of many families of atomic monoids have been the focus of a great deal of research during the last few decades (see, for example, [@ACHP07; @GS16; @wS09]).
We proceed to introduce unions of sets of lengths and local elasticities. Similar to the system of sets of lengths, the elasticity is another arithmetical invariant used to measure up to what extent factorizations in monoids (or domains) fail to be unique. The concept of elasticity was introduced by R. Valenza [@rV90] in the context of algebraic number theory. The *elasticity* $\rho(M)$ of an atomic monoid $M$ is given by $$\rho(M) = \sup \{\rho(x) : x \in M\}, \ \text{where} \ \rho(x) = \frac{\sup \mathsf{L}(x)}{\min \mathsf{L}(x)}.$$ For $n \in \mathbb{N}_0$, we define $\mathsf{L}^{-1}(n) := \{x \in M : n \in \mathsf{L}(x)\}$. Now, the *union of sets of lengths* of $M$ containing $n$ is defined to be $$\mathcal{U}_n(M) = \{|z| : z \in \mathsf{Z}(x) \ \text{for some} \ x \in \mathsf{L}^{-1}(n) \}.$$
The *$n$-th local elasticity* of $M$ is defined by $$\rho_n(M) = \sup \mathcal{U}_n(M).$$
A numerical semigroup is a cofinite additive submonoid of $(\mathbb{N}_0, +)$. It is well known that every numerical semigroup is finitely generated and, therefore, atomic [@GH06b Proposition 2.7.8(4)]. See [@GR09] for an introduction to numerical semigroups. For a numerical semigroup $N$ with minimal generating set $A$, it was proved in [@CHM06 Section 2] that the elasticity of $N$ is given by $\max A/ \min A$. On the other hand, it is not hard to verify that $\mathcal{U}_n(N)$ is bounded and, therefore, every local elasticity of $N$ is finite. In the next two sections, we will generalize this fact in two different ways to Puiseux monoids.
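To make these definitions concrete in the simplest setting, the following R sketch (written only for this exposition; all helper names are hypothetical) computes $\mathsf{L}(x)$, $\rho(x)$, and $\mathcal{U}_n$ for a numerical semigroup by brute-force recursion; it is practical only for small generators and small $n$.

```r
# L(x): all factorization lengths of x in the numerical semigroup <A>,
# where A is a vector of positive integer generators.
lengths_of <- function(x, A) {
  if (x == 0) return(0L)
  out <- integer(0)
  for (a in A[A <= x]) out <- c(out, lengths_of(x - a, A) + 1L)
  sort(unique(out))
}
elasticity <- function(x, A) { L <- lengths_of(x, A); max(L) / min(L) }
union_of_lengths <- function(n, A) {   # U_n: union of L(x) over all x with n in L(x)
  out <- integer(0)
  for (x in seq_len(n * max(A))) {     # a length-n factorization forces x <= n * max(A)
    L <- lengths_of(x, A)
    if (n %in% L) out <- union(out, L)
  }
  sort(out)
}

A <- c(3, 5)           # the numerical semigroup <3, 5>
elasticity(15, A)      # 15 = 3+3+3+3+3 = 5+5+5, so rho(15) = 5/3
union_of_lengths(3, A) # lengths attainable by elements with a length-3 factorization
```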
The General Case {#sec:general case}
================
We begin this section proposing a sufficient condition under which most of the local elasticities of an atomic Puiseux monoid have infinite cardinality. On the other hand, we describe a subclass of Puiseux monoids (containing isomorphic copies of each numerical semigroup) whose local $k$-elasticities are finite.
If $P$ is a Puiseux monoid, then we say that $a_0 \in \mathcal{A}(P)$ is *stable* provided that the set $\{a \in \mathcal{A}(P) : \mathsf{n}(a) = \mathsf{n}(a_0)\}$ is infinite.
\[prop:union of sets of lengths: infinite case\] Let $P$ be an atomic Puiseux monoid. If $P$ contains a stable atom, then $\rho_k(P)$ is infinite for all sufficiently large $k$.
Suppose that for some $m \in \mathbb{N}$ the set $A := \{a \in \mathcal{A}(P) : \mathsf{n}(a) = m\}$ contains infinitely many elements. Let $\{a_n\}$ be an enumeration of the elements of $A$. Because the elements of $A$ have the same numerator, namely $m$, we can assume that the sequence $\{a_n\}$ is decreasing. Setting $d = \mathsf{d}(a_1)$, we can easily see that $d a_1 = m = \mathsf{d}(a_j) a_j$ for each $j \in \mathbb{N}$. Therefore $\mathsf{d}(a_j) \in \mathcal{U}_d(P)$ for each $j \in \mathbb{N}$. As $\mathsf{d}(A)$ is an infinite set so is $\mathcal{U}_d(P)$. The fact that $|\mathcal{U}_d(P)| = \infty$ immediately implies that $|\mathcal{U}_k(P)| = \infty$ for all $k \ge d$. Hence $\rho_k(P) = \sup \, \mathcal{U}_k(P) = \infty$ for every $k \ge d$.
Recall that a Puiseux monoid $P$ is strongly bounded if it can be generated by a set of rationals $A$ whose numerator set $\mathsf{n}(A)$ is bounded. As a direct consequence of Proposition \[prop:union of sets of lengths: infinite case\] we obtain the following result.
If $P$ is a non-finitely generated strongly bounded atomic Puiseux monoid, then $\rho_k(P)$ is infinite for all $k$ sufficiently large.
In contrast to the previous proposition, the next result gives a condition under which Puiseux monoids have finite $k$-elasticity for each $k \in \mathbb{N}$.
\[prop:union of sets of lengths: finite case\] Let $P$ be a Puiseux monoid that does not contain $0$ as a limit point. If $P$ is bounded, then $\rho_k(P) < \infty$ for every $k \in \mathbb{N}$.
Because $0$ is not a limit point of $P$, it follows by Theorem \[theo:sufficient condition for atomicity\] that $P$ is atomic. As $P$ is a bounded Puiseux monoid, $\mathcal{A}(P)$ is a bounded set of rational numbers. Take $q, Q \in \mathbb{Q}$ such that $0 < q < a < Q$ for all $a \in \mathcal{A}(P)$. Now fix $k \in \mathbb{N}$, and suppose that $\ell \in \mathcal{U}_k(P)$. Then there exists $x \in \mathsf{L}^{-1}(k)$ such that $\ell \in \mathsf{L}(x)$. Because $x$ has a factorization of length $k$, it follows that $x < kQ$. Taking $a_1, \dots, a_\ell \in \mathcal{A}(P)$ such that $x = a_1 + \dots + a_\ell$, we find that $$q \ell < a_1 + \dots + a_\ell = x < kQ.$$ Therefore $\ell < kQ/q$. Because neither $q$ nor $Q$ depends on the choice of $x$, one obtains that $\mathcal{U}_k(P)$ is bounded from above by $kQ/q$. Hence $\rho_k(P) = \sup \mathcal{U}_k(P)$ is finite, and the proof follows.
With the following two examples, we shall verify that the conditions of containing a stable atom and not having $0$ as a limit point are not superfluous in Proposition \[prop:union of sets of lengths: infinite case\] and Proposition \[prop:union of sets of lengths: finite case\], respectively.
Let $\{p_n\}$ be a strictly increasing enumeration of the prime numbers, and consider the following Puiseux monoid: $$P = \langle A \rangle, \ \text{ where } \ A = \bigg\{ \frac{p_n - 1}{p_n} \ : \ n \in \mathbb{N} \bigg \}.$$ As the denominators of elements in $A$ are pairwise distinct primes, it immediately follows that $\mathcal{A}(P) = A$. Therefore $P$ is atomic. Clearly, $P$ does not contain stable atoms. Because $A$ is bounded so is $P$ (as a Puiseux monoid). On the other hand, $0$ is not a limit point of $P$. Thus, it follows by Proposition \[prop:union of sets of lengths: finite case\] that $\rho_k(P)$ is finite for every $k \in \mathbb{N}$. Notice also that
1. if $q \in P$ has at least two factorizations with no atoms in common, then $q \in \mathbb{N}$;
2. by Proposition \[prop:union of sets of lengths: finite case\], we have both a lower and an upper bound for any $q \in ~\mathsf{L}^{-1}(k)$.
Using the previous two observations, we have created an [R-script](https://www.github.com/marlycormar/find_u_k) that generates the sets $U_k$ for $k \in \{1, \dots, 15\}$. Each $U_k$ appears as the $k$-th column in Table \[fig:U\_k\].

Let $\{p_n\}$ be an enumeration of the prime numbers, and consider the Puiseux monoid $P = \big\langle 1/p_n : n \in \mathbb{N} \big\rangle$. It is not difficult to argue that $P$ is atomic with $\mathcal{A}(P) = \{1/p_n : n \in \mathbb{N}\}$. As $\mathcal{A}(P)$ is a bounded subset of positive rationals, the Puiseux monoid $P$ is bounded. Notice, however, that $0$ is a limit point of $P$. By Proposition \[prop:union of sets of lengths: infinite case\], it follows that the local elasticities $\rho_k(P)$ are infinite for all $k$ sufficiently large.
The condition of boundedness on Proposition \[prop:union of sets of lengths: finite case\] is also required, as shown by the following proposition.
\[prop:PM with all its local elasticities infinite\] There exist infinitely many non-isomorphic Puiseux monoids without $0$ as a limit point that have no finite local elasticities.
Let $\mathcal{P} = \{S_n : n \in \mathbb{N} \}$ be a family of disjoint infinite sets of odd prime numbers. For each set $S_n$, we will construct an atomic Puiseux monoid $M_n$. Then we will show that $M_i \cong M_j$ implies $i = j$.
Fix $j \in \mathbb{N}$ and take $p \in S_j$. To construct the Puiseux monoid $M_j$, let us inductively create a sequence $\{A_n\}_{n \in \mathbb{N}}$ of finite subsets of positive rationals with $A_1 \subsetneq A_2 \subsetneq \cdots$ such that, for each $k \in \mathbb{N}$, the following three conditions hold:
1. $\mathsf{d}(A_k)$ consists of odd prime numbers;
2. $\mathsf{d}(\max A_k) = \max \, \mathsf{d}(A_k)$;
3. $A_k$ minimally generates the Puiseux monoid $P_k = \langle A_k \rangle$.
Take $A_1 = \{1/p\}$, with $p$ an odd prime number, and assume we have already constructed the sets $A_1, \dots, A_n$ for some $n \in \mathbb{N}$ satisfying our three conditions. To construct $A_{n+1}$, we take $a = \max A_n$ and let $$b_1 = \frac{\mathsf{n}(a) \lfloor q/2 \rfloor}{q} \ \text{ and } \ b_2 = \frac{\mathsf{n}(a)\big(q - \lfloor q/2 \rfloor \big)}{q},$$ where $q$ is an odd prime in $S_j$ satisfying $q > \max \mathsf{d}(A_n)$ and $q \nmid \mathsf{n}(a)$. Using the fact that $q \ge 5$ and $\mathsf{d}(a) \ge 3$, one obtains that $$b_2 > b_1 = \frac{\lfloor q/2 \rfloor}{q} \mathsf{n}(a) > \frac 13 \mathsf{n}(a) \ge a.$$ Now set $A_{n+1} = A_n \cup \{b_1, b_2\}$. Notice that $b_1 + b_2 = \mathsf{n}(a)$. Clearly, $A_n \subsetneq A_{n+1}$, and condition (1) is an immediate consequence of our inductive construction. In addition, $$\mathsf{d}(\max A_{n+1}) = \mathsf{d}(b_2) = q = \max \mathsf{d}(A_{n+1}),$$ which is condition (2). Therefore it suffices to verify that $A_{n+1}$ minimally generates $P_{n+1} = \langle A_{n+1} \rangle$. Because both $b_1$ and $b_2$ are greater than every element in $A_n$, we only need to check that $b_1 \notin P_n$ and $b_2 \notin \langle A_n \cup \{b_1\} \rangle$. Let $d$ be the product of all the elements in $\mathsf{d}(A_n)$. Assuming that $b_1 = a_1 + \dots + a_r$ for some $a_1, \dots, a_r \in A_n$, and multiplying both sides of the same equality by $qd$, we would obtain that $q \mid \mathsf{n}(b_1)$, which contradicts that $q \nmid \mathsf{n}(a)$. Hence $b_1 \notin P_n$. Similarly, one finds that $b_2 \notin P_n$. Suppose, again by contradiction, that $b_2 \in \langle A_n \cup \{b_1\} \rangle$. Then there exist $a'_1 , \dots, a'_s \in A_n$ and $m \in \mathbb{N}$ such that $b_2 = mb_1 + a'_1 + \dots + a'_s$. Notice that $2b_1 = \mathsf{n}(a) (q-1)/q > b_2$, which implies that $m \le 1$. As $b_2 \notin P_n$, it follows that $m=1$. Then we can write $$\begin{aligned}
\label{eq:b_2}
\frac{\mathsf{n}(a)}q = b_2 - b_1 = \sum_{i=1}^s a'_i.
\end{aligned}$$ Once again, we can multiply the extreme parts of the equality (\[eq:b\_2\]) by $q \, \mathsf{d}(\{a'_1, \dots, a'_s\})$, to obtain that $q \mid \mathsf{n}(a)$, a contradiction. As a result, condition (3) follows.
Now set $M_j := \cup_{n \in \mathbb{N}} P_n$. As $P_1 \subsetneq P_2 \subsetneq \dots$, the set $M_j$ is, indeed, a Puiseux monoid. We can easily see that $M_j$ is generated by the set $A := \cup_{n \in \mathbb{N}} A_n$. Let us verify now that $\mathcal{A}(M_j) = A$. It is clear that $\mathcal{A}(M_j) \subseteq A$. To check the reverse inclusion, suppose that $a \in A$ is the sum of atoms $a_1, \dots, a_r \in \mathcal{A}(M_j)$. Take $t \in \mathbb{N}$ such that $a, a_1, \dots, a_r \in A_t$. Because $A_t$ minimally generates $P_t$ it follows that $r=1$ and $a = a_1$ and, therefore, that $a \in \mathcal{A}(M_j)$. Hence $\mathcal{A}(M_j) = A$, which implies that $M_j$ is an atomic monoid.
To see that $0$ is not a limit point of $M_j$, it is enough to observe that $\min \mathcal{A}(M_j) = 1/p$. It remains to show that $\rho_k(M_j) = \infty$ for $k \ge 2$. Set $a_n = \max A_n$. When constructing the sequence $\{A_n\}$, we observed that $\mathsf{n}(a_n) = b_{n_1} + b_{n_2}$, where $\{b_{n_1}, b_{n_2}\} = A_{n+1} \setminus A_n$. Because $\mathsf{n}(a_n) \in M_j$ and $$b_{n_1} + b_{n_2} = \mathsf{n}(a_n) = \mathsf{d}(a_n) a_n,$$ one has that the factorizations $z = b_{n_1} + b_{n_2}$ and $z' = \mathsf{d}(a_n) a_n$ are both in $\mathsf{Z}(\mathsf{n}(a_n))$. Since $|z| = 2$ and $|z'| = \mathsf{d}(a_n)$, it follows that $\mathsf{d}(a_n) \in \mathcal{U}_2(M_j)$. By condition (2) above, $\mathsf{d}(a_n) = \mathsf{d}(\max A_n) = \max \mathsf{d}(A_n)$. This implies that the set $\{\mathsf{d}(a_n) : n \in \mathbb{N} \}$ contains infinitely many elements. As $\{\mathsf{d}(a_n) : n \in \mathbb{N} \} \subseteq \mathcal{U}_2(M_j)$, we obtain that $\rho_2(M_j) = \infty$. Hence $\rho_k(M_j) = \infty$ for all $k \ge 2$.
We have just constructed an infinite family $\mathcal{F} := \{M_n : n \in \mathbb{N}\}$ of atomic Puiseux monoids with infinite $k$-elasticities. Let us show now that the monoids in $\mathcal{F}$ are pairwise non-isomorphic. To do this we use the fact that the only homomorphisms between Puiseux monoids are given by rational multiplication [@GGP16 Lemma 3.3]. Take $i,j \in \mathbb{N}$ such that $M_i \cong M_j$. Then there exists $r \in \mathbb{Q}$ such that $M_i = rM_j$. Let $m \in M_j$ such that $\mathsf{d}(m) = p$ and $p \nmid \mathsf{n}(r)$ for some prime $p$ in $S_j$. Since the element $rm \in M_i$ and $p \mid \mathsf{d}(rm)$, we must have that the prime $p$ belongs to $S_i$. Because the sets in $\mathcal{P}$ are pairwise disjoint, we conclude that $i = j$. This completes the proof.
Proposition \[prop:union of sets of lengths: infinite case\] (respectively, Proposition \[prop:union of sets of lengths: finite case\]) establishes sufficient conditions under which a Puiseux monoid has most of its local elasticities infinite (respectively, finite). In addition, we have verified that such conditions are not necessary. For the sake of completeness, we now exhibit a Puiseux monoid that does not satisfy the conditions of either of the propositions above and has no finite $k$-elasticity for any $k \ge 2$.
Consider the Puiseux monoid $$P = \left\langle \left(\frac{2}{3} \right)^n : \, n \in \mathbb{N} \right \rangle.$$ It was proved in [@GG17 Theorem 6.2] that $P$ is atomic and $\mathcal{A}(P) = \{(2/3)^n : n \in \mathbb{N}\}$. In addition, it is clear that $P$ is bounded, has $0$ as a limit point, and does not contain any stable atoms. So neither Proposition \[prop:union of sets of lengths: infinite case\] nor Proposition \[prop:union of sets of lengths: finite case\] applies to $P$. Now we argue that $\rho_k(P) = \infty$ for each $k \in \mathbb{N}$ such that $k \ge 2$.
Take $k \ge 2$ and set $x = k\frac{2}{3} \in P$. Notice that, by definition, $x \in \mathsf{L}^{-1}(k)$. We can conveniently rewrite $x$ as $$x = \big((k - 2) + 2\big) \frac{2}{3} = (k - 2)\frac{2}{3} + 3\cdot \left(\frac{2}{3}\right)^2\! \!,$$ which reveals that $z = (k-2)\frac 23 + 3(\frac 23)^2$ is a factorization of $x$ with $|z| = k+1$. Taking $k' = 3$ to play the role of $k$ and repeating this process as many times as needed, one can obtain factorizations of $x$ of lengths as large as one desires. The fact that $k$ was chosen arbitrarily implies now that $\rho_k(P) = \infty$ for each $k \ge 2$.\
The Primary Case {#sec:primary case}
================
Recall that a Puiseux monoid is said to be primary if it can be generated by a subset of rational numbers whose denominators are pairwise distinct primes. In Proposition \[prop:union of sets of lengths: finite case\], we established a sufficient condition on Puiseux monoids to ensure that all their local $k$-elasticities are finite. Here we restrict our study to the case of primary Puiseux monoids, providing two more sufficient conditions to guarantee the finiteness of all the local $k$-elasticities.
\[theo:sufficient conditions for finite elasticity in ppm\] For a primary Puiseux monoid $P$, the following two conditions hold.
1. If $0$ is not a limit point of $P$, then $\rho_k(P) < \infty$ for every $k \in \mathbb{N}$.
2. If $P$ is bounded and has no stable atoms, then $\rho_k(P) < \infty$ for every $k \in \mathbb{N}$.
Because every finitely generated Puiseux monoid is isomorphic to a numerical semigroup, and numerical semigroups have finite $k$-elasticities, we can assume, without loss of generality, that $P$ is not finitely generated.
To prove condition (1), suppose, by way of contradiction, that $\rho_k(P) = \infty$ for some $k \in \mathbb{N}$. Because $0$ is not a limit point of $P$ there exists $q \in \mathbb{Q}$ such that $0 < q < a$ for each $a \in \mathcal{A}(P)$. Let $$\ell = \min \{n \in \mathbb{N} : |\mathcal{U}_n(P)| = \infty\}.$$ Clearly, $\ell \ge 2$. Let $m = \max \, \mathcal{U}_{\ell - 1}(P)$. Now take $N \in \mathbb{N}$ sufficiently large such that, for each $a \in \mathcal{A}(P)$, $a > N$ implies that $\mathsf{d}(a) > \ell$. As $\mathcal{U}_\ell(P)$ contains infinitely many elements, there exists $k \in \mathcal{U}_\ell(P)$ such that $$k > \max\bigg\{\frac{\ell}{q}N, \, m + 1 \bigg\}.$$ In particular, $k-1$ is a strict upper bound for $\mathcal{U}_{\ell - 1}(P)$. As $k \in \mathcal{U}_\ell(P)$, we can choose an element $x \in P$ such that $\{k,\ell\} \subseteq \mathsf{L}(x)$. Take $A = \{a_1, \dots, a_k\} \subsetneq \mathcal{A}(P)$ and $B = \{b_1, \dots, b_\ell\} \subsetneq \mathcal{A}(P)$ with $$\begin{aligned}
\label{eq:different length factorizations 1}
a_1 + \dots + a_k = x = b_1 + \dots + b_\ell.
\end{aligned}$$ Observe that the sets $A$ and $B$ must be disjoint, for if $a \in A \cap B$, canceling $a$ in (\[eq:different length factorizations 1\]) would yield that $\{\ell - 1, k - 1\} \subseteq \mathsf{L}(x - a)$, which contradicts that $k-1$ is a strict upper bound for $\mathcal{U}_{\ell - 1}(P)$. Because $k > (\ell/q)N$, it follows that $$x > kq > \ell N.$$ Therefore $b := \max\{b_1, \dots, b_\ell\} > N$, which implies that $p = \mathsf{d}(b) > \ell$. Since $a_i \neq b$ for each $i = 1, \dots, k$, it follows that $p \notin \mathsf{d}(\{a_1, \dots, a_k\})$. We can assume, without loss of generality, that there exists $j \in \{1, \dots, \ell\}$ such that $b_i \neq b$ for every $i \le j$ and $b_{j+1} = \dots = b_\ell = b$. This allows us to rewrite (\[eq:different length factorizations 1\]) as $$\begin{aligned}
\label{eq:different length factorization 2}
(\ell - j)b = \sum_{i=1}^k a_i - \sum_{i=1}^j b_i.
\end{aligned}$$ After multiplying \[eq:different length factorization 2\] by $p$ times the product $d$ of all the denominators of the atoms $\{a_1, \dots, a_k, b_1, \dots, b_j\}$, we find that $p$ divides $d(\ell - j)b$. As $\gcd(p,d) = 1$ and $\ell - j < p$, it follows that $p$ divides $\mathsf{n}(b)$, which is a contradiction. Hence we conclude that $\rho_k(P) < \infty$ for every $k \in \mathbb{N}$.
Now we argue the second condition. Let $\{a_n\}$ be an enumeration of the elements of $\mathcal{A}(P)$ such that $\{\mathsf{d}(a_n)\}$ is an increasing sequence. Set $p_n = \mathsf{d}(a_n)$. Since $P$ has no stable atoms, $\lim \mathsf{n}(a_n) = \infty$. Let $B$ be an upper bound for $\mathcal{A}(P)$.
Suppose, by way of contradiction, that $\rho_n(P) = \infty$ for some $n \in \mathbb{N}$. Let $k$ be the smallest natural number such that $|\mathcal{U}_k(P)| = \infty$. Now take $\ell \in \mathcal{U}_k(P)$ large enough such that $\ell - 1 > \max \, \mathcal{U}_{k-1}(P)$ and for each $a \in \mathcal{A}(P)$ satisfying $a \le Bk/\ell$ we have that $\mathsf{n}(a) > Bk$. Take $x \in \mathsf{L}^{-1}(k)$ such that $a_1 + \dots + a_k = x = b_1 + \dots + b_\ell$ for some $a_1, \dots, a_k, b_1, \dots, b_\ell \in \mathcal{A}(P)$. Now set $b = \min\{b_1, \dots, b_\ell\}$. Then $$b \le \frac{b_1 + \dots + b_\ell}{\ell} = \frac{a_1 + \dots + a_k}{\ell} \le \frac{Bk}{\ell}.$$ Therefore $\mathsf{n}(b) > Bk$. We claim that $\mathsf{d}(b) \notin \mathsf{d}(\{a_1, \dots, a_k\})$. Suppose by contradiction that this is not the case. Then $b = a_i$ for some $i \in \{1, \dots, k\}$. This implies that $\{k - 1, \ell - 1 \} \subseteq \mathsf{L}(x-b)$, contradicting that $\ell - 1 > \max \, \mathcal{U}_{k-1}(P)$. Hence $\mathsf{d}(b) \notin \mathsf{d}(\{a_1, \dots, a_k\})$. Now assume, without loss of generality, that there exists $j \in \{1, \dots, \ell\}$ such that $b_i \neq b$ for each $i \le j$ and $b_{j+1} = \dots = b_\ell = b$. Write $$\begin{aligned}
\label{eq:different length factorization 3}
(\ell - j) b = \sum_{i=1}^k a_i - \sum_{i=1}^j b_i.
\end{aligned}$$ From (\[eq:different length factorization 3\]) we obtain that $p_\ell$ divides $\ell - j$. As a consequence, $$Bk \ge \sum_{i=1}^k a_i \ge \frac{\ell - j}{p_\ell} \mathsf{n}(b) \ge \mathsf{n}(b) > Bk,$$ which is a contradiction. Hence $\rho_k(P) < \infty$ for every $k \in \mathbb{N}$.
The sufficient condition in part (1) of Theorem \[theo:sufficient conditions for finite elasticity in ppm\] and the condition of boundedness in part (2) of Theorem \[theo:sufficient conditions for finite elasticity in ppm\] are not necessary, as the following example illustrates.
\
1. Consider the primary Puiseux monoid $$P = \left \langle \frac{n}{p_n} : n \in \mathbb{N} \right \rangle,$$ where $\{p_n\}$ is the increasing sequence of all prime numbers. Since $\mathcal{A}(P) = \{n/p_n : n \in \mathbb{N}\}$, it follows that $P$ does not contain any stable atom. It is well known that the sequence $\{n/p_n\}$ converges to $0$, which implies that $P$ is bounded. Hence part (2) of Theorem \[theo:sufficient conditions for finite elasticity in ppm\] ensures that $\rho_k(P) < \infty$ for all $k \in \mathbb{N}$. Thus, the reverse implication of part (1) in Theorem \[theo:sufficient conditions for finite elasticity in ppm\] does not hold.
2. Consider now the Puiseux monoid $$P = \left \langle \frac{p_n^2 - 1}{p_n} : n \in \mathbb{N} \right \rangle,$$ where $\{p_n\}$ is any enumeration of the prime numbers. Since $0$ is not a limit point of $P$, we can apply part (1) of Theorem \[theo:sufficient conditions for finite elasticity in ppm\] to conclude that $\rho_k(P) < \infty$ for all $k \in \mathbb{N}$. Notice, however, that $P$ is not bounded. Therefore, the boundedness in part (2) of Theorem \[theo:sufficient conditions for finite elasticity in ppm\] is not a necessary condition.
Acknowledgments
===============
I would like to thank Salvatore Tringali for proposing some of the questions motivating this project and Felix Gotti for many enlightening conversations about Puiseux monoids. Finally, I would like to thank the anonymous referee, whose helpful suggestions led to an improvement of the final version of this paper.
[20]{}
J. Amos, S. T. Chapman, N. Hine, and J. Paixao: *Sets of lengths do not characterize numerical monoids*, Integers [**7**]{} (2007) A50.
L. Carlitz: *A characterization of algebraic number fields with class number two*, Proc. Amer. Math. Soc. [**11**]{} (1960) 391–392.
S. T. Chapman, F. Gotti, and M. Gotti: *How do elements really factor in $\mathbb{Z}[\sqrt{-5}]$?* \[arXiv:1711.10842\]
S. T. Chapman, M. T. Holden, and T. A. Moore: *Full Elasticity In Atomic Monoids And Integral Domains*, Rocky Mountain J. Math. [**36**]{} (2006) 1437–1455.
P. A. García-Sánchez and J. C. Rosales: *Numerical Semigroups*, Developments in Mathematics Vol. 20, Springer-Verlag, New York, 2009.
A. Geroldinger and F. Halter-Koch: *Non-unique Factorizations: Algebraic, Combinatorial and Analytic Theory*, Pure and Applied Mathematics Vol. 278, Chapman & Hall/CRC, Boca Raton, 2006.
A. Geroldinger and W. Schmid: *The system of sets of lengths in Krull monoids under set addition*, Rev. Mat. Iberoam. [**32**]{} (2016) 571–588.
R. Gilmer: *Commutative Semigroup Rings*, Chicago Lectures in Mathematics, The University of Chicago Press, London, 1984.
F. Gotti: *On the atomic structure of Puiseux monoids*, J. Algebra Appl. [**16**]{} (2017) 20pp. \[arXiv:1607.01731v2\]
F. Gotti and M. Gotti: *Atomicity and boundedness of monotone Puiseux monoids*, Semigroup Forum [**95**]{} (2017) 1–17. \[arXiv:1608.04044\]
F. Gotti, M. Gotti, and H. Polo: *Three families of dense Puiseux monoids*. \[arXiv:1701.00058\]
W. A. Schmid: *A realization theorem for sets of lengths*, J. Number Theory [**129**]{} (2009), 990–999.
R. J. Valenza: *Elasticity of factorization in number fields*, J. Number Theory [**36**]{} (1990), 212–218.
S. T. Chapman and W. W. Smith: *Generalized sets of length*, J. Algebra [**200**]{} (1998), 449–471.

D. E. Rush: *An arithmetic characterization of algebraic number fields with a given class group*, Math. Proc. Cambridge Phil. Soc. [**94**]{} (1983), 23–28.
---
abstract: 'Thousands of risk variants underlying complex phenotypes (quantitative traits and diseases) have been identified in genome-wide association studies (GWAS). However, there are still two major challenges towards deepening our understanding of the genetic architectures of complex phenotypes. First, the majority of GWAS hits are in the non-coding region and their biological interpretation is still unclear. Second, accumulating evidence from GWAS suggests the polygenicity of complex traits, i.e., a complex trait is often affected by many variants with small or moderate effects, whereas a large proportion of risk variants with small effects remains unknown. The availability of functional annotation data enables us to address the above challenges. In this study, we propose a latent sparse mixed model (LSMM) to integrate functional annotations with GWAS data. Not only does it increase statistical power of the identification of risk variants, but also offers more biological insights by detecting relevant functional annotations. To allow LSMM scalable to millions of variants and hundreds of functional annotations, we developed an efficient variational expectation-maximization (EM) algorithm for model parameter estimation and statistical inference. We first conducted comprehensive simulation studies to evaluate the performance of LSMM. Then we applied it to analyze 30 GWAS of complex phenotypes integrated with 9 genic category annotations and 127 tissue-specific functional annotations from the Roadmap project. The results demonstrate that our method possesses more statistical power over conventional methods, and can help researchers achieve deeper understanding of genetic architecture of these complex phenotypes. The LSMM software is available at https://github.com/mingjingsi/LSMM.'
author:
- |
Jingsi Ming$^{1}$, Mingwei Dai$^{2,5}$, Mingxuan Cai$^{1}$,\
Xiang Wan$^{3}$, Jin Liu$^{4}$[^1] and Can Yang$^{5*}$
bibliography:
- 'referenceLSMM.bib'
title: 'LSMM: A statistical approach to integrating functional annotations with genome-wide association studies'
---
$^{1}$Department of Mathematics, Hong Kong Baptist University, Hong Kong\
$^{2}$School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, China\
$^{3}$Department of Computer Science, Hong Kong Baptist University, Hong Kong\
$^{4}$Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore\
$^{5}$Department of Mathematics, The Hong Kong University of Science and Technology, Hong Kong
Introduction
============
Since the success of the first GWAS on age-related macular degeneration [@klein2015complement], more than 40,000 single-nucleotide polymorphisms (SNPs) have been reported in about 3,100 GWAS at the genome-wide significance level (see GWAS Catalog http://www.ebi.ac.uk/gwas/) [@welter2014nhgri]. Despite these fruitful discoveries, the emerging evidence from GWAS presents great challenges towards deeper understanding of the genetic architectures of complex phenotypes. First, more than 85$\%$ genome-wide significant hits are located in the non-coding region [@welter2014nhgri] and thus their functional roles are still largely elusive. Second, complex phenotypes are often highly polygenic, i.e., they are affected by a vast number of risk variants with individually small effects. For example, 70$\%$-80$\%$ of the variation in human height can be attributed to genetics [@visscher2008heritability]. However, @wood2014defining collected more than 250,000 samples and identified 697 variants at genome-wide significance level, and all these variants together can only explain 20$\%$ of heritability. A recent estimate [@boyle2017expanded] suggests that about 100,000 variants may be associated with human height. Given current sample sizes, a large proportion of risk variants underlying complex phenotypes remain unknown yet.
Fortunately, an increasing number of reports suggest that the functional importance of SNPs may not be equal [@schork2013all], which provides a direction to address the above challenges. On one hand, SNPs in or near genic regions can explain more heritability of complex phenotypes [@yang2011genome; @smith2011genome]. For example, the partition of genic category annotations for SNPs have revealed that SNPs in 5’ UTR, exon and 3’ UTR are significantly enriched across diverse complex traits [@schork2013all]. On the other hand, tissue-specific functional annotations can provide information that is complementary to genic category annotations, for dissecting genetic contribution to complex diseases in a tissue-specific manner. To name a few, genetic variants related to functions of immune cells are significantly enriched for immune diseases, such as rheumatoid arthritis, coeliac disease and type 1 diabetes; variants with liver functions are enriched for metabolic traits, such as LDL, HDL and total cholesterol; variants with pancreatic islet functions are enriched for fasting glucose [@kundaje2015integrative]. Additionally, SNPs in genes that are preferentially expressed in the central nervous system are significantly enriched in psychiatric disorders (e.g., schizophrenia and bipolar disorder) [@chung2014gpa].
A large amount of functional annotation data has become publicly available and the volume is still expanding. The Encyclopedia of DNA Elements (ENCODE) project [@encode2012integrated] has conducted more than 1,650 experiments on 147 cell lines to assess functional elements across the human genome, such as DNase I hypersensitive sites and transcription factor binding. The NIH Roadmap Epigenomics Mapping Consortium [@kundaje2015integrative] is generating high-quality genome-wide human epigenomic maps of histone modifications, chromatin accessibility, DNA methylation and mRNA expression across more than one hundred human cell types and tissues.
With the availability of rich functional annotations, we aim to (1) integrate genic category annotations and tissue-specific functional annotations with GWAS to increase the statistical power of identifying risk SNPs, and (2) detect relevant tissue-specific functional annotations among a large amount of available annotation data to obtain a more biologically insightful interpretation of GWAS results. Statistical methods to incorporate genic category annotations have been proposed, e.g., stratified FDR methods [@schork2013all], cmfdr [@zablocki2014covariate], GPA [@chung2014gpa] and EPS [@liu2016eps]. However, these methods were designed to handle only a small number of functional annotations and are not scalable to large-scale integrative analysis.
In this study, we propose a Latent Sparse Mixed Model (LSMM) to integrate genic category annotations and tissue-specific functional annotations with GWAS data. The “latent” statuses are used to connect the observed summary statistics from GWAS with functional annotations. “Mixed” models are designed to simultaneously consider both genic category and tissue-specific annotations, where genic category annotations are put into the design matrix of fixed effects, and tissue-specific annotations are encoded in the design matrix of random effects. We further impose a “sparse” structure on the random effects to adaptively select relevant tissue-specific annotations. We conducted comprehensive simulations to investigate the properties of LSMM and then applied LSMM to real data. We integrated summary statistics from 30 GWAS with 9 genic category annotations and 127 tissue-specific functional annotations from the Roadmap project. Compared with conventional methods, our method increases the statistical power in the identification of risk variants and in the detection of tissue-specific functional annotations, providing a deeper understanding of the genetic architecture of complex phenotypes.
Latent Sparse Mixed Model (LSMM)
================================
Model
-----
Suppose we have the summary statistics ($p$-values) of $M$ SNPs from GWAS. Consider the two-groups model [@efron2008microarrays], i.e., each SNP belongs to either the null or the non-null group. Let $\gamma_{j}$ be the latent variable indicating the membership of the $j$-th SNP, i.e., $\gamma_j = 0$ or $\gamma_j = 1$ indicates that the $j$-th SNP is from the null or the non-null group, respectively. We further denote the proportions of the null and non-null groups as $\pi_0$ and $\pi_1$, respectively. Then we model the observed $p$-values as [@chung2014gpa] $$p_{j}\sim\begin{cases}
U\left[0,1\right], & \gamma_{j}=0,\\
Beta\left(\alpha,1\right), & \gamma_{j}=1,
\end{cases}$$ where $U[0,1]$ denotes the uniform distribution on $[0,1]$ and $Beta(\alpha,1)$ is the beta distribution with parameters $(\alpha,1)$. We constrain $0<\alpha<1$ to model the fact that $p$-values from the non-null group tend to be closer to 0 than to 1.
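As a concrete illustration, the two-groups model can be simulated with a few lines of Python. The sketch below is not part of the LSMM software; the function name `sample_pvalues` and the values of $\alpha$ and $\pi_1$ are illustrative.

``` python
import numpy as np

def sample_pvalues(M, alpha=0.2, pi1=0.1, seed=0):
    """Draw M p-values from the two-groups model with illustrative alpha and pi_1."""
    rng = np.random.default_rng(seed)
    gamma = rng.binomial(1, pi1, size=M)                     # latent membership: 1 = non-null
    p = rng.uniform(0.0, 1.0, size=M)                        # null p-values ~ U[0, 1]
    p[gamma == 1] = rng.beta(alpha, 1.0, size=gamma.sum())   # non-null p-values ~ Beta(alpha, 1)
    return p, gamma
```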
Suppose that we have collected not only the $p$-values of $M$ SNPs from GWAS, but also functional annotations of these SNPs. To incorporate information from functional annotations for the prioritization of risk variants and the detection of tissue-specific functions for a complex phenotype, we consider the following latent sparse mixed model: $$\log\frac{\Pr\left(\gamma_{j}=1|\mathbf{Z}_{j},\mathbf{A}_{j}\right)}{\Pr\left(\gamma_{j}=0|\mathbf{Z}_{j},\mathbf{A}_{j}\right)}=\mathbf{Z}_{j}\mathbf{b}+\mathbf{A}_{j}\boldsymbol{\beta},\label{eq:logistic}$$ where $\mathbf{Z}\in\mathbb{R}^{M\times\left(L+1\right)}$ is the design matrix for fixed effects, comprised of an intercept and $L$ covariates, $\mathbf{b}\in\mathbb{R}^{L+1}$ is the vector of fixed effects, $\mathbf{A}\in\mathbb{R}^{M\times K}$ is the design matrix for random effects, $\boldsymbol{\beta}\in\mathbb{R}^{K}$ is the vector of random effects, and $K$ is the number of random effects. Both the $j$-th row of $\mathbf{Z}$ (i.e., $\mathbf{Z}_{j}$) and the $j$-th row of $\mathbf{A}$ (i.e., $\mathbf{A}_{j}$) correspond to the $j$-th SNP. Note that $\gamma_{j}$ is a latent variable in model (\[eq:logistic\]) but its corresponding $p_{j}$ is observed. This makes our model different from the standard generalized linear mixed model.
Now we partition functional annotations into two categories: genic category annotations and tissue-specific annotations. According to [@schork2013all], genomic regions such as exon, intron, 5’UTR and 3’UTR are considered as genic category annotations. For tissue-specific annotations, we used epigenetic markers (H3k4me1, H3k4me3, H3k36me3, H3k27me3, H3k9me3, H3k27ac, H3k9ac, and DNase I Hypersensitivity) of multiple tissues from the Roadmap project. As we are more interested in tissue-specific results, we put the genic category annotation data into $\mathbf{Z}$ and the tissue-specific annotation data into $\mathbf{A}$, where each column of $\mathbf{Z}$ corresponds to a genic functional category and each column of $\mathbf{A}$ corresponds to a tissue-specific functional category. In the simplest case, the entries in $\mathbf{Z}$ and $\mathbf{A}$ are binary. For example, $Z_{jl} = 1$ means that the $j$-th SNP has a function in the $l$-th genic category and $Z_{jl} = 0$ otherwise. Our model also allows the entries in $\mathbf{Z}$ and $\mathbf{A}$ to be continuous variables, e.g., a score $Z_{jl}$ between 0 and 1 can be used to indicate the degree to which the $j$-th SNP has a function in the $l$-th category: the closer to 1, the more likely it is to have a functional role. The entries in $\mathbf{A}$ are defined in the same way as those of $\mathbf{Z}$.
To adaptively select tissue-specific annotations, we assign a spike-slab prior on $\beta_{k}$: $$\beta_{k}\sim\begin{cases}
N\left(\beta_{k}|0,\sigma^{2}\right), & \eta_{k}=1,\\
\delta_{0}\left(\beta_{k}\right), & \eta_{k}=0,
\end{cases}\label{eq:betaprior}$$ where $N\left(\beta_{k}|0,\sigma^{2}\right)$ denotes the Gaussian distribution with mean $0$ and variance $\sigma^{2}$, $\delta_{0}$ denotes the Dirac delta function at zero, and $\eta_{k}=1$ or $\eta_{k}=0$ means that the $k$-th annotation is relevant or irrelevant to the given phenotype, respectively. Here $\eta_k$ is a Bernoulli variable that equals 1 with probability $\omega$: $$\eta_k \sim \omega^{\eta_k} (1-\omega)^{1-\eta_k },$$ where $\omega$ can be interpreted as the proportion of annotations relevant to this phenotype.
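For illustration, the spike-slab prior can be sampled as follows. This is a sketch with an illustrative helper name (`sample_spike_slab`) and illustrative values of $\omega$ and $\sigma^{2}$, not part of the LSMM implementation.

``` python
import numpy as np

def sample_spike_slab(K, omega=0.1, sigma2=1.0, seed=1):
    """Draw random-effect coefficients from the spike-slab prior (Eq. betaprior)."""
    rng = np.random.default_rng(seed)
    eta = rng.binomial(1, omega, size=K)            # eta_k = 1: the k-th annotation is relevant
    beta = np.zeros(K)                              # spike: beta_k = 0 when eta_k = 0
    beta[eta == 1] = rng.normal(0.0, np.sqrt(sigma2), size=eta.sum())  # slab: N(0, sigma2)
    return beta, eta
```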
Let $\boldsymbol{\theta}=\left\{ \alpha,\mathbf{b},\sigma^{2},\omega\right\}$ be the collection of model parameters. The logarithm of the marginal likelihood can be written as $$\log\Pr\left(\mathbf{p}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)=\log\sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int\Pr\left(\mathbf{p},\boldsymbol{\gamma},\boldsymbol{\beta},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)d\boldsymbol{\beta},\label{eq:LL}$$ where $$\Pr\left(\mathbf{p},\boldsymbol{\gamma},\boldsymbol{\beta},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)=\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right)\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\boldsymbol{\beta};\mathbf{b}\right)\Pr\left(\boldsymbol{\beta}|\boldsymbol{\eta};\sigma^{2}\right)\Pr\left(\boldsymbol{\eta}|\omega\right).$$ Our goal is to maximize the marginal likelihood to obtain the estimation $\hat{\boldsymbol{\theta}}$ of $\boldsymbol{\theta}$ and compute the posterior $$\Pr\left(\boldsymbol{\gamma},\boldsymbol{\beta},\boldsymbol{\eta}|\mathbf{p},\mathbf{Z},\mathbf{A};\hat{\boldsymbol{\theta}}\right)=\frac{\Pr\left(\mathbf{p},\boldsymbol{\gamma},\boldsymbol{\beta},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\hat{\boldsymbol{\theta}}\right)}{\Pr\left(\mathbf{p}|\mathbf{Z},\mathbf{A};\hat{\boldsymbol{\theta}}\right)}.\label{eq:posterior}$$ Then we can infer the risk SNPs and relevant tissue-specific functional annotations for this phenotype and calculate the false discovery rate.
Algorithm
---------
Exact evaluation of posterior (\[eq:posterior\]) is intractable. One difficulty is due to the sigmoid function resulting from the logistic model. The other comes from the spike-slab prior. To address this issue, we propose a variational EM algorithm for parameter estimation and posterior approximation.
Before starting the derivation of our algorithm, we first re-parametrize the spike-slab prior (\[eq:betaprior\]) by introducing a new Gaussian variable $\tilde{\beta}_{k}\sim N\left(0,\sigma^{2}\right)$, so that the product $\eta_{k}\tilde{\beta}_{k}$ has the same distribution as $\beta_{k}$ in model (\[eq:betaprior\]). Thus model (\[eq:logistic\]) can be written as $$\log\frac{\Pr\left(\gamma_{j}=1|\mathbf{Z}_{j},\mathbf{A}_{j}\right)}{\Pr\left(\gamma_{j}=0|\mathbf{Z}_{j},\mathbf{A}_{j}\right)}=\mathbf{Z}_{j}\mathbf{b}+\sum_{k=1}^{K}A_{jk}\beta_{k}=\mathbf{Z}_{j}\mathbf{b}+\sum_{k=1}^{K}A_{jk}\eta_{k}\tilde{\beta}_{k}.$$ Hence the complete-data likelihood $\Pr\left(\mathbf{p},\boldsymbol{\gamma},\boldsymbol{\beta},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)$ can be re-written as $$\Pr\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)=\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right)\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right),$$ where $$\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right) = \prod_{j=1}^{M}\Pr\left(p_{j}|\gamma_{j};\alpha\right)=\prod_{j=1}^{M}\left(\alpha p_{j}^{\alpha-1}\right)^{\gamma_{j}},$$ $$\begin{aligned}
& \Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)\nonumber
= \prod_{j=1}^{M}\Pr\left(\gamma_{j}|\mathbf{Z}_{j},\mathbf{A}_{j},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)\nonumber \\
= & \prod_{j=1}^{M}e^{\gamma_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)}S\left(-\mathbf{Z}_{j}\mathbf{b}-\sum_{k=1}^{K}A_{jk}\eta_{k}\tilde{\beta}_{k}\right),\label{eq:gammaprior}\end{aligned}$$ $$\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right)=\Pr\left(\tilde{\boldsymbol{\beta}}|\sigma^{2}\right)\Pr\left(\boldsymbol{\eta}|\omega\right)=\prod_{k=1}^{K}N\left(\tilde{\beta}_{k}|0,\sigma^{2}\right)\omega^{\eta_{k}}\left(1-\omega\right)^{1-\eta_{k}},\label{eq:betaetaprior}$$ where $S\left(\cdot\right)$ is the sigmoid function and $S\left(x\right)=\left(1+e^{-x}\right)^{-1}$. With this reparameterization, we get rid of the Dirac delta function.
Due to the intractability caused by the sigmoid function inside the integration in (\[eq:LL\]), we consider the JJ bound [@jaakkola2000bayesian]: $$S\left(x\right)\ge S\left(\xi\right)\exp\left\{ \left(x-\xi\right)/2-\lambda\left(\xi\right)\left(x^{2}-\xi^{2}\right)\right\} ,\label{eq:JJbound}$$ where $\lambda\left(\xi\right)=\frac{1}{2\xi}\left[S\left(\xi\right)-\frac{1}{2}\right]$ and the right-hand side of inequality (\[eq:JJbound\]) is the JJ bound. Clearly, the JJ bound is proportional to the exponential of a quadratic form in $x$. Applying this bound to (\[eq:gammaprior\]), we can obtain a tractable lower bound of $\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)$, denoted as $h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\boldsymbol{\xi}\right)$, where $\boldsymbol{\xi}\in\mathbb{R}^{M}$ is the vector of variational parameters. Let $\boldsymbol{\Theta}=\left\{\alpha,\mathbf{b},\boldsymbol{\xi},\sigma^{2},\omega\right\}$. The lower bound of the complete-data likelihood is defined as $$f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)=\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right)h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\boldsymbol{\xi}\right)\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right).$$
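The following short numerical check (a sketch, not part of the LSMM implementation) illustrates that the JJ bound lies below the sigmoid everywhere and is tight at $x=\xi$:

``` python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lam(xi):
    """lambda(xi) = (S(xi) - 1/2) / (2 xi) as defined above."""
    return (sigmoid(xi) - 0.5) / (2.0 * xi)

def jj_bound(x, xi):
    """Right-hand side of the JJ bound for variational parameter xi."""
    return sigmoid(xi) * np.exp((x - xi) / 2.0 - lam(xi) * (x**2 - xi**2))

x = np.linspace(-6.0, 6.0, 241)
for xi in (0.5, 1.0, 3.0):
    assert np.all(jj_bound(x, xi) <= sigmoid(x) + 1e-12)   # the bound holds everywhere
    assert np.isclose(jj_bound(xi, xi), sigmoid(xi))        # and is tight at x = xi
```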
Next we derive the variational EM algorithm. Let $q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)$ be an approximation of the posterior $\Pr\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{p},\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)$. We can obtain a lower bound of the logarithm of the marginal likelihood $$\begin{aligned}
& \log\Pr\left(\mathbf{p}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)\nonumber \\
= & \log\sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int\Pr\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)d\tilde{\boldsymbol{\beta}}\nonumber \\
\ge & \log\sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)d\tilde{\boldsymbol{\beta}}\nonumber \\
\ge & \sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)\log\frac{f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)}{q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)}d\tilde{\boldsymbol{\beta}}\nonumber \\
= & \mathbf{E}_{q}\left[\log f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)-\log q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)\right]\nonumber \\
\triangleq & L\left(q\right),\label{eq:Lq}\end{aligned}$$ where $L(q)$ is the lower bound. The first inequality is based on the JJ bound and the second follows from Jensen’s inequality. To make it feasible to evaluate the lower bound, we adopt the mean-field approximation and assume that $q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)$ can be factorized as $$q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)=\left(\prod_{k=1}^{K}q\left(\tilde{\beta}_{k},\eta_{k}\right)\right)\left(\prod_{j=1}^{M}q\left(\gamma_{j}\right)\right),$$ where $q\left(\tilde{\beta}_{k},\eta_{k}\right)=q\left(\tilde{\beta}_{k}|\eta_{k}\right)q\left(\eta_{k}\right)$. It turns out that $q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)$ can be obtained analytically and thus the lower bound $L(q)$ can be evaluated exactly. By setting the derivatives of $L(q)$ with respect to the parameters in $\boldsymbol{\Theta}$ to zero, we obtain the updating equations for parameter estimation. The detailed derivation of the algorithm can be found in Section 1 of the Supplementary Document.
We note that LSMM covers two special cases: (1) the two-groups model only (denoted as TGM), when all the coefficients in $\mathbf{b}$ (except the intercept term) and $\boldsymbol{\beta}$ are zero; and (2) the two-groups model plus the fixed-effects model only (denoted as LFM, for latent fixed-effect model), when all coefficients in $\boldsymbol{\beta}$ are zero. This motivates us to develop a four-stage algorithm based on warm starts. More specifically, in the first stage, we run an EM algorithm to obtain the two parameters ($\alpha$ and the proportion of the non-null group $\pi_{1}$) in the TGM. Then we use the estimated parameters as the starting point to run the second-stage variational EM algorithm to fit the LFM and obtain the parameters $\alpha$ and $\mathbf{b}$ and the posterior probability of $\boldsymbol{\gamma}$. In the third stage, we treat the obtained posterior as the value of $\boldsymbol{\gamma}$ and fit the logistic sparse mixed model to obtain the required initial values for the parameters in the next stage. Finally, in the fourth stage we run the above variational EM algorithm with the parameters obtained at the second and third stages until convergence. Since all the iterations are built upon the framework of the EM algorithm, the lower bound is guaranteed to increase at each iteration. The details of the algorithm design are provided in Section 2 of the Supplementary Document.
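As an illustration of the first stage, a standard EM algorithm for the two-groups model can be written in a few lines. The sketch below is not the authors' implementation; the helper name `fit_tgm` and the starting values are illustrative.

``` python
import numpy as np

def fit_tgm(pvals, n_iter=500, tol=1e-8):
    """Stage-1 EM for the two-groups model: estimate alpha and pi_1 from p-values only."""
    alpha, pi1 = 0.5, 0.1                               # illustrative starting values
    logp = np.log(pvals)
    for _ in range(n_iter):
        lik1 = pi1 * alpha * pvals ** (alpha - 1.0)     # non-null component density
        r = lik1 / (lik1 + (1.0 - pi1))                 # E-step: Pr(gamma_j = 1 | p_j)
        alpha_new = -r.sum() / (r * logp).sum()         # M-step for alpha
        pi1_new = r.mean()                              # M-step for pi_1
        converged = abs(alpha_new - alpha) + abs(pi1_new - pi1) < tol
        alpha, pi1 = alpha_new, pi1_new
        if converged:
            break
    return alpha, pi1
```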
Identification of risk SNPs and Detection of relevant tissue-specific functional annotations
--------------------------------------------------------------------------------------------
After the convergence of the variational EM algorithm, the approximated posterior of latent variables $\boldsymbol{\gamma}$ and $\boldsymbol{\eta}$ can be obtained. Using this information, we are able to prioritize risk SNPs and relevant tissue-specific functional annotations.
Risk SNPs are identified based on $q\left(\gamma_{j}=1\right)$, an approximation of the posterior probability that the $j$-th SNP is associated with this phenotype. Accordingly, we can calculate the approximated local false discovery rate $fdr_{j}=1-q\left(\gamma_{j}=1\right)$. To control the global false discovery rate (FDR), we sort SNPs by $fdr$ from the smallest to the largest and regard the $j$-th re-ordered SNP as a risk SNP if $$FDR_{(j)}=\frac{\sum_{i=1}^{j}fdr_{(i)}}{j}\le\tau,$$ where $fdr_{(i)}$ is the $i$-th ordered $fdr$, $FDR_{(j)}$ is the corresponding global FDR, and $\tau$ is the threshold of the global FDR. In simulations, we chose $\tau=0.1$.
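A minimal sketch of this thresholding rule (assuming the local fdr values have already been computed; the function name is illustrative) is:

``` python
import numpy as np

def select_by_global_fdr(local_fdr, tau=0.1):
    """Sort local fdr, form the running mean FDR_(j), and select SNPs with FDR_(j) <= tau."""
    order = np.argsort(local_fdr)                                   # smallest local fdr first
    running_fdr = np.cumsum(local_fdr[order]) / np.arange(1, local_fdr.size + 1)
    n_selected = np.searchsorted(running_fdr, tau, side="right")    # running_fdr is non-decreasing
    selected = np.zeros(local_fdr.size, dtype=bool)
    selected[order[:n_selected]] = True
    return selected
```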
Relevant tissue-specific functional annotations are inferred from $q\left(\eta_{k}=1\right)$, an approximation of the posterior probability that annotation $k$ is relevant to this phenotype. Similarly, we can calculate the approximated local false discovery rate $fdr_{k}=1-q\left(\eta_{k}=1\right)$ and convert it into the global false discovery rate. We can either control the local false discovery rate (e.g., $fdr_{k}\le0.1$) or global false discovery rate with $\tau=0.1$.
Results
=======
Simulation
----------
We conducted simulations to evaluate the performance of the proposed LSMM. The simulation data were generated as follows. The numbers of SNPs, fixed effects (genic category annotations) and random effects (tissue-specific functional annotations) were set to $M=100,000$, $L=10$ and $K=500$, respectively. The entries in the design matrices, $Z_{jl}$ and $A_{jk}$, were generated from $Bernoulli\left(0.1\right)$, $j=1,...,M$, $l=1,...,L$ and $k=1,...,K$. Given the proportion of relevant tissue-specific functional annotations $\omega$, $\eta_{k}$ was drawn from $Bernoulli\left(\omega\right)$ and the corresponding nonzero entries of the random effects $\boldsymbol{\beta}$ were simulated from $N\left(0,1\right)$. The first entry of the coefficients of fixed effects $\mathbf{b}$, i.e., the intercept in the logistic model, was fixed at $-2$, and the other entries were generated from $N\left(0,1\right)$ and then kept fixed across replications. After that, we simulated $\gamma_{j}$ from a Bernoulli distribution with probability $S\left(\mathbf{Z}_{j}\mathbf{b}+\mathbf{A}_{j}\boldsymbol{\beta}\right)$, and then generated $p_{j}$ from $U\left[0,1\right]$ if $\gamma_{j}=0$ and from $Beta\left(\alpha,1\right)$ otherwise.
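This simulation design can be reproduced with a short script. The sketch below follows the text (with the intercept column added to $\mathbf{Z}$) but is illustrative rather than the authors' simulation code.

``` python
import numpy as np

def simulate_lsmm_data(M=100_000, L=10, K=500, omega=0.1, alpha=0.2, seed=0):
    """Generate p-values, latent statuses and annotation matrices following the design above."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([np.ones(M), rng.binomial(1, 0.1, size=(M, L))])  # intercept + genic annotations
    A = rng.binomial(1, 0.1, size=(M, K))                                  # tissue-specific annotations
    b = np.concatenate(([-2.0], rng.normal(0.0, 1.0, L)))                  # intercept fixed at -2
    eta = rng.binomial(1, omega, size=K)
    beta = np.where(eta == 1, rng.normal(0.0, 1.0, K), 0.0)                # spike-slab random effects
    gamma = rng.binomial(1, 1.0 / (1.0 + np.exp(-(Z @ b + A @ beta))))     # latent risk status
    p = np.where(gamma == 1, rng.beta(alpha, 1.0, M), rng.uniform(0.0, 1.0, M))
    return p, gamma, Z, A, b, beta, eta
```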
We first evaluated the performance of LSMM in the identification of risk SNPs. We compared LSMM with its two special cases, LFM (with fixed effects only) and TGM (without fixed effects or random effects). After prioritizing the risk SNPs using these methods, we compared their empirical FDR, power, area under the receiver operating characteristic curve (AUC) and partial AUC. We varied the proportion of relevant random effects $\omega$ at $\left\{ 0,0.01,0.05,0.1,0.2\right\} $. Figure \[fig:risk SNP\] shows the performance of these three models with $\alpha=0.2$ and $K=500$ (results for other scenarios are shown in Figures S2-S9 in the Supplementary Document). As shown in Figure \[fig:risk SNP\], the empirical FDRs are indeed controlled at the nominal level ($\tau=0.1$) for all these models. For TGM and LFM, the power increases as the proportion of relevant functional annotations $\omega$ increases. This is because a larger $\omega$ results in a larger proportion of SNPs in the non-null group. However, the AUC and partial AUC of LFM slightly decrease because the estimates of the fixed effects under LFM become less accurate when the impact of functional annotations becomes larger. LSMM can adaptively select relevant functional annotations to improve its performance. As expected, it outperforms both TGM and LFM in terms of power, AUC and partial AUC. One may wonder what happens if we do not perform variable selection and simply treat all covariates as fixed effects. We evaluated this approach and found that, without variable selection, the FDR is inflated when the GWAS signal is relatively weak (see Figure S10 in the Supplementary Document). In addition, LSMM assumes independence among SNPs, which greatly facilitates its computation and inference. We evaluated the impact of this assumption on LSMM; the details of the simulations are given in Section 3 of the Supplementary Document. Because GWAS only aim to identify the local genomic region in LD with true risk genetic variants, it is reasonable not to count identified SNPs as false positives if they are in the flanking regions of the true risk SNPs. In this sense, the results (Figure S1 in the Supplementary Document) suggest that LSMM provides satisfactory FDR control.
![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for identification of risk SNPs with $\alpha=0.2$ and $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:risk SNP\]](plots//SNP_FDR_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for identification of risk SNPs with $\alpha=0.2$ and $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:risk SNP\]](plots//SNP_power_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for identification of risk SNPs with $\alpha=0.2$ and $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:risk SNP\]](plots//SNP_AUC_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for identification of risk SNPs with $\alpha=0.2$ and $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:risk SNP\]](plots/SNP_pAUC_K500_alpha2 "fig:"){width="23.00000%"}\
Next we evaluated the performance of LSMM in the detection of relevant tissue-specific functional annotations in terms of the FDR, power, AUC and partial AUC. We varied the proportion of relevant tissue-specific functional annotations $\omega$ at $\left\{ 0.01,0.05,0.1,0.2\right\} $. The results with $\alpha=0.2$ and $K=500$ are given in Figure \[fig:relevant annotation\] (results for other scenarios are shown in Figures S11-S18 in the Supplementary Document). The empirical FDR is conservatively controlled at 0.1. This is because a variational approach, namely the JJ bound and the mean-field approximation, is adopted to approximate the posterior. The performance of LSMM in the detection of relevant functional annotations depends on the signal strength of the GWAS data. When the signal of the GWAS data is relatively strong, i.e., $\alpha$ is relatively small, LSMM detects relevant functional annotations very well, as indicated by the power, AUC and partial AUC. We also conducted simulations to examine the role of adjusting covariates (i.e., genic category annotations) through fixed effects when detecting relevant tissue-specific annotations. We considered the case in which genic category annotations and some tissue-specific annotations are correlated and $\mathbf{b}$, the vector of coefficients corresponding to genic category annotations, is nonzero. Without adjusting for genic category annotations, some irrelevant tissue-specific annotations will be falsely included in the model due to their correlation with genic category annotations. To verify this, we simulated a case in which the 10 genic category annotations and the first 50 tissue-specific annotations are correlated, with the correlation coefficient varied at $\left\{ 0,0.2,0.4,0.6,0.8\right\}$, and the remaining annotations are generated independently. To simulate the design matrices for genic category and tissue-specific annotations, we first simulated $M$ samples from a multivariate normal distribution with the given correlation matrix among annotations and then applied a cutoff so that 10$\%$ of the entries would be 1 and the others 0. The results are shown in Figure S19 in the Supplementary Document. In the presence of correlation, as expected, a larger FDR in detecting relevant tissue-specific annotations is observed without adjusting for genic category annotations.
![FDR, power, AUC and partial AUC of LSMM for detection of relevant annotations with $\alpha=0.2$ and $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:relevant annotation\]](plots//Anno_FDR_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM for detection of relevant annotations with $\alpha=0.2$ and $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:relevant annotation\]](plots//Anno_power_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM for detection of relevant annotations with $\alpha=0.2$ and $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:relevant annotation\]](plots//Anno_AUC_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM for detection of relevant annotations with $\alpha=0.2$ and $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:relevant annotation\]](plots//Anno_pAUC_K500_alpha2 "fig:"){width="23.00000%"}\
Regarding parameter estimation, LSMM provides a satisfactory estimate of $\alpha$, the parameter in Beta distribution (See Figures S31-S33 in Supplementary Document). When the signal strength of GWAS data is not very weak, the estimated fixed effects $\mathbf{b}$ (Figures S34-S44 in Supplementary Document) and the proportion of non-zero random effects $\omega$ (Figure S45 in Supplementary Document) are relatively accurate.
The computational time of LSMM depends on the strength of GWAS signal, the number of SNPs and the number of random effects. The left panel of Figure \[fig:time\_simulation\] shows that the computational time is nearly linear with respect to $M$ and $K$ with $\alpha=0.2$. In the right panel, we fixed $M=100,000$ and varied $K$ and $\alpha$. When the GWAS signal is relatively weak, e.g., $\alpha=0.6$, the timings of LSMM remain the same for different scales of random effects. This is because LSMM adopts a warm-start strategy and its last two stages start from the estimates at the second stage (i.e., fixed effects only) and converge in a few iterations because the GWAS signal is too weak to provide information for updating the random effects.
![Computational time of LSMM. Left panel: We varied the number of SNPs $M$ and the number of random effects $K$, with $\alpha=0.2$. Right panel: We varied the number of random effects $K$ and the strength of GWAS signal $\alpha$ with $M=100,000$. The results are summarized from 10 replications.\[fig:time\_simulation\]](plots//time_KM "fig:"){width="30.00000%"} ![Computational time of LSMM. Left panel: We varied the number of SNPs $M$ and the number of random effects $K$, with $\alpha=0.2$. Right panel: We varied the number of random effects $K$ and the strength of GWAS signal $\alpha$ with $M=100,000$. The results are summarized from 10 replications.\[fig:time\_simulation\]](plots//time_alphaK "fig:"){width="30.00000%"}\
To test the robustness of LSMM, instead of using the generative model (\[eq:logistic\]), we conducted simulations based on a probit model: $$y_{j}=\mathbf{Z}_{j}\mathbf{b}+\mathbf{A}_{j}\boldsymbol{\beta}+e_{j},\label{eq:probit}$$ where $e_{j}\sim N\left(0,\sigma_{e}^{2}\right)$. We set $\gamma_{j}=1$ if $y_{j}>0$ and $\gamma_{j}=0$ if $y_{j}\le0$. The first entry of the coefficients of fixed effects $\mathbf{b}$, i.e., the intercept term, was fixed at $-1$ and the other entries were generated from $N\left(0,1\right)$ and kept fixed across replications. We set $\alpha=0.2$ and varied the signal-noise ratio $r=\left\{ 4:1,1:1,1:4\right\}$. Figure \[fig:probit SNP\] shows the performance in the identification of risk SNPs when $K=500$. We note that the FDRs are all well controlled at the nominal level and LSMM shows the best performance in power, AUC and partial AUC. The advantages of LSMM over LFM and TGM are more apparent as the signal-noise ratio increases. The performance of LSMM in the detection of relevant functional annotations is provided in Figure \[fig:probit annotation\]. Results for other scenarios are shown in Figures S20-S23 in the Supplementary Document. Furthermore, we simulated the $p$-values in the non-null group from distributions other than the Beta distribution. The experimental results indicate that the FDR of LSMM is still well controlled at the nominal level, suggesting the robustness of LSMM and its potentially wide usage (results are shown in Figures S24-S26 in the Supplementary Document).
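A corresponding sketch of the probit-style generative model (\[eq:probit\]) used in this robustness check, reusing $\mathbf{Z}$, $\mathbf{A}$, $\mathbf{b}$ and $\boldsymbol{\beta}$ from the simulation sketch above (with the intercept adjusted as described in the text), is:

``` python
import numpy as np

def simulate_gamma_probit(Z, A, b, beta, sigma_e, seed=2):
    """Latent statuses from the probit model: gamma_j = 1 if Z_j b + A_j beta + e_j > 0."""
    rng = np.random.default_rng(seed)
    y = Z @ b + A @ beta + rng.normal(0.0, sigma_e, size=Z.shape[0])
    return (y > 0).astype(int)
```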
![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for identification of risk SNPs based on probit model (\[eq:probit\]) with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:probit SNP\]](plots//probit_SNP_FDR_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for identification of risk SNPs based on probit model (\[eq:probit\]) with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:probit SNP\]](plots//probit_SNP_power_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for identification of risk SNPs based on probit model (\[eq:probit\]) with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:probit SNP\]](plots//probit_SNP_AUC_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for identification of risk SNPs based on probit model (\[eq:probit\]) with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:probit SNP\]](plots//probit_SNP_pAUC_K500_alpha2 "fig:"){width="23.00000%"}\
![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for detection of relevant annotations based on probit model with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:probit annotation\]](plots//probit_Anno_FDR_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for detection of relevant annotations based on probit model with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:probit annotation\]](plots//probit_Anno_power_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for detection of relevant annotations based on probit model with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:probit annotation\]](plots//probit_Anno_AUC_K500_alpha2 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LFM and TGM for detection of relevant annotations based on probit model with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:probit annotation\]](plots//probit_Anno_pAUC_K500_alpha2 "fig:"){width="23.00000%"}\
We compared LSMM with GPA in the identification of risk variants and the detection of tissue-specific annotations. As LSMM can integrate both genic category and functional annotations, we compared GPA with LSMM without fixed effects (integrating functional annotations only) for a fair comparison. In terms of the model setup, one main difference between GPA and LSMM is that GPA assumes conditional independence among annotations, whereas LSMM does not make this assumption. To check the influence of correlated functional annotations, we simulated a case in which the first 10 functional annotations were correlated and all the others were independent. We set $\alpha=0.2$ and varied the correlation among annotations $corr$ at $\left\{ 0,0.2,0.4,0.6,0.8\right\}$. To simulate the design matrices for the correlated functional annotations, we first simulated $M$ samples from a multivariate normal distribution with the given correlation matrix among annotations and then applied a cutoff so that 10$\%$ of the entries would be 1 and the others 0. Figure \[fig:GPA\] shows the results with $K=500$ (results for other scenarios are shown in Figures S27-S29 in the Supplementary Document). We observe that the empirical FDRs of LSMM and LSMM without fixed effects are indeed controlled at 0.1, but the FDR of GPA inflates considerably when annotations are correlated. As the FDR of GPA is not controlled, the power of GPA is not comparable to that of the other two models. According to the AUC and partial AUC, the performance of GPA becomes worse as the correlation among annotations increases, while the performance of LSMM remains stable and outstanding. This implies that LSMM is able to identify truly relevant annotations among correlated misleading ones. We also conducted simulations to compare LSMM with cmfdr, a fully Bayesian approach that incorporates genic category annotations in GWAS using an MCMC sampling algorithm. We found that cmfdr is not able to handle a large number of annotations and that the MCMC sampling algorithm is very time-consuming. The result is shown in Figure S30 in the Supplementary Document. Besides the computational time, we observed that the empirical FDR of cmfdr is slightly inflated and its performance in the prioritization of risk variants is inferior to LSMM in terms of AUC and partial AUC.
![FDR, power, AUC and partial AUC of LSMM, LSMM without fixed effects and GPA for identification of risk SNPs with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:GPA\]](plots//corr_FDR_K500 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LSMM without fixed effects and GPA for identification of risk SNPs with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:GPA\]](plots//corr_power_K500 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LSMM without fixed effects and GPA for identification of risk SNPs with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:GPA\]](plots//corr_AUC_K500 "fig:"){width="23.00000%"} ![FDR, power, AUC and partial AUC of LSMM, LSMM without fixed effects and GPA for identification of risk SNPs with $K=500$. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:GPA\]](plots//corr_pAUC_K500 "fig:"){width="23.00000%"}\
Real Data Analysis
------------------
We applied LSMM to analyze 30 GWAS of complex phenotypes. The sources of the 30 GWAS are given in Table S2 in the Supplementary Document. We used ANNOVAR [@wang2010annovar] to provide the genic category annotations: upstream, downstream, exonic, intergenic, intronic, ncRNA\_exonic, ncRNA\_intronic, UTR3 and UTR5, where ncRNA means that the variant overlaps a transcript without coding annotation in the gene definition. We obtained 127 tissue-specific functional annotations from GenoSkylinePlus [@lu2017systematic] (http://genocanyon.med.yale.edu/GenoSkyline). To avoid unusually large GWAS signals in the MHC region (Chromosome 6, 25Mb - 35Mb), we excluded the SNPs in this region.
We compared the numbers of risk SNPs identified by TGM, LFM and LSMM for the 30 GWAS. Using LSMM as a reference, we calculated the ratio of the number of risk SNPs each method identified to that from LSMM under the FDR thresholds $\tau=0.05$ and $\tau=0.1$. The results are shown in Figure \[fig:NoSNPs\]. For detecting the relevant tissue-specific functional annotations, we controlled the local fdr at $0.1$. Figure \[fig:heatmap\] shows the approximated posterior probabilities for annotations and phenotypes, where the darkness of a red entry indicates the level of relevance between the corresponding tissue-specific functional annotation and the phenotype: the darker, the more relevant.
![The number of risk variants identified by TGM, LFM and LSMM for 30 GWAS, under the same level of global FDR control (0.05 and 0.1). For visualization purpose, these numbers are normalized by dividing the corresponding number of variants identified by LSMM.\[fig:NoSNPs\]](plots/No_SNP "fig:"){width="60.00000%"}\
{width="76.00000%"}
Figure \[fig:NoSNPs\] shows that LSMM can identify more risk variants than TGM and LFM under the same level of FDR control. The differences between TGM and LFM are due to the impact of genic category annotations, and the differences between LFM and LSMM can be attributed to tissue-specific functional annotations. For HIV and bipolar disorder, a clear improvement in the identification of risk SNPs can be found from TGM to LFM, reflecting a large enrichment of genic category annotations. The contribution of tissue-specific annotations can be clearly seen from the improvement from LFM to LSMM in several GWAS analyses, such as multiple sclerosis and coronary artery disease (CAD). For multiple sclerosis, genic category annotations do not show large contributions; however, the contributions of tissue-specific annotations are substantial. As shown in Figure \[fig:heatmap\], its relevant tissue-specific annotations are related to the immune system, GM12878 lymphoblastoid cells and primary B cells from peripheral blood. For CAD, enrichment of both genic category and tissue-specific annotations is estimated, and its relevant cells are from a few different tissues, including blood, heart, lung and skin (see Figure \[fig:heatmap\]). As CAD is a cardiovascular disease, it is reasonable to discover the relevance of these cells to CAD, and @fernandez2016immune has shown its relationship with the immune system. The annotations in lung and skin we detected may provide some new insights about the disease.
Among the 30 GWAS, we analyzed four GWAS of schizophrenia with different sample sizes: Schizophrenia1 (9,379 cases and 7,736 controls), Schizophrenia2 (9,394 cases and 12,462 controls), Schizophrenia3 (13,833 cases and 18,310 controls) and Schizophrenia4 (36,989 cases and 113,075 controls). The detailed results are summarized in Table S3 in the Supplementary Document. The Manhattan plots using TGM and LSMM are provided in Figure S46 in the Supplementary Document. Clearly, LSMM steadily improves over TGM and LFM in the analysis of schizophrenia, a highly polygenic trait, at different sample sizes. In particular, for Schizophrenia3, LSMM identified 1,492 risk variants which could not be identified by TGM. Interestingly, the majority of them (872 variants) can be re-identified in Schizophrenia4 using TGM. This indicates that LSMM has better power in prioritizing risk variants than TGM. For Schizophrenia4, four tissue-specific functional annotations are detected. In our analysis, genetic variants related to functions of both brain cells (brain angular gyrus) and blood cells (K562 leukemia cells) are detected to be relevant. This evidence not only connects schizophrenia with the brain, but also suggests a biological link between schizophrenia and the immune system [@ripke2014biological]. We also analyzed two GWAS of years of education (Years of Education 1 and 2). Compared with Years of Education 1, the GWAS data set for Years of Education 2 is based on a larger sample size, and thus it enables LSMM to detect relevant functional annotations in the brain and immune system. Our results are consistent with @finucane2015partitioning.
More findings about the relevance between tissue-specific annotations and GWAS are shown in Figure \[fig:heatmap\]. Some are concordant with previous GWAS analyses. For example, we detect the functional annotation in liver to be relevant to the lipid-related phenotypes, including low-density lipoprotein, high-density lipoprotein, triglycerides and total cholesterol. Similar functional enrichment has been found by @kundaje2015integrative [@finucane2015partitioning] and @lu2017systematic. For height, more than 40 tissue-specific functional annotations are detected to be relevant using LSMM, which reflects its highly polygenic genetic architecture. These relevant annotations include cells in bone, vascular tissue and skeletal muscle, which were also shown to be significantly enriched for height by @finucane2015partitioning. Recent research has linked some neurodegenerative diseases, which were believed to be more related to the brain and nervous system, to the immune system, such as Alzheimer’s disease [@sims2017rare] and Parkinson’s disease [@Sulzer2017]. For Alzheimer’s disease, similar results have been found using LSMM: the relevant functional annotations are from blood cells, including monocytes-CD14+ and K562 leukemia cells. For autoimmune diseases including Crohn’s disease, ulcerative colitis, inflammatory bowel disease, rheumatoid arthritis, lupus, menopause, multiple sclerosis and primary biliary cirrhosis, the detected relevant functional annotations are mainly from the immune system and largely overlap. Our results also provide genomic-level support for previous medical literature, such as the relevance between the spleen and inflammatory bowel disease [@muller1993splenic], and between the liver and menopause [@mucci2001age]. The results also provide several new insights. Lipid-related phenotypes including high-density lipoprotein and total cholesterol are also relevant to functional annotations in the immune system and brain. Additionally, annotations in the immune system are considered relevant to blood-related phenotypes including red cell count, mean cell haemoglobin and mean cell volume. The foreskin fibroblast primary cells in skin are relevant to ulcerative colitis, four lipid-related phenotypes and red cell count.
Regarding the computational time, LSMM takes less than six minutes to handle each of the 30 GWAS datasets. We also recorded timings of cmfdr as a comparison. As cmfdr is not scalable to a large number of covariates, we only integrated the 9 genic category annotations in cmfdr. The MCMC algorithm was suggested [@zablocki2014covariate] to run with 5,000 burn-in and 20,000 main iterations. According to our estimates, cmfdr takes more than ten days for most phenotypes. The detailed timing results are shown in Figure S47 of Supplementary Document.
If we do not adjust for the genic category annotations, more tissue-specific functional annotations are detected as relevant (results are shown in Figure S48 in the Supplementary Document). This indicates that LSMM can adjust for covariate effects and provide a more reliable identification of relevant functional annotations.
Conclusion
==========
We have presented a statistical approach, LSMM, to integrate genic category annotations and a large number of tissue-specific functional annotations with GWAS data. LSMM can not only improve the statistical power in the identification of risk SNPs, but also infer tissue-specific functional annotations relevant to the phenotype, offering new insights into the genetic architecture of complex traits and diseases. Through comprehensive simulations and real data analysis of 30 GWAS, LSMM is shown to be statistically efficient and computationally scalable. As more annotation data become publicly available, we believe LSMM will be widely useful for the integrative analysis of genomic data.
Acknowledgement {#acknowledgement .unnumbered}
===============
This work was supported in part by grant NO. 61501389 from the National Natural Science Foundation of China, grants NO. 22302815, NO. 12316116 and NO. 12301417 from the Hong Kong Research Grant Council, startup grant R9405 from The Hong Kong University of Science and Technology, Duke-NUS Medical School WBS: R-913-200-098-263, and MOE2016-T2-2-029 from the Ministry of Education, Singapore.
Supplementary Document {#supplementary-document .unnumbered}
======================
The variational EM algorithm
============================
E-step {#e-step .unnumbered}
------
Let $\boldsymbol{\theta}=\left\{ \alpha,\mathbf{b},\sigma^{2},\omega\right\} $ be the collection of model parameters. The logarithm of the marginal likelihood is
$$\log\Pr\left(\mathbf{p}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)=\log\sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int\Pr\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)d\tilde{\boldsymbol{\beta}}.$$
Using the sigmoid function $S\left(x\right)=\frac{1}{1+e^{-x}}$, the complete-data likelihood can be written as
$$\Pr\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)=\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right)\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right),$$ where
$$\begin{aligned}
\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right) & = & \prod_{j=1}^{M}\Pr\left(p_{j}|\gamma_{j};\alpha\right)=\prod_{j=1}^{M}\left(\alpha p_{j}^{\alpha-1}\right)^{\gamma_{j}},\\
\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right) & = & \prod_{j=1}^{M}\Pr\left(\gamma_{j}|\mathbf{Z}_{j},\mathbf{A}_{j},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)\\
& = & \prod_{j=1}^{M}e^{\gamma_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)}S\left(-\mathbf{Z}_{j}\mathbf{b}-\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right),\\
\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right) & = & \prod_{k=1}^{K}\Pr\left(\tilde{\beta}_{k},\eta_{k}|\sigma^{2},\omega\right)=\prod_{k=1}^{K}N\left(\tilde{\beta}_{k}|0,\sigma^{2}\right)\omega^{\eta_{k}}\left(1-\omega\right)^{1-\eta_{k}}.\end{aligned}$$
We can use the JJ bound [@jaakkola2000bayesian] to obtain a tractable lower bound of $\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)$, which is denoted by $h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\boldsymbol{\xi}\right)$:
$$\begin{aligned}
& & \Pr\left(\gamma_{j}|\mathbf{Z}_{j},\mathbf{A}_{j},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)\\
& = & e^{\gamma_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)}S\left(-\mathbf{Z}_{j}\mathbf{b}-\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)\\
& \ge & e^{\gamma_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)}S\left(\xi_{j}\right)\exp\left(-\lambda\left(\xi_{j}\right)\left(\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)^{2}-\xi_{j}^{2}\right)-\frac{\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}+\xi_{j}}{2}\right)\\
& = & h\left(\gamma_{j}|\mathbf{Z}_{j},\mathbf{A}_{j},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\xi_{j}\right),\end{aligned}$$
where
$$\lambda\left(\xi_{j}\right)=\frac{1}{2\xi_{j}}\left(S\left(\xi_{j}\right)-\frac{1}{2}\right).$$
Let $\boldsymbol{\Theta}=\left\{ \alpha,\mathbf{b},\boldsymbol{\xi},\sigma^{2},\omega\right\} $. Then
$$f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)=\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right)h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\boldsymbol{\xi}\right)\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right)$$ is a lower bound of the complete-data likelihood.
Next, let $q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)$ be an approximation of the posterior $\Pr\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{p},\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)$. Then we can obtain a lower bound of the logarithm of the marginal likelihood:
$$\begin{aligned}
& & \log\Pr\left(\mathbf{p}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)\\
& = & \log\sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int\Pr\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)d\tilde{\boldsymbol{\beta}}\\
& \ge & \log\sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)d\tilde{\boldsymbol{\beta}}\\
& \ge & \sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\intop q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)\log\frac{f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)}{q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)}d\tilde{\boldsymbol{\beta}}\\
& = & \mathbf{E}_{q}\left[\log f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)-\log q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)\right]\\
& \triangleq & L\left(q\right),\end{aligned}$$
where $L(q)$ is the lower bound; the first inequality is based on the JJ bound and the second follows from Jensen’s inequality. The logarithms of the three factors of $f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)$ are
$$\begin{aligned}
& & \log\Pr\left(\mathbf{p}|\boldsymbol{\gamma},\alpha\right)\\
& = & \sum_{j=1}^{M}\left(\gamma_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}\right)\right),\\
\\
& & \log h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta},\mathbf{b},\boldsymbol{\xi}\right)\\
& = & \sum_{j=1}^{M}\left(\gamma_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)+\log S\left(\xi_{j}\right)\right)\\
& + & \sum_{j=1}^{M}\left(-\lambda\left(\xi_{j}\right)\left(\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)^{2}-\xi_{j}^{2}\right)-\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}+\xi_{j}\right)/2\right),\\
\\
& & \log\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right)\\
& = & -\frac{1}{2\sigma^{2}}\sum_{k=1}^{K}\tilde{\beta}_{k}^{2}-\frac{K}{2}\log\left(2\pi\sigma^{2}\right)+\sum_{k=1}^{K}\eta_{k}\log\omega+\sum_{k=1}^{K}\left(1-\eta_{k}\right)\log\left(1-\omega\right).\end{aligned}$$
To make it feasible to evaluate the lower bound, we assume that $q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)$ can be factorized as
$$q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)=\left(\prod_{k=1}^{K}q\left(\tilde{\beta}_{k},\eta_{k}\right)\right)\left(\prod_{j=1}^{M}q\left(\gamma_{j}\right)\right),$$ where $q\left(\tilde{\beta}_{k},\eta_{k}\right)=q\left(\tilde{\beta}_{k}|\eta_{k}\right)q\left(\eta_{k}\right)$, $q\left(\gamma_{j}=1\right)=\pi_{j}$ and $q\left(\eta_{k}=1\right)=\omega_{k}$.
We can obtain an approximation according to the mean-field method:
$$\begin{aligned}
& & \log q\left(\tilde{\beta}_{i},\eta_{i}\right)\\
& = & \mathbf{E}_{k\ne i}\mathbf{E}_{\boldsymbol{\gamma}}\left[\log f\left(\mathbf{p},\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\Theta}\right)\right]\\
& = & \left(-\frac{1}{2\sigma^{2}}-\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)A_{ji}^{2}\eta_{i}^{2}\right)\tilde{\beta}_{i}^{2}\\
& & +\sum_{j=1}^{M}\left(\left(\pi_{j}-\frac{1}{2}-2\lambda\left(\xi_{j}\right)\mathbf{Z}_{j}\mathbf{b}\right)A_{ji}-2\lambda\left(\xi_{j}\right)A_{ji}\sum_{k\ne i}A_{jk}\mathbf{E}_{k}\left[\eta_{k}\tilde{\beta}_{k}\right]\right)\eta_{i}\tilde{\beta}_{i}\\
& & +\eta_{i}\log\omega+\left(1-\eta_{i}\right)\log\left(1-\omega\right)+const,\end{aligned}$$
where the expectation is taken under the distribution $q\left(\boldsymbol{\gamma}\right)$ and $q\left(\tilde{\beta}_{-i},\eta_{-i}\right)=\prod_{k\ne i}q\left(\tilde{\beta}_{k},\eta_{k}\right)$.
When $\eta_{i}=1$, we have
$$\begin{aligned}
& & \log q\left(\tilde{\beta}_{i}|\eta_{i}=1\right)\\
& = & \left(-\frac{1}{2\sigma^{2}}-\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)A_{ji}^{2}\right)\tilde{\beta}_{i}^{2}\\
& & +\sum_{j=1}^{M}\left(\left(\pi_{j}-\frac{1}{2}-2\lambda\left(\xi_{j}\right)\mathbf{Z}_{j}\mathbf{b}\right)A_{ji}-2\lambda\left(\xi_{j}\right)A_{ji}\sum_{k\ne i}A_{jk}\mathbf{E}_{k}\left[\eta_{k}\tilde{\beta}_{k}\right]\right)\tilde{\beta}_{i}+const,\end{aligned}$$
where $\mathbf{E}_{k}$ denotes the expectation under $q\left(\tilde{\beta}_{k},\eta_{k}\right)$, and the constant does not depend on $\tilde{\beta}_{i}$. Because $\log q\left(\tilde{\beta}_{i}|\eta_{i}=1\right)$ is a quadratic form,
$$q\left(\tilde{\beta}_{i}|\eta_{i}=1\right)=N\left(\mu_{i},s_{i}^{2}\right),$$ where
$$\begin{aligned}
\mu_{i} & = & s_{i}^{2}\sum_{j=1}^{M}\left(\pi_{j}-\frac{1}{2}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k\ne i}A_{jk}\mathbf{E}_{k}\left[\eta_{k}\tilde{\beta}_{k}\right]\right)A_{ji}\right),\\
s_{i}^{2} & = & \frac{\sigma^{2}}{1+2\sigma^{2}\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)A_{ji}^{2}}.\end{aligned}$$
When $\eta_{i}=0$, we have
$$\begin{aligned}
\log q\left(\tilde{\beta}_{i}|\eta_{i}=0\right) & = & -\frac{1}{2\sigma^{2}}\tilde{\beta}_{i}^{2}+const.\end{aligned}$$
So
$$\begin{aligned}
q\left(\tilde{\beta}_{i}|\eta_{i}=0\right) & = & N\left(0,\sigma^{2}\right).\end{aligned}$$
Therefore we have
$$q\left(\tilde{\beta}_{i},\eta_{i}\right)=\left[\omega_{i}N\left(\mu_{i},s_{i}^{2}\right)\right]^{\eta_{i}}\left[\left(1-\omega_{i}\right)N\left(0,\sigma^{2}\right)\right]^{1-\eta_{i}}.$$
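For concreteness, the slab-component update above can be computed with the following sketch (in Python; `A_i` denotes the $i$-th column of $\mathbf{A}$, `lam_xi` the vector of $\lambda\left(\xi_{j}\right)$ values, and `resid` the vector with entries $\pi_{j}-\frac{1}{2}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k\ne i}A_{jk}\omega_{k}\mu_{k}\right)$; these names are illustrative):

``` python
import numpy as np

def update_beta_i(A_i, lam_xi, resid, sigma2):
    """Coordinate-wise E-step update for (mu_i, s_i^2) given the current residual vector."""
    s2_i = sigma2 / (1.0 + 2.0 * sigma2 * (lam_xi * A_i**2).sum())
    mu_i = s2_i * (resid * A_i).sum()
    return mu_i, s2_i
```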
Now we evaluate the variational lower bound $L\left(q\right)$.
$$\begin{aligned}
& & \mathbf{E}_{q}\left[\log\Pr\left(\mathbf{p}|\boldsymbol{\gamma},\alpha\right)\right]\\
& = & \sum_{j=1}^{M}\left(\pi_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}\right)\right),\\
\\
& & \mathbf{E}_{q}\left[\log h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta},\mathbf{b},\boldsymbol{\xi}\right)\right]\\
& = & \sum_{j=1}^{M}\left(\pi_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)+\log S\left(\xi_{j}\right)-\lambda\left(\xi_{j}\right)\left(\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)^{2}-\xi_{j}^{2}\right)\right)\\
& & +\sum_{j=1}^{M}\left(-\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}+\xi_{j}\right)/2+\lambda\left(\xi_{j}\right)\sum_{k}A_{jk}^{2}\omega_{k}^{2}\mu_{k}^{2}-\lambda\left(\xi_{j}\right)\sum_{k}A_{jk}^{2}\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)\right),\\
\\
& & \mathbf{E}_{q}\left[\log\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right)\right]\\
& = & -\frac{1}{2\sigma^{2}}\sum_{k=1}^{K}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)+\left(1-\omega_{k}\right)\sigma^{2}\right)-\frac{K}{2}\log\left(2\pi\sigma^{2}\right)+\sum_{k=1}^{K}\omega_{k}\log\omega+\sum_{k=1}^{K}\left(1-\omega_{k}\right)\log\left(1-\omega\right),\\
\\
& & -\mathbf{E}_{q}\left[\log q\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)\right]\\
& = & \sum_{k=1}^{K}\left(\frac{1}{2}\omega_{k}\left(\log s_{k}^{2}-\log\sigma^{2}\right)-\omega_{k}\log\omega_{k}-\left(1-\omega_{k}\right)\log\left(1-\omega_{k}\right)\right)+\frac{K}{2}\log\sigma^{2}+\frac{K}{2}+\frac{K}{2}\log\left(2\pi\right)\\
& & -\sum_{j=1}^{M}\left(\pi_{j}\log\pi_{j}+\left(1-\pi_{j}\right)\log\left(1-\pi_{j}\right)\right).\end{aligned}$$
We set the partial derivatives of the lower bound $L(q)$ with respect to $\omega_{k}$, $\pi_{j}$ and $\xi_{j}$ to 0 to obtain the updates of these variational parameters:
$$\begin{aligned}
\omega_{k} & = & \frac{1}{1+\exp\left(-u_{k}\right)},\textrm{ where }u_{k}=\log\frac{\omega}{1-\omega}+\frac{1}{2}\log\frac{s_{k}^{2}}{\sigma^{2}}+\frac{\mu_{k}^{2}}{2s_{k}^{2}},\\
\pi_{j} & = & \frac{1}{1+\exp\left(-v_{j}\right)},\textrm{ where }v_{j}=\log\alpha+\left(\alpha-1\right)\log p_{j}+\mathbf{Z}_{j}\mathbf{b}+\sum_{k=1}^{K}A_{jk}\omega_{k}\mu_{k},\\
\xi_{j}^{2} & = & \left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)^{2}+\sum_{k}A_{jk}^{2}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}^{2}\mu_{k}^{2}\right).\end{aligned}$$
The variational lower bound $L(q)$ is
$$\begin{aligned}
& & L(q)\\
& = & \sum_{j=1}^{M}\left(\pi_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}\right)\right)\\
& & +\sum_{j=1}^{M}\left(\pi_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)+\log S\left(\xi_{j}\right)-\lambda\left(\xi_{j}\right)\left(\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)^{2}-\xi_{j}^{2}\right)\right)\\
& & +\sum_{j=1}^{M}\left(-\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}+\xi_{j}\right)/2+\lambda\left(\xi_{j}\right)\sum_{k}A_{jk}^{2}\omega_{k}^{2}\mu_{k}^{2}-\lambda\left(\xi_{j}\right)\sum_{k}A_{jk}^{2}\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)\right)\\
& & -\frac{1}{2\sigma^{2}}\sum_{k=1}^{K}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}\sigma^{2}\right)+\sum_{k=1}^{K}\omega_{k}\log\omega+\sum_{k=1}^{K}\left(1-\omega_{k}\right)\log\left(1-\omega\right)\\
& & +\sum_{k=1}^{K}\left(\frac{1}{2}\omega_{k}\left(\log s_{k}^{2}-\log\sigma^{2}\right)-\omega_{k}\log\omega_{k}-\left(1-\omega_{k}\right)\log\left(1-\omega_{k}\right)\right)\\
& & -\sum_{j=1}^{M}\left(\pi_{j}\log\pi_{j}+\left(1-\pi_{j}\right)\log\left(1-\pi_{j}\right)\right).\end{aligned}$$
[M-step]{} {#m-step .unnumbered}
----------
Now we update $\alpha$, $\mathbf{b}$, $\sigma^{2}$, $\omega$. We set the partial derivatives of $L(q)$ with respect to these parameters to 0 and get
$$\begin{aligned}
\alpha & = & -\frac{\sum_{j=1}^{M}\pi_{j}}{\sum_{j=1}^{M}\pi_{j}\log p_{j}},\\
\sigma^{2} & = & \frac{\sum_{k=1}^{K}\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)}{\sum_{k=1}^{K}\omega_{k}},\\
\omega & = & \frac{1}{K}\sum_{k=1}^{K}\omega_{k},\end{aligned}$$
and use Newton’s method to update $\mathbf{b}$:
$$\mathbf{b}=\mathbf{b}_{old}-\mathbf{H}^{-1}\mathbf{g},$$ where
$$\begin{aligned}
\mathbf{g} & = & -\sum_{j=1}^{M}\mathbf{Z}_{j}^{T}\left(\pi_{j}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)-\frac{1}{2}\right),\\
\mathbf{H} & = & 2\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)\mathbf{Z}_{j}^{T}\mathbf{Z}_{j}.\end{aligned}$$
[Implementation]{} {#implementation .unnumbered}
------------------
- Initialize $\alpha$, $\sigma^{2}$, $\omega$, $\mathbf{b}$, $\left\{ \omega_{k},\mu_{k}\right\} _{k=1,...K}$, $\left\{ \xi_{j},\pi_{j}\right\} _{j=1,...,M}$. Let $\tilde{y}=\sum_{k}A_{jk}\omega_{k}\mu_{k}$.
- E-step: For $i=1,...,K$, first obtain $\tilde{y}_{i}=\tilde{y}-A_{ji}\omega_{i}\mu_{i}$, and then update $\mu_{i},s_{i}^{2},\omega_{i}$ and $\tilde{y}$ as follows
$$\begin{aligned}
s_{i}^{2} & = & \frac{\sigma^{2}}{1+2\sigma^{2}\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)A_{ji}^{2}},\\
\mu_{i} & = & s_{i}^{2}\sum_{j=1}^{M}\left(\left(\pi_{j}-\frac{1}{2}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}_{i}\right)\right)A_{ji}\right),\\
\omega_{i} & = & \frac{1}{1+\exp\left(-u_{i}\right)},\textrm{ where }u_{i}=\log\frac{\omega}{1-\omega}+\frac{1}{2}\log\frac{s_{i}^{2}}{\sigma^{2}}+\frac{\mu_{i}^{2}}{2s_{i}^{2}},\\
\tilde{y} & = & \tilde{y}_{i}+A_{ji}\omega_{i}\mu_{i}.
\end{aligned}$$
Then for $j=1,...,M$, update $\pi_{j},\xi_{j}$ as follows
$$\begin{aligned}
\pi_{j} & = & \frac{1}{1+\exp\left(-v_{j}\right)},\textrm{ where }v_{j}=\log\alpha+\left(\alpha-1\right)\log p_{j}+\mathbf{Z}_{j}\mathbf{b}+\tilde{y},\\
\xi_{j}^{2} & = & \left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}\right)^{2}+\sum_{k}A_{jk}^{2}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}^{2}\mu_{k}^{2}\right).
\end{aligned}$$
Calculate $L\left(q\right)$:
$$\begin{aligned}
& & L(q)\\
& = & \sum_{j=1}^{M}\pi_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}\right)-\sum_{j=1}^{M}\left(\pi_{j}\log\pi_{j}+\left(1-\pi_{j}\right)\log\left(1-\pi_{j}\right)\right)\\
& & +\sum_{j=1}^{M}\left(\pi_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}\right)+\log S\left(\xi_{j}\right)-\frac{\mathbf{Z}_{j}\mathbf{b}+\tilde{y}+\xi_{j}}{2}\right)\\
& & -\frac{1}{2\sigma^{2}}\sum_{k=1}^{K}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}\sigma^{2}\right)+\sum_{k=1}^{K}\omega_{k}\log\omega+\sum_{k=1}^{K}\left(1-\omega_{k}\right)\log\left(1-\omega\right)\\
& & +\sum_{k=1}^{K}\left(\frac{1}{2}\omega_{k}\left(\log s_{k}^{2}-\log\sigma^{2}\right)-\omega_{k}\log\omega_{k}-\left(1-\omega_{k}\right)\log\left(1-\omega_{k}\right)\right).
\end{aligned}$$
- M-step
$$\begin{aligned}
\alpha & = & -\frac{\sum_{j=1}^{M}\pi_{j}}{\sum_{j=1}^{M}\pi_{j}\log p_{j}},\\
\sigma^{2} & = & \frac{\sum_{k=1}^{K}\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)}{\sum_{k=1}^{K}\omega_{k}},\\
\omega & = & \frac{1}{K}\sum_{k=1}^{K}\omega_{k},\\
\mathbf{g} & = & -\sum_{j=1}^{M}\mathbf{Z}_{j}^{T}\left(\pi_{j}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}\right)-\frac{1}{2}\right),\\
\mathbf{H} & = & 2\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)\mathbf{Z}_{j}^{T}\mathbf{Z}_{j},\\
\mathbf{b} & = & \mathbf{b}_{old}-\mathbf{H}^{-1}\mathbf{g}.
\end{aligned}$$
- Evaluate $L(q)$ to track the convergence of the algorithm. A vectorized sketch of one iteration of this loop is given below.
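For concreteness, the loop above can be written compactly with vectorized operations. The following is a minimal NumPy sketch of one E-step/M-step sweep under the notation of this section; the function and variable names are ours, convergence monitoring is omitted, and no guard is included for $p$-values equal to 0 or 1.

```python
import numpy as np

def lam(xi):
    """lambda(xi) = (S(xi) - 1/2) / (2 xi), with the limiting value 1/8 at xi = 0."""
    xi = np.asarray(xi, dtype=float)
    out = np.full_like(xi, 0.125)
    nz = xi != 0
    out[nz] = (1.0 / (1.0 + np.exp(-xi[nz])) - 0.5) / (2.0 * xi[nz])
    return out

def lsmm_iteration(p, Z, A, b, alpha, sigma2, w, mu, omega, pi, xi):
    """One E-step/M-step sweep.  p: (M,) p-values, Z: (M, L+1), A: (M, K),
    mu, omega: (K,) variational means / inclusion probabilities,
    pi: (M,) posterior association probabilities, xi: (M,) JJ parameters."""
    lam_xi = lam(xi)
    Zb = Z @ b
    ytilde = A @ (omega * mu)          # sum_k A_jk omega_k mu_k for every j

    # E-step: coordinate updates of (s_i^2, mu_i, omega_i)
    K = A.shape[1]
    s2 = np.empty(K)
    for i in range(K):
        y_i = ytilde - A[:, i] * omega[i] * mu[i]
        s2[i] = sigma2 / (1.0 + 2.0 * sigma2 * np.sum(lam_xi * A[:, i] ** 2))
        mu[i] = s2[i] * np.sum((pi - 0.5 - 2.0 * lam_xi * (Zb + y_i)) * A[:, i])
        u_i = np.log(w / (1.0 - w)) + 0.5 * np.log(s2[i] / sigma2) + mu[i] ** 2 / (2.0 * s2[i])
        omega[i] = 1.0 / (1.0 + np.exp(-u_i))
        ytilde = y_i + A[:, i] * omega[i] * mu[i]

    # E-step: updates of pi_j and xi_j
    v = np.log(alpha) + (alpha - 1.0) * np.log(p) + Zb + ytilde
    pi = 1.0 / (1.0 + np.exp(-v))
    xi = np.sqrt((Zb + ytilde) ** 2
                 + (A ** 2) @ (omega * (s2 + mu ** 2) - omega ** 2 * mu ** 2))

    # M-step: closed-form updates and one Newton step for b
    alpha = -np.sum(pi) / np.sum(pi * np.log(p))
    sigma2 = np.sum(omega * (s2 + mu ** 2)) / np.sum(omega)
    w = np.mean(omega)
    lam_xi = lam(xi)
    g = -Z.T @ (pi - 2.0 * lam_xi * (Zb + ytilde) - 0.5)
    H = 2.0 * (Z * lam_xi[:, None]).T @ Z
    b = b - np.linalg.solve(H, g)
    return b, alpha, sigma2, w, mu, omega, pi, xi
```

Each line maps onto the corresponding update equation above; the running vector `ytilde` plays the role of $\tilde{y}$, and in practice one would also evaluate $L(q)$ after every sweep to monitor convergence.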
Details of the proposed algorithm
=================================
Stage 1: Two-groups model (TGM) {#stage-1-two-groups-model-tgm .unnumbered}
-------------------------------
Suppose we have the $p$-values of $M$ SNPs for a given phenotype. Let $\gamma_{j}$ be the latent variable indicating whether the $j$-th SNP is associated with this phenotype. Here $\gamma_{j}=0$ means unassociated and $\gamma_{j}=1$ means associated. Then we have the following two-groups model:
$$p_{j}\sim\begin{cases}
U\left[0,1\right], & \gamma_{j}=0,\\
Beta\left(\alpha,1\right), & \gamma_{j}=1,
\end{cases}$$ where $\mathbf{p}\in\mathbb{R}^{M}$ are the $p$-values, $0<\alpha<1$ and $\Pr\left(\gamma_{j}=1\right)=\pi_{1}$.
We can use the EM algorithm to compute the posterior and estimate the parameters.
Let $\boldsymbol{\theta}=\left\{ \alpha,\pi_{1}\right\} $ be the collection of model parameters. The logarithm of the marginal likelihood is $$\log\Pr\left(\mathbf{p}|\boldsymbol{\theta}\right)=\log\sum_{\boldsymbol{\gamma}}\Pr\left(\mathbf{p},\boldsymbol{\gamma}|\boldsymbol{\theta}\right)=\log\sum_{\boldsymbol{\gamma}}\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right)\Pr\left(\boldsymbol{\gamma}|\pi_{1}\right),$$ where $$\begin{aligned}
\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right) & = & \prod_{j=1}^{M}\Pr\left(p_{j}|\gamma_{j};\alpha\right)=\prod_{j=1}^{M}\left(\alpha p_{j}^{\alpha-1}\right)^{\gamma_{j}},\\
\Pr\left(\boldsymbol{\gamma}|\pi_{1}\right) & = & \prod_{j=1}^{M}\pi_{1}^{\gamma_{j}}\left(1-\pi_{1}\right)^{1-\gamma_{j}}.\end{aligned}$$
In the E step, we compute the posterior: $$\tilde{\gamma}_{j}=q\left(\gamma_{j}=1\right)=\frac{\pi_{1}\alpha p_{j}^{\alpha-1}}{\pi_{1}\alpha p_{j}^{\alpha-1}+1-\pi_{1}},$$ and get the Q function:
$$\begin{aligned}
Q & = & \mathbf{E}_{q}\left[\log\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right)+\log\Pr\left(\boldsymbol{\gamma}|\pi_{1}\right)\right]\\
& = & \sum_{j=1}^{M}\tilde{\gamma}_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}+\log\pi_{1}\right)+\sum_{j=1}^{M}\left(1-\tilde{\gamma}_{j}\right)\log\left(1-\pi_{1}\right).\end{aligned}$$
The incomplete log likelihood can be evaluated as: $$\begin{aligned}
L & = & \sum_{j=1}^{M}\tilde{\gamma}_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}+\log\pi_{1}-\log\tilde{\gamma}_{j}\right)+\sum_{j=1}^{M}\left(1-\tilde{\gamma}_{j}\right)\left(\log\left(1-\pi_{1}\right)-\log\left(1-\tilde{\gamma}_{j}\right)\right).\end{aligned}$$
In the M step, we update $\alpha$ and $\pi_{1}$ by maximizing the Q function. We have
$$\begin{aligned}
\alpha & = & -\frac{\sum_{j=1}^{M}\tilde{\gamma}_{j}}{\sum_{j=1}^{M}\tilde{\gamma}_{j}\log p_{j}},\\
\pi_{1} & = & \frac{1}{M}\sum_{j=1}^{M}\tilde{\gamma}_{j}.\end{aligned}$$
### Algorithm: {#algorithm-1 .unnumbered}
Input: $\mathbf{p}$, Initialize: $\alpha=0.1$, $\pi_{1}=0.1$, Output: $\alpha$, $\pi_{1}$, $\left\{ \tilde{\gamma}_{j}\right\} _{j=1,...,M}$.
- Initialize $\alpha=0.1$, $\pi_{1}=0.1$.
- E-step: For $j=1,...,M$, calculate $\tilde{\gamma}_{j}$ as follows
$$\tilde{\gamma}_{j}=\frac{\pi_{1}\alpha p_{j}^{\alpha-1}}{\pi_{1}\alpha p_{j}^{\alpha-1}+1-\pi_{1}}.$$
Calculate $L$:
$$\begin{aligned}
L & = & \sum_{j=1}^{M}\tilde{\gamma}_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}+\log\pi_{1}-\log\tilde{\gamma}_{j}\right)+\sum_{j=1}^{M}\left(1-\tilde{\gamma}_{j}\right)\left(\log\left(1-\pi_{1}\right)-\log\left(1-\tilde{\gamma}_{j}\right)\right).
\end{aligned}$$
- M-step:
$$\begin{aligned}
\alpha & = & -\frac{\sum_{j=1}^{M}\tilde{\gamma}_{j}}{\sum_{j=1}^{M}\tilde{\gamma}_{j}\log p_{j}},\\
\pi_{1} & = & \frac{1}{M}\sum_{j=1}^{M}\tilde{\gamma}_{j}.
\end{aligned}$$
- Check convergence. A minimal Python sketch of this EM loop is given below.
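The sketch below is illustrative only; the variable names are ours and all $p$-values are assumed to lie strictly in $(0,1)$.

```python
import numpy as np

def tgm_em(p, alpha=0.1, pi1=0.1, max_iter=1000, tol=1e-8):
    """EM for the two-groups model p_j ~ pi1 * Beta(alpha, 1) + (1 - pi1) * U[0, 1]."""
    p = np.asarray(p, dtype=float)
    logp = np.log(p)
    L_old = -np.inf
    for _ in range(max_iter):
        # E-step: posterior probability that SNP j is associated
        num = pi1 * alpha * p ** (alpha - 1.0)
        gamma = num / (num + 1.0 - pi1)
        # M-step: closed-form updates for alpha and pi1
        alpha = -np.sum(gamma) / np.sum(gamma * logp)
        pi1 = np.mean(gamma)
        # incomplete-data log-likelihood, used to monitor convergence
        L = np.sum(np.log(pi1 * alpha * p ** (alpha - 1.0) + 1.0 - pi1))
        if abs(L - L_old) < tol * (abs(L_old) + 1.0):
            break
        L_old = L
    return alpha, pi1, gamma
```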
Stage 2: Latent fixed-effect model (LFM) {#stage-2-latent-fixed-effect-model-lfm .unnumbered}
----------------------------------------
Suppose we have the $p$-values of $M$ SNPs for a given phenotype. Similarly, we assume
$$p_{j}\sim\begin{cases}
U\left[0,1\right], & \gamma_{j}=0,\\
Beta\left(\alpha,1\right), & \gamma_{j}=1,
\end{cases}$$ where $\mathbf{p}\in\mathbb{R}^{M}$ are the $p$-values, $\gamma_{j}=1$ indicates that the $j$-th SNP is associated with this phenotype and $\gamma_{j}=0$ otherwise, and $0<\alpha<1$.
To integrate more information, we consider the logistic fixed-effect model:
$$\log\frac{\Pr\left(\gamma_{j}=1|\mathbf{Z}_{j}\right)}{\Pr\left(\gamma_{j}=0|\mathbf{Z}_{j}\right)}=\mathbf{Z}_{j}\mathbf{b},$$ where $\mathbf{Z}\in\mathbb{R}^{M\times\left(L+1\right)}$, $\mathbf{b}=\left[b_{0},b_{1},b_{2},...,b_{L}\right]^{T}$ is an unknown vector of fixed effects, and $L$ is the number of covariates.
We can use the EM algorithm to compute the posterior and estimate the parameters.
Let $\boldsymbol{\theta}=\left\{ \alpha,\mathbf{b}\right\} $ be the collection of model parameters. The complete data likelihood can be written as $$\Pr\left(\mathbf{p},\boldsymbol{\gamma}|\mathbf{Z};\boldsymbol{\theta}\right)=\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right)\Pr\left(\boldsymbol{\gamma}|\mathbf{Z};\mathbf{b}\right),$$ where $$\begin{aligned}
\Pr\left(\mathbf{p}|\boldsymbol{\gamma};\alpha\right) & = & \prod_{j=1}^{M}\Pr\left(p_{j}|\gamma_{j};\alpha\right)=\prod_{j=1}^{M}\left(\alpha p_{j}^{\alpha-1}\right)^{\gamma_{j}},\\
\Pr\left(\boldsymbol{\gamma}|\mathbf{Z};\mathbf{b}\right) & = & \prod_{j=1}^{M}e^{\gamma_{j}\mathbf{Z}_{j}\mathbf{b}}S\left(-\mathbf{Z}_{j}\mathbf{b}\right).\end{aligned}$$
In the E step, we compute the posterior: $$\tilde{\gamma}_{j}=q\left(\gamma_{j}=1\right)=\frac{e^{\mathbf{Z}_{j}\mathbf{b}}\alpha p_{j}^{\alpha-1}}{e^{\mathbf{Z}_{j}\mathbf{b}}\alpha p_{j}^{\alpha-1}+1},$$ and get the Q function:
$$Q=\sum_{j=1}^{M}\tilde{\gamma}_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}+\mathbf{Z}_{j}\mathbf{b}\right)+\sum_{j=1}^{M}\log S\left(-\mathbf{Z}_{j}\mathbf{b}\right).$$
The incomplete log likelihood can be evaluated as: $$\begin{aligned}
L & = & \sum_{j=1}^{M}\tilde{\gamma}_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}+\mathbf{Z}_{j}\mathbf{b}-\log\tilde{\gamma}_{j}\right)-\sum_{j=1}^{M}\left(1-\tilde{\gamma}_{j}\right)\log\left(1-\tilde{\gamma}_{j}\right)+\sum_{j=1}^{M}\log S\left(-\mathbf{Z}_{j}\mathbf{b}\right).\end{aligned}$$
In the M step, we update $\alpha$ by maximizing the Q function. We have
$$\alpha=-\frac{\sum_{j=1}^{M}\tilde{\gamma}_{j}}{\sum_{j=1}^{M}\tilde{\gamma}_{j}\log p_{j}}.$$
We use Newton’s method to update $\mathbf{b}$:
$$\mathbf{b}=\mathbf{b}_{old}-\mathbf{H}^{-1}\mathbf{g},$$ where
$$\begin{aligned}
\mathbf{g} & = & \sum_{j=1}^{M}\left(-\tilde{\gamma}_{j}+S\left(\mathbf{Z}_{j}\mathbf{b}\right)\right)\mathbf{Z}_{j},\\
\mathbf{H} & = & \sum_{j=1}^{M}S\left(\mathbf{Z}_{j}\mathbf{b}\right)S\left(-\mathbf{Z}_{j}\mathbf{b}\right)\mathbf{Z}_{j}^{T}\mathbf{Z}_{j}.\end{aligned}$$
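The Newton update for $\mathbf{b}$ is equivalent to one step of iteratively reweighted least squares for a logistic regression of the posterior probabilities $\tilde{\gamma}_{j}$ on $\mathbf{Z}_{j}$. A minimal sketch, with our own variable names:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lfm_newton_step(Z, gamma_tilde, b):
    """One Newton step for the fixed effects b in the latent fixed-effect model.
    Z: (M, L+1) design matrix, gamma_tilde: (M,) posterior probabilities."""
    mu = sigmoid(Z @ b)                     # S(Z_j b)
    g = Z.T @ (mu - gamma_tilde)            # gradient of -Q with respect to b
    W = mu * (1.0 - mu)                     # S(Z_j b) S(-Z_j b)
    H = (Z * W[:, None]).T @ Z              # Hessian of -Q
    return b - np.linalg.solve(H, g)
```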
### Algorithm: {#algorithm-2 .unnumbered}
Input: $\mathbf{p}$, $\mathbf{Z}$, $\alpha$, $b_{0}=\log\frac{\pi_{1}}{1-\pi_{1}}$, Output: $\alpha$, $\mathbf{b}$, $\left\{ \tilde{\gamma}_{j}\right\} _{j=1,...,M}$.
- Initialize $\alpha$, $\mathbf{b}=\left(b_{0},0,...,0\right)^{T}$.
- E-step: For $j=1,...,M$, calculate $\tilde{\gamma}_{j}$ as follows
$$\tilde{\gamma}_{j}=q\left(\gamma_{j}=1\right)=\frac{e^{\mathbf{Z}_{j}\mathbf{b}}\alpha p_{j}^{\alpha-1}}{e^{\mathbf{Z}_{j}\mathbf{b}}\alpha p_{j}^{\alpha-1}+1}.$$
Calculate $L$:
$$\begin{aligned}
L & = & \sum_{j=1}^{M}\tilde{\gamma}_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}+\mathbf{Z}_{j}\mathbf{b}-\log\tilde{\gamma}_{j}\right)-\sum_{j=1}^{M}\left(1-\tilde{\gamma}_{j}\right)\log\left(1-\tilde{\gamma}_{j}\right)+\sum_{j=1}^{M}\log S\left(-\mathbf{Z}_{j}\mathbf{b}\right).
\end{aligned}$$
- M-step
$$\begin{aligned}
\alpha & = & -\frac{\sum_{j=1}^{M}\tilde{\gamma}_{j}}{\sum_{j=1}^{M}\tilde{\gamma}_{j}\log p_{j}},\\
\mathbf{g} & = & \sum_{j=1}^{M}\left(-\tilde{\gamma}_{j}+S\left(\mathbf{Z}_{j}\mathbf{b}\right)\right)\mathbf{Z}_{j},\\
\mathbf{H} & = & \sum_{j=1}^{M}S\left(\mathbf{Z}_{j}\mathbf{b}\right)S\left(-\mathbf{Z}_{j}\mathbf{b}\right)\mathbf{Z}_{j}^{T}\mathbf{Z}_{j},\\
\mathbf{b} & = & \mathbf{b}_{old}-\mathbf{H}^{-1}\mathbf{g}.
\end{aligned}$$
- Check convergence.
Stage 3: Logistic sparse mixed model {#stage-3-logistic-sparse-mixed-model .unnumbered}
------------------------------------
Suppose the latent states $\boldsymbol{\gamma}$ of the $M$ SNPs for a given phenotype are given. We consider a logistic mixed model:
$$\log\frac{\Pr\left(\gamma_{j}=1|\mathbf{Z}_{j},\mathbf{A}_{j}\right)}{\Pr\left(\gamma_{j}=0|\mathbf{Z}_{j},\mathbf{A}_{j}\right)}=\mathbf{Z}_{j}\mathbf{b}+\mathbf{A}_{j}\boldsymbol{\beta}=\sum_{l=0}^{L}Z_{jl}b_{l}+\sum_{k=1}^{K}A_{jk}\beta_{k},$$ where $\mathbf{Z}\in\mathbb{R}^{M\times(L+1)}$, $\mathbf{A}\in\mathbb{R}^{M\times K}$, $\mathbf{b}=\left[b_{0},b_{1},b_{2},...,b_{L}\right]^{T}$ is an unknown vector of fixed effects, $\boldsymbol{\beta}=\left[\beta_{1},\beta_{2},...,\beta_{K}\right]^{T}$ is an unknown vector of random effects with a spike-slab prior:
$$\beta_{k}\sim\begin{cases}
N\left(0,\sigma^{2}\right), & \eta_{k}=1,\\
\delta_{0}, & \eta_{k}=0,
\end{cases}$$ where $\eta_{k}$ is another latent variable with $\Pr\left(\eta_{k}=1\right)=\omega$. Here $\eta_{k}=1$ means the $k$-th annotation is relevant to this phenotype and $\eta_{k}=0$ otherwise.
To handle the Dirac delta function, we reparameterize the spike-slab prior as $\tilde{\beta}_{k}\sim N\left(0,\sigma^{2}\right)$ and then set $\beta_{k}=\eta_{k}\tilde{\beta}_{k}$.
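As a small illustration of this reparameterization, the following sketch draws random effects from the spike-slab prior; the values of $K$, $\sigma^{2}$ and $\omega$ are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
K, sigma2, w = 1000, 1.0, 0.1
eta = rng.binomial(1, w, size=K)                        # eta_k ~ Bernoulli(omega)
beta_tilde = rng.normal(0.0, np.sqrt(sigma2), size=K)   # beta~_k ~ N(0, sigma^2)
beta = eta * beta_tilde                                  # beta_k = eta_k * beta~_k
```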
We can use a variational EM algorithm to compute the posterior and estimate the parameters.
Let $\boldsymbol{\theta}=\left\{ \alpha,\mathbf{b},\sigma^{2},\omega\right\} $ be the collection of model parameters. Using the sigmoid function denoted as $S\left(x\right)=\frac{1}{1+e^{-x}}$, the complete data likelihood can be written as $$\Pr\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)=\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right),$$ where
$$\begin{aligned}
\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right) & = & \prod_{j=1}^{M}\Pr\left(\gamma_{j}|\mathbf{Z}_{j},\mathbf{A}_{j},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)\\
& = & \prod_{j=1}^{M}e^{\gamma_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)}S\left(-\mathbf{Z}_{j}\mathbf{b}-\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right),\\
\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right) & = & \prod_{k=1}^{K}\Pr\left(\tilde{\beta}_{k},\eta_{k}|\sigma^{2},\omega\right)=\prod_{k=1}^{K}N\left(\tilde{\beta}_{k}|0,\sigma^{2}\right)\omega^{\eta_{k}}\left(1-\omega\right)^{1-\eta_{k}}.\end{aligned}$$
We can use the Jaakkola-Jordan (JJ) bound [@jaakkola2000bayesian] to bound the sigmoid function from below:
$$S\left(x\right)\ge S\left(\xi\right)\exp\left\{ \left(x-\xi\right)/2-\lambda\left(\xi\right)\left(x^{2}-\xi^{2}\right)\right\} ,$$ where $\lambda\left(\xi\right)=\frac{1}{2\xi}\left[S\left(\xi\right)-\frac{1}{2}\right]$. Using this bound, we have a tractable lower bound of $\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)$ which is denoted by $h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\boldsymbol{\xi}\right)$:
$$\begin{aligned}
& & h\left(\gamma_{j}|\mathbf{Z}_{j},\mathbf{A}_{j},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\xi_{j}\right)\\
& = & e^{\gamma_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)}S\left(\xi_{j}\right)\exp\left(-\lambda\left(\xi_{j}\right)\left(\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}\right)^{2}-\xi_{j}^{2}\right)-\frac{\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\eta_{k}\tilde{\beta}_{k}+\xi_{j}}{2}\right).\end{aligned}$$
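The bound holds for every real $x$ and is tight at $x=\pm\xi$; this can be checked numerically with a small sketch (purely illustrative).

```python
import numpy as np

def S(x):
    return 1.0 / (1.0 + np.exp(-x))

def lam(xi):
    return (S(xi) - 0.5) / (2.0 * xi)   # assumes xi != 0; the limit at 0 is 1/8

def jj_lower_bound(x, xi):
    return S(xi) * np.exp((x - xi) / 2.0 - lam(xi) * (x ** 2 - xi ** 2))

x = np.linspace(-6, 6, 241)
xi = 2.5
assert np.all(S(x) >= jj_lower_bound(x, xi) - 1e-12)   # bound holds everywhere
assert np.isclose(S(xi), jj_lower_bound(xi, xi))        # tight at x = +xi
assert np.isclose(S(-xi), jj_lower_bound(-xi, xi))      # and at x = -xi
```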
Next, let $q\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)$ be an approximation of the posterior $\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\boldsymbol{\gamma},\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)$. Then we can obtain a lower bound of the logarithm of the marginal likelihood:
$$\begin{aligned}
& & \log\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)\\
& = & \log\sum_{\boldsymbol{\eta}}\int\Pr\left(\boldsymbol{\gamma},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\mathbf{Z},\mathbf{A};\boldsymbol{\theta}\right)d\tilde{\boldsymbol{\beta}}\\
& = & \log\sum_{\boldsymbol{\eta}}\int\Pr\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b}\right)\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right)d\tilde{\boldsymbol{\beta}}\\
& \ge & \log\sum_{\boldsymbol{\eta}}\int h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\boldsymbol{\xi}\right)\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right)d\tilde{\boldsymbol{\beta}}\\
& \ge & \sum_{\boldsymbol{\eta}}\int q\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)\log\frac{h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\boldsymbol{\xi}\right)\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right)}{q\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)}d\tilde{\boldsymbol{\beta}}\\
& = & \mathbf{E}_{q}\left[\log h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta};\mathbf{b},\boldsymbol{\xi}\right)+\log\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right)-\log q\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)\right]\\
& \triangleq & L\left(q\right),\end{aligned}$$
where $L(q)$ is the lower bound. The second inequality follows from Jensen’s inequality. We can maximize $L(q)$ instead of the marginal likelihood to obtain parameter estimates. To make it feasible to evaluate the lower bound, we assume that $q\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)$ can be factorized as $$q\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)=\prod_{k=1}^{K}q\left(\tilde{\beta}_{k},\eta_{k}\right)=\prod_{k=1}^{K}q\left(\tilde{\beta}_{k}|\eta_{k}\right)q\left(\eta_{k}\right),$$ where $q\left(\eta_{k}=1\right)=\omega_{k}$.
We can obtain an approximation according to the mean-field method:
$$\begin{aligned}
\log q\left(\tilde{\beta}_{i},\eta_{i}\right) & = & \mathbf{E}_{k\ne i}\left[\log h\left(\boldsymbol{\gamma}|\mathbf{Z},\mathbf{A},\tilde{\boldsymbol{\beta}},\boldsymbol{\eta},\mathbf{b},\boldsymbol{\xi}\right)+\log\Pr\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}|\sigma^{2},\omega\right)\right]+const,\end{aligned}$$
where the expectation is taken under the distribution $q\left(\tilde{\beta}_{-i},\eta_{-i}\right)=\prod_{k\ne i}q\left(\tilde{\beta}_{k},\eta_{k}\right)$. Then we have
$$q\left(\tilde{\beta}_{i},\eta_{i}\right)=\left[\omega_{i}N\left(\mu_{i},s_{i}^{2}\right)\right]^{\eta_{i}}\left[\left(1-\omega_{i}\right)N\left(0,\sigma^{2}\right)\right]^{1-\eta_{i}},$$ where
$$\begin{aligned}
\mu_{i} & = & s_{i}^{2}\sum_{j=1}^{M}\left(\pi_{j}-\frac{1}{2}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k\ne i}A_{jk}\mathbf{E}_{k}\left[\eta_{k}\tilde{\beta}_{k}\right]\right)\right)A_{ji},\\
s_{i}^{2} & = & \frac{\sigma^{2}}{1+2\sigma^{2}\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)A_{ji}^{2}}.\end{aligned}$$
Then we maximize $L\left(q\right)$ with respect to $\omega_{k}$ and $\xi_{j}$ and get
$$\begin{aligned}
\omega_{k} & = & \frac{1}{1+\exp\left(-u_{k}\right)},\textrm{ where }u_{k}=\log\frac{\omega}{1-\omega}+\frac{1}{2}\log\frac{s_{k}^{2}}{\sigma^{2}}+\frac{\mu_{k}^{2}}{2s_{k}^{2}},\\
\xi_{j}^{2} & = & \left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)^{2}+\sum_{k}A_{jk}^{2}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}^{2}\mu_{k}^{2}\right).\end{aligned}$$
Now we can evaluate $L(q)$:
$$\begin{aligned}
& & L(q)\\
& = & \sum_{j=1}^{M}\left(\gamma_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)+\log S\left(\xi_{j}\right)-\lambda\left(\xi_{j}\right)\left(\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)^{2}-\xi_{j}^{2}\right)\right)\\
& & +\sum_{j=1}^{M}\left(-\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}+\xi_{j}\right)/2+\lambda\left(\xi_{j}\right)\sum_{k}A_{jk}^{2}\omega_{k}^{2}\mu_{k}^{2}-\lambda\left(\xi_{j}\right)\sum_{k}A_{jk}^{2}\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)\right)\\
& & -\frac{1}{2\sigma^{2}}\sum_{k=1}^{K}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}\sigma^{2}\right)+\sum_{k=1}^{K}\omega_{k}\log\omega+\sum_{k=1}^{K}\left(1-\omega_{k}\right)\log\left(1-\omega\right)\\
& & +\sum_{k=1}^{K}\left(\frac{1}{2}\omega_{k}\left(\log s_{k}^{2}-\log\sigma^{2}\right)-\omega_{k}\log\omega_{k}-\left(1-\omega_{k}\right)\log\left(1-\omega_{k}\right)\right).\end{aligned}$$
With $q\left(\tilde{\boldsymbol{\beta}},\boldsymbol{\eta}\right)$ obtained, we can evaluate the lower bound and then update the model parameters by maximizing $L(q)$.
In the M step, we update $\sigma^{2}$ and $\omega$ by maximizing $L(q)$. We have
$$\begin{aligned}
\sigma^{2} & = & \frac{\sum_{k=1}^{K}\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)}{\sum_{k=1}^{K}\omega_{k}},\\
\omega & = & \frac{1}{K}\sum_{k=1}^{K}\omega_{k}.\end{aligned}$$
We use Newton’s method to update $\mathbf{b}$:
$$\mathbf{b}=\mathbf{b}_{old}-\mathbf{H}^{-1}\mathbf{g},$$ where
$$\begin{aligned}
\mathbf{g} & = & -\sum_{j=1}^{M}\mathbf{Z}_{j}^{T}\left(\gamma_{j}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\sum_{k}A_{jk}\omega_{k}\mu_{k}\right)-\frac{1}{2}\right),\\
\mathbf{H} & = & 2\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)\mathbf{Z}_{j}^{T}\mathbf{Z}_{j}.\end{aligned}$$
### Algorithm: {#algorithm-3 .unnumbered}
Input: $\mathbf{Z}$, $\mathbf{A}$, $\left\{ \gamma_{j}=\tilde{\gamma}_{j}\right\} _{j=1,...,M}$, $\mathbf{b}$, Initialize: $\sigma^{2}=1$, $\omega=0.5$, $\left\{ \omega_{k}=0,\mu_{k}=0\right\} _{k=1,...K}$, $\boldsymbol{\xi}=\mathbf{Zb}$, Output: $\mathbf{b}$, $\boldsymbol{\xi}$, $\sigma^{2}$, $\omega$, $\left\{ \omega_{k},\mu_{k}\right\} _{k=1,...K}$.
- Initialize $\mathbf{b}$, $\boldsymbol{\xi}=\mathbf{Zb}$, $\sigma^{2}=1$, $\omega=0.5$, $\left\{ \omega_{k}=0,\mu_{k}=0\right\} _{k=1,...K}$. Let $\tilde{y}=\sum_{k}A_{jk}\omega_{k}\mu_{k}$.
- E-step: For $i=1,...,K$, first obtain $\tilde{y}_{i}=\tilde{y}-A_{ji}\omega_{i}\mu_{i}$, and then update $\mu_{i},s_{i}^{2},\omega_{i}$ and $\tilde{y}$ as follows
$$\begin{aligned}
s_{i}^{2} & = & \frac{\sigma^{2}}{1+2\sigma^{2}\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)A_{ji}^{2}},\\
\mu_{i} & = & s_{i}^{2}\sum_{j=1}^{M}\left(\left(\gamma_{j}-\frac{1}{2}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}_{i}\right)\right)A_{ji}\right),\\
\omega_{i} & = & \frac{1}{1+\exp\left(-u_{i}\right)},\textrm{ where }u_{i}=\log\frac{\omega}{1-\omega}+\frac{1}{2}\log\frac{s_{i}^{2}}{\sigma^{2}}+\frac{\mu_{i}^{2}}{2s_{i}^{2}},\\
\tilde{y} & = & \tilde{y}_{i}+A_{ji}\omega_{i}\mu_{i}.
\end{aligned}$$
Then for $j=1,...,M$, update $\xi_{j}$ as follows
$$\begin{aligned}
\xi_{j}^{2} & = & \left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}\right)^{2}+\sum_{k}A_{jk}^{2}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}^{2}\mu_{k}^{2}\right).
\end{aligned}$$
Calculate $L\left(q\right)$:
$$\begin{aligned}
& & L(q)\\
& = & \sum_{j=1}^{M}\left(\gamma_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}\right)+\log S\left(\xi_{j}\right)-\frac{\mathbf{Z}_{j}\mathbf{b}+\tilde{y}+\xi_{j}}{2}\right)\\
& & -\frac{1}{2\sigma^{2}}\sum_{k=1}^{K}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}\sigma^{2}\right)+\sum_{k=1}^{K}\omega_{k}\log\omega+\sum_{k=1}^{K}\left(1-\omega_{k}\right)\log\left(1-\omega\right)\\
& & +\sum_{k=1}^{K}\left(\frac{1}{2}\omega_{k}\left(\log s_{k}^{2}-\log\sigma^{2}\right)-\omega_{k}\log\omega_{k}-\left(1-\omega_{k}\right)\log\left(1-\omega_{k}\right)\right).
\end{aligned}$$
- M-step
$$\begin{aligned}
\mathbf{g} & = & -\sum_{j=1}^{M}\mathbf{Z}_{j}^{T}\left(\gamma_{j}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}\right)-\frac{1}{2}\right),\\
\mathbf{H} & = & 2\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)\mathbf{Z}_{j}^{T}\mathbf{Z}_{j},\\
\mathbf{b} & = & \mathbf{b}_{old}-\mathbf{H}^{-1}\mathbf{g},\\
\sigma^{2} & = & \frac{\sum_{k=1}^{K}\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)}{\sum_{k=1}^{K}\omega_{k}},\\
\omega & = & \frac{1}{K}\sum_{k=1}^{K}\omega_{k}.
\end{aligned}$$
- Check convergence.
Stage 4: LSMM {#stage-4-lsmm .unnumbered}
-------------
Input: $\mathbf{p}$, $\mathbf{Z}$, $\mathbf{A}$, $\alpha$,$\mathbf{b}$, $\boldsymbol{\xi}$, $\sigma^{2}$, $\omega$, $\left\{ \omega_{k},\mu_{k}\right\} _{k=1,...K}$, Initialize: $\left\{ \pi_{j}=\tilde{\gamma}_{j}\right\} _{j=1,...,M}$, Output: $\alpha$,$\mathbf{b}$, $\sigma^{2}$, $\omega$, $\left\{ \omega_{k},\beta_{k}=\mu_{k}\omega_{k}\right\} _{k=1,...K}$, $\left\{ \pi_{j}\right\} _{j=1,...,M}$
### Algorithm: {#algorithm-4 .unnumbered}
- Initialize $\alpha$, $\sigma^{2}$, $\omega$, $\mathbf{b}$, $\left\{ \omega_{k},\mu_{k}\right\} _{k=1,...K}$, $\left\{ \xi_{j},\pi_{j}\right\} _{j=1,...,M}$. Let $\tilde{y}=\sum_{k}A_{jk}\omega_{k}\mu_{k}$.
- E-step: For $i=1,...,K$, first obtain $\tilde{y}_{i}=\tilde{y}-A_{ji}\omega_{i}\mu_{i}$, and then update $\mu_{i},s_{i}^{2},\omega_{i}$ and $\tilde{y}$ as follows
$$\begin{aligned}
s_{i}^{2} & = & \frac{\sigma^{2}}{1+2\sigma^{2}\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)A_{ji}^{2}},\\
\mu_{i} & = & s_{i}^{2}\sum_{j=1}^{M}\left(\left(\pi_{j}-\frac{1}{2}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}_{i}\right)\right)A_{ji}\right),\\
\omega_{i} & = & \frac{1}{1+\exp\left(-u_{i}\right)},\textrm{ where }u_{i}=\log\frac{\omega}{1-\omega}+\frac{1}{2}\log\frac{s_{i}^{2}}{\sigma^{2}}+\frac{\mu_{i}^{2}}{2s_{i}^{2}},\\
\tilde{y} & = & \tilde{y}_{i}+A_{ji}\omega_{i}\mu_{i}.
\end{aligned}$$
Then for $j=1,...,M$, update $\pi_{j},\xi_{j}$ as follows
$$\begin{aligned}
\pi_{j} & = & \frac{1}{1+\exp\left(-v_{j}\right)},\textrm{ where }v_{j}=\log\alpha+\left(\alpha-1\right)\log p_{j}+\mathbf{Z}_{j}\mathbf{b}+\tilde{y},\\
\xi_{j}^{2} & = & \left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}\right)^{2}+\sum_{k}A_{jk}^{2}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}^{2}\mu_{k}^{2}\right).
\end{aligned}$$
Calculate $L\left(q\right)$:
$$\begin{aligned}
& & L(q)\\
& = & \sum_{j=1}^{M}\pi_{j}\left(\log\alpha+\left(\alpha-1\right)\log p_{j}\right)-\sum_{j=1}^{M}\left(\pi_{j}\log\pi_{j}+\left(1-\pi_{j}\right)\log\left(1-\pi_{j}\right)\right)\\
& & +\sum_{j=1}^{M}\left(\pi_{j}\left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}\right)+\log S\left(\xi_{j}\right)-\frac{\mathbf{Z}_{j}\mathbf{b}+\tilde{y}+\xi_{j}}{2}\right)\\
& & -\frac{1}{2\sigma^{2}}\sum_{k=1}^{K}\left(\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)-\omega_{k}\sigma^{2}\right)+\sum_{k=1}^{K}\omega_{k}\log\omega+\sum_{k=1}^{K}\left(1-\omega_{k}\right)\log\left(1-\omega\right)\\
& & +\sum_{k=1}^{K}\left(\frac{1}{2}\omega_{k}\left(\log s_{k}^{2}-\log\sigma^{2}\right)-\omega_{k}\log\omega_{k}-\left(1-\omega_{k}\right)\log\left(1-\omega_{k}\right)\right).
\end{aligned}$$
- M-step
$$\begin{aligned}
\alpha & = & -\frac{\sum_{j=1}^{M}\pi_{j}}{\sum_{j=1}^{M}\pi_{j}\log p_{j}},\\
\sigma^{2} & = & \frac{\sum_{k=1}^{K}\omega_{k}\left(s_{k}^{2}+\mu_{k}^{2}\right)}{\sum_{k=1}^{K}\omega_{k}},\\
\omega & = & \frac{1}{K}\sum_{k=1}^{K}\omega_{k},\\
\mathbf{g} & = & -\sum_{j=1}^{M}\mathbf{Z}_{j}^{T}\left(\pi_{j}-2\lambda\left(\xi_{j}\right)\left(\mathbf{Z}_{j}\mathbf{b}+\tilde{y}\right)-\frac{1}{2}\right),\\
\mathbf{H} & = & 2\sum_{j=1}^{M}\lambda\left(\xi_{j}\right)\mathbf{Z}_{j}^{T}\mathbf{Z}_{j},\\
\mathbf{b} & = & \mathbf{b}_{old}-\mathbf{H}^{-1}\mathbf{g}.
\end{aligned}$$
- Evaluate $L(q)$ to track the convergence of the algorithm.
Simulation study for evaluating the LD effects on LSMM
======================================================
To study the influence of LD effects on our LSMM, we used the observed genotype data (1,500 individuals from the 1958 British Birth Cohort (58C)) from WTCCC (The Wellcome Trust Case Control Consortium, [-@2007]). For simplicity, we only considered the 23,874 SNPs on chromosome 1 after quality control. We simulated one risk SNP every 1,000 SNPs, giving 24 risk SNPs, and assumed that these 24 risk SNPs explain 5% of the phenotypic variance. We used GCTA to simulate phenotypes and PLINK to obtain $p$-values for the SNPs. Then we applied LSMM to detect risk SNPs.
In the presence of LD, SNPs in a local genomic region are correlated and the detection of individual risk SNPs becomes difficult; we can only expect to identify the region that contains the risk SNPs. We therefore used different distance thresholds to define the region around each true risk SNP, and identified risk SNPs lying within the region of a true risk SNP were counted as true positives.
We considered four cases. In the first case, no effects, we used only the $p$-values, without fixed or random effects. In the second case, fixed effects, we added 10 fixed effects, in which SNPs within 1Mb of a true risk SNP were annotated with probability 0.6. In the third case, fixed + random effects, we further added 100 random effects in which SNPs were annotated at random. In the fourth case, fixed + relevant random effects, we assumed that 20% of the random effects were relevant to the phenotype and, within these relevant random effects, SNPs within 1Mb of a true risk SNP were annotated with probability 0.6. The observed FDR over 50 simulations is shown in Figure \[fig:LD\]. In the first case, with no effects, the observed FDR was quite stable at 0.1. When we added fixed and random effects, the observed FDR was only slightly inflated at the smallest distance threshold and became conservative as the distance threshold increased. As a result, we believe that LSMM provides satisfactory FDR control for detecting the local genomic regions containing risk SNPs.
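The observed FDR under a given distance threshold can be computed as in the following sketch (simplified; positions are in base pairs and the names are illustrative).

```python
import numpy as np

def observed_fdr(identified_pos, true_pos, threshold):
    """Fraction of identified SNPs farther than `threshold` from every true risk SNP."""
    identified_pos = np.asarray(identified_pos)
    true_pos = np.asarray(true_pos)
    if identified_pos.size == 0:
        return 0.0
    # distance from each identified SNP to the nearest true risk SNP
    dist = np.min(np.abs(identified_pos[:, None] - true_pos[None, :]), axis=1)
    false_positives = np.sum(dist > threshold)
    return false_positives / identified_pos.size
```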
![FDR of LSMM for identification of risk SNPs with different distance thresholds. The red line indicates the threshold of global FDR $\tau=0.1$. \[fig:LD\]](Figure/FDR_LD)
More simulation results for different settings
==============================================
Performance in identification of risk SNPs
------------------------------------------








Performance in identification of risk SNPs when all covariates are treated as fixed effects
-----------------------------------------------------------------------------------

Performance in identification of relevant annotations
-----------------------------------------------------








Performance in identification of relevant annotations when fixed effects and random effects are not independent
---------------------------------------------------------------------------------------------------------------

Simulations based on probit model
---------------------------------




Simulations when $p$-values are not from a Beta distribution
--------------------------------------------------------
In the model setting of the LSMM, we assume that $p$-values come from a mixture of the uniform and Beta distributions. To check the robustness of our method, we conducted simulations as follows. We first generated $z$-scores and then converted them to $p$-values. Here $z$-scores from the null group follow the standard normal distribution and $z$-scores from the non-null group follow the alternative distributions in Table \[tab:distribution\]. In these simulations, the $p$-values in the non-null group converted from $z$-scores no longer follow a Beta distribution. We evaluated the FDR, power and AUC; the results are shown in Figures \[fig:robust1\]-\[fig:robust3\], and a small simulation sketch is given after the table.
[Scenario]{} [Distribution]{}
----------------- -----------------------------------------------------------------------------------------------------------------------------------------------
[spiky]{} [$0.4N\left(0,0.25^{2}\right)+0.2N\left(0,0.5^{2}\right)+0.2N\left(0,1^{2}\right)+0.2N\left(0,2^{2}\right)$]{}
[near normal]{} [$\frac{2}{3}N\left(0,1^{2}\right)+\frac{1}{3}N\left(0,2^{2}\right)$]{}
[skew]{} [$\frac{1}{4}N\left(-2,2^{2}\right)+\frac{1}{4}N\left(-1,1.5^{2}\right)+\frac{1}{3}N\left(0,1^{2}\right)+\frac{1}{6}N\left(1,1^{2}\right)$]{}
[big-normal]{} [$N\left(0,4^{2}\right)$]{}
: Alternative distributions for $z$-scores. \[tab:distribution\]
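For reference, the following sketch shows how such $p$-values can be generated under the “spiky” scenario of Table \[tab:distribution\]; the number of SNPs, the non-null proportion and the two-sided conversion are placeholder choices of ours.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
M, pi1 = 20000, 0.05                        # number of SNPs and non-null proportion
is_nonnull = rng.random(M) < pi1

# null z-scores: standard normal
z = rng.standard_normal(M)

# non-null z-scores from the "spiky" mixture
# 0.4 N(0, 0.25^2) + 0.2 N(0, 0.5^2) + 0.2 N(0, 1^2) + 0.2 N(0, 2^2)
sd = rng.choice([0.25, 0.5, 1.0, 2.0], size=M, p=[0.4, 0.2, 0.2, 0.2])
z[is_nonnull] = rng.standard_normal(is_nonnull.sum()) * sd[is_nonnull]

# two-sided p-values
p = 2.0 * norm.sf(np.abs(z))
```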
![FDR of LSMM, LFM and TGM with $K=100$. We controlled global FDR at 0.1 to evaluate empirical FDR. The results are summarized from 50 replications.\[fig:robust1\]](Figure/probit_robust_K100)

![FDR of LSMM, LFM and TGM with $K=1000$. We controlled global FDR at 0.1 to evaluate empirical FDR. The results are summarized from 50 replications.\[fig:robust3\]](Figure/probit_robust_K1000)
Comparison between LSMM and GPA
-------------------------------



Comparison between LSMM and cmfdr
---------------------------------
We compared LSMM with cmfdr. As cmfdr is not able to handle a large number of covariates and its MCMC sampling algorithm is time-consuming, we set $M=5000$, $L=5$, $K=5$ and ran 2,500 iterations with 2,000 retained draws for cmfdr. The comparison between LSMM and cmfdr is shown in Figure \[fig:cmfdr\].
![FDR, power, AUC and partial AUC of LSMM and cmfdr for identification of risk SNPs. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:cmfdr\]](Figure/cmfdr_FDR "fig:")![FDR, power, AUC and partial AUC of LSMM and cmfdr for identification of risk SNPs. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:cmfdr\]](Figure/cmfdr_power "fig:")![FDR, power, AUC and partial AUC of LSMM and cmfdr for identification of risk SNPs. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:cmfdr\]](Figure/cmfdr_AUC "fig:")![FDR, power, AUC and partial AUC of LSMM and cmfdr for identification of risk SNPs. We controlled global FDR at 0.1 to evaluate empirical FDR and power. The results are summarized from 50 replications.\[fig:cmfdr\]](Figure/cmfdr_pAUC "fig:")
Estimation of parameters
------------------------
### Estimation of $\alpha$
We evaluated the performance of LSMM in estimating the parameter $\alpha$ of the Beta distribution. We compared LSMM with three other methods: TGM (without fixed and random effects), LFM (with only fixed effects) and LSMM without fixed effects. We varied $\omega$ over $\left\{ 0,0.25,0.5,0.75,1\right\} $. Figures \[fig:est alpha 0.2\]-\[fig:est alpha 0.6\] show the comparison among these methods with $\alpha=0.2$, $0.4$ and $0.6$, respectively.
![Performance in estimation of parameter $\alpha$ when the true $\alpha=0.2$.\[fig:est alpha 0.2\]](Figure/est_alpha_2)
![Performance in estimation of parameter $\alpha$ when the true $\alpha=0.4$.\[fig:est alpha 0.4\]](Figure/est_alpha_4)
![Performance in estimation of parameter $\alpha$ when the true $\alpha=0.6$.\[fig:est alpha 0.6\]](Figure/est_alpha_6)
### Estimation of $\boldsymbol{b}$
We evaluated the performance of LSMM in estimating the fixed effects $b_{0},b_{1},\ldots,b_{10}$. We varied $\omega$ over $\left\{ 0,0.25,0.5,0.75,1\right\} $. Figures \[fig:est b0\]-\[fig:est b10\] show the comparison between LSMM and LFM (with only fixed effects) with $\alpha=0.2$, $0.4$ and $0.6$.
![Performance in estimation of parameter $b_{0}$.\[fig:est b0\]](Figure/est_b0)
![Performance in estimation of parameter $b_{1}$.\[fig:est b1\]](Figure/est_b1)
![Performance in estimation of parameter $b_{2}$.\[fig:est b2\]](Figure/est_b2)
![Performance in estimation of parameter $b_{3}$.\[fig:est b3\]](Figure/est_b3)
![Performance in estimation of parameter $b_{4}$.\[fig:est b4\]](Figure/est_b4)
![Performance in estimation of parameter $b_{5}$.\[fig:est b5\]](Figure/est_b5)
![Performance in estimation of parameter $b_{6}$.\[fig:est b6\]](Figure/est_b6)
![Performance in estimation of parameter $b_{7}$.\[fig:est b7\]](Figure/est_b7)
![Performance in estimation of parameter $b_{8}$.\[fig:est b8\]](Figure/est_b8)
![Performance in estimation of parameter $b_{9}$.\[fig:est b9\]](Figure/est_b9)
![Performance in estimation of parameter $b_{10}$.\[fig:est b10\]](Figure/est_b10)
### Estimation of $\omega$
We evaluated the performance of LSMM in estimating the parameter $\omega$, which measures the proportion of relevant annotations. We varied $\omega$ over $\left\{ 0,0.25,0.5,0.75,1\right\} $. Figure \[fig:est omega\] shows the results with $\alpha=0.2$, $0.4$ and $0.6$.
![Performance in estimation of parameter $\omega$. \[fig:est omega\]](Figure/est_omega)
More about real data analysis
==============================
The source of the 30 GWAS
-------------------------
[Alzheimer]{} [[@lambert2013meta], Nature Genetics. https://data.broadinstitute.org/alkesgroup/sumstats\_formatted/]{}
-------------------------------- -------------------------------------------------------------------------------------------------------------------
[BMI]{} [[@speliotes2010association], Nature Genetics. https://data.broadinstitute.org/alkesgroup/sumstats\_formatted/]{}
[Bipolar Disorder]{} [[@psychiatric2011large], Nature Genetics]{}
[https://data.broadinstitute.org/alkesgroup/sumstats\_formatted/]{}
[Coronary Artery Disease]{} [[@schunkert2011large], Nature Genetics. http://www.cardiogramplusc4d.org/data-downloads]{}
[Crohns Disease]{} [[@jostins2012host], Nature. https://data.broadinstitute.org/alkesgroup/sumstats\_formatted/]{}
[Height]{} [[@wood2014defining], Nature Genetics]{}
[http://portals.broadinstitute.org/collaboration/giant/index.php/GIANT\_consortium\_data\_files]{}
[High-density Lipoprotein]{} [[@global2013discovery], Nature Genetics]{}
[http://csg.sph.umich.edu//abecasis/public/lipids2013/]{}
[HIV]{} [[@mclaren2013association], PLoS Pathogens]{}
[http://journals.plos.org/plospathogens/article?id=10.1371%2Fjournal.ppat.1003515]{}
[Inflammatory Bowel Disease]{} [[@jostins2012host], Nature. https://data.broadinstitute.org/alkesgroup/sumstats\_formatted/]{}
[Low-density Lipoprotein]{} [[@global2013discovery], Nature Genetics]{}
[http://csg.sph.umich.edu//abecasis/public/lipids2013/]{}
[Lupus]{} [[@bentham2015genetic], Nature Genetics]{}
[https://www.immunobase.org/downloads/protected\_data/GWAS\_Data/]{}
[Mean Cell Haemoglobin]{} [[@pickrell2014joint], The American Journal of Human Genetics]{}
[https://ega-archive.org/studies/EGAS00000000132]{}
[Mean Cell Volume]{} [[@pickrell2014joint], The American Journal of Human Genetics]{}
[https://ega-archive.org/studies/EGAS00000000132]{}
[Menopause]{} [[@day2015large], Nature Genetics. http://www.reprogen.org/data\_download.html]{}
[Multiple Sclerosis]{} [[@sawcer2011genetic], Nature. https://www.immunobase.org/downloads/protected\_data/GWAS\_Data/]{}
[Neuroticism]{} [[@okbay2016genetic], Nature Genetics. http://ssgac.org/documents/Neuroticism\_Full.txt.gz]{}
[Primary Biliary Cirrhosis]{} [[@cordell2015international], Nature Communications]{}
[https://www.immunobase.org/downloads/protected\_data/GWAS\_Data/]{}
[Red Cell Count]{} [[@pickrell2014joint], The American Journal of Human Genetics]{}
[https://ega-archive.org/studies/EGAS00000000132]{}
[Rheumatoid Arthritis]{} [[@okada2014genetics], Nature. https://data.broadinstitute.org/alkesgroup/sumstats\_formatted/]{}
[Schizophrenia1]{} [[@cross2013identification], The Lancet.]{}
[https://www.med.unc.edu/pgc/results-and-downloads (SCZ subset)]{}
[Schizophrenia2]{} [[@schizophrenia2011genome], Nature Genetics.]{}
[https://www.med.unc.edu/pgc/results-and-downloads (SCZ1)]{}
[Schizophrenia3]{} [[@ripke2013genome], Nature Genetics. https://www.med.unc.edu/pgc/results-and-downloads (Sweden+SCZ1)]{}
[Schizophrenia4]{} [[@ripke2014biological], Nature. https://www.med.unc.edu/pgc/results-and-downloads (SCZ2)]{}
[Total Cholesterol]{} [[@global2013discovery], Nature Genetics]{}
[http://csg.sph.umich.edu//abecasis/public/lipids2013/]{}
[Triglycerides]{} [[@global2013discovery], Nature Genetics]{}
[http://csg.sph.umich.edu//abecasis/public/lipids2013/]{}
[Type 1 Diabetes]{} [[@bradfield2011genome], PLoS Genetics]{}
[https://www.immunobase.org/downloads/protected\_data/GWAS\_Data/]{}
[Type 2 Diabetes]{} [[@morris2012large], Nature Genetics. http://diagram-consortium.org/downloads.html]{}
[Ulcerative Colitis]{} [[@jostins2012host], Nature. https://data.broadinstitute.org/alkesgroup/sumstats\_formatted/]{}
[Years of Education1]{} [[@rietveld2013gwas], Science. https://data.broadinstitute.org/alkesgroup/sumstats\_formatted/]{}
[Years of Education2]{} [[@okbay2016genome], Nature. http://ssgac.org/documents/EduYears\_Main.txt.gz]{}
: The source of the 30 GWAS.
Four Schizophrenia GWAS with different sample sizes
---------------------------------------------------
  ----------------- ---------------- ----------------------- --------- --------- ---------
                     $\hat{\alpha}$   Bonferroni correction   TGM       LFM       LSMM
  Schizophrenia1     0.677            2                       470       527       527
  Schizophrenia2     0.633            7                       2,107     2,404     2,405
  Schizophrenia3     0.562            126                     6,811     7,541     7,545
  Schizophrenia4     0.413            1,110                   48,802    50,481    50,990
  ----------------- ---------------- ----------------------- --------- --------- ---------
: Summary of results for Schizophrenia. \[tab:SCZ\]
a\. The estimate $\hat{\alpha}$ is obtained using LSMM.
b\. The number of risk SNPs is reported based on global $FDR\le0.1$.
![Manhattan plots of Schizophrenia1-4 using TGM and LSMM. The red lines indicate local $fdr=0.1$. The green points denote the additional SNPs LSMM identified with $FDR\le0.1$.\[fig:man SCZ\]](Figure/SCZsubset_TG "fig:")![Manhattan plots of Schizophrenia1-4 using TGM and LSMM. The red lines indicate local $fdr=0.1$. The green points denote the additional SNPs LSMM identified with $FDR\le0.1$.\[fig:man SCZ\]](Figure/SCZ1_TG "fig:")![Manhattan plots of Schizophrenia1-4 using TGM and LSMM. The red lines indicate local $fdr=0.1$. The green points denote the additional SNPs LSMM identified with $FDR\le0.1$.\[fig:man SCZ\]](Figure/SCZ1Sweden_TG "fig:")![Manhattan plots of Schizophrenia1-4 using TGM and LSMM. The red lines indicate local $fdr=0.1$. The green points denote the additional SNPs LSMM identified with $FDR\le0.1$.\[fig:man SCZ\]](Figure/SCZ2_TG "fig:")
![Manhattan plots of Schizophrenia1-4 using TGM and LSMM. The red lines indicate local $fdr=0.1$. The green points denote the additional SNPs LSMM identified with $FDR\le0.1$.\[fig:man SCZ\]](Figure/SCZsubset_LSMM "fig:")![Manhattan plots of Schizophrenia1-4 using TGM and LSMM. The red lines indicate local $fdr=0.1$. The green points denote the additional SNPs LSMM identified with $FDR\le0.1$.\[fig:man SCZ\]](Figure/SCZ1 "fig:")![Manhattan plots of Schizophrenia1-4 using TGM and LSMM. The red lines indicate local $fdr=0.1$. The green points denote the additional SNPs LSMM identified with $FDR\le0.1$.\[fig:man SCZ\]](Figure/SCZ1Sweden "fig:")![Manhattan plots of Schizophrenia1-4 using TGM and LSMM. The red lines indicate local $fdr=0.1$. The green points denote the additional SNPs LSMM identified with $FDR\le0.1$.\[fig:man SCZ\]](Figure/SCZ2 "fig:")
Computational time for 30 GWAS
------------------------------


Relevant functional annotations for 30 GWAS without fixed effects
-----------------------------------------------------------------

[^1]: Correspondence should be addressed to Can Yang ([email protected]) and Jin Liu ([email protected])
---
abstract: |
Spectroscopy with the Keck II 10-meter telescope[^1] and Echelle Spectrograph and Imager is presented for six Virgo Cluster dwarf elliptical (dE) galaxies in the absolute magnitude range $-15.7\le{M_V}\le-17.2$. The mean line-of-sight velocity and velocity dispersion are resolved as a function of radius along the major axis of each galaxy, nearly doubling the total number of dEs with spatially-resolved stellar kinematics. None of the observed objects shows evidence of strong rotation: upper limits on $v_{\rm
rot}/\sigma$, the ratio of the maximum rotational velocity to the mean velocity dispersion, are well below those expected for rotationally-flattened objects. Such limits place strong constraints on dE galaxy formation models. Although these galaxies continue the trend of low rotation velocities observed in Local Group dEs, they are in contrast to recent observations of large rotation velocities in slightly brighter cluster dEs. Using surface photometry from [*Hubble Space Telescope*]{}[^2] Wide Field Planetary Camera 2 images and spherically-symmetric dynamical models, we determine global mass-to-light ratios $3\le\Upsilon_V\le6$. These ratios are comparable to those expected for an old to intermediate-age stellar population and are broadly consistent with the observed $(V-I)$ colors of the galaxies. These dE galaxies therefore do not require a significant dark matter component inside an effective radius. We are able to rule out central black holes more massive than $\sim10^7{\mbox{\,$M_{\odot}$}}$. For the five nucleated dEs in our sample, kinematic and photometric properties were determined for the central nucleus separately from the underlying host dE galaxy. These nuclei are as bright or brighter than the most luminous Galactic globular clusters and lie near the region of Fundamental Plane space occupied by globular clusters. In this space, the Virgo dE galaxies lie in the same general region as Local Group and other nearby dEs, although non-rotating dEs appear to have a slightly higher mean mass and mass-to-light ratio than rotating dEs; the dE galaxies occupy a plane parallel to, but offset from, that occupied by normal elliptical galaxies.
author:
- 'M. Geha'
- 'P. Guhathakurta'
- 'R. P. van der Marel'
title: 'Internal Dynamics, Structure and Formation of Dwarf Elliptical Galaxies: I. A Keck/HST Study of Six Virgo Cluster Dwarfs'
---
Introduction {#intro_sec}
============
Dwarf elliptical galaxies (dEs) are the most common galaxy type by number in the Local Universe, dominating the galaxy luminosity function of nearby clusters. Yet these galaxies remain among the most poorly studied galaxies due to their faint luminosities, $M_V \ge
-18$, and characteristic low effective surface brightness $\mu_{V,\rm
eff}>22$ mag arcsec$^{-2}$ [@fer94]. Unlike brighter, classical elliptical galaxies whose surface brightness profiles are well fit by the de Vaucouleurs $r^{1/4}$ law [@dev48], dEs have brightness profiles that are characterized by Sersic profiles [@ser68] with indices ranging between $n=1$–3 (where $n=1$ corresponds to an exponential law and $n=4$ to an $r^{1/4}$ law) making them appear more diffuse than classical ellipticals of the same total magnitude [@bin98]. In the Virgo Cluster, the majority of dEs brighter than $M_V \simlt -16$ contain compact central nuclei; fainter than $M_V \simgt -12$ most dEs show no sign of a nucleus [@san85]. Nuclei typically contain 5% to 20% of the total galaxy light and are slightly resolved at the distance of Virgo by [*Hubble Space Telescope*]{} ([*HST*]{}) imaging [@mil98].
In hierarchical models of galaxy formation, dwarf galaxies form out of small density fluctuations in the early Universe and are predicted to be less spatially clustered than normal elliptical or spiral galaxies [@dek86]. However, dwarf elliptical galaxies are preferentially found in dense cluster environments, more so than either ellipticals or spirals [@bts87]; there are few, if any, examples of isolated dEs. Thus, current models favor dE formation from a progenitor galaxy population. The proposed progenitors of dEs are spiral or irregular galaxies which are morphologically transformed into dEs through the processes of galaxy harassment and interaction [@moo98]. Detailed internal kinematics of dEs are a powerful observational tool with which to test these scenarios.
Until recently, radial velocity dispersion profiles were only available for the Local Group dEs and two of the brightest dEs in the Virgo Cluster [@ben90; @ben91; @hel90]. In addition, a handful of global velocity dispersion measurements existed for dEs in various environments outside the Local Group [@pet93]. These observations suggest that dEs have lower mass-to-light ratios than Local Group dwarf spheroidals (e.g., Draco, Fornax) and are flattened by velocity anisotropy rather than by rotation. However, @ped02 and @der01 recently presented kinematic profiles for a few dEs in the Virgo and Fornax Clusters, respectively, with rotation velocities comparable to that expected for a rotationally-flattened spheroid. These rotating dEs are more luminous on average than the non-rotating dEs we have observed, and hint at a possible association between dE luminosity and the presence of rotation.
The question of whether dEs have significant rotation compared to their velocity dispersion is particularly important in the context of dE formation scenarios. @moo98 have demonstrated that the process of galaxy harassment in cluster environments can morphologically transform a spiral galaxy into a dE. Although this process tends to increase the velocity dispersion in a system, it is less efficient at disrupting rotational motions and a significant fraction of the progenitor’s rotation is preserved. Thus, measuring the amount of internal angular momentum in dEs can constrain the progenitor galaxy type and/or the amount of disruption required.
We present internal kinematics as a function of radius for a sample of six dE galaxies in the Virgo Cluster based on Keck observations. These data are interpreted in conjunction with archival [*HST*]{} imaging. Preliminary results from this study were presented in @geh02. This paper is organized as follows: in §\[data\_sec\] we present the Keck spectroscopic and [*HST*]{} imaging observations of our target galaxies, along with an outline of data reduction procedures; in §\[res\_sec\] we present velocity and velocity dispersion profiles and describe the dynamical models that are applied to the data to derive mass-to-light ratios, constraints on orbital anisotropy, and limits on the central black hole mass; the broader implications of the results are discussed in §\[disc\_sec\].
The Data {#data_sec}
========
The Virgo Cluster is the closest large reservoir of dE galaxies beyond the Local Group and presents a significantly different environment in which to study such galaxies. The six dE galaxies presented below were drawn from the bright end of the Virgo dE luminosity function and have been imaged with [*HST*]{}. These objects lie at a variety of distances from the center of the Virgo Cluster as shown in Figure \[vcc\]. The positions and photometric properties of the observed galaxies are listed in Table 1. Foreground reddening values are taken from @sch98 assuming a standard Galactic extinction law with $R_V = 3.1$. Throughout this paper a Virgo Cluster true distance modulus of $(m - M)_0 = 30.92$ is adopted, i.e., a distance of 15.3 Mpc, as determined by the [*HST*]{} Key Project on the extragalactic distance scale [@fre01].
Spectroscopy {#spec_sec}
------------
### Observations {#spec_obs_sec}
Six dE galaxies were observed on 2001 March 20–21 using the Keck II 10-m telescope and the Echelle Spectrograph and Imager [ESI; @she02]. Observations were made in the echellette mode with continuous wavelength coverage over the range $\rm\lambda\lambda3900$–$11000\mbox{\AA}$ across 10 echelle orders with a spectral dispersion of 11.4[km s$^{-1}$]{} pixel$^{-1}$. The spectra were obtained through a $0.75'' \times 20''$ slit, resulting in an instrumental resolution of 23[km s$^{-1}$]{} (Gaussian sigma) over the entire spectrum, or $R \equiv (\lambda/\Delta\lambda) \approx 10,000$. The slit was positioned on the major axis of each galaxy, such that the galaxy’s center was displaced $\sim5''$ from the center of the slit along its $20''$ length. Three consecutive exposures of 20 minutes were obtained for each galaxy except VCC 1577, for which $5\times20$ minutes were obtained. A summary of the observing parameters is given in Table 2. High signal-to-noise ratio spectra of giant stars covering the range of spectral types G8III to M0III were taken with the same instrumental setup for use as templates in the kinematic profile fitting described in §\[vp\]. Standard stars were observed both centered on and trailed across the slit width in order to recover an accurate estimate of the instrumental broadening for point and extended sources, respectively.
### Data Reduction {#datared_sec}
The ESI data were reduced using a combination of IRAF echelle and long-slit spectral reduction tasks. First, the overscan and a dark frame, scaled by the exposure time, were subtracted from the data. The sum of a bright calibration star spectral exposure and a flat field spectral exposure were used to trace the ends of the slit as well as a fiducial spatial point (the location of the bright star) for each of the 10 curved echelle orders. This process yielded an empirical measurement of the spatial pixel scale for each order: it varies from $0.13''$ in the bluest order to $0.18''$ in the reddest order. Scattered light was subtracted from individual frames by fitting a smooth function to the areas outside these apertures (spaces between echelle orders) using the APSCATTER task. To preserve spatial information, the APALL task was used in “strip” mode to extract and rectify two-dimensional rectangular strips for each echelle order by shifting and aligning each spatial column based on the aperture trace information. The rectified orders were then interpolated to a common spatial pixel scale of $0.18''$ per pixel. Calibration frames such as flat field, arc lamp, and template star exposures were also extracted into rectified, aligned and spatially-corrected strips using the same procedure.
Data reduction was carried out on these rectified strips using procedures similar to those routinely used on long-slit spectra. Each strip was divided by its corresponding normalized flat-field image (from the same echelle order). Cosmic rays on individual exposures, identified on the basis of object sharpness and peak pixel brightness, were masked and the exposures were then combined. Each order was logarithmically binned in wavelength using a two-dimensional wavelength calibration (i.e., as a function of spatial position along the slit) determined from a combined Cu/Ar/Hg/Ne arc lamp spectrum. The rms of the residuals in the wavelength solution is $\rm
0.05\mbox{\AA}$ or less in each order. The sky spectrum was determined for each combined frame from a section near the end of the slit farthest from the galaxy center ($r\sim15''$), and subtracted from the rest of the two-dimensional spectrum. We recognize that this “sky” spectrum is contaminated by light from the outer parts of the target dE galaxy but this is unlikely to have a significant effect on our results.[^3] Bright, poorly-subtracted sky lines were masked out during the kinematic fitting discussed below. The galaxy continuum flux in each order was then individually normalized to unity using the IRAF CONTINUUM task. A noise frame was created for each galaxy spectrum which kept track of uncertainties in every pixel due to read noise and Poisson noise taking into account the CCD gain and number of readouts. This noise frame is used as input into the kinematic analysis of §\[vp\] as part of the formal error calculations on the velocity profile parameters. Finally, the strips from the different echelle orders were combined, weighted by the noise frame, to create a single two-dimensional long-slit spectrum. Due to low signal-to-noise in the reddest and bluest echelle orders, we do not include them in the analysis; the final combined spectrum covers $\rm\lambda\lambda4800-9200\mbox{\AA}$. The spectra show no evidence for gaseous emission lines at any radii. Representative combined galaxy spectra are shown in Figure \[spec\].
The seeing FWHM during each spectroscopic observation was determined by comparing the observed intensity profile along the ESI slit to high spatial resolution $V$-band surface brightness profiles derived from [*HST*]{} images (as discussed in §\[sb\]). These brightness profiles were convolved with a Gaussian seeing point spread function (PSF), integrated over the $0.75''$ slit width and binned into $0.18''$ pixels to match the ESI pixel scale in the spatial direction. This was then compared to the observed intensity profile along the ESI slit in the matching spectral region. The best-fit Gaussian FWHM seeing estimates are given in Table 2 for each galaxy. These are consistent with the less accurate estimates of the seeing FWHM determined by coadding short (few second) exposures taken with the ESI guider camera at intervals of approximately 5 minutes during the spectroscopic observations.
### Measurement of Line-of-Sight Velocity and Velocity Dispersions {#vp}
The mean line-of-sight velocity and velocity dispersion as a function of radius were determined using a pixel-fitting method first described in @vdm94. These quantities were determined by comparing the observed galaxy spectrum to a stellar template convolved with a series of Gaussian line profiles. The best-fitting Gaussian profile was determined by $\chi^2$ minimization in pixel space. The free parameters in this analysis are: mean line-of-sight velocity $v$, velocity dispersion $\sigma$, and a line-strength parameter $\gamma$, which measures the ratio of equivalent width in the galaxy to that of the template star and accounts for template mismatch (e.g., due to differences in effective temperature or metallicity). Night sky absorption features (A and B bands) and strong sky emission lines were masked out in the fitting procedure; masked pixels were not included in the calculation of $\chi^2$. Deviations from Gaussian profiles [@vdm93] were not fit as this requires higher signal-to-noise spectra than presented in this paper. In addition, an arbitrary continuum term was simultaneously fit to the data approximated by the sum of Legendre polynomials. In analyzing the combined ESI spectra ($\approx 20,000$ pixels), the continuum was fit with a 20th order polynomial.
We have tested this method on broadened template stars to determine the minimum signal-to-noise required to accurately recover velocity dispersions and to estimate our sensitivity to template mismatch. Stellar templates were broadened with Gaussian kernels of varying $\sigma$ and Poisson noise was added. These tests suggest that for a signal-to-noise level $\rm S/N \ge 10$ per spectral pixel, a galaxy’s internal velocity dispersions can be measured down to the instrumental resolution of 23 km s$^{-1}$ with an accuracy of 1% and down to 18.5[km s$^{-1}$]{} with an accuracy of 10%. We have spatially rebinned our galaxy data to achieve a signal-to-noise level of $\rm
S/N \ge 10$ per pixel at all radii, while ensuring that the spatial bin size is at least as large as the FWHM of seeing during the observations ($\sim0.9''$). Velocity profiles were recovered using template stars ranging in spectral type from G8III to M0III. The best fitting template, the K1III star HD 40460 (\[Fe/H\] = $-0.42$), was used to recover all profiles presented here. The choice of template star did not affect the recovered profiles at a level in excess of the formal error bars. We also find no significant difference between the profiles presented here and those determined by separately recovering profiles for the eight ESI echelle orders and computing a profile based on the weighted mean.
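A minimal sketch of the broadening test described above is given below: a logarithmically binned template is convolved with a Gaussian line-of-sight velocity distribution, noise is added, and $(v,\sigma,\gamma)$ are recovered by $\chi^2$ minimization in pixel space. Only the 11.4 [km s$^{-1}$]{} pixel scale is taken from the text; the synthetic template, line list and noise level are illustrative, and the sketch omits the Legendre continuum terms used in the full analysis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import least_squares

PIX_KMS = 11.4  # ESI spectral pixel scale in km/s (from the text)

def broaden(template, v, sigma):
    """Shift a log-lambda binned spectrum by v and convolve with a Gaussian of
    dispersion sigma (both in km/s)."""
    smoothed = gaussian_filter1d(template, sigma / PIX_KMS, mode="nearest")
    pix = np.arange(template.size, dtype=float)
    return np.interp(pix, pix + v / PIX_KMS, smoothed)

def fit_losvd(galaxy, template, noise):
    """Recover (v, sigma, gamma) by chi^2 minimization in pixel space."""
    def resid(p):
        v, sigma, gamma = p
        model = 1.0 + gamma * (broaden(template, v, sigma) - 1.0)
        return (galaxy - model) / noise
    res = least_squares(resid, x0=[0.0, 30.0, 1.0],
                        bounds=([-300.0, 5.0, 0.1], [300.0, 200.0, 5.0]))
    return res.x

# Illustrative self-test with a synthetic absorption-line template
rng = np.random.default_rng(1)
pix = np.arange(4000, dtype=float)
centers = rng.uniform(0.0, 4000.0, 60)
template = 1.0 - 0.3 * np.exp(-0.5 * ((pix[:, None] - centers) / 2.0) ** 2).sum(axis=1)
clean = broaden(template, 12.0, 25.0)          # "galaxy": v = 12 km/s, sigma = 25 km/s
noise = np.full_like(clean, 1.0 / 20.0)        # S/N ~ 20 per pixel
galaxy = clean + rng.normal(0.0, 1.0, clean.size) * noise
print(fit_losvd(galaxy, template, noise))      # should recover roughly (12, 25, 1)
```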
### A Reliability Test: Comparison of ESI and HIRES Data {#esi_hires_comp_sec}
We have previously attempted to observe Virgo dE galaxies with Keck/HIRES in March 1998. The significantly higher spectral resolution of HIRES (2.1 [km s$^{-1}$]{} pixel$^{-1}$) and its lower throughput as compared to ESI made these observations prohibitively difficult. The dE galaxies VCC 1254, VCC 1073, VCC 452 and VCC 1876 were observed for 5, 3, 2 and 1.8 hours, respectively, through a custom-made $2.0''
\times 11.0''$ slit. We were able to determine the kinematics for VCC 1254 inside $r<2''$; however, data for the last three galaxies did not have sufficient signal-to-noise to recover velocity profiles. Although this HIRES spectrum does not provide additional information on VCC 1254, it does provide an excellent reliability check on our ESI observations. The ESI and HIRES data complement each other in that the ESI data have relatively high S/N but an instrumental resolution approaching the intrinsic dispersion of our target galaxies, whereas the instrumental resolution of HIRES is significantly higher at the price of low signal-to-noise. As shown in Figure \[hires\], the line profile shapes and velocity profiles determined with HIRES match those measured with ESI. The velocity profiles were calculated in the wavelength region $\rm\lambda\lambda5000$–$5250\mbox{\AA}$; however, we have no reason to believe that the agreement would be any worse in other spectral regions.
Imaging {#img_sec}
-------
### Observations and Data Reduction {#sb}
[*HST*]{} Wide Field Planetary Camera 2 (WFPC2) imaging is available for each of our target galaxies. These images provide high spatial resolution surface brightness profiles needed for the dynamical modeling discussed in §\[models\], and allowed us to measure photometric properties for our target galaxies. The data, first presented in @mil98 and @sti01, consist of $2
\times 230$-s WFPC2 images in the F555W bandpass and a single 300-s exposure in F814W. The galaxies are centered on the WF3 CCD in the [*HST*]{} pointings, and we use only the F555W WF3 CCD image to determine surface brightness profiles. The images were cleaned of cosmic rays and combined. The instrumental F555W magnitudes were calibrated into $V$-band using the transformations of @hol95, assuming $(V-I)=1.0$ [@mil98]. Surface brightness profiles were determined for each galaxy using the IRAF ELLIPSE isophotal fitting routine down to a surface brightness of $\mu_{V} \sim 24$. The average ellipticity, $\epsilon$, determined between $r=1''-20''$ and the total integrated apparent magnitude, determined by integrating the total flux inside a $40''$ aperture, are given in Table 1. Our apparent magnitudes agree with those determined by @mil98. Unlike @mil98, we consider VCC 1577 to be a nucleated dE galaxy since its bright, central star cluster is within a few tenths of an arcsecond of the galaxy’s isophotal centroid position. The observed surface brightness profiles are shown in Figure \[sbprof\].
### Surface Brightness Profile Fitting {#sbfits}
In subsequent analysis, we differentiate between light from the central dE nucleus and the underlying galaxy. We therefore fit separate analytic profiles to the inner and outer surface brightness profiles. For the underlying galaxy, a Sersic profile of the form $I^{\rm gal}(r) = I_0^{\rm gal}\, {\rm exp}[-(r/r_0)^{1/n}]$ is fit, where a Sersic index $n=1$ represents an exponential profile and $n=4$ is a de Vaucouleurs law. The best-fit Sersic profile is determined by non-linear least-squares fitting to the region $r=1''-20''$; in this region, contributions from the nucleus and effects of the [*HST*]{} WFPC2 PSF should be negligible. The resulting profiles are shown in Figure \[sbprof\], and Sersic indices and half-light effective radii are listed in Table 3. Best-fitting Sersic indices range between $n=0.8 - 2.9$. There is a slight trend towards smaller $n$ values (more closely exponential) at fainter magnitudes, consistent with that seen in the much larger sample of Virgo dE surface brightness profiles analyzed by @bin98.
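A minimal version of such a Sersic fit (with the conventional minus sign in the exponent) might look as follows; the radii and the synthetic input profile are placeholders standing in for the measured WFPC2 profile.

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic(r, I0, r0, n):
    """Sersic surface-brightness profile I(r) = I0 * exp[-(r/r0)^(1/n)]."""
    return I0 * np.exp(-((r / r0) ** (1.0 / n)))

# Fit over 1"-20", where the nucleus and the WFPC2 PSF are unimportant (see text)
rng = np.random.default_rng(0)
r = np.linspace(1.0, 20.0, 60)                                   # arcsec (placeholder sampling)
I_obs = sersic(r, 1.0, 3.0, 1.5) * (1.0 + 0.03 * rng.normal(size=r.size))
(I0_fit, r0_fit, n_fit), _ = curve_fit(sersic, r, I_obs, p0=[1.0, 2.0, 1.0])
print(I0_fit, r0_fit, n_fit)
```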
At the distance of the Virgo Cluster, the nuclei of dE galaxies are slightly extended compared to the WFPC2 PSF. We have used the ISHAPE software developed by @lar99 to derive shape parameters for these nuclei. The intrinsic shape of the nuclei was modeled as a Plummer profile whose projected intensity scales as $I^{\rm nuc}(r) =
I_0^{\rm nuc}/[1 + (r/b_{\rm nuc})^2]^2$, where $b_{\rm nuc}$ is the scale radius of the nucleus which, for this profile, is also the half-light radius. The ISHAPE software convolves the analytic profile with the WPFC2 F555W PSF and a diffusion kernel generated by the TinyTim software [@kri97], and determines the best-fitting model parameters by minimizing residuals between the model and original two-dimensional image. Nuclear profiles were fit inside the central $1.0''$ (10 pixels) and were assumed to be circularly symmetric. The free model parameters are the effective radius, $b_{\rm nuc}$, the profile normalization, $I_0^{\rm nuc}$, and a constant background level. Although a constant background level is a good approximation for the non-nuclear component inside the fitting radius, it is more appropriate to subtract the underlying galaxy Sersic profile. Therefore, we use the ISHAPE software to fit only the profile shape, $b_{\rm nuc}$, and determine the profile normalization, $I_0^{\rm
nuc}$, such that the total nuclear magnitude equals the luminosity leftover after subtraction of the galaxy Sersic profile from the total observed surface brightness profile. For the five nucleated dEs, the resulting nuclear profiles are plotted as dotted lines in Figure \[sbprof\]; effective half-light radii and total nuclear magnitudes are given in Table 3.
Results {#res_sec}
=======
Results of the kinematic analysis of §\[vp\] are shown for all galaxies in Figure \[vp\_fig\]; the derived kinematic profiles are summarized in Table 4. The mean line-of-sight velocity and velocity dispersion are plotted as a function of major axis radius in arcseconds; the radius was measured relative to the peak position of the intensity profile along the ESI slit. The systemic radial velocity of each dE was determined from the mean of the velocity data points and subtracted from the velocity profile. The corrected heliocentric velocities are listed in Table 4 and agree, within measurement errors, with previously published radial velocity measurements for VCC 917, VCC 1073, VCC 1254 and VCC 1876 [@bin85].
Velocity Profiles: A Lack of Rotation {#rot}
-------------------------------------
The velocity profiles in Figure \[vp\_fig\] show no evidence for substantial rotation along the major axis of any of the six dE systems observed. To quantify the maximum rotation velocity allowed by the data, we have differenced the average velocities on either side of the major axis of the galaxy and divided by two ($v_{\rm rot}$ in Table 4). This quantity is not particularly meaningful for galaxies which do not show a coherent rotation curve; it merely represents an upper limit on rotational motion. Error bars on rotational motion were determined by adding in quadrature the error of the mean velocity on either side of the major axis. If the observed flattening of these galaxies were determined by rotational motion alone, the expected rotation velocity can be calculated directly from the tensor virial theorem, given the observed velocity dispersion [@bin87]. The ratio of the maximum rotational velocity to the average velocity dispersion ($v_{\rm rot}/\sigma$) is plotted versus ellipticity in Figure \[rotation\] and compared to the ratio expected from an isotropic, rotationally-flattened body. Ellipticity and average velocity dispersion are determined outside $r=1''$ in order to exclude any contributions from a central nucleus and are listed in Tables 2 and 3, respectively. The upper limits on $v_{\rm rot}/\sigma$ determined for these galaxies are significantly smaller than expected if the observed flattenings were due to rotation. Thus, from Figure \[rotation\] we conclude that these dEs are primarily flattened by anisotropic velocity dispersions.
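For reference, the tensor virial expectation for an isotropic, rotationally flattened body is commonly approximated as $(v/\sigma)_{\rm iso}\approx\sqrt{\epsilon/(1-\epsilon)}$ [@bin87]; the short sketch below evaluates this against the kind of observed ratio discussed above, using the VCC 1073 numbers quoted in the tables purely as an example.

```python
import numpy as np

def v_over_sigma_isotropic(ellipticity):
    """Approximate rotation needed to flatten an isotropic body to a given ellipticity."""
    return np.sqrt(ellipticity / (1.0 - ellipticity))

# Example: VCC 1073 (ellipticity ~0.30, v_rot ~2.1 km/s, sigma ~44.6 km/s from the tables)
eps, v_rot, sigma = 0.30, 2.1, 44.6
print(v_rot / sigma, v_over_sigma_isotropic(eps))   # ~0.05 observed vs ~0.65 expected
```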
The low rotation velocities of this study are consistent with previous measurements of dE velocity profiles in the Local Group [@ben90]. However, recent studies of dE kinematics suggest that a fraction of dE galaxies are rotationally supported. A rotational velocity of 15 [km s$^{-1}$]{} was measured by @der01 for FS 76, a dE in the Fornax Cluster, slightly below the value expected if this system is rotationally supported. @ped02 found that five of their six Virgo Cluster dEs rotated with maximum velocities between 15 to 30 [km s$^{-1}$]{}, placing them on or above the relation for rotational support. The sixth galaxy presented by @ped02, IC 794 or VCC 1073, is also in our sample. Their measurement, $v_{\rm rot} = 3.4 \pm 1.7$ [km s$^{-1}$]{}, is consistent with our observation of low rotation velocity, $v_{\rm rot} = 2.1 \pm 0.4$[km s$^{-1}$]{}. Thus, of the 11 Virgo dEs with measured velocity profiles, 5 have significant rotation velocities and 6 are non-rotating. We note that the average brightness of the rotating dE sample ($M_V = -17.6$, assuming (B-V) = 0.8 [@gav01]) is slightly brighter than that of our non-rotating dEs ($M_V =
-16.4$). In §\[fp\] we find that these two populations are also slightly separated in the Fundamental Plane. We have recently observed a larger sample of Virgo dE galaxies, some of which have significant rotation velocities and some which do not. We will explore in more depth the differences between these two classes in a forthcoming paper.
Interpreting Velocity Dispersion Profiles {#interp_vp_sec}
-----------------------------------------
The mean velocity dispersions of the six dE galaxies presented in Figure \[vp\_fig\] lie between 20 and 55 [km s$^{-1}$]{}, and show a wide range of profile shapes. Although Virgo dE nuclei are unresolved from the ground, the kinematics of the central nucleus appear to be distinguished from the underlying galaxy. Surprisingly, the central velocity dispersion can be either larger (VCC 1254) or smaller (VCC 1073, VCC 452) than the surrounding galaxy. Below, we construct dynamical models for each of the observed galaxies in order to explore the range of mass-to-light ratios, velocity dispersion anisotropy and central black hole masses allowed by the observed profiles.
### Dynamical Modeling {#models}
High spatial resolution WFPC2 imaging available for all of the observed dEs (§\[sb\]) allows dynamical modeling of the velocity dispersion profiles through the assumption that the stellar mass density is proportional to the luminosity density times some mass-to-light ratio at all radii. Solving the spherically symmetric Jeans equation, the predicted kinematics are convolved through the observational setup, allowing a direct comparison to the observations. The simplifying assumption of spherical symmetry is justified as more generalized models cannot be discriminated against without additional information such as minor axis kinematics or higher order velocity profile moments. We produce spherical models in which the radius is related to the observed semimajor/semiminor axes and ellipticity of the corresponding dE galaxy via the relation: $r=\sqrt{ab} = a \sqrt{1-\epsilon}$. The square root of the product of the semi-major and minor axis is a more appropriate quantity than the semi-major axis for these elliptical systems. The models are based on dynamical software described in more detail by @vdm94.
The luminosity density, $j(r)$, is determined for each galaxy by Abel transformation of the projected WFPC2 $V$-band surface brightness profile measured in §\[sb\]. To avoid noise amplification, the observed surface brightness profiles were first fit to an arbitrary function, a generalization of the “Nuker law” [@lau95], as shown in Figure \[sbprof\]. The total luminosity density is assumed to be composed of two parts, $j(r) = j_{\rm nuc}(r) + j_{\rm gal}(r)$, representing the central nucleus and underlying galaxy, respectively. We model the nuclear component as a Plummer model with fixed scale length, $b_{\rm nuc}$, and total nuclear luminosity, $L_{\rm nuc}$ in the $V$ band, as determined for each galaxy from the projected luminosity density in §\[sbfits\]. The luminosity density of a Plummer model, taken from @dej87, is: $$\label{jn}
j_{\rm nuc}(r) = \frac{3 L_{\rm nuc}}{4 \pi b_{\rm nuc}^3}~\Big{[}1 +
\frac{r^2}{b_{\rm nuc}^2}\Big{]}^{-5/2}$$ The luminosity density of the underlying galaxy, $j_{\rm gal}(r)$, is the total luminosity density minus the contribution from the nucleus. The mass density of stars, $\rho(r)$, is modeled as the luminosity density times a mass-to-light ratio. In our models, we allow two distinct mass-to-light ratios as free parameters, $\Upsilon_{\rm gal}$ and $\Upsilon_{\rm nuc}$, for the underlying galaxy and nuclear component. The mass density distribution of the galaxy component is then: $\rho_{\rm gal}(r) = \Upsilon_{\rm gal} \> j_{\rm gal}(r)$. The mass density distribution of the nucleus is similarly defined as: $\rho_{\rm nuc}(r) = \Upsilon_{\rm nuc} \> j_{\rm nuc}(r)$. The total mass density of the system is the sum of these densities: $\rho(r) =
\Upsilon_{\rm nuc} \> j_{\rm nuc}(r) + \Upsilon_{\rm gal} \> j_{\rm
gal}(r)$. Rearranging terms, the total density is modeled as: $$\label{rho}
\rho(r) = \Upsilon_{\rm gal} \> j(r) + [(\Upsilon_{\rm nuc} -
\Upsilon_{\rm gal}) \> j_{\rm nuc}(r)]$$ where the total luminosity density $j(r)$ is inferred from the [*HST*]{} WFPC2 surface brightness profile, $j_{\rm nuc}(r)$ is given in Eqn. (\[jn\]), and the mass-to-light ratios $\Upsilon_{\rm nuc}$ and $\Upsilon_{\rm gal}$ are free parameters. The case where $\Upsilon_{\rm nuc} = \Upsilon_{\rm gal}$ is equivalent to the mass density of the galaxy being equal to the luminosity density times a constant mass-to-light ratio.
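The Abel inversion used to obtain $j(r)$ from the projected profile can be sketched numerically as follows; the substitution $R=r\cosh u$ removes the integrable singularity at $R=r$. The Plummer pair used as a consistency check corresponds to Eq. (\[jn\]) with $L_{\rm nuc}=\pi I_0 b^2$; all numbers are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def abel_deproject(dI_dR, r):
    """Luminosity density j(r) = -(1/pi) * int_r^inf (dI/dR) dR / sqrt(R^2 - r^2),
    evaluated with the substitution R = r*cosh(u)."""
    val, _ = quad(lambda u: dI_dR(r * np.cosh(u)), 0.0, 30.0)
    return -val / np.pi

# Consistency check against the projected Plummer profile I(R) = I0 / [1 + (R/b)^2]^2
I0, b, r = 1.0, 1.0, 0.7
dI_dR = lambda R: -4.0 * I0 * (R / b**2) / (1.0 + (R / b) ** 2) ** 3
L = np.pi * I0 * b**2                                   # total luminosity of this profile
print(abel_deproject(dI_dR, r))
print(3.0 * L / (4.0 * np.pi * b**3) * (1.0 + (r / b) ** 2) ** -2.5)   # Eq. (jn)
```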
The total gravitational potential, $\Phi$, is obtained by integrating over the mass density determined by Eqn. (\[rho\]) plus an added possible contribution, $GM_{\rm BH}/r$, from a central black hole. The velocity dispersion as a function of radius, $\sigma_r$, is then obtained by solving the spherically-symmetric Jeans equation: $$\frac{d(\rho \sigma_{r}^2)}{dr} + 2 \frac{\beta \rho \sigma_{r}^2}{r}
= -\rho \frac{d\Phi}{dr}$$ where $\beta = 1 - \sigma_{\theta}^2 / \sigma_{r}^2$ describes the velocity dispersion anisotropy. Models with $\beta =0$ are isotropic, $\beta < 0$ are tangentially anisotropic, and $0 < \beta \le 1$ are radially anisotropic. The radial velocity dispersion $\sigma_r$ is numerically evaluated for any combination of $\beta$, $M_{\rm BH}$, $\Upsilon_{\rm gal}$, $\Upsilon_{\rm nuc}$, and surface brightness profile. In order to compare to observations, the velocity dispersions are projected along the line-of-sight. The projected dispersions are convolved using Monte Carlo integration with a Gaussian kernel to take into account the seeing FWHM (as determined in §\[datared\_sec\]). The dispersions are then sampled using the slit width, pixel size, and rebinning scheme specific to each observation. The predicted dispersion $\sigma(r)$ can be compared directly to the observed velocity dispersions $\sigma_i$ over all radial bins $r_i$ by the defined quantity: $$\label{chi}
\chi_{\sigma}^2=\sum_i\Big{[}\frac{\sigma_i-\sigma(r_i)}{\Delta\sigma_i}\Big{]}^2$$ The best-fitting model is determined by minimization of $\chi_{\sigma}^2$. Given the well-known degeneracy between the mass profile and velocity anisotropy, and given the quality of our data, we choose not to explore the full range of allowed parameter space. Instead, we consider three limiting cases: (1) constant mass-to-light ratio and velocity anisotropy, (2) the same model plus a central black hole, and (3) models with separate mass-to-light ratios for the nucleus and underlying galaxy and no central black hole.
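For constant $\beta$, the Jeans equation above has the standard integral solution $\rho\sigma_r^2(r) = r^{-2\beta}\int_r^\infty s^{2\beta}\rho(s)\,(d\Phi/ds)\,ds$; one way to evaluate it numerically is sketched below, using an isotropic Plummer sphere (for which $\sigma_r^2 = GM/[6\sqrt{r^2+b^2}\,]$) as a closed-form check. The mass and scale radius are placeholders, not fits to any of our galaxies.

```python
import numpy as np
from scipy.integrate import quad

G = 4.301e-6  # G in kpc (km/s)^2 / Msun

def sigma_r(r, rho, M_enc, beta):
    """Radial velocity dispersion (km/s) from the constant-beta Jeans solution
    rho*sigma_r^2(r) = r^(-2 beta) * int_r^inf s^(2 beta) rho(s) G M(<s) / s^2 ds."""
    integrand = lambda s: s ** (2.0 * beta) * rho(s) * G * M_enc(s) / s**2
    val, _ = quad(integrand, r, np.inf)
    return np.sqrt(r ** (-2.0 * beta) * val / rho(r))

# Check against the isotropic Plummer sphere: sigma_r^2 = G M / (6 sqrt(r^2 + b^2))
M, b, r = 1.0e8, 0.1, 0.15                                  # Msun, kpc, kpc (placeholders)
rho = lambda s: 3.0 * M / (4.0 * np.pi * b**3) * (1.0 + (s / b) ** 2) ** -2.5
M_enc = lambda s: M * s**3 / (s**2 + b**2) ** 1.5
print(sigma_r(r, rho, M_enc, beta=0.0))
print(np.sqrt(G * M / (6.0 * np.sqrt(r**2 + b**2))))
```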
### Mass-to-Light Ratios and Orbital Anisotropy {#ml_aniso_sec}
We first consider models without a central black hole for which the free parameters are a single mass-to-light ratio $\Upsilon_V$ (i.e. $\Upsilon_{\rm gal} = \Upsilon_{\rm nuc}$) and velocity dispersion anisotropy, $\beta$, both independent of radius. For each galaxy, models were run for values of the velocity dispersion anisotropy ranging between $-3 \le \beta \le 0.75$. Best fitting values of $\beta$ and $\Upsilon_V$ were determined by overall minimization of $\chi_{\sigma}^2$ and are listed in Table 4. Formal $1\sigma$ (68% confidence) error bars are calculated for each individual free model parameter by the variation needed to increase $\chi_{\sigma}^2$ by 1 with respect to its minimum value [@pre92]. Best-fit models, as well as several representative $\beta$-value models are plotted over the observed data points in Figure \[aniso\_models\]. Although isotropic models ($\beta=0$) do not fit the observed profiles in detail, such models do in general reproduce the dip or rise in the central dispersion observed in all six dE galaxies. Most of the galaxies are best fit with tangential anisotropy. In some cases, the amount of anisotropy needed to fit the profiles is unphysically large, motivating the models described in §\[nuc\_vs\_gal\_sec\].
The $V$-band mass-to-light ratios determined for our six Virgo dEs range between $3~\le~\Upsilon_V~\le~6$. These mass-to-light ratios are plotted against the absolute total magnitude of each galaxy in Figure \[fig\_ml\] and compared to higher luminosity classical elliptical galaxies of @mag98. The @mag98 galaxies show a clear trend of decreasing mass-to-light ratio towards fainter magnitudes. The observed dEs tend to have larger mass-to-light ratios at a given absolute magnitude than expected by extrapolation of this relationship.
Combining the WFPC2 colors of these dE galaxies with the dynamically determined mass-to-light ratios, it is possible to roughly determine ages and metallicities for these galaxies. The colors of the six dEs lie in the range $1.0 \le (V-I) \le 1.2$, as measured by @sti01 from WFPC2 data averaged inside $r \le 10''$. Approximating the stellar populations of these dE galaxies by a single-burst population [@wor94], the ages and metallicities implied by the above combined constraints lie between 5 and 12 Gyr and $-1 \le \rm [Fe/H] \le
0$ dex, respectively. This rough calculation suggests that the mass determined from the observed kinematics can be accounted for by stellar populations alone without the need for a significant dark matter component, at least inside the radius of our observations ($\approx 1$ effective radius). This does not rule out a significant dark matter component at larger radii. Accurate determination of the ages and metallicities of these galaxies requires a rigorous analysis of their line strengths and will be presented in a forthcoming paper.
### Upper Limits on the Mass of a Central Black Hole {#bh_sec}
We next allow an additional free model parameter in the form of a central black hole with the goal of placing upper limits on the black hole mass allowed by our kinematic data. If, for example, dEs are the morphologically-transformed remnants of larger progenitor galaxies, limits on the central black hole mass can place potentially interesting constraints on such a progenitor population. Models were run for a two-dimensional grid of velocity anisotropy versus black hole mass. For each grid point, the best fitting mass-to-light ratio is determined. Contours of constant $\Delta \chi_{\sigma}^2$ are shown for these two parameters in Figure \[bh\_models\]. Confidence intervals were assigned to $\Delta \chi_{\sigma}^2$ values in a two dimensional parameter space, as discussed in @pre92. Black hole masses greater than $M_{\rm BH} > 10^{7} {\mbox{\,$M_{\odot}$}}$ can be ruled out at the 99.9% confidence level. For most objects, a zero mass black hole model is either the best fitting model, or statistically similar to the best fit at the 90% confidence level ($1.7\sigma$). The galaxy VCC 1254 is the only dE in which a non-zero black hole mass, $M_{\rm BH} = 9\times10^{6} {\mbox{\,$M_{\odot}$}}$, is a significantly better fit to the data than models without a central black hole. This does not necessarily imply the presence of a black hole, as we will show below that this profile is equally well fit by a model in which the mass-to-light ratio of the nucleus is larger than the underlying galaxy. In addition, although the upper limits on black hole mass determined in this section are robust, actual black hole mass determinations would require more complicated models which allow $\beta$ variations with radius. Upper limits on central dE black hole mass are compared, in Figure \[fig\_sigBH\], to the black hole mass-bulge velocity dispersion relationship, $M_{\rm BH}-\sigma_e$, derived for bulge-dominated galaxies [@geb00; @fer00; @tre02]. Although this relationship may not be applicable to dE galaxies, which lack a bulge component, our upper limits are still consistent with the relationship. The implication of these upper limits on possible dE progenitor galaxies is discussed in §\[disc\_sec\].
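For reference, the $\Delta\chi_{\sigma}^2$ thresholds appropriate for joint confidence regions in the two parameters of Figure \[bh\_models\] are the standard values tabulated in @pre92 and can be generated as follows.

```python
from scipy.stats import chi2

# Delta chi^2 thresholds for joint confidence regions in two parameters
for conf_level in (0.683, 0.90, 0.999):
    print(conf_level, chi2.ppf(conf_level, df=2))   # ~2.30, ~4.61, ~13.8
```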
### Nuclear versus Galaxy Mass-to-Light Ratios {#nuc_vs_gal_sec}
The kinematic profiles of the five nucleated dEs in Figure \[vp\_fig\] show that the nuclei tend to have velocity dispersions distinct from the surrounding galaxy. In addition, photometric studies suggest that dE nuclei tend to have different colors than the underlying light of the host dE galaxy [@sti01; @dur97]. Motivated by these observations, we consider models allowing two distinct mass-to-light ratios, one for the nucleus, $\Upsilon_{\rm
nuc}$, and another for the underlying galaxy, $\Upsilon_{\rm gal}$. We explore only isotropic models ($\beta = 0$) and search for combinations of $\Upsilon_{\rm gal}$, $\Upsilon_{\rm nuc}$ which minimize $\chi_{\sigma}^2$. For VCC 1254, in which the nuclear dispersion is larger than the surrounding galaxy, the best fit nuclear mass-to-light ratio is twice that for the surrounding galaxy ($\Upsilon_{\rm nuc} = 2.1 \Upsilon_{\rm gal}$). This model fits the data equally well as the models presented in the previous two sections. However, a larger nuclear mass-to-light ratio implies an older, and therefore redder, stellar population, contrary to observations that the nucleus of VCC 1254 is bluer than the surrounding galaxy [@sti01; @dur97]. The true dynamical state of VCC 1254 is likely to lie between the three extreme models presented in this and previous sections.
For the four nucleated galaxies in which the velocity dispersion dips in the central regions (VCC 452, VCC 1073, VCC 1577, and VCC 1876), a smaller nuclear mass-to-light ratio ($\Upsilon_{\rm nuc} <
\Upsilon_{\rm gal}$) is only a marginally better fit to the data. In these four systems, we are not able to directly constrain the nuclear mass-to-light ratio. Even in the unphysical case in which the nucleus contributes no mass, isotropic models are inadequate fits to the profiles of VCC 452 and VCC 1073, implying that these galaxies must have some degree of tangential velocity anisotropy. Isotropic, single mass-to-light ratio models are adequate fits to the profiles of VCC 1577 and VCC 1876, and variations in $\Upsilon_{\rm nuc}$ do not significantly improve the fit. This can be understood because the nuclear component does not dominate the observed spectroscopic light of these galaxies, not even in the central data point. Inside $r<1''$, the nucleus contributes between 5% and 25% of the total light, as compared to 60% for VCC 1254. Thus, the measured central velocity dispersion is not a good estimate of the velocity dispersion of the nucleus. In order to place these nuclei on the Fundamental Plane (see §\[fp\_nuc\]), we do need an estimate of the nuclear velocity dispersions. For this, we assume a nuclear mass-to-light ratio equal to the galaxy ($\Upsilon_{\rm nuc} =
\Upsilon_{\rm gal}$) and calculate the central projected velocity dispersion for a Plummer model [@dej87] with total luminosity and scale radius as determined in §\[sbfits\]. The resulting nuclear velocity dispersions are given in Table 4.
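A closed-form shortcut for this last step, assuming the nucleus is an isotropic Plummer sphere, is the central projected dispersion $\sigma_p^2(0)=3\pi G M_{\rm nuc}/(64\, b_{\rm nuc})$ with $M_{\rm nuc}=\Upsilon_{\rm nuc}L_{\rm nuc}$; this is a standard Plummer result and only a sketch of the estimate, with placeholder numbers, not the exact procedure applied to each galaxy.

```python
import numpy as np

G = 4.301e-6  # kpc (km/s)^2 / Msun

def plummer_sigma_p0(M_nuc, b_nuc):
    """Central projected velocity dispersion (km/s) of an isotropic Plummer sphere
    of mass M_nuc (Msun) and scale radius b_nuc (kpc)."""
    return np.sqrt(3.0 * np.pi * G * M_nuc / (64.0 * b_nuc))

# Placeholder example: a 10^7 Lsun nucleus with Upsilon_nuc = 5 and b_nuc = 20 pc
print(plummer_sigma_p0(M_nuc=5.0 * 1.0e7, b_nuc=0.020))   # ~40 km/s
```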
The Fundamental Plane {#fp}
---------------------
In the multivariate space defined by central velocity dispersion, $\sigma_0$, effective surface brightness, $\mu_{\rm eff}$, and effective radius, $r_{\rm eff}$, dE galaxies occupy a region of the so-called Fundamental Plane distinct from classical elliptical galaxies. The separation is best demonstrated by the $\kappa$-space projection of this parameter space defined by @ben92 as: $$\kappa_1 \equiv (\log [\sigma_0^{2}] + \log r_{\rm eff}) / \sqrt2$$ $$\kappa_2 \equiv (\log [\sigma_0^{2}] + 2 \log I_{\rm eff} - \log r_{\rm
eff})/\sqrt6$$ $$\kappa_3 \equiv (\log [\sigma_0^{2}] - \log I_{\rm eff} - \log r_{\rm
eff})/\sqrt3$$ where $I_{\rm eff}$ is defined as $10^{-0.4(\mu_{\rm eff}-27)}$ and is the mean intensity inside the radius $r_{\rm eff}$. These coordinates are related to physical quantities as follows: $\kappa_1$ is proportional to the logarithm of mass, $\kappa_2$ is proportional to the effective surface brightness times mass-to-light ratio and $\kappa_3$ is proportional to the logarithm of mass-to-light ratio. To compare the location of our dEs in the Fundamental Plane to other galaxy types, we plot data compiled by @bur97 for classical ellipticals, spiral bulges, previously observed dEs, dwarf spheroidals and globular clusters. These data have been compiled in the $B$-band. For comparison, we transform our dE data to the $B$-band assuming $(B-V) = 0.8$ [@gav01]. In addition, we add four of the five rotating dE galaxies presented by @ped02 for which photometric data is available. Photometric properties for UGC 7436/VCC 543 were determined from WFPC2 imaging as in §\[sbfits\]. The properties of the remaining objects were taken from @ben92.
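The $\kappa$-space coordinates defined above can be computed directly from the observables; the snippet below follows the conventions of @ben92 (with $\sigma_0$ in [km s$^{-1}$]{}, $r_{\rm eff}$ in kpc and $\mu_{\rm eff}$ in $B$-band mag arcsec$^{-2}$, which is our assumption about the units), and the example values are placeholders rather than entries from our tables.

```python
import numpy as np

def kappa_space(sigma0, r_eff, mu_eff):
    """kappa-space coordinates (Bender et al. 1992): sigma0 in km/s, r_eff in kpc,
    mu_eff in B-band mag/arcsec^2; I_eff = 10**(-0.4*(mu_eff - 27))."""
    I_eff = 10.0 ** (-0.4 * (mu_eff - 27.0))
    log_s2, log_r, log_I = np.log10(sigma0**2), np.log10(r_eff), np.log10(I_eff)
    k1 = (log_s2 + log_r) / np.sqrt(2.0)
    k2 = (log_s2 + 2.0 * log_I - log_r) / np.sqrt(6.0)
    k3 = (log_s2 - log_I - log_r) / np.sqrt(3.0)
    return k1, k2, k3

print(kappa_space(sigma0=30.0, r_eff=1.0, mu_eff=22.5))   # placeholder example
```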
### dE Galaxies in the Fundamental Plane {#fp_gal}
As seen in the edge-on, $\kappa_1$ vs. $\kappa_3$, view of the Fundamental Plane (lower left panel, Fig. \[fp\_fig\]) dwarf ellipticals appear to lie in a plane parallel to, but offset from classical ellipticals. In the face-on, $\kappa_1$ vs. $\kappa_2$, view (upper left panel, Fig. \[fp\_fig\]), dEs lie in a very different region of this plane, on a sequence perpendicular to the locus of classical ellipticals. The offset in $\kappa_3$ was first noted by @ben92 and can be interpreted as either non-homology between dwarf and classical ellipticals or as a difference in mass-to-light ratios. We have shown in Figure \[fig\_ml\] that dEs tend to have larger mass-to-light ratios at a given absolute magnitude compared to classical ellipticals, favoring the latter interpretation of the $\kappa_3$ offset. Comparing our non-rotating dEs to the rotating dEs of @ped02, these two groups lie in slightly different regions of the Fundamental Plane. The rotating dEs lie at larger $\kappa_1$ and $\kappa_3$ than the non-rotating sample, suggesting that they have both higher masses and mass-to-light ratios. From the location of the rotating dEs in the Fundamental Plane, these galaxies are not part of the low luminosity extension of classical ellipticals known to be rotationally-supported [@dav83]. Thus, dEs appear to have a wider range of rotational properties than previously assumed. The separation in both luminosity and Fundamental Plane space between these two samples suggests a correlation between rotation and another physical quantity, possibly mass. However, more data is required to establish such a correlation.
### dE Nuclei in the Fundamental Plane {#fp_nuc}
In the right panels of Figure \[fp\_fig\], the nuclei of the five observed nucleated dEs are plotted. Unlike the underlying dE galaxies in the Fundamental Plane, dE nuclei lie nearest to the region occupied by globular clusters. These $\kappa$-space parameters were calculated using central velocity dispersions determined directly from a Plummer model fit to the nucleus. This is a more accurate estimate of the nuclear velocity dispersion than the measured central velocity dispersion, but requires the assumption that the nuclear mass-to-light ratio equals that of the galaxy, as discussed in §\[nuc\_vs\_gal\_sec\]. This assumption most strongly affects values of $\kappa_3$. However, for any reasonable assumed nuclear mass-to-light ratio, the dE nuclei lie closest to globular clusters in all three $\kappa$ indices. The absolute luminosities of these dE nuclei ($-12.3 \le M_{\rm V} \le -8.5$) are as bright as or brighter than the most luminous Galactic globular clusters [@djo93; @har96]. The nuclear effective radii are also larger than an average Galactic globular cluster, but are smaller than the largest known globulars. The resulting central luminosity densities of dE nuclei, determined from the Plummer models, are comparable to the average globular cluster central luminosity density. The position of the best-studied dE nucleus, that of the Local Group dE NGC 205, a well-resolved supermassive star cluster of intermediate age and absolute magnitude $M_V=-9.6$ [@jon96], is also plotted on the Fundamental Plane. This nucleus lies squarely in the region occupied by globular clusters. The offset position of the Virgo dE nuclei, particularly in the face-on view of the Fundamental Plane (top right panel Fig. \[fp\_fig\]), relative to Galactic globular clusters is most likely due to larger nuclear masses.
Discussion and Conclusions {#disc_sec}
==========================
Velocity and velocity dispersion profiles are presented for the major axes of six dE galaxies in the Virgo Cluster. These galaxies do not show evidence for substantial rotation; upper limits on rotation velocities are well below that expected if these objects were rotationally flattened. Dynamical models for these galaxies suggest mass-to-light ratios in the range $3\le \Upsilon_V \le6$. We argue that such ratios are expected for intermediate to old stellar populations and thus these dEs do not require significant dark matter inside an effective radius. Our observations do not rule out significant dark matter in dEs at larger radii as demonstrated by giant elliptical galaxies which exist in massive dark halos, but are not necessarily dark matter dominated at small radii [@ger01]. In Fundamental Plane space, we find that the Virgo dE galaxies, similar to previously observed dEs, lie in a plane parallel to, but offset from, that occupied by normal elliptical galaxies. In this space, dE nuclei lie near the region occupied by Galactic globular clusters.
The origin of nuclei in dE galaxies remains an open question. In the present sample, there is no obvious difference between the single non-nucleated dE galaxy (VCC 917) and the underlying galaxies of the observed nucleated dEs. The mass-to-light ratio, anisotropy, and photometric parameters measured for VCC 917 are indistinguishable from those determined outside the nucleus of the other five dEs. However, as a population, non-nucleated dE galaxies in the Virgo Cluster do have different properties. They are observed to be less spatially concentrated, have lower specific globular cluster frequencies, and, on average, have flatter shapes as compared to nucleated dE galaxies [@san85; @mil98; @ryd99]. Proposed scenarios for the origin of dE nuclei include the remnant cores of larger stripped galaxies [@ger83], the results of gas infall and star formation or the coalescence of several globular clusters whose orbits have decayed to the dE center [@oh00]. We have shown that the observed dE nuclei share many properties with globular clusters, suggesting similar formation processes.
Since dE galaxies are preferentially found in dense environments, it is likely that galaxy interactions play a large role in their formation and evolution. The models of @moo98 suggest that galaxy harassment in clusters can morphologically transform a spiral galaxy into a dwarf elliptical. Harassment tends to increase internal velocity dispersions, but is less efficient in disrupting rotational motion and is not obviously reconciled with the low rotational velocities observed in the present dE sample. If dEs are the morphologically-transformed remnants of larger progenitor galaxies, a constraint on such a progenitor population is provided by the central black hole mass limits determined in §\[bh\_sec\]. The upper limit of $\sim10^7{\mbox{\,$M_{\odot}$}}$ for the observed dE galaxies implies that any dE progenitor must have had a bulge dispersion less than 100[km s$^{-1}$]{}, assuming the $M_{\rm BH}-\sigma_e$ relation [@tre02]. Although this is not a stringent constraint on dE galaxy formation models, higher spatial resolution kinematics, and therefore more stringent mass limits, could be a significant constraint on such models.
As the number of dE galaxies with measured internal kinematics increases, their position in the Fundamental Plane strengthens the conclusion that dwarf and classical elliptical galaxies evolve via very different physical processes. To determine whether dwarf ellipticals as a galaxy class evolve under homogeneous conditions requires more observations. A critical question is understanding the apparent dichotomy between the anisotropy-supported dEs presented in this paper and the rotationally-supported dEs presented by @ped02 and @der01. The fact that rotating and non-rotating dEs appear to form a “sequence” in Fundamental Plane space, with the latter having somewhat lower mean luminosity, mass, and mass-to-light ratio, suggests that these are not two distinct types of dE galaxies but rather are part of a continuous family. Larger samples are required to establish what, if any, physical property correlates with the observed rotational velocities and what this implies for dE galaxy formation.
We would like to thank Dennis Zaritsky, Ruth Peterson and Doug Lin for help with the Keck/HIRES data discussed in §\[esi\_hires\_comp\_sec\]. We are grateful to Bryan Miller for making his reduced [*HST*]{} WFPC2 images available to us and to Soeren Larsen for help with the ISHAPE software. M.G. acknowledges support from the STScI Director’s Discretionary Research Fund.
Bender, R., Burstein, D., & Faber, S. M. 1992, , 399, 462
Bender, R., & Nieto, J.-L. 1990, A&A, 239, 97
Bender, R., Paquet, A., & Nieto, J.-L. 1991, A&A, 246, 349
Binggeli, B., & Jerjen, H. 1998, A&A, 333, 17

Binggeli, B., Sandage, A., & Tammann, G. A. 1985, , 90, 1681

Binggeli, B., Tammann, G. A., & Sandage, A. 1987, , 94, 251
Binney, J., & Tremaine, S.1987, Galactic Dynamics, Princeton University Press, Princeton
Burstein, D., Bender, R., Faber, S. M., & Nolthenius, R. 1997, , 114, 1365
Davies, R. L., Efstathiou, G., Fall, S. M., Illingworth, G., & Schechter, P. L. 1983, , 266, 41
Dejonghe, H. 1987, , 224, 13
Dekel, A., & Silk, J. 1986, , 303, 39
de Vaucouleurs, G. 1948, Ann. d’Astrophys., 11, 247
Djorgovski, S. G. 1993, in “Structure and Dynamics of Globular Clusters”, ed. S. G. Djorgovski and G. Meylan (San Fransisco: ASP), 373
Durrell, P. R. 1997, AJ, 113, 531
Ferrarese, L., & Merritt, D. 2000, , 539, L9
Ferguson, H. C., & Binggeli, B.1994, A&A Rev., 6, 67
Freedman, W. L., Madore, B. F., Gibson, B. K., Ferrarese, L., Kelson, D. D., Sakai, S., Mould, J. R., Kennicutt, R. C., Jr., Ford, H. C., Graham, J. A., Huchra, J. P., Hughes, S. M. G., Illingworth, G. D., Macri, L. M., & Stetson, P. B. 2001, , 553, 47
Gavazzi, G., Zibetti, S., Boselli, A., Franzetti, P., Scodeggio, M., & Martocchi, S. 2001, A&A, 372, 29
Gebhardt, K., Bender, R., Bower, G., Dressler, A., Faber, S. M., Filippenko, A. V., Green, R., Grillmair, C., Ho, L. C., Kormendy, J., Lauer, T., Magorrian, J., Pinkney, J., Richstone, D., & Tremaine, S. 2000, , 539, L13
Geha, M., Guhathakurta P., & van der Marel, R. P. 2002, in “The Shapes of Galaxies and their Halos”, ed. P. Natarajan, World Scientific, in press (astro-ph/0107010)
Gerhard, O., Kronawitter, A., Saglia, R. P., & Bender, R. 2001, , 121,1936
Gerola, H., Carnevali, P., & Salpeter, E. E. 1983, , 268, L75
Harris, W. E. 1996, , 112, 1487
Held, E. V., Mould, J. R., & de Zeeuw, P. T. 1990, , 100, 415
Holtzman, J. A., Burrows, C. J., Casertano, S., Hester, J. J., Trauger, J. T., Watson, A. M., & Worthey, G.1995, , 107, 1065
Jones, D. H., Mould, J. R., Watson, A. M., Grillmair, C., Gallagher, J. S., Ballester, G. E., Burrows, C. J., Casertano, S., Clarke, J. T., Crisp, D., Griffiths, R. E., Hester, J. J., Hoessel, J. G., Holtzman, J. A., Scowen, P., Stapelfeldt, K. R., Trauger, J. T., & Westphal, J. A. 1996, , 466, 742
Krist, J., & Hook, R. 1997, The TinyTim User’s Guide (Baltimore: STScI)
Larsen, S. S. 1999, A&A Suppl., 139, 393
Lauer, T. R., Ajhar, E. A., Byun, Y.-I., Dressler, A., Faber, S. M., Grillmair C., Kormendy, J., Richstone, D., & Tremaine, S. 1995, , 110, 2622
Magorrian, J., Tremaine, S., Richstone, D., Bender, R., Bower, G., Dressler, A., Faber, S. M., Gebhardt, K., Green, R., Grillmair, C., Kormendy, J., & Lauer, T. 1998, , 115, 2285
Mayer, L., Governato, F., Colpi, M., Moore, B., Quinn, T., Wadsley, J., Stadel, J., & Lake, G. 2001, , 559, 754
Miller, B. W., Lotz, J. M., Ferguson, H. C., Stiavelli, M., & Whitmore, B. C. 1998, , 508, L133
Moore, B., Lake, G., & Katz, N. 1998, , 495, 139
Oh, K. S., & Lin, D. N. C. 2000, , 543, 620
Pedraz, S., Gorgas, J., Cardiel, N., Sanchez-Blazquez, P., & Guzman, R. 2002, , 332, L59
Peterson, R. C., & Caldwell, N. 1993, , 105, 1411
Press, W. H., Teukolsky, S. A., Vetterling, W. T., Flannery, B. P. 1992, Numerical Recipes, Cambridge University Press, Cambridge, §15.6
de Rijcke, S., Dejonghe, H., Zeilinger, W. W., & Hau, G. K. T. 2001, , 559, L21
Ryden, B., Terndrup, D. M., Pogge, R. W., & Lauer, T. R. 1999, , 517, 650
Sandage, A., Binggeli, B., & Tammann, G. 1985, , 90, 1759
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, , 500, 525
Sérsic, J. L. 1968, Atlas de Galaxias Australes (Córdoba: Obs. Astron., Univ. Nac. Córdoba)
Sheinis, A. I., Bolte, M., Epps, H. W., Kibrick, R. I., Miller, J. S., Radovan M. V., Bigelow, B. C., & Sutin, B. M. 2002, , 114, 851
Stiavelli, M., Miller, B. W., Ferguson, H. C., Mack, J., Whitmore, B. C., & Lotz, J. M. 2001, , 121, 1385
Tremaine, S., Gebhardt, K., Bender, R., Bower, G., Dressler, A., Faber, S. M., Filippenko, A. V., Green, R., Grillmair, C., Ho, L. C., Kormendy, J., Lauer, T. R., Magorrian, J., Pinkney, J., & Richstone, D. 2002, , 574, 740
van der Marel, R. P. 1994, , 270, 271
van der Marel, R. P., & Franx, M. 1993, , 407, 525
Worthey, G. 1994, , 95, 107
[lccccccc]{} VCC 452 & 12:21:04.7 & 11:45:18 & dE4,N & 0.08 & 15.34 & 0.09 & $-15.67$\
VCC 917/IC 3344 & 12:26:32.4 & 13:34:43 & dE6 & 0.45 & 13.93 & 0.11 & $-17.10$\
VCC 1073/IC 794 & 12:28:08.6 & 12:05:36 & dE3,N & 0.30 & 13.82 & 0.09 & $-17.19$\
VCC 1254 & 12:30:05.3 & 08:04:29 & dE0,N & 0.05 & 14.58 & 0.07 & $-16.41$\
VCC 1577 & 12:34:38.4 & 15:36:10 & dE4,N & 0.17 & 15.24 & 0.09 & $-15.77$\
VCC 1876/IC 3658& 12:41:20.4 & 14:42:02 & dE5,N & 0.23 & 14.65 & 0.10 & $-16.37$\
[lcrc]{} VCC 452 & 3600 & $-34$ & 1.1\
VCC 917/IC 3344 & 3600 & 52 & 0.9\
VCC 1073/IC 794 & 3600 & $-60$ & 0.8\
VCC 1254 & 3600 & 0 & 0.8\
VCC 1577 & 6000 & 20 & 1.0\
VCC 1876/IC 3658& 3600 & 68 & 1.0\
[lccccccc]{} VCC 452 &$\>$ 9.6 (0.71)& 22.3 & 1.6 & 0.15 (0.011) & 20.7 & 22.54 & $\,-8.47$\
VCC 917/IC 3344 & 12.2 (0.90) & 21.4 & 2.9 & ... & ... & ... & ...\
VCC 1073/IC 794 & 11.1 (0.82) & 21.1 & 1.9 & 0.13 (0.010) & 17.4 & 19.86 & $-11.15$\
VCC 1254 & 14.4 (1.07) & 22.4 & 2.9 & 0.17 (0.013) & 16.7 & 18.67 & $-12.32$\
VCC 1577 & 10.5 (0.78) & 22.4 & 1.1 & 0.16 (0.011) & 20.1 & 22.23 & $\,-8.78$\
VCC 1876/IC 3658& 10.5 (0.78) & 21.8 & 0.8 & 0.11 (0.008) & 17.9 & 20.92 & $-10.10$\
[lcccccc]{} VCC 452 & 1380& $1.0\pm1.7$ & 23.8 &$\,$ 7.7& $5.28\pm0.82$ & $-
0.27\pm0.23$\
VCC 917/IC 3344 & 1186& $0.4\pm0.4$ & 31.1 & ... & $3.41\pm0.14$ & $-
0.87\pm0.15$\
VCC 1073/IC 794 & 1862& $2.1\pm0.4$ & 44.6 & 31.7& $5.83\pm0.17$ & $-
0.64\pm0.08$\
VCC 1254 & 1220& $0.9\pm0.9$ & 31.0 & 48.6& $5.99\pm0.17$ & $-
2.56\pm0.98$\
VCC 1577 &$\,$ 361& $1.3\pm0.7$ & 26.8 &$\,$ 9.8& $5.76\pm0.86$ & $-
0.07\pm0.17$\
VCC 1876/IC 3658&$\,$ 95& $1.2\pm1.4$ & 25.7 & 16.5& $3.33\pm0.83$ &$\>~~0.10\pm0.23$\
[lcccccc]{} VCC 452 & 1.84 & 2.45 & 0.78 & $-0.11$ & 3.33 & 0.87\
VCC 917/IC 3344 & 2.07 & 2.80 & 0.63 & ... & ... & ...\
VCC 1073/IC 794 & 2.29 & 3.05 & 0.79 & 0.71 & 4.93 & 0.85\
VCC 1254 & 2.13 & 2.45 & 0.83 & 1.04 & 5.26 & 0.86\
VCC 1577 & 1.93 & 2.45 & 0.82 & 0.03 & 3.61 & 0.85\
VCC 1876/IC 3658& 1.94 & 2.64 & 0.70 & 0.24 & 4.53 & 0.71\
[^1]: Data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
[^2]: Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
[^3]: Based on the surface brightness profiles in Figure \[sbprof\] we estimate that the “sky” spectrum is contaminated by galaxy light at the level of $\lesssim1\%$ and $\approx5\%$ relative to the dE’s center, with and without the nuclear contribution, respectively. In the outer parts of the dE profile ($r=5''$), the contamination level is higher (20%–30%), but even this should not be a problem since there are no strong radial gradients in the mean velocity and velocity dispersion in the outer parts of the galaxies.
---
abstract: '[In this work, analyzing the propagation of electromagnetic waves in the field of gravitational waves, we show the presence and significance of the so-called surfing effect for pulsar timing measurements. It is shown that, due to the transverse nature of gravitational waves, the surfing effect leads to enormous pulsar timing residuals if the speed of gravitational waves is smaller than the speed of light. This fact allows us to place significant constraints on the parameter $\epsilon$, which characterizes the relative deviation of the speed of gravitational waves from the speed of light. We show that the existing constraints from pulsar timing measurements already place stringent limits on $\epsilon$ and consequently on the mass of the graviton $m_g$. These limits on $m_g$ are three orders of magnitude stronger than the current constraints from Solar System tests. The current constraints also allow us to rule out massive gravitons as possible candidates for cold dark matter in the galactic halo. In the near future, the gravitational wave background from extragalactic supermassive black hole binaries, along with the expected sub-microsecond pulsar timing accuracy, will allow us to achieve constraints of $\epsilon\lesssim0.4\%$ and possibly stronger.]{}'
author:
- 'D. Baskaran'
- 'A. G. Polnarev'
- 'M. S. Pshirkov'
- 'K. A. Postnov'
title: Limits on the speed of gravitational waves from pulsar timing
---
Introduction \[Introduction\]
=============================
Gravitational wave astronomy is an active field of research which promises to open up a new window into the physical universe [@Thorne1987], [@Allen1997], [@glpps2001],[@CutlerThorne2001], [@Hughes2003], [@Grishchuk2003], [@sathya2005]. The current and future laser interferometric gravitational wave detectors, high precision pulsar timing, along with measurements of the anisotropies in the temperature and polarization of the Cosmic Microwave Background have the potential to discover gravitational waves in a broad range of frequencies in the near future (see [@LIGOwebsite], [@Jenetetal2006], [@PLANCKbluebook] for recent discussions).
In this paper we shall be mainly interested in pulsar timing as a laboratory for gravitational wave physics. Propagation of pulsar signal through space-time perturbed by gravitational waves results in appearance of anomalous timing residuals (i.e. differences between observed and theoretically predicted times of arrival). Pulsar timing provides a unique tool for observing gravitational waves in low-frequency band ($10^{-7} ~\mathrm{Hz}<f_{gw}<10^{-9} ~\mathrm{Hz} $) [@Sazhin1978], [@Detweiler1979], [@Bertotti1983], [@Cordes2004], [@Hobbs2005], [@Jenetetal2005], [@Jenetetal2006]. The main sources of gravitational waves at these frequencies are expected to be of extragalactic origin. The strongest sources would be supermassive black hole binaries in the center of galaxies [@WyitheLoeb2003], [@JaffeBacker2003], [@Enoki2004], [@Sesana2008]. Relic gravitational waves, which are the remnants from the early history of the universe, may also contribute a significant fraction to the gravitational wave background at these frequencies [@Grishchuk1974], [@Grishchuk2005]. Pulsar timing could also measure gravitational waves from superstrings [@Maggiore2000], as well as several other exotic sources [@Hogan2006].
The main methods to detect gravitational waves are based on the analysis of their interaction with electromagnetic fields [@LandauLifshitz], [@mtw], [@GrishchukPolnarev1980]. The interaction of gravitational waves with electromagnetic waves leaves measurable imprints on the latter. For example, the phase variations in the electromagnetic wave propagating in the field of a gravitational wave, and its implications for space radio interferometry were studied in [@bkpn1990] (see also [@bkpn1992]). In [@PolnarevBaskaran2008], analyzing these phase variations in a situation when the speed of gravitational waves could be smaller than the speed of light, the authors introduced the concept of “surfing effect" and studied its implications for the precision interferometry measurements. In this paper we shall consider the implications of the surfing effect for pulsar timing measurements. As we shall show, due to the transverse nature of gravitational waves, the surfing effect can lead to enormous observable pulsar timing residuals if the speed of gravitational waves is smaller than the speed of electromagnetic waves. We shall use this fact, along with the expected precision of pulsar timing measurements, to place stringent upper limits on the parameter $\epsilon=(c-v_{gw})/c$ which characterizes the deviation of speed of gravitational waves from the speed of light. We show that, for a realistic gravitational wave background and a reasonable time duration of observations, the achievable limits are $\epsilon\lesssim 0.4 \%$. Constraining the speed of gravitational waves is an interesting experimental challenge attracting much theoretical and experimental interest [@WillBook], [@Will2001], [@Kopeikin2004]. We argue that the constraint on $\epsilon$ from pulsar timing would provide the strongest current limitations on the deviation of speed of gravitational waves from speed of light.
It is worth mentioning that the surfing effect considered in this paper is quite generic. The surfing effect occurs in any physical situation where the phase speed of gravitational waves is smaller than the phase speed of electromagnetic waves [@bkpn1990], [@PolnarevBaskaran2008]. For example, this is the case in theories which predict a non vanishing rest mass for graviton [@MassiveGravity], [@WillBook], [@BabakGrishchuk2003]. Although, generically, these theories predict extra polarization states for gravitational waves, in our work we shall restrict our analysis to effects caused only by transverse traceless (TT) gravitational waves. Another possible scenario for the surfing effect to arise is to consider the interaction of gravitational waves and electromagnetic waves in the presence of plasma. In this case the phase speed of gravitational waves remains unchanged and is equal to $c$ (i.e. the speed of light in vacuum), while the phase speed of electromagnetic waves becomes generally greater than $c$ [@Jackson].
The plan of the paper is as follows. We shall begin in Section \[SingleWave\] with the analysis of propagation of an electromagnetic wave in the field of a single monochromatic plane gravitational wave. We shall calculate the timing residuals due to a single gravitational wave and discuss the manifestations and physical consequences of the surfing effect. In Section \[ArbitraryWaveField\] we generalize the surfing effect for the case of an arbitrary gravitational wave field. We derive the statistical properties of the timing residual signal based on the statistical properties of the gravitational wave field. In Section \[upperlimits\] we calculate the achievable constraints on $\epsilon$ depending on the strength of the gravitational wave background characterized by energy density parameter $\Omega_{gw}$. In Section \[physicalconsequences\] we study the physical consequences of the surfing effect in pulsar timing. We show that the gravitational wave background from extragalactic black holes allows to place strong limits on $\epsilon$. Furthermore we show that the surfing effect can also place a strong upper bound on the mass of graviton. Finally, we conclude the paper in Section \[conclusions\] with a summary of the main results of this work.
Pulsar timing residuals for a single monochromatic gravitational wave\[SingleWave\]
===================================================================================
In this paper we shall be working in the framework of a slightly perturbed Minkowski space time with coordinates $x^\mu = (ct,x^i)$ and the metric given by $$\label{metric}
ds^2 = -c^2dt^2+\left(\delta_{ij}+h_{ij}\right)dx^idx^j,$$ where $h_{ij}$ is the gravitational wave perturbation. For clarity and in order to gain physical insight into the problem, in this section, we shall consider the case of a single monochromatic plane gravitational wave. In the next section, we shall generalize our analysis to the case of an arbitrary gravitational wave field. For a monochromatic gravitational wave the metric perturbation $h_{ij}$ takes the form [@LandauLifshitz], [@mtw] $$\label{singlegwmetric}
h_{ij} = h\, p_{ij}e^{ik_\mu x^\mu} = h\, p_{ij}e^{-i\left(k_0ct-k_ix^i\right)},$$ where $h$ is the amplitude of the gravitational wave, $k_\mu = \left(k_0,k_i\right)$ is the wave vector, and $p_{ik}$ is the polarization tensor of the gravitational wave. Introducing a set of two mutually orthogonal unit vectors $l_i$ and $m_i$ orthogonal to the wave vector $k_i$, the polarization tensor $p_{ik}$ has the form [@LandauLifshitz], [@mtw] $$\label{defpolten}
p_{ik} = \left( l_i \pm i m_i \right)\left( l_k \pm i m_k \right),$$ where $\pm$ corresponds to the two independent states of circular polarization. Due to the transverse and traceless nature of gravitational waves, the polarization tensor satisfies the following conditions $$\label{TTconditions}
p_{ik}k^i = 0, \qquad p_{ik}\delta^{ik} = 0.$$ For further discussion, it is convenient to introduce the wavenumber $k=\left(\delta_{ij}k^ik^j\right)^{1/2}$, and a unit vector in the direction of wave propagation $\tilde{k}^i=k^i/k$. The wavelength of the gravitational wave is related to the wavenumber by the equality $k = 2\pi/\lambda_{gw}$. The frequency of the gravitational wave ${f_{gw}}$ is related to the time component of the wave vector through the relation $k_0=2{\pi}{f_{gw}}/c$.
The speed of a gravitational wave is determined by the relationship $v_{gw}= {{f_{gw}}}\lambda_{gw}$. In General Relativity gravitational waves travel at the speed of light, i.e. $v_{gw}=c$, which implies a relationship (dispersion relationship) $k = k_0$. In order to analyze the possibility $v_{gw}\neq c$, let us introduce a phenomenological parameter $\epsilon$ describing the relative deviation of $v_{gw}$ from the speed of light $c$, $$\label{epsilon}
v_{gw} \equiv f_{gw}\lambda_{gw} = \frac{ck_0}{k} = c\left( 1-\epsilon\right).$$ The quantity $\epsilon$ has been introduced as a phenomenological parameter, and thus the analysis that follows is valid for any theory that predicts gravitational waves with $v_{gw}\neq c$. Of particular interest are modifications of General Relativity that predict massive gravitons. For these models, $\epsilon$ can be related to the rest mass of the graviton $m_g$ through the relation $$\label{mgravitondef}
\epsilon = 1 - \frac{v_{gw}}{c} = 1 - \sqrt{1-\left(\frac{m_gc^2}{2\pi\hbar f_{gw}}\right)^2}.$$
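As a rough numerical illustration of Eq. (\[mgravitondef\]) as reconstructed above (i.e. treating $v_{gw}$ as the velocity of a relativistic particle of energy $2\pi\hbar f_{gw}$ and rest mass $m_g$, which is our assumption here), the following sketch converts a limit on $\epsilon$ into a limit on the graviton rest energy; the input values of $\epsilon$ and $f_{gw}$ are purely illustrative.

```python
import numpy as np

hbar = 6.582e-16   # eV s

def m_g_limit(eps, f_gw):
    """Graviton rest energy m_g*c^2 (in eV) corresponding to 1 - v_gw/c = eps
    at gravitational wave frequency f_gw (in Hz), assuming
    v_gw/c = sqrt(1 - (m_g c^2 / (2 pi hbar f_gw))^2)."""
    return 2.0 * np.pi * hbar * f_gw * np.sqrt(1.0 - (1.0 - eps) ** 2)

# e.g. eps ~ 0.4% at f_gw ~ 1e-8 Hz gives m_g c^2 of a few times 10^-24 eV
print(m_g_limit(eps=0.004, f_gw=1e-8))
```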
Let us move our attention to pulsar timing measurements. The effect of a gravitational wave upon the measured frequency of the pulsar signal is given by [@Sazhin1978], [@Detweiler1979], $$\label{deltanu1}
\frac{\Delta\nu(t)}{\nu_0} = \frac{1}{2}\int_0^D ds\; \frac{\partial h_{ij}}{c\,\partial t}\left(e^ie^j\right)\Big|_{path},$$ where $\nu_0$ is the unperturbed pulsar frequency in the absence of gravitational waves and $\Delta\nu(t) = \nu(t) - \nu_0$ is the variation of pulsar frequency due to the presence of a gravitational wave. $D$ is the distance from the pulsar to the observer, integration variable $s$ is the distance parameter along the unperturbed light ray path from pulsar to the observer, $e^i$ is the unit vector tangent along this path (i.e. unit vector in the direction from pulsar to the observer), and the subscript indicates the integration along this path. The unperturbed light ray path is given by $$\label{lightraypath}
t(s) = t - \frac{s}{c}, \qquad x^i(s) = x^i - e^is,$$ where $t$ and $x^i$ determine the time and position of the observation. Without loss of generality we can set $x^i=0$ by choosing a spatial coordinate system with observer at its origin.
Substituting the path (\[lightraypath\]) into (\[deltanu1\]), taking into account (\[singlegwmetric\]) and (\[epsilon\]), after straightforward integration we arrive at $$\frac{\Delta\nu(t)}{\nu_0} = \frac{(1-\epsilon)\,h\, e^ie^jp_{ij}}{2\left(1-\epsilon-\tilde{k}_ie^i\right)}\, e^{-ik(1-\epsilon)ct}\left[1-e^{ik\left(1-\epsilon-\tilde{k}_ie^i\right)D}\right]. \label{deltanu2}$$
The pulsar timing measurements customarily measure the timing residuals, i.e. the difference between the actual pulse arrival times and the times predicted from a spin-down model for a pulsar [@Detweiler1979], [@Hobbs2005]. The variations in the measured frequency, due to the presence of a gravitational wave, will cause an anomalous timing residual $R(t)$ in the pulse arrival time given by [@Detweiler1979] $$R(t) = \int_{t-T}^t dt'\,\frac{\Delta\nu(t')}{\nu_0}, \label{residual1}$$ where $T$ is the time of observations, and the residual $R(t)$ is measured in seconds. Substituting expression (\[deltanu2\]) into (\[residual1\]), we get for the timing residual due to a single monochromatic gravitational wave $$R(t) = \frac{i\,h\, e^ie^jp_{ij}}{2kc}\, e^{-ik(1-\epsilon)ct}\left(1-e^{ik(1-\epsilon)cT}\right)\left[\frac{1-e^{ik\left(1-\epsilon-\tilde{k}_ie^i\right)D}}{1-\epsilon-\tilde{k}_ie^i}\right]. \label{residual2}$$ Before proceeding further, let us analyze the above expression. The expression in the square brackets on the right side of (\[residual2\]) becomes large (proportional to $kD\sim D/\lambda_{gw}$) when $\left(1-\epsilon-\tilde{k}_ie^i\right)\rightarrow 0$, i.e. $$R(t) \approx \frac{h\,D}{2c}\, e^ie^jp_{ij}\, e^{-ik(1-\epsilon)ct}\left(1-e^{ik(1-\epsilon)cT}\right), \quad {\rm for}\quad \left(1-\epsilon-\tilde{k}_ie^i\right)kD \ll 1. \label{residual2a}$$ Hence, for gravitational waves traveling in a direction at a sufficiently small angle to the direction from the pulsar, i.e. $\tilde{k}_ie^i \approx \left(1-\epsilon\right)$, there is a resonance increase in the expression for the timing residual. In the case when $\epsilon =0$ this does not lead to a growth of the timing residual $R(t)$ itself, due to the transverse nature of the gravitational wave (since $e^ie^jp_{ij}\rightarrow 0$ when $\tilde{k}_ie^i\rightarrow1$, see expression (\[eep\])). On the other hand, if $\epsilon\neq0$, the expression for $R(t)$ increases significantly for $\tilde{k}_ie^i \approx \left(1-\epsilon\right)$. The resonance occurs when the signal from the pulsar "surfs" along the gravitational wave, i.e. travels at a small angle $\cos{\theta}\approx \left(1-\epsilon\right)$ to the gravitational wave. This picture is reminiscent of wave surfing, and for this reason, following [@PolnarevBaskaran2008], we refer to this resonant increase in $R(t)$ as the surfing effect. It is worth noticing that the above analysis closely resembles the considerations in [@PolnarevBaskaran2008], where the surfing effect manifested itself in the resonant growth of the phase variation of electromagnetic waves, leading to an observable angular displacement of distant quasars. In the current work, we are analyzing the signature of the surfing effect in pulsar timing residuals.
Pulsar timing residuals for an arbitrary gravitational wave field\[ArbitraryWaveField\]
=======================================================================================
In the previous section we calculated the timing residual due to a single plane monochromatic gravitational wave. In this section we shall generalize our analysis to an arbitrary gravitational wave field. In general, an arbitrary gravitational wave field can be decomposed into spatial Fourier modes $$h_{ij}(t,x^i) = \int d^3{\bf k} \sum_{s=1,2} \stackrel{s}{p}_{ij}(k^i)\, h_s(k^i,t)\, e^{ik_ix^i}, \label{fouriergw}$$ where $d^3{\bf k}$ denotes the integration over all possible wave vectors, and $s=1,2$ corresponds to the two linearly independent modes of polarization satisfying the orthogonality condition $$\stackrel{s}{p}_{ij}\stackrel{s'}{p}{}^{ij\,*} = \delta_{ss'}. \label{poltenorthog}$$ The mode function $h_s(k^i,t)$ corresponds to plane monochromatic waves $$h_s(k^i,t) = h_s(k^i)\, e^{-ik(1-\epsilon)ct}. \label{hmodefunctions}$$ Due to the linear nature of the problem, following the decomposition (\[fouriergw\]), the total timing residual due to an arbitrary gravitational wave field can be presented in the following manner $$R(t) = \int d^3{\bf k} \sum_{s=1,2} h_s(k^i)\,\tilde{R}(t;k^i,s). \label{fourierR}$$ Using the results of the previous section, the contribution from a single Fourier component $\tilde{R}(t;k^i,s)$ is given by $$\tilde{R}(t;k^i,s) = \frac{i\, e^ie^j\stackrel{s}{p}_{ij}}{2kc}\, e^{-ik(1-\epsilon)ct}\left(1-e^{ik(1-\epsilon)cT}\right)\left[\frac{1-e^{ik\left(1-\epsilon-\tilde{k}_ie^i\right)D}}{1-\epsilon-\tilde{k}_ie^i}\right], \label{residual3}$$ where the tilde over $R$ in the above expression is introduced to indicate explicit factoring out of the gravitational wave amplitude $h$ compared with (\[residual2\]).
In general, if we have the information about the mode functions $h_s(k^i)$, using expressions (\[fourierR\]) and (\[residual3\]) we can calculate the expected timing residual for an arbitrary gravitational wave field. In most of the practically interesting cases we do not have such a complete knowledge of the gravitational wave field, but are restricted to the knowledge of its statistical properties. To proceed, let us assume the following statistical properties = 0, <h\_s(k\^i) h\_[s’]{}\^[\*]{}(k’\^i)> = \_[ss’]{}\^3(k\^i-k’\^i), \[gwstatprop\] where the brackets denote ensemble averaging over all possible realizations, and $P_h(k)$ is the metric power spectrum per logarithmic interval of $k$. These conditions correspond to a stationary statistically homogeneous and isotropic gravitational wave field.
Positing the statistical properties of the gravitational wave field (\[gwstatprop\]) allows us to calculate the statistical properties of the timing residual $R(t)$. Using (\[fourierR\]) and (\[gwstatprop\]), and taking into account the orthogonality property (\[poltenorthog\]), after straightforward calculations we arrive at the following statistical properties for the timing residual $R$
&=& 0, \[Rmean\]\
<R\^2(t)> &=& P\_h(k) \^2(k), \[Rsquaremean\]
where we have introduced the transfer function $$\tilde{R}^2(k) = \int d\Omega \sum_s \left| \tilde{R}(t;k^i,s) \right|^2. \label{transferfunction}$$ In the above expression $d\Omega$ represents integration over the possible directions of the gravitational wave (i.e. $d^3{\bf k} = k^2dk\,d\Omega$). From (\[residual3\]) and (\[transferfunction\]) it follows that the transfer function $\tilde{R}^2(k)$ does not depend on the time variable $t$, which is a reflection of the stationarity of the underlying gravitational wave field.
The expression for the transfer function can be explicitly calculated. In order to do this, let us first introduce a spherical coordinate system $(\theta,\phi)$ related to the spatial coordinates $\{x^i\}$ (following the notations of [@Goldstein]). Without loss of generality, we can assume that our spatial coordinate system is chosen such that the unit vector from the pulsar to the observer points in the north-pole direction, i.e. $e^i = (0,0,1)$. Let us also introduce the quantity $\mu = \cos{\theta} = e_i\tilde{k}^i$, characterizing the angle between the direction of gravitational wave propagation and the direction from the pulsar to the observer. Furthermore, let $\phi$ denote the azimuthal angle that is subtended by $\tilde{k}^i$ projected onto the $(x^1,x^2)$-plane, i.e. $\tilde{k}^1=\cos{\phi}\sin{\theta}$ and $\tilde{k}^2=\sin{\phi}\sin{\theta}$. Introducing $e^{\theta}_i$ and $e^{\phi}_i$, the meridian and azimuthal unit vectors perpendicular to the gravitational wave wavevector $k_i$ respectively, the polarization tensors for gravitational waves (\[defpolten\]) take the form $\stackrel{s}{p}_{ij}(k^i) = (e^{\theta}_i\pm i e^{\phi}_i)(e^{\theta}_j\pm i e^{\phi}_j)/2$, with $\pm$ corresponding to the two independent circularly polarized degrees of freedom $s=1,2$ (for a detailed discussion see for example [@dgp2006], [@Baskaran2004]). Taking into account the relation $$e^ie^j\stackrel{s}{p}_{ij} = \frac{1}{2}\left(1-\mu^2\right)e^{\pm2i\phi}, \label{eep}$$ substituting (\[residual3\]) into (\[transferfunction\]) and setting $d\Omega = d\mu\, d\phi$, after integration over $\phi$ we arrive at the expression for the transfer function $$\tilde{R}^2(k) = \frac{4\pi}{k^2c^2}\,\sin^2{\left(\frac{kcT(1-\epsilon)}{2}\right)} \int_{-1}^{+1} d\mu\,\left(1-\mu^2\right)^2\, \frac{\sin^2{\left(\frac{kD}{2}\left(1-\epsilon-\mu\right)\right)}}{\left(1-\epsilon-\mu\right)^2}. \label{transferfunction2}$$ The integrand in the above expression is illustrated in Figure \[figure1\]. As can be seen, when $\epsilon\neq 0$, the predominant contribution to the integral comes from the resonance region $\mu \approx \left(1 - \epsilon\right)$. Thus, in this case, the predominant contribution to the timing residual $<R^2>$ comes from "surfing" gravitational waves, i.e. waves for which $\mu \approx \left(1 - \epsilon\right)$. In the physically interesting limit $\epsilon\rightarrow 0$ and $kD\rightarrow \infty$ we can calculate the integral in (\[transferfunction2\]) explicitly. We refer the reader to Appendix \[AppendixA\] for details of this calculation. The result is as follows $$\tilde{R}^2(k) \approx \frac{16\pi}{3k^2c^2}\,\sin^2{\left(\frac{kcT(1-\epsilon)}{2}\right)}\left[1+\frac{3\pi}{2}\,\epsilon^2kD\right]. \label{transferfunction3}$$ The above expression allows us to simply quantify the condition for the surfing effect to be dominant, $\epsilon^2kD \gg1$. As we shall show in the next section, given the precision level of the current and planned pulsar timing measurements, the surfing effect allows us to place significant constraints on the $\epsilon$ parameter.
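As a quick numerical illustration of the behaviour shown in Figure \[figure1\], the following Python sketch evaluates the $\mu$-integrand of (\[transferfunction2\]) (as reconstructed above, up to the overall prefactor) on a fine grid for $\epsilon=0$ and $\epsilon\neq0$; the values of $kD$ and $\epsilon$ are illustrative.

```python
import numpy as np

# integrand of the mu-integral in (transferfunction2): (1-mu^2)^2 sin^2(kD(1-eps-mu)/2) / (1-eps-mu)^2
kD = 1.0e4                                     # illustrative value of k*D
mu = np.linspace(-1.0, 1.0, 2_000_001)         # fine grid resolving the ~2*pi/kD oscillations
K = kD / 2.0
for eps in (0.0, 1.0e-2):
    x = 1.0 - eps - mu
    # sin^2(K x)/x^2 written via np.sinc to avoid 0/0 at the resonance point
    integrand = (1.0 - mu**2)**2 * K**2 * np.sinc(K * x / np.pi)**2
    print(eps, mu[np.argmax(integrand)], integrand.max())
# for eps != 0 the integrand peaks sharply at mu ~ 1-eps with height ~ eps^2 (kD)^2,
# while for eps = 0 the transverse factor (1-mu^2)^2 suppresses the would-be resonance
```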
Before proceeding, it is instructive to compare the results of this section with the results of [@PolnarevBaskaran2008]. More specifically, it is interesting to compare expression (\[transferfunction3\]) for the transfer function of timing residuals with its counterpart, expression (29) in [@PolnarevBaskaran2008], for the transfer function of the angular displacement $\tilde{\alpha}^2(k)$. Apart from the differing factors in front of the square brackets in the two expressions, the crucial difference is the differing powers of $\epsilon$. In the present work, the surfing effect manifests itself in the term $\epsilon^2kD$ in the square brackets of (\[transferfunction3\]). In [@PolnarevBaskaran2008], the surfing effect manifests itself in the $\epsilon^3kD$ term in the square brackets of (29). The extra factor of $\epsilon$ arose due to the geometrical specificity of interferometric observations of the phase difference at the ends of the interferometric system (see [@PolnarevBaskaran2008] for details). The main consequences of this difference are twofold. Firstly, equivalent constraints on $\epsilon$ require a smaller distance to the source in the case of pulsar timing compared with interferometric observations. This is reflected in the fact that in the present work we focus on galactic pulsars, whereas [@PolnarevBaskaran2008] focused on high redshift quasars. Secondly, the condition for the surfing effect to dominate is different in the two contexts. This condition, characterized by the value of $\epsilon_*$ (see expression (\[epsilonstar\]) below and expression (32) in [@PolnarevBaskaran2008]), places a lower limit on the potentially possible bounds on $\epsilon$. This limiting bound is lower for interferometry measurements ($\epsilon_*\approx 2.3\times10^{-4}$) than for pulsar timing measurements ($\epsilon_*\approx3.2\times10^{-3}$). Even so, due to their exceptional precision, the experimentally achievable bounds on $\epsilon$ from pulsar timing measurements would be more stringent.
![The illustration of the resonance effect, present for $\epsilon \neq 0$. The graphs show integrand in expression (\[transferfunction2\]). For the case $\epsilon\neq 0$ the integrand sharply peaks at angle $\mu\approx (1-\epsilon)$ (solid red line), while for the case $\epsilon = 0$ the effect is absent (dashed blue line). In the case of $\epsilon \neq 0$, the gravitational waves travelling at an angle $\cos{\theta} \approx
(1-\epsilon)$ to the line of sight are the predominant contributors to the surfing effect. The figure on the left shows the integrand for the whole region of $\mu$, while the figure on the right zooms into the region around the resonance.[]{data-label="figure1"}](Integrand1.eps "fig:"){width="7cm"} ![The illustration of the resonance effect, present for $\epsilon \neq 0$. The graphs show integrand in expression (\[transferfunction2\]). For the case $\epsilon\neq 0$ the integrand sharply peaks at angle $\mu\approx (1-\epsilon)$ (solid red line), while for the case $\epsilon = 0$ the effect is absent (dashed blue line). In the case of $\epsilon \neq 0$, the gravitational waves travelling at an angle $\cos{\theta} \approx
(1-\epsilon)$ to the line of sight are the predominant contributors to the surfing effect. The figure on the left shows the integrand for the whole region of $\mu$, while the figure on the right zooms into the region around the resonance.[]{data-label="figure1"}](Integrand2.eps "fig:"){width="7cm"}
Upper limits on the speed of gravitational waves\[upperlimits\]
===============================================================
Let us now turn our attention to the various cosmological and astrophysical candidates for a stochastic gravitational wave background and their contribution to the surfing effect in pulsar timing measurement. Analyzing their magnitude, we shall study the achievable upper limits on $\epsilon$ that these backgrounds could place.
The stochastic gravitational wave field may be characterized by the dimensionless strain amplitude $h_c(f)$, which is related to the power spectrum $P_h$ in the following way $$h_c(f) \equiv \sqrt{P_h(k)}, \quad f = \frac{kc}{2\pi}\left(1-\epsilon\right). \label{definitionhc}$$ The quantity $h_c(f)$ is the root-mean value of the gravitational wave amplitude in a unit logarithmic interval of frequencies. For analyzing stochastic gravitational wave fields, it is also customary to introduce the density parameter $\Omega_{gw}$ to characterize the strength of the gravitational wave field [@Allen1997], [@glpps2001], [@Grishchuk2003]. $\Omega_{gw}$ is related to the power spectrum $P_h(k)$ and strain $h_c(f)$ by the relation $$\Omega_{gw}(k) = \frac{2\pi^2}{3}\left( \frac{k}{k_H}\right)^2P_h(k) = \frac{2\pi^2}{3}\left( \frac{k}{k_H}\right)^2 h_c^2(f), \label{definitionofOmega}$$ where $k_H= 2\pi f_H/c = 2\pi H_o/c$, and $H_o$ is the current Hubble parameter. The density parameter $\Omega_{gw}$ is the present-day ratio of the energy density of gravitational waves (per unit logarithmic interval in $k$) to the critical density of the Universe $\rho_{crit} = 3c^2H_o^2/8\pi G$. Below, for numerical estimations, we set the Hubble parameter $H_o = 75~\frac{{\rm km}}{{\rm sec}}/{\rm Mpc}$. Note that the above definition (\[definitionofOmega\]) is valid for stationary gravitational wave backgrounds. In a cosmological context, when considering relic gravitational waves, this definition is modified to $\Omega_{gw}(k) = \frac{\pi^2}{3} \left( \frac{k}{k_H}\right)^2P_h(k)$ due to the non-stationary (standing wave) nature of relic gravitational waves (see for example [@glpps2001]).
For simplicity, in the numerical estimations below, we shall assume a simple power law behaviour for $h_c$, which is equivalent to a power law spectrum for the density parameter $\Omega_{gw}$, $$h_c(f) = h_c(f_o)\left(\frac{f}{f_o}\right)^{\alpha}, \quad \Omega_{gw} (k)= \Omega_{gw} (k_o) \left(\frac{k}{k_o}\right)^{n_T}, \label{powerlawspectrum}$$ where $$\Omega_{gw}(k_o) = \frac{2\pi^2}{3}\left(\frac{k_o}{k_H}\right)^2h_c^2(f_o), \quad k_o = \frac{2\pi f_o}{c\left(1-\epsilon\right)}, \quad n_T = 2(1+\alpha). \label{powerlawspectrum2}$$ Although restricted, this form of the spectrum is a good approximation for a large variety of models in the gravitational wave frequency range of our interest. For example, this type of power law spectrum, with $\alpha = -2/3$, is produced by extragalactic coalescing super massive binary black hole systems [@WyitheLoeb2003]. In a cosmological context, this type of power spectrum, with spectral index $\alpha$ at the current epoch, arises due to the evolution of relic gravitational waves with a primordial spectral index equal to $2(1+\alpha)$ (i.e. $P_h(k)|_{prim}\propto k^{2(1+\alpha)}$) [@Grishchuk1974]. The flat, scale invariant power spectrum (also known as the Harrison-Zeldovich power spectrum) corresponds to $\alpha=-1$ (i.e. $n_T=0$). In general, the power law form simply assumes the absence of features in the spectrum of gravitational waves at the wavelengths of our interest.
In practice, when considering pulsar timing, we are interested in calculating the expected mean square deviation of the timing residuals due to a stochastic background of gravitational waves (\[Rsquaremean\]). In order to evaluate $<R^2(t)>$ from expression (\[Rsquaremean\]) we need to specify the limits of integration $k_{min}$ and $k_{max}$. $k_{min}$ and $k_{max}$ determine the frequency range of gravitational waves that can be probed by pulsar timing measurements. The lower limit $k_{min}$ is determined by the time duration of observations $T_{obs}$, $k_{min} \approx 2{\pi}f_{obs}/c = 2{\pi}/cT_{obs}$. In our estimates we shall assume $T_{obs} \approx 10~{\rm yrs}$. The upper limit $k_{max}\approx 2\pi/c\delta t$ is determined by the duration of a single observation $\delta t$ (in other words, the time of integration), which is usually of the order of 1-2 hours. We note here that it is this time (and not the time between consecutive observations, of the order of weeks) which determines $k_{max}$ in timing residuals. Indeed, if the period of a gravitational wave is smaller than $\delta t$, its effect is smeared out by the averaging procedure. But if the period of the gravitational wave lies between the averaging time and the sampling time, the wave will clearly manifest itself in the timing residuals. Some authors erroneously use the inverse sampling time as $k_{max}$, apparently guided by the analogy with time series analysis. Thus, in our case, it is safe to assume $\delta t\ll T_{obs}$ (i.e. $k_{max}\gg k_{min}$), and set $k_{max} = \infty$ in the numerical evaluations below. Furthermore, we shall be working under the assumption $kD = 2\pi D/\lambda_{gw} \gg 1$, which corresponds to the reasonable assumption that the gravitational waves of our interest ($\lambda_{gw}\lesssim 10$ light years) have wavelengths much shorter than the distance to the pulsar ($D\sim 10~{\mathrm{kpc}}$).
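For concreteness, the corresponding frequency band can be evaluated directly; the observation times below are the fiducial values quoted in the text.

```python
# frequency band probed by the timing residuals
yr = 3.156e7                          # one year in seconds
T_obs = 10.0 * yr                     # total time of observations
dt = 2.0 * 3600.0                     # duration of a single observation (~2 hours)
f_min, f_max = 1.0 / T_obs, 1.0 / dt
print(f_min, f_max)                   # ~3.2e-9 Hz to ~1.4e-4 Hz, so k_max >> k_min
```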
As can be seen from expression (\[transferfunction3\]) and the considerations in Appendix \[AppendixA\], the behaviour of the transfer function $\tilde{R}^2(k)$ depends on the value of the quantity $3\pi\epsilon^2kD/2$. In order to analyze the various possibilities let us introduce $$\epsilon_* = \left(\frac{3\pi}{2}\,k_{min}D\right)^{-1/2} = 3.2\times 10^{-3}\left(\frac{T_{obs}}{10~{\rm yrs}}\,\frac{10~{\rm kpc}}{D}\right)^{1/2}. \label{epsilonstar}$$ Below we shall analyze the two possibilities, $\epsilon\ll\epsilon_*$ and $\epsilon\gg \epsilon_*$, separately.
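The fiducial number in (\[epsilonstar\]) can be checked with a few lines of Python (constants rounded; this is purely a sanity check of the arithmetic):

```python
import numpy as np

c = 2.998e8                          # speed of light, m/s
yr, kpc = 3.156e7, 3.086e19          # s, m
T_obs, D = 10.0 * yr, 10.0 * kpc     # fiducial values from the text
k_min = 2.0 * np.pi / (c * T_obs)
eps_star = (1.5 * np.pi * k_min * D) ** -0.5
print(eps_star)                      # ~3.2e-3
```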
In the case $\epsilon\ll\epsilon_*$ we can neglect the second term in the square brackets of the transfer function $\tilde{R}^2(k)$ in expression (\[transferfunction3\]) in comparison with the first. Furthermore, in the term $\sin^2{\left(kcT\left(1-\epsilon\right)/2\right)}$ we can neglect the rapid oscillatory factor. Thus, for the transfer function we get \^2(k) () . \[transferfunctionepsilonllepsilonstar\] Substituting the above approximation (\[transferfunctionepsilonllepsilonstar\]) into expression (\[Rsquaremean\]), taking into account the definition (\[definitionhc\]) and a power law spectrum (\[powerlawspectrum\]), and setting the limits of integration as mentioned above, we arrive at , \_\*. \[Rsquare1\]
In the case $\epsilon\gg\epsilon_*$, neglecting the first term in the square brackets with respect to the second in (\[transferfunction3\]) and ignoring the rapid oscillatory factor, the transfer function can be approximated as \^2(k) () . \[transferfunctionepsilonggepsilonstar\] In this case, the expression for (\[Rsquaremean\]) takes the form ()\^2, \_\*. \[Rsquare2\] Comparing expressions (\[Rsquare1\]) and (\[Rsquare2\]) it can be seen that when $\epsilon\gg\epsilon_*$ the surfing effect leads to a strong resonance contribution (proportional to $kD$) in the timing residual compared with the case when $\epsilon\ll\epsilon_*$. This dominant resonance contribution comes from gravitational waves traveling at an angle $\cos{\theta}\approx\left(1-\epsilon\right)$ to the direction of signal propagation from the pulsar (see Appendix \[AppendixA\] for details).
From expressions (\[Rsquare1\]) and (\[Rsquare2\]), it follows that the direct measurement of pulsar timing residuals would be able to measure or constrain either $h_c$ or $h_c\epsilon$, depending on the value of $\epsilon$ compared with $\epsilon_*$. A null result in timing residual measurements would place the following upper limits h\_c 4.910\^[-15]{}, \_\*, \[hc\_limit\] or h\_c1.110\^[-17]{}, \_\*. \[hcepsilon\_limit\] where $R_{rms} = \sqrt{<R^2(t)>}$ is the precision of the pulsar timing residuals, and $h_c=h_c(f_{obs})$ is evaluated at $f_{obs}=0.1~{\rm yrs}^{-1}$. It is also convenient to present these limits in terms of the density parameter $\Omega_{gw}$: \_[gw]{} 5.310\^[-10]{}, \_\* \[omegalimit\] or \_[gw]{} \^2 4.010\^[-15]{}, \_\* \[omegaepsilonsquarelimit\] Thus, from (\[hc\_limit\]) (or (\[omegalimit\])), it can be seen that for $\epsilon\ll\epsilon_*$, when the surfing effect is not important, pulsar timing sets limits directly on $h_c$ (or equivalently on $\Omega_{gw}$), i.e. on the strength of the gravitational wave background. On the other hand, when $\epsilon\gg\epsilon_*$ and the surfing effect becomes dominant, it follows from (\[hcepsilon\_limit\]) (or (\[omegaepsilonsquarelimit\])) that pulsar timing sets limits on the combination $h_c\epsilon$ (or, equivalently, $\Omega_{gw}\epsilon^2$). The upper limits from pulsar timing, along with possible sources and sensitivity levels of various experimental techniques to detect gravitational waves, are illustrated in Figure \[figure2\].
![The upper limit on strain amplitude $h_c$ and velocity parameter $\epsilon$ for gravitational waves, achievable by pulsar timing residual measurements with precision $R_{rms}=0.1~{{\mu}\mathrm{sec}}$ and time of observation $T_{obs} = 10~{\mathrm{yrs}}$. The shaded area shows the region that can be probed or ruled out by pulsar timing observations. The horizontal lines show the strain $h_c$, at $f = (10~\mathrm{{\mathrm{yrs}}})^{-1}$, for some viable sources of gravitational wave.[]{data-label="figure2"}](epsilon_hc.eps){width="12cm"}
As follows from the above discussion, and as can be seen from Figure \[figure2\], an independent knowledge of $h_c$ would enable us to directly constrain the parameter $\epsilon$, i.e. to constrain the deviation of the speed of gravitational waves from the speed of light. From expression (\[hcepsilon\_limit\]) we arrive at the following constraint on $\epsilon$ 1.110\^[-2]{}. \[epsilonlimit2\] In terms of the density parameter $\Omega_{gw}$ the constraint has the form 6.410\^[-3]{}. \[epsilonlimit1\]
In the next section we shall discuss the various viable candidates for a stochastic gravitational wave background and explicitly calculate the achievable limits on $\epsilon$. We shall also discuss the implications of the surfing effect for theories with massive gravitons.
The physical implications of the surfing effect \[physicalconsequences\]
========================================================================
The analysis in Section \[upperlimits\] indicates that the surfing effect in pulsar timing can yield interesting constraints on the $\epsilon$ parameter, and consequently on the mass of the graviton, in a sufficiently strong gravitational wave background with $\Omega_{gw}\sim 10^{-10}$ (see (\[epsilonlimit1\])). It is important to note that this method is fundamentally limited by the value $\epsilon_*$, which is currently about $3\times 10^{-3}$ (see (\[epsilonstar\])). Although an increase in the time of observation will improve the overall precision, it will also increase the value of $\epsilon_*$, thus worsening the potential constraints on $\epsilon$. In the future, the method can become more sensitive with the implementation of large radio telescopes like the Square Kilometer Array (SKA) (see [@Kramer2004] for a detailed discussion of the SKA and its usage in pulsar astrophysics), which would improve the limitation to $\epsilon_*\sim 10^{-3}$. Furthermore, as seen from expression (\[epsilonlimit1\]) (or (\[epsilonlimit2\])), increasing the pulsar timing accuracy (for example, using pulsar timing ensembles [@Manchester2007]) can reduce the limit down to the critical value $\epsilon_*$.
The gravitational wave background, in the frequency range of our interest (${f_{gw}} \lesssim 0.1~{\rm yrs}^{-1}$), consists of contributions from a variety of well established astrophysical and cosmological sources [@glpps2001] as well as possible contributions from exotic remnants of the early universe [@Maggiore2000], [@Hogan2006]. The strongest contribution to the gravitational wave background at these frequencies comes from the background of extragalactic coalescing supermassive binary black holes (SMBH) [@WyitheLoeb2003], [@JaffeBacker2003], [@Enoki2004], [@Sesana2008]. For this reason, below in subsection \[bhbackground\], we shall study the implications of the surfing effect for this background. Following this, in subsection \[darkmattergw\], we shall analyze the consequences of the limitations on $\epsilon$ for theories with massive gravitons.
Gravitational wave background from extragalactic black holes\[bhbackground\]
----------------------------------------------------------------------------
As was mentioned above, one of the strongest sources of a stochastic gravitational wave background in the frequency range of our interest, ${f_{gw}}\sim T_{obs}^{-1}\approx 0.1~{\mathrm{yrs}}^{-1}$, is the population of extragalactic black hole binaries. Various groups have conducted theoretical studies of the strength of this background [@JaffeBacker2003], [@WyitheLoeb2003], [@Enoki2004], [@Sesana2008]. There is a general consensus on the expected gravitational wave strain for this background, $$h_c(f) \approx 10^{-16}\left(\frac{f}{1~\mu{\rm Hz}}\right)^{-\frac{2}{3}}, \label{bhstrain}$$ corresponding to the value of the density parameter $$\Omega_{gw}(f) \approx 2.4\times10^{-10}\left(\frac{f}{0.1~{\rm yrs}^{-1}}\right)^{\frac{2}{3}}. \label{bhOmega}$$ The uncertainty surrounding this value of $h_c$ arises mainly due to the uncertainty in the galaxy merger rates as well as some other astrophysical factors. Taking into account these uncertainties, the amplitude lies in the interval $h_c(f = 1\mu {\rm Hz}) \approx 2.5\times10^{-17}- 4\times10^{-16} $ [@Sesana2008].
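As a consistency check of the two fiducial numbers above, one can convert the strain (\[bhstrain\]) into a density parameter assuming the stationary-background relation $\Omega_{gw}(f)=\frac{2\pi^2}{3H_o^2}f^2h_c^2(f)$, i.e. the form of (\[definitionofOmega\]):

```python
import numpy as np

H_o = 75.0e3 / 3.086e22                        # 75 km/s/Mpc in 1/s
yr = 3.156e7
f = 0.1 / yr                                   # evaluate at f = 0.1 yr^-1
h_c = 1.0e-16 * (f / 1.0e-6) ** (-2.0 / 3.0)   # strain law (bhstrain), normalized at 1 microHz
Omega = 2.0 * np.pi**2 / (3.0 * H_o**2) * f**2 * h_c**2
print(Omega)                                   # ~2.4e-10, in agreement with (bhOmega)
```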
The expected strain $h_c$ from the background of SMBH allows us to place significant bounds on the $\epsilon$ parameter. Substituting expression (\[bhstrain\]) into expression (\[epsilonlimit2\]), and setting $\alpha=-2/3$, we arrive at the following upper limit on $\epsilon$ 3.7 10\^[-3]{}. Thus, the stochastic gravitational wave background of extragalactic SMBH mergers can potentially place very stringent constraints, $\epsilon\lesssim 0.4\%$, on the speed of gravitational waves.
Implications for theories with massive gravitons \[darkmattergw\]
-----------------------------------------------------------------
The phenomenological parameter $\epsilon$ is directly related to the mass of the graviton $m_g$ (see (\[mgravitondef\])). It is convenient to rewrite expression (\[mgravitondef\]) in the form (k) = \_o(), \_o = , \[epsilon\_o\] For the fiducial strength of the gravitational wave background we get \_o 8.310\^[-3]{}. \[massivegravitonlimit1\] Note that the factor $n_T/5$ in the above expression (\[massivegravitonlimit1\]) (compared with the factor $n_T/3$ in expression (\[epsilonlimit1\])) arises because we are constraining $\epsilon_o$ (compared with constraints on $\epsilon$ in (\[epsilonlimit1\])). This leads to an extra factor $\left(k/k_{min}\right)$ in the integral (\[Rsquaremean\]) and hence slightly modifies the result. The above limit on $\epsilon_o$ implies the following limit on the mass of the graviton m\_g 1.110\^[-25]{} . \[massivegravitonlimit2\] From expression (\[epsilon\_o\]) it follows that stronger constraints on $m_g$ require smaller values of $k_{min}$, i.e. a longer time of observation $T_{obs}$. On the other hand, the strongest possible constraint on $\epsilon_o$ is determined by the value of $\epsilon_*$ (which increases with the time of observation, see expression (\[epsilonstar\])). For this reason, an increase in $T_{obs}$ beyond a value of approximately $25~{\rm yrs}$ will not lead to an improvement in constraining $m_g$.
As a concrete example, let us assume that the gravitational wave background from SMBH coalescences dominates at frequencies $0.1-1~{\rm yrs}^{-1}$, and that its properties are not affected by the non-zero mass of the graviton. Then the existing four years of precise timing of PSR B1937+21 [@Manchester2007] allow us to significantly constrain the mass of the graviton. Setting $T_{obs}=4~{\rm yrs}$, $R_{rms}=0.17~{\mu}{\rm sec}$, $D=8.3~{\rm kpc}$, $n_T = 2/3$ and $\Omega_{gw}(T_{obs}^{-1}) = 4.2\times10^{-10}$ (see (\[bhOmega\])) in expression (\[massivegravitonlimit2\]), we arrive at the limit $$m_g\lesssim3.6\times10^{-25}~{\rm eV},$$ corresponding to a Compton length for the graviton of $\lambda_g={h}/{m_g c}\gtrsim3.4\times10^{15}\, {\rm km}$. This bound is three orders of magnitude stronger than the current limit from Solar system tests [@Talmadge1988] and is comparable to future limits from SMBH mergers obtainable with LISA (see [@Will2006] and references therein). It is worth stressing that the limits from pulsar timing are more robust and less model dependent than the prospects for LISA.
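The quoted Compton length follows directly from the mass bound; a one-line check (using $hc \approx 1.24\times10^{-6}~{\rm eV\,m}$):

```python
hc_eV_m = 1.2398e-6                  # h*c in eV*m
m_g_eV = 3.6e-25                     # graviton rest energy bound quoted above
lambda_g_km = hc_eV_m / m_g_eV / 1.0e3
print(lambda_g_km)                   # ~3.4e15 km, as stated in the text
```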
The surfing effect in pulsar timing puts stringent constraints on the mass of the graviton in some theories of gravity (see [@ptp08]). In [@dtt2005] the authors propose massive gravitons as viable candidates for the cold dark matter in the galactic halo. At the frequency ranges of our interest, these massive gravitons imply $\epsilon\approx 0.5$. The existing precise timing of PSR B1937+21 places a direct limit on the parameter $\Omega_{gw}\epsilon^2\lesssim 2{\times}10^{-13}$ (setting $R_{rms}=0.17~{{\mu}\mathrm{sec}}$ and $T_{obs}=4~{\mathrm{yrs}}$ in expression (\[omegaepsilonsquarelimit\])). This implies that massive gravitons, as candidates to explain the dark matter in the galactic halo, can be ruled out by the current observations.
Conclusions\[conclusions\]
==========================
In this work we have analyzed the consequences of the surfing effect, introduced in [@PolnarevBaskaran2008], for pulsar timing observations. Due to the transverse nature of gravitational waves, the surfing effect leads to a strong observable signature only when the speed of gravitational waves is smaller than the speed of light. In order to analyze this possibility, we have introduced a parameter $\epsilon$, which characterizes the deviation of the speed of gravitational waves from the speed of light. By studying the pulsar timing residuals in the presence of a single plane monochromatic gravitational wave, followed by a generalization to an arbitrary gravitational wave field, we have shown the presence and importance of the surfing effect in the case when $\epsilon\neq0$.
The surfing effect allows us to place significant bounds on the parameter $\epsilon$. For a timing accuracy of $R_{rms}=0.1~{{\mu}\mathrm{sec}}$, and assuming a realistic background of gravitational waves from extragalactic supermassive black hole binary mergers, the achievable limit is $\epsilon\lesssim 0.4\%$. The strongest achievable bounds on $\epsilon$ are determined by $\epsilon_*$. For a pulsar at a typical distance $D=10~{\mathrm{kpc}}$ this value is $\epsilon_*\approx 0.3\%$. This limit could potentially be slightly improved by observing pulsars at a greater distance $D$.
The surfing effect leads to interesting consequences for theories with massive gravitons. Using the existing observations, we have constrained the mass of the graviton to $m_g\lesssim 4\times10^{-25}\, {\rm eV}$, which is three orders of magnitude stronger than the current limits from Solar system tests. With future observations this constraint could improve by an order of magnitude. Based on the existing observations, we have also ruled out massive gravitons as candidates to explain the dark matter in the galactic halo.
In comparison with precision interferometry methods considered in [@PolnarevBaskaran2008], pulsar timing measurements (due to their high precision) should be able to put tighter constraints on $\epsilon$. In any case, these two methods of constraining $\epsilon$ are independent and hence should be considered complementary.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank B. G. Keating, W. Zhao, L. P. Grishchuk and M. V. Sazhin for useful discussions and fruitful suggestions. The work of MP is supported by RFBR grant 06-02-16816-a. KP acknowledges partial support from RFBR grant 07-02-00961-a.
[99]{}
K. S. Thorne, in [*300 years of gravitation*]{}, (Ed. S.W. Hawking and W. Israel), (Cambridge: Cambridge University Press, 1987), p.330.
B. Allen, [*“The Stochastic Gravity-Wave Background: Sources and Detection"*]{}, in [*Some Topics on General Relativity and Gravitational Radiation*]{}, ( Ed. J. A. [Miralles]{}, J. A. [Morales]{}, and D. [Saez]{}), 1997.
L. P. Grishchuk , V. M. Lipunov , K. A. Postnov , M. E. Prokhorov and B. S. Sathyaprakash , [*Usp. Fiz. Nauk*]{}, [**171**]{}, 3, 2001 \[[*Physics-Uspekhi*]{}, [**44**]{}, 1, 2001\].
C. Cutler and K. S. Thorne , in [*Proceedings of GR16*]{}, (Durban, South Africa, 2001).
S. A. Hughes, [*Annals Phys.*]{}, [**303**]{}, 142-178, 2003.
L. P. Grishchuk, [*“Update on gravitational-wave research*]{}, in [*Astrophysics Update"*]{}, (Heidelberg: Springer-Verlag, 2003), p.281. (gr-qc/0305051)
B. S. Sathyaprakash, [*Current Science*]{}, [**89**]{}, 2129, 2005.
LIGO official website, [*http://www.ligo.caltech.edu/*]{}.
F. A. Jenet, et al., [*Astrophys. J.*]{}, [**653**]{}, 1571-1576, 2006.
"The Scientific Program of Planck", [*The Planck Consortia: 2005*]{}, in press at the ESA Publication Division.
M. V. Sazhin, [*Sov. Astron.*]{}, [**22**]{}, 36-38, 1978.
S. Detweiler, [*Astrophys. J.*]{}, [**234**]{}, 1100-1104, 1979.
B. Bertotti, B. J. Carr, M. J. Rees, [*MNRAS*]{}, [**203**]{}, 945-954, 1983.
J. M. Cordes, et al., [*New Astron. Rev.*]{} [**48**]{}, 1413, 2004.
G. Hobbs, [*Publications of the Astronomical Society of Australia*]{}, [**22**]{}, 179-183, 2005.
F. A. Jenet, et al., [*Astrophys. J. Lett.*]{}, [**625**]{}, L123-L126, 2005.
J. S. B. Wyithe and A. Loeb, [*Astrophys. J.*]{}, [**590**]{}, 691-706, 2003.
A. H. Jaffe and D. C. Backer, [*Astrophys. J.*]{}, [**583**]{}, 616-631, 2003.
M. Enoki, et. al., [*Astrophys. J.*]{}, [**615**]{}, 19-28, 2004.
A. Sesana, A. Vecchio and C. N. Colacino, arXive:0804.4476.
L. P. Grishchuk , [*Zh. Eksp. Teor. Fiz.*]{}, [**66**]{}, 833, 1974, \[[*Sov. Phys. JETP*]{}, [**39**]{}, 402, 1974\].
L. P. Grishchuk, [*Physics-Uspekhi*]{}, [**48**]{}, 1235-1247, 2005.
M. Maggiore, [*Phys. Rep.*]{}, [**331**]{}, 283-367, 2000.
C. J. Hogan, [*“Gravitational Wave Sources from New Physics"*]{}, in [*Laser Interferometer Space Antenna: 6th International LISA Symposium*]{}, [*American Institute of Physics Conference Series*]{}, [**873**]{}, 2006 (astro-ph/0608567).
Landau L. D. and Lifshitz E. M., [*The Classical Theory of Fields*]{} (New York: Pergamon Press, 1975).
C. Misner , K. S. Thorne and J. A. Wheeler , [*Gravitation*]{} (San Fransisco: Freeman, 1973).
L. P. Grishchuk and A. G. Polnarev, [*“General relativity and gravitation"*]{}, [**2**]{}, 393-434, 1980.
V. B. [Braginsky]{}, N. S. [Kardashev]{}, A. G. [Polnarev]{} , and I. D. [Novikov]{} , [*Nuovo Cimento B Serie*]{}, [**105**]{}, 1141-1158, 1990.
V. B. [Braginsky]{}, N. S. [Kardashev]{}, A. G. [Polnarev]{} , and I. D. [Novikov]{} , in [*Astrophysics on the Threshold of the 21st Century*]{}, (Ed. N. S. Kardashev), (Philadelphia: Gordon & Bridge Scient. Pub., 1992), p. 315.
A. G. Polnarev and D. Baskaran, ArXiv/0802.3821v1, 2008. To be published in [*Phys. Rev. D*]{}.
C. M. Will, [*Theory and Experiment in Gravitational Physics*]{} (Cambridge: Cambridge University Press, 1993).
C. M. Will, [*Living Reviews in Relativity*]{}, [**4**]{}, 4, 2001.
S. M. Kopeikin, [*Class. Quant. Grav.*]{}, [**21**]{}, 3251-3286, 2004.
D. M. Eardley, D. L. Lee, A. P. Lightman, R. V. Wagoner, and C. M. Will, [*Phys. Rev. Lett.*]{}, [**30**]{}, 884, 1973; D. M. Eardley, D. L. Lee, and A. P. Lightman, [*Phys. Rev. D*]{}, [**10**]{}, 3308, 1973.
S. V. Babak and L. P. Grishchuk, [*Int. J. Mod. Phys.*]{}, [**D12**]{}, 1905-1960, 2003.
J. D. Jackson, [*Classical Electrodynamics*]{}, (New York: John Wiley & Sons, 1975).
H. [Goldstein]{}, [*Classical mechanics*]{} (Addison-Wesley World Student Series, Reading, Mass.: Addison-Wesley, 1950).
D. Baskaran, L.P. Grishchuk and A.G. Polnarev, [*Phys. Rev. D*]{}, [**74**]{} (2006) 083008.
D. [Baskaran]{} and L. P. [Grishchuk]{}, , [**21**]{}, 4041, 2004.
J. M. Cordes, M. Kramer, T. J. W. Lazio, B. W. Stappers, D. C. Backer and S. Johnston, [*New Astron. Rev.*]{}, [**48**]{}, 1413, 2004;
R. N. Manchester, ArXiv/0710.5026v2, 2007
C. L. Talmadge, et al., [*Phys. Rev. Lett.*]{}, [**61**]{}, 1159-1162, 1988.
C. M. Will, [*Living. Rev. Rel.*]{}, [**9**]{}, 3, 2005.
M. S. Pshirkov, A. V. Tuntsov, K.A. Postnov, arXiv:0805.1519v1, submitted to PRL, 2008.
S. L. Dubovsky, P. G. Tinyakov, I. I. Tkachev, PRL, [**94**]{}, 181102, 2005; hep-th/0411158.
Evaluation of the transfer function\[AppendixA\]
================================================
Let us evaluate the integral in expression (\[transferfunction2\]), $$I(k)= \int_{-1}^{+1} d\mu\,\left(1-\mu^2\right)^2 \frac{\sin^2{\left(\frac{kD}{2}\left(1-\epsilon-\mu\right)\right)}}{\left(1-\epsilon-\mu\right)^2}, \label{SurfingIntegral}$$ in the physically interesting case when $\epsilon\rightarrow 0$ and $kD\gg1$. The integral can be separated into two distinctive contributions $$I(k) = I_{NR}(k) + I_{R}(k),$$ where $I_{NR}(k)$ is the non-resonance contribution $$\begin{aligned}
I_{NR}(k) & = & \int_{-1}^{1-\epsilon-\Delta\mu} d\mu\, \left( 1-\mu^2 \right)^2 \frac{\sin^2{\left(\frac{kD}{2}\left(1-\epsilon-\mu\right)\right)}}{\left(1-\epsilon-\mu\right)^2} \\
& & + \int_{1-\epsilon+\Delta\mu}^{+1} d\mu\, \left( 1-\mu^2 \right)^2 \frac{\sin^2{\left(\frac{kD}{2}\left(1-\epsilon-\mu\right)\right)}}{\left(1-\epsilon-\mu\right)^2}, \label{NRintegral}\end{aligned}$$ and $I_{R}(k)$ is the resonance (or, in other words, "surfing") contribution $$I_{R}(k) = \int_{1-\epsilon-\Delta\mu}^{1-\epsilon+\Delta\mu} d\mu\, \left( 1-\mu^2 \right)^2 \frac{\sin^2{\left(\frac{kD}{2}\left(1-\epsilon-\mu\right)\right)}}{\left(1-\epsilon-\mu\right)^2}. \label{Rintegral}$$ The quantity $\Delta\mu$ occurring in the limits of integration in the above expressions is fixed by the condition for the resonance to occur. This condition corresponds to the region, around $\mu=1-\epsilon$, where the sine function undergoes a few oscillations. Thus $\Delta\mu = N\lambda_{gw}/D = 2\pi N/kD$, where $N$ is the number of oscillations of the sine function, around the point $\mu=1-\epsilon$, included in the evaluation of the resonance. The value of $N$ is limited by the condition $\Delta\mu = 2\pi N/kD\ll\epsilon$, implying $N\ll\epsilon kD/2\pi$. Since in all our considerations we assume $\epsilon\ll 1$ and $\epsilon^2kD\gg1$, the condition imposed on $N$ is consistent with the additional condition $N\gg1$ that we shall assume.
When evaluating (\[NRintegral\]), since we assume $\epsilon\ll 1$, we can neglect the second integral in comparison with the first. In the evaluation of the remaining integral we can set $\epsilon = 0$. Thus, we get $$\begin{aligned}
I_{NR}(k) & \approx & \int_{-1}^{1} d\mu\, \left( 1+\mu\right)^2 \sin^2{\left( \frac{kD}{2}\left(1 - \mu\right) \right)} \\
& =& \frac{1}{2}\int_{-1}^{1} d\mu\, \left( 1+\mu\right)^2 \left(1-\cos{\left(kD\left(1-\mu\right)\right)}\right) \\
& \approx & \frac{1}{2}\int_{-1}^{1} d\mu\, \left( 1+\mu\right)^2 = \frac{4}{3},\label{NRintegral2}\end{aligned}$$ where, assuming $kD\gg1$, we have explicitly separated out the rapid oscillatory part and neglected it in the last line.
In order to evaluate (\[Rintegral\]), in the case of $\epsilon\ll 1$ and $kD\gg1$, it is helpful to notice that the factor $\left( 1-\mu^2 \right)^2$ on the right side of (\[Rintegral\]) is a slowly varying function over the range of integration. Taking this factor (evaluated at $\mu=1-\epsilon$) outside the integral, we get the following approximation for the resonance part of the integral $$\begin{aligned}
I_{R}(k) & \approx & 4\epsilon^2\int_{1-\epsilon-\Delta\mu}^{1-\epsilon+\Delta\mu} d\mu\, \frac{\sin^2{\left(\frac{kD}{2}\left(1-\epsilon-\mu\right)\right)}}{\left(1-\epsilon-\mu\right)^2} = 2\epsilon^2kD\int_{-\pi N}^{+\pi N}dx\,\frac{\sin^2{x}}{x^2} \\
&\approx& 2\pi\epsilon^2kD\left(1-O\left(\frac{1}{N}\right)\right) \approx 2\pi\epsilon^2kD. \label{Rintegral2}\end{aligned}$$
Finally, the total integral, given by the sum of the non-resonance (\[NRintegral2\]) and resonance (\[Rintegral2\]) parts, has the following form $$I(k) = I_{NR}(k)+I_R(k) \approx \frac{4}{3}\left[1+\frac{3\pi}{2}\,\epsilon^2kD\right]. \label{AppendixAintegralTotal}$$
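A brute-force numerical check of this approximation is straightforward; the Python sketch below integrates the integrand of (\[SurfingIntegral\]) as written above on a fine grid, for illustrative values of $kD$ and $\epsilon$ with $\epsilon\gg\epsilon_*$.

```python
import numpy as np

kD, eps = 1.0e4, 1.0e-2                        # illustrative values with eps >> eps_*
mu = np.linspace(-1.0, 1.0, 2_000_001)         # fine grid resolving the ~2*pi/kD oscillations
x = 1.0 - eps - mu
K = kD / 2.0
integrand = (1.0 - mu**2)**2 * K**2 * np.sinc(K * x / np.pi)**2   # sin^2(Kx)/x^2 without 0/0
I_num = np.trapz(integrand, mu)
I_approx = (4.0 / 3.0) * (1.0 + 1.5 * np.pi * eps**2 * kD)
print(I_num, I_approx)                         # the two values agree to within a few per cent
```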
---
abstract: 'In this paper, we propose an iterative method to compute the positive ground states of saturable nonlinear Schrödinger equations. A discretization of the saturable nonlinear Schrödinger equation leads to a nonlinear algebraic eigenvalue problem (NAEP). For any initial positive vector, we prove that this method converges globally with a locally quadratic convergence rate to a positive solution of NAEP. During the iteration process, the method requires the selection of a positive parameter $\theta_k$ in the $k$th iteration, and generates a positive vector sequence approximating the eigenvector of NAEP and a scalar sequence approximating the corresponding eigenvalue. We also present a halving procedure to determine the parameters $\theta_k$, starting with $\theta_k=1$ for each iteration, such that the scalar sequence is strictly monotonic increasing. This method can thus be used to illustrate the existence of positive ground states of saturable nonlinear Schrödinger equations. Numerical experiments are provided to support the theoretical results.'
author:
- 'Ching-Sung Liu'
title: A positivity preserving iterative method for finding the ground states of saturable nonlinear Schrödinger equations
---
Schrödinger equations, Saturable nonlinearity, Ground states, $M$-matrix, quadratic convergence, positivity preserving
65F15, 65F50
Introduction
============
The nonlinear Schrödinger (NLS) equation [@R92] is a nonlinear variation of the Schrödinger equation and is a general model in nonlinear science and mathematics. Such an equation can be expressed as follows: $$i\frac{\partial \phi}{\partial z}+\triangle \phi+\Gamma f(|\phi |^2)\phi=0 \text{ for some constant } \Gamma \in \mathbb{R},\label{NLS}$$ where $\phi = \phi(x, z): \mathbb{R}^2 \times \mathbb{R}^{+} \rightarrow \mathbb{C}$, the function $f$ denotes the nonlinearity and $i$ is the imaginary unit. A NLS equation is called a saturable NLS equation [@C82; @L17] if the nonlinear function $f(s)=1-1/(a+s^2)$, that is, $$i\frac{\partial \phi}{\partial z}=-\triangle \phi+\Gamma \left (1-\frac{1}{a(x)+| \phi |^2}\right )\phi , \text{ for } \Gamma >0, \label{NLS1}$$ where $a(x) >0$ is a bounded function. A saturable NLS equation is of interest in several applications [@G97; @K92; @K65; @M13; @M68; @M07], and has been extensively studied in the past thirty years. In many application areas, one is interested in finding the ground state vector of equation (\[NLS1\]). The ground state of equation (\[NLS1\]) is defined as the minimizer of the energy function, which is determined by the following constrained optimization problem [@C82; @L17]: $$m(\Gamma,I)=\inf \{ H(u)\text{ }|\text{ } u \in H^1(\mathbb{R}^2), \int_{\mathbb{R}^2}u^2=1 \}, \label{GroundS}$$ where $$H(u)=\int_{\mathbb{R}^2}|\nabla u|^2+\Gamma\left[ u^2- \ln\left( 1+\frac{u^2}{a(x)}\right) \right]dx.$$ Therefore, the associated Euler-Lagrange equation of (\[GroundS\]) is as follows: $$\label{eq:NSLE}
-\Delta u+\Gamma \left (1-\frac{1}{a(x)+u^2}\right ) u=\lambda u,$$ where $a(x)> 0$, $\int_{-\infty}^{\infty}u^2(x)dx=1$, and $(\lambda, u)$ is the eigenpair. In general, the eigenfunction $u(x)$ describes the probability distribution of finding a particle in a particular region in space. Therefore, the existence of positive solutions $u(x)$ [@L17] and the problem of computing these solutions have attracted much attention in recent years. Here we consider the finite-difference discretization of the nonlinear eigenvalue problem (\[eq:NSLE\]) with Dirichlet boundary conditions, and the discretization gives a nonlinear algebraic eigenvalue problem (NAEP) $$\label{dnep}
A\mathbf{u}+\Gamma \mathrm{diag} \left (\mathbf{e}-\frac{\mathbf{e}}{\mathbf{a}+\mathbf{u}^{[2]}}\right ) \mathbf{u}=\lambda \mathbf{u}, \quad \mathbf{u}^T\mathbf{u}=1,$$ where $\mathbf{a}>0, \Gamma >0$, $\mathbf{u}=[u_{1},u_{2},\ldots ,u_{n}]^{T}\in \mathbb{R}^{n},$ $\mathbf{u}^{[2]}=[u_{1}^{2},u_{2}^{2},\ldots ,u_{n}^{2}]^{T},$ $A$ is an irreducible nonsingular $M$-matrix and $\mathbf{e}=[1,\ldots ,1]^{T}$. We aim to provide a structure-preserving algorithm with a fast convergence rate for computing positive eigenvectors $\mathbf{u}_{\ast}$ and eigenvalues $\lambda_{\ast}$ of NAEP (\[dnep\]), and to give a detailed convergence analysis.
In many applications, the positivity structure of the approximate solutions is important; if the approximations lose the positivity structure, then they may be meaningless or uninterpretable. Therefore, in this paper, we propose a positivity preserving iteration for the nonlinear algebraic eigenvalue problem (\[dnep\]) by combining the idea of Newton’s method with the idea of the Noda iteration [@N71], called the Newton-Noda iteration (NNI). NNI is a Newton-type iterative method with a new kind of full Newton step; it has the advantage that no line searches are needed, and it naturally preserves the strict positivity of the target eigenvector $\mathbf{u}_{\ast}$ in its approximations at all iterations. We also present a halving procedure to determine the parameters $\theta_k$, starting with $\theta_k=1$ for each iteration, such that the sequence approximating the target eigenvalue $\lambda_{\ast}$ is strictly monotonically increasing and bounded, and thus its global convergence is guaranteed. Another advantage of NNI is that it converges quadratically and computes the desired eigenpair correctly for any positive initial vector.
The rest of this paper is organized as follows. In Section 2, we present a Newton-Noda iteration. In Section 3, we prove some basic properties for Newton-Noda iteration. Section 4 addresses the global convergence and the local convergence rate of NNI. In Section 5, we provide numerical examples to verify the theoretical results and the performance of NNI. Some concluding remarks are given in the last section.
Throughout this paper, we use the bold face letters to denote a vector and use the $2$-norm for vectors and matrices. The superscript $T$ denotes the transpose of a vector or matrix, and we use $\mathbf{v}^{(i)}$ to represent the $i$th element of a vector $\mathbf{v} $. $\mathbf{v}^{[m]}$ denotes element-by-element powers, i.e., $\mathbf{v}^{[m]}=[v_{1}^{m},v_{2}^{m},\ldots ,v_{n}^{m}]^{T}.$ A real matrix $A=\left[ A_{ij}\right] \in \mathbb{R}^{n\times k}$ is called nonnegative (positive) if $A_{ij}\geq 0$ $(A_{ij}>0)$. For real matrices $A$ and $B$ of the same size, we write $A\geq B$ ($A>B$) if $A-B$ is nonnegative (positive). A real square matrix $A$ is called a $Z$-matrix if all its off-diagonal elements are nonpositive. A matrix $A$ is called a M-matrix if it is a Z-matrix with $A^{-1} \geq 0$. A matrix $A$ is called reducible [@BPl94; @HJo85] if there exists a nonempty proper index subset $S\subset \left\{
1,2,\ldots ,n\right\} $ such that$$A_{ij}=0,\text{ }\forall \ i\in S,\text{ }\forall
\ j\notin S.$$If $A$ is not reducible, then we call $A$ irreducible. For a pair of positive vectors $\mathbf{v}$ and $\mathbf{w}$, define $$\max \left( \frac{\mathbf{w}}{\mathbf{v}}\right) =\underset{i}{\max }\left(
\frac{\mathbf{w}^{(i)}}{\mathbf{v}^{(i)}}\right) ,\text{ \ }\min \left(
\frac{\mathbf{w}}{\mathbf{v}}\right) =\underset{i}{\min }\left( \frac{\mathbf{w}^{(i)}}{\mathbf{v}^{(i)}}\right) .$$
The Newton-Noda iteration
=========================
In this section, we will present a Newton-Noda iteration (NNI) for computing a positive eigenvector $\mathbf{u}_{\ast}$ of NAEP (\[dnep\]), and then we prove some basic properties of NNI in Section 3, which will be used to establish its convergence theory in Section 4.
First, NAEP (\[dnep\]) can be simplified as follows: $$\mathcal{A}(\mathbf{u})\mathbf{u}=\lambda \mathbf{u},$$ where $$\mathcal{A}(\mathbf{u})=A+\Gamma \mathrm{diag} \left (\mathbf{e}-\frac{\mathbf{e}}{\mathbf{a}+\mathbf{u}^{[2]}}\right )$$ and $\mathrm{diag}\left ( \ast \right )$ returns a square diagonal matrix with the elements of vector $\ast$ on the main diagonal. We define two vector-valued functions $\mathbf{r}:$ $\mathbb{R}_{+}^{n+1}\mathbb{\rightarrow R}^{n}$ and $F:$ $\mathbb{R}_{+}^{n+1}\mathbb{\rightarrow R}^{n+1}$ as follows: $$\mathbf{r}(\mathbf{u,}\lambda )=\mathcal{A}(\mathbf{u})\mathbf{u}-\lambda \mathbf{u}, \quad \mathbf{F}(\mathbf{u},\lambda )=\left[
\begin{array}{c}
-\mathbf{r}(\mathbf{u,}\lambda ) \\
\frac{1}{2}\left( 1-\mathbf{u}^{T}\mathbf{u}\right)\end{array}\right]. \label{eq:Fx}$$ The Fréchet derivative of $F$ is given by $$\label{eq:Fre}
F^{\prime}(\mathbf{u},\lambda)=\left[
\begin{array}{cc}
J({\mathbf{u}}) & -\mathbf{u} \\
-\mathbf{u}^{T} & 0\end{array}
\right ],$$ where $$J(\mathbf{u})=A+(\Gamma-\lambda)I-\Gamma \mathrm{diag} \left (\frac{\mathbf{a}-\mathbf{u}^{[2]}}{(\mathbf{a}+\mathbf{u}^{[2]})^{[2]}}\right ).$$
Next, we consider using Newton’s method to solve the equation $\mathbf{F}(\mathbf{u,}\lambda )=0$. Given an approximation $(\mathbf{u}_k,\widehat{\lambda }_{k})$, Newton’s method produces the next approximation $(\mathbf{u}_{k+1},\widehat{\lambda }_{k+1})$ as follows:
$$\begin{aligned}
\left[
\begin{array}{cc}
J(\mathbf{u}_k) & -\mathbf{u}_k \\
-\mathbf{u}_k^{T} & 0\end{array}\right] \left[
\begin{array}{c}
\Delta_{k} \\
\delta _{k}\end{array}\right]&=-\left[
\begin{array}{c}
\mathbf{r}(\mathbf{u}_k,\widehat{\lambda }_{k}) \\
\frac{1}{2}\left( \mathbf{u}_k^{T}\mathbf{u}_k-1\right)\end{array}\right], \label{eq:step1} \\
\mathbf{u}_{k+1}& =\mathbf{u}_k\,+\Delta_{k},
\label{eq:step2} \\
\widehat{\lambda }_{k+1}& =\widehat{\lambda }_{k}+\delta _{k}.
\label{eq:step3}\end{aligned}$$
From the first equation of (\[eq:step1\]), we have $$\begin{aligned}
J(\mathbf{u}_k)(\Delta_k+\mathbf{u}_k) &=&J(\mathbf{u}_k)\Delta_k+J(\mathbf{u}_k)\mathbf{u}_k\\
&=&\delta_k \mathbf{u}_k-\mathbf{r}(\mathbf{u}_k,\widehat{\lambda }_{k})+ J(\mathbf{u}_k)\mathbf{u}_k \\
&=&\delta_k \mathbf{u}_k-(\mathcal{A}(\mathbf{u}_k)-\widehat{\lambda }_{k} I)\mathbf{u}_k\\&+& (\mathcal{A}(\mathbf{u}_k)-\widehat{\lambda }_{k} I)\mathbf{u}_k+2\Gamma \mathrm{diag} \left (\frac{\mathbf{u}_k^{[2]}}{(\mathbf{a}+\mathbf{u}_k^{[2]})^{[2]}}\right )\mathbf{u}_k \\
&=& \delta_k \mathbf{u}_k+2\Gamma \mathrm{diag} \left (\frac{\mathbf{u}_k^{[2]}}{(\mathbf{a}+\mathbf{u}_k^{[2]})^{[2]}}\right )\mathbf{u}_k.\end{aligned}$$Hence, $$\mathbf{u}_{k+1}=J(\mathbf{u}_k)^{-1}\left (\delta_k \mathbf{u}_k+2\Gamma \mathrm{diag} \left (\frac{\mathbf{u}_k^{[2]}}{(\mathbf{a}+\mathbf{u}_k^{[2]})^{[2]}}\right )\mathbf{u}_k\right ).
\label{eq: linearsys0}$$ Since $\mathbf{u}_k$ is going to approximate the positive eigenvector of NAEP, we will also require $\mathbf{u}_k>0$. However, we cannot guarantee $\mathbf{u}_{k+1}>0$ in (\[eq: linearsys0\]) unless we have $$\delta_k>0, \quad J(\mathbf{u}_k)^{-1}\geq0.$$ What is needed here is that $J(\mathbf{u}_k)$ is a nonsingular $M$-matrix. For $\mathbf{u}_k>0$, we suggest taking $$\label{eq:lamk}
\lambda_k =\min \left( \frac{\mathcal{A}(\mathbf{u}_k)\mathbf{u}_k}{\mathbf{u}_k}\right),$$ which is precisely the idea of the Noda iteration [@N71]. This implies that the $Z$-matrix $\mathcal{A}(\mathbf{u}_k)-\lambda_k I$ is such that $(\mathcal{A}(\mathbf{u}_k)-\lambda_k I)\mathbf{u}_k\ge 0$. Thus $\mathcal{A}(\mathbf{u}_k)-\lambda_k I$ is a nonsingular $M$-matrix when $(\mathbf{u}_k, \lambda_k)$ is not an eigenpair, and is a singular $M$-matrix when $(\mathbf{u}_k, \lambda_k)$ is an eigenpair. Since $$\label{JandA}
J(\mathbf{u}_k)-(\mathcal{A}(\mathbf{u}_k)-\lambda_k I)=2\Gamma \mathrm{diag} \left (\frac{\mathbf{u}_k^{[2]}}{(\mathbf{a}+\mathbf{u}_k^{[2]})^{[2]}}\right ),$$ we have $J(\mathbf{u}_k)\mathbf{u}_k>0$. Thus $J(\mathbf{u}_k)$ is a nonsingular $M$-matrix. Based on (\[eq:step1\]), (\[eq:step2\]) and (\[eq:lamk\]), we can present NNI as Algorithm \[alg1\].
1. Given ${\mathbf u}_0 > 0$ with $\Vert {\mathbf u}_0\Vert =1$, ${\lambda}_{0}= \min \left( \frac{\mathcal{A}(\mathbf{u}_{0}){\bf u}_0}{\mathbf{u}_{0}}\right)$ and tol $>0$.
2. [**for**]{} $k =0,1,2,\dots$
3. Solve the linear system $F'(\mathbf{u_k},\lambda_k ) \left [\begin{array}{c}
\Delta_k\\
\delta_k
\end{array}\right ]=- F(\mathbf{u_k},\lambda_k ) $.
4. Choose a scalar $\theta_k>0$.
5. Compute the vector ${\mathbf{w}}_{k+1} =\mathbf{u}_{k}\,+\theta_k \Delta_k$.
6. Normalize the vector ${\mathbf u}_{k+1}= {\mathbf{w}}_{k+1}/\Vert {\mathbf{w}}_{k+1}\Vert$.
7. Compute ${\lambda }_{k+1} =\min \left( \frac{\mathcal{A}(\mathbf{u}_{k+1}){\bf u}_{k+1}}{\mathbf{u}_{k+1}}\right)$.
8. [**until**]{} convergence: $\Vert \mathcal{A}(\mathbf{u}_k)\mathbf{u}_k-{\lambda}_k\mathbf{u}_k\Vert < $ tol.
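To make the steps above concrete, here is a minimal NumPy sketch of NNI for a toy one-dimensional finite-difference discretization. The problem data ($n$, $\Gamma$, $\mathbf{a}$) are illustrative, the Newton system is solved in the sign convention of (\[eq:step1\]) (i.e. $J(\mathbf{u}_k)\Delta_k-\delta_k\mathbf{u}_k=-\mathbf{r}(\mathbf{u}_k,\lambda_k)$), and the choice of $\theta_k$ follows the halving procedure described in the abstract (halve $\theta_k$, starting from $1$, until $\lambda_{k+1}>\lambda_k$).

```python
import numpy as np

def nni(A, a, Gamma, u0, tol=1e-10, maxit=100):
    """Newton-Noda iteration (sketch) for A(u)u = lam*u, u > 0, ||u|| = 1."""
    n = len(u0)
    u = u0 / np.linalg.norm(u0)
    for k in range(maxit):
        Au = A @ u + Gamma * (1.0 - 1.0 / (a + u**2)) * u          # \mathcal{A}(u)u
        lam = np.min(Au / u)                                        # Noda-type eigenvalue approximation
        r = Au - lam * u                                            # residual r(u, lam) >= 0
        if np.linalg.norm(r) < tol:
            return u, lam, k
        # J(u) = A + (Gamma - lam) I - Gamma diag((a - u^2)/(a + u^2)^2)
        J = A + (Gamma - lam) * np.eye(n) - Gamma * np.diag((a - u**2) / (a + u**2)**2)
        # bordered Newton system:  J*Delta - delta*u = -r,  -u^T Delta = 0
        M = np.block([[J, -u[:, None]], [-u[None, :], np.zeros((1, 1))]])
        sol = np.linalg.solve(M, np.concatenate([-r, [0.0]]))
        Delta = sol[:n]
        # halving procedure: start with theta = 1 and halve until lambda strictly increases
        theta = 1.0
        while True:
            w = u + theta * Delta
            v = w / np.linalg.norm(w)
            Av = A @ v + Gamma * (1.0 - 1.0 / (a + v**2)) * v
            if (np.all(w > 0) and np.min(Av / v) > lam) or theta < 1e-12:
                break
            theta *= 0.5
        u = v
    return u, lam, maxit

# toy data: 1-D Dirichlet Laplacian (an irreducible nonsingular M-matrix), a(x) = 1, Gamma = 10
n = 50
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
u, lam, its = nni(A, a=np.ones(n), Gamma=10.0, u0=np.ones(n))
print(its, lam, bool(np.all(u > 0)))
```

In this sketch the positive iterates and the monotone growth of $\lambda_k$ come out automatically, which is exactly the behaviour the analysis below is meant to justify.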
In what follows, we will prove the positivity of $\mathbf{u}_{k}$ and give a strategy for choosing $\theta _{k}$. These results will show that Algorithm \[alg1\] is a positivity preserving algorithm.
Positivity of $\mathbf{u}_{k}$
------------------------------
Suppose that $\left\{\mathbf{u}_{k}, {\lambda }_{k}\right\}$ is generated by Algorithm \[alg1\]. We now prove that the parameter $\theta_k \in (0,1]$ in Algorithm \[alg1\] naturally preserves the strict positivity of $\mathbf{u}_{k}$ at all iterations.
For any vector $\mathbf{u}>0$, from (\[eq:Fre\]), it follows that $$\label{eq:Fnon}
F^{\prime}(\mathbf{u}, \lambda)=\left[
\begin{array}{cc}
I & 0 \\
-\mathbf{u}^{T} (J(\mathbf{u}))^{-1} & 1\end{array}
\right ] \left[
\begin{array}{cc}
J({\mathbf{u}}) & -\mathbf{u} \\
0 & -\mathbf{u}^{T} (J(\mathbf{u}))^{-1} \mathbf{u}\end{array}
\right ]$$ is nonsingular and $$\begin{aligned}
\label{eq1.1}
(F^{\prime}(\mathbf{u}, \lambda))^{-1}=\left[
\begin{array}{cc}
(J(\mathbf{u}))^{-1} -\frac{(J(\mathbf{u}))^{-1} \mathbf{u} \mathbf{u}^T(J(\mathbf{u}))^{-1}}{\mathbf{u}^{T}(J(\mathbf{u}))^{-1} \mathbf{u}} & -\frac{
(J(\mathbf{u}))^{-1} \mathbf{u} }{\mathbf{u}^{T} (J(\mathbf{u}))^{-1}
\mathbf{u}} \\
-\frac{ \mathbf{u}^T (J(\mathbf{u}))^{-1} }{\mathbf{u}^{T} (J(\mathbf{u}))^{-1} \mathbf{u}} & -\frac{ 1 }{\mathbf{u}^{T} (J(\mathbf{u}))^{-1}
\mathbf{u}}\end{array}
\right ].\end{aligned}$$
\[lem1\] Given $\mathbf{a},\Gamma >0$. Suppose that $\lambda_k\in \mathbb{R}$ and $\mathbf{u}_k>0$ with $\|\mathbf{u}_k\|=1$ such that $\mathbf{r}(\mathbf{u}_k,\lambda_k )\geqslant 0$. Then $$\begin{aligned}
\label{eq1.2}
\frac{1}{2}(1+ \mathbf{u}_k^T \mathbf{u}_k)-\Gamma \mathbf{u}_k^T (J(\mathbf{u}_k))^{-1} \mathrm{diag} \left (\frac{2\mathbf{u}_k^{[2]}}{(\mathbf{a}+\mathbf{u}_k^{[2]})^{[2]}}\right )\mathbf{u}_k\geqslant 0.\end{aligned}$$ Moreover, the equality holds if and only if $\mathbf{r}(\mathbf{u}_k,\lambda_k )= 0$.
Since $\mathcal{A}({\bf u}_k){\bf u}_k-\lambda_k{\bf u}_k\geqslant 0$, $J({\bf u}_k)=(\mathcal{A}({\bf u}_k)-\lambda_k I)+\Gamma {\rm diag} \left (\frac{2{\bf u}_k^{[2]}}{({\bf a}+{\bf u}_k^{[2]})^{[2]}}\right )$ is nonsingular $M$-matrix. Then we have $$\begin{aligned}
\Gamma(J({\bf u}_k))^{-1}{\rm diag} \left (\frac{2{\bf u}_k^{[2]}}{({\bf a}+{\bf u}_k^{[2]})^{[2]}}\right ){\bf u}_k=\left(I-(J({\bf u}_k))^{-1}(\mathcal{A}({\bf u}_k)-\lambda_k I)\right){\bf u}_k.\end{aligned}$$ Hence, $$\begin{aligned}
\frac{1}{2}&(1+ {\bf u}_k^T {\bf u}_k)-\Gamma {\bf u}_k^T (J({\bf u}_k))^{-1} {\rm diag} \left (\frac{2{\bf u}_k^{[2]}}{({\bf a}+{\bf u}_k^{[2]})^{[2]}}\right ){\bf u}_k\\
&=\frac{1}{2}(1+ {\bf u}_k^T {\bf u}_k)-{\bf u}_k^T {\bf u}_k+{\bf u}_k^T (J({\bf u}_k))^{-1}(\mathcal{A}({\bf u}_k){\bf u}_k-\lambda_k {\bf u}_k)\\
&=\frac{1}{2}(1- {\bf u}_k^T {\bf u}_k)+{\bf u}_k^T (J({\bf u}_k))^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k ).\end{aligned}$$ Since $\|{\bf u}_k\|=1$ and $\mathbf{r}(\mathbf{u}_k,\lambda_k )\geqslant 0$, holds by using ${\bf u}_k^T (J({\bf u}_k))^{-1}>0$. It is easily seen that the equality of holds if and only if $\mathbf{r}(\mathbf{u}_k,\lambda_k )=0$.
\[positivethm\] Given $\mathbf{a},\Gamma >0$. Assume $\left\{\mathbf{u}_{k}, {\lambda }_{k}\right\}$ is generated by Algorithm \[alg1\]. If $\theta_k \in (0,1]$, then $\mathbf{u}_{k}>0$ for all $k$.
Since ${\bf u}_0>0$, by mathematical induction, it suffices to show that if ${\bf u}_k>0$ then ${\bf u}_{k+1}>0$. Suppose that ${\bf u}_k>0$, it follows from the step 3 of Algorithm \[alg1\] that $$\begin{aligned}
F'(\mathbf{u_k},\lambda_k ) \left [\begin{array}{c}
{\bf u}_k+\Delta_k\\
\delta_k
\end{array}\right ]&=-\left[
\begin{array}{c}
\mathcal{A}({\bf u}_k){\bf u}_k-\lambda_k{\bf u}_k \\
\frac{1}{2}\left( 1-{\bf u}_k^{T}{\bf u}_k\right )
\end{array}
\right] +\left [\begin{array}{c}
J({\bf u}_k){\bf u}_k\\
-{\bf u}_k^{T}{\bf u}_k
\end{array}\right ]\\
&=\left [\begin{array}{c}
\Gamma {\rm diag} \left (\frac{2{\bf u}_k^{[2]}}{({\bf a}+{\bf u}_k^{[2]})^{[2]}}\right ){\bf u}_k\\
-\frac{1}{2}\left( 1+{\bf u}_k^{T}{\bf u}_k\right )
\end{array}\right ].\end{aligned}$$ By (\[eq1.1\]), we have $$\begin{aligned}
\label{eq:udelta}
{\bf u}_k+\Delta_k =&\Gamma\left(I-\frac{(J({\bf u}_k))^{-1} {\bf u}_k {\bf u}_k^T}{\mathbf{u}_k^{T}(J({\bf u}_k))^{-1} {\bf u}_k} \right) (J({\bf u}_k))^{-1} \notag {\rm diag} \left (\frac{2{\bf u}_k^{[2]}}{({\bf a}+{\bf u}_k^{[2]})^{[2]}}\right ){\bf u}_k\\ \notag
&+\frac{1+ {\bf u}_k^T {\bf u}_k}{2\mathbf{u}_k^{T}(J({\bf u}_k))^{-1} {\bf u}_k} (J({\bf u}_k))^{-1}{\bf u}_k\\ \notag
=&\Gamma(J({\bf u}_k))^{-1} {\rm diag} \left (\frac{2{\bf u}_k^{[2]}}{({\bf a}+{\bf u}_k^{[2]})^{[2]}}\right ){\bf u}_k\\
&+\frac{\frac{1}{2}(1+ {\bf u}_k^T {\bf u}_k)-\Gamma {\bf u}_k^T (J({\bf u}_k))^{-1} {\rm diag} \left (\frac{2{\bf u}_k^{[2]}}{({\bf a}+{\bf u}_k^{[2]})^{[2]}}\right ){\bf u}_k}{\mathbf{u}_k^{T}(J({\bf u}_k))^{-1} {\bf u}_k} (J({\bf u}_k))^{-1}{\bf u}_k.\end{aligned}$$ Since $\Gamma>0$, ${\bf u}_k>0$, $J({\bf u}_k)$ is a nonsingular $M$-matrix and $\mathbf{u}_k^{T}(J({\bf u}_k))^{-1} {\bf u}_k>0$, it follows from Lemma \[lem1\] that ${\bf u}_k+\Delta_k>0$. Therefore, ${\bf w}_{k+1}={\bf u}_k+\theta_k \Delta_k>0$ if $0< \theta_k \le 1 $, and hence, ${\mathbf u}_{k+1}= {\mathbf{w}}_{k+1}/\Vert {\mathbf{w}}_{k+1}\Vert>0$.
The following two facts, which we record as Remark 1, follow directly from step 3 of Algorithm \[alg1\].
1. $\mathbf{u}_k^T\Delta_k=0$: From the second block row of the Newton system in step 3 of Algorithm \[alg1\], it is easily seen that $\mathbf{u}_k^T\Delta_k=\frac{1}{2}(1-\mathbf{u}_k^T \mathbf{u}_k)=0.$
2. $\delta _{k}\geqslant 0$: From step 3 of Algorithm \[alg1\], solving the Newton system by block elimination, we have $$\begin{aligned}
\delta _{k}& =\frac{1}{\mathbf{u}_{k}^{T}(J(\mathbf{u}_{k}))^{-1}\mathbf{u}_{k}}\mathbf{u}_{k}^{T}(J(\mathbf{u}_{k}))^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k )+\frac{\frac{1}{2}(1-\mathbf{u}_{k}^{T}\mathbf{u}_{k})}{\mathbf{u}_{k}^{T}(J(\mathbf{u}_{k}))^{-1}\mathbf{u}_{k}} \\
& =\frac{1}{\mathbf{u}_{k}^{T}(J(\mathbf{u}_{k}))^{-1}\mathbf{u}_{k}}\mathbf{u}_{k}^{T}(J(\mathbf{u}_{k}))^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k )\geqslant 0.\end{aligned}$$
\[equiThm\]If $\delta_k$, $\Delta_k$ and $\mathbf{u}_{k}$ are generated by Algorithm \[alg1\], then the following statements are equivalent:$$\text{{\rm (i) }}\delta _{k}=0;\quad \text{ {\rm (ii) }}\mathbf{r}(\mathbf{u}_k,\lambda_k )=0; \quad \text{{\rm (iii) }}\Delta_k =0.$$
From step 3 of Algorithm \[alg1\], we have $$F'(\mathbf{u_k},\lambda_k ) \left [\begin{array}{c}
\Delta_k\\
\delta_k
\end{array}\right ]=-\left[
\begin{array}{c}
\mathbf{r}(\mathbf{u}_k,\lambda_k ) \\
0\end{array}
\right].$$
(i)$\Rightarrow $(ii): From (ii) of Remark 1, we get $\delta _{k}=0$ if and only if $\mathbf{r}(\mathbf{u}_k,\lambda_k )=0$.
(ii)$\Rightarrow $(iii): Since $\mathbf{r}(\mathbf{u}_k,\lambda_k )=0$ and $F'(\mathbf{u_k},\lambda_k )$ is a nonsingular matrix, we have $\Delta_k=0$ and $\delta_k=0$.
\(iii) $\Rightarrow $ (i): If $\Delta_k =0$, then $$F'(\mathbf{u_k},\lambda_k ) \left [\begin{array}{c}
0\\
\delta_k
\end{array}\right ]=-\left[
\begin{array}{c}
\mathbf{r}(\mathbf{u}_k,\lambda_k ) \\
0
\end{array}
\right],$$ and it follows $$-\delta_k \mathbf{u}_k=-\mathbf{r}(\mathbf{u}_k,\lambda_k )=-\left(\mathcal{A}({\bf u}_k){\bf u}_k-\lambda_k{\bf u}_k\right),$$ which implies $$\mathcal{A}({\bf u}_k){\bf u}_k=\left(\lambda_k+\delta_k\right){\bf u}_k.$$ Then $${\lambda }_{k} +\delta_k=\min \left( \frac{\mathcal{A}(\mathbf{u}_{k}){\bf u}_{k}}{\mathbf{u}_{k}}\right)=\lambda_k,$$ which means $\delta_k=0$.
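To make the preceding discussion concrete, the following is a minimal sketch of one iteration of NNI for the NAEP above, written with dense NumPy arrays. It only mirrors the formulas used in this section ($\mathcal{A}(\mathbf{u})$, $\mathbf{r}(\mathbf{u},\lambda)$, $J(\mathbf{u})$, the Newton system of step 3, the normalization and the update of $\lambda$); the function name nni_step and the default value of theta are illustrative and not part of Algorithm \[alg1\].

```python
import numpy as np

def nni_step(A, a, Gamma, u, lam, theta=1.0):
    # A(u) = A + Gamma * diag(e - e/(a + u^2))
    n = len(u)
    Au = A + Gamma * np.diag(1.0 - 1.0 / (a + u**2))
    r = Au @ u - lam * u                          # r(u_k, lambda_k)
    # J(u) = A(u) - lambda*I + Gamma * diag(2u^2/(a + u^2)^2)
    J = Au - lam * np.eye(n) + Gamma * np.diag(2.0 * u**2 / (a + u**2)**2)
    # Newton system of step 3:  F'(u, lambda) [Delta; delta] = -[r; 0]
    Fp = np.block([[J, -u.reshape(-1, 1)],
                   [-u.reshape(1, -1), np.zeros((1, 1))]])
    sol = np.linalg.solve(Fp, -np.concatenate([r, [0.0]]))
    Delta, delta = sol[:n], sol[n]
    w = u + theta * Delta                         # w_{k+1}
    u_new = w / np.linalg.norm(w)                 # u_{k+1}
    Au_new = A + Gamma * np.diag(1.0 - 1.0 / (a + u_new**2))
    lam_new = np.min(Au_new @ u_new / u_new)      # lambda_{k+1}
    return u_new, lam_new, Delta, delta
```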
The strategy for choosing $\theta_k$
------------------------------------
In this section, we would like to choose $\theta_k \in (0,1]$ such that the sequence $\left\{ \lambda _{k}\right\} $ is strictly increasing and bounded above.
\[posiyk\] Given a unit vector $\mathbf{u}_{k}>0$ and $\theta_k \in (0,1]$, we have $$\lambda _{k+1}=\lambda _{k}+\min \left( \frac{\mathbf{h}_{k}(\theta _{k})}{\mathbf{u}_{k+1}}\right),
\label{eq: recurrLam}$$where $\mathbf{h}_{k}(\theta_k )=\mathbf{r}(\mathbf{u}_{k+1},\lambda _{k})$. Moreover, $\mathbf{h}_{k}(\theta _{k})$ can also be expressed in the form $$\mathbf{h}_{k}(\theta_k )=\frac{1-\theta_k}{\Vert {\mathbf{w}}_{k+1}\Vert}\mathbf{r}(\mathbf{u}_k,\lambda_k)+\frac{\theta_k \delta_k}{\Vert {\mathbf{w}}_{k+1}\Vert} \mathbf{u}_k+\mathbf{R}(\theta_k \Delta_k) \label{eq:ftheta}$$ where $\Vert\mathbf{R}(\theta_k \Delta_k)\Vert \leq M\Vert \theta_k \Delta_k\Vert^2$ for some constant $M$ independent of $k$.
By Theorem \[positivethm\], we know that $\mathbf{u}_{k}>0$ for all $k$. $$\lambda _{k+1}-\lambda _{k}=\min \left( \frac{\mathcal{A}(\mathbf{u}_{k+1}){\bf u}_{k+1}}{\mathbf{u}_{k+1}}\right)-\lambda _{k}=\min \left( \frac{\mathbf{h}_{k}(\theta _{k})}{\mathbf{u}_{k+1}}\right),
$$where $\mathbf{h}_{k}(\theta_k )=\mathbf{r}(\mathbf{u}_{k+1},\lambda _{k})$. By Taylor’s theorem, we have $$\begin{aligned}
\mathbf{r}(\mathbf{u}_{k+1},\lambda_k)&=&\mathbf{r}(\mathbf{u}_{k},\lambda_k)+J(\mathbf{u}_{k})(\mathbf{u}_{k+1}-\mathbf{u}_{k})+\mathbf{E}_k \notag\\
&=&\mathbf{r}(\mathbf{u}_{k},\lambda_k)+J(\mathbf{u}_{k})(\frac{\mathbf{u}_{k}+\theta_k \Delta_k}{\Vert {\mathbf{w}}_{k+1}\Vert}-\mathbf{u}_{k})+\mathbf{E}_k \notag\\
&=&\mathbf{r}(\mathbf{u}_{k},\lambda_k)+\left(\frac{1}{\Vert {\mathbf{w}}_{k+1}\Vert}-1\right)J(\mathbf{u}_{k})\mathbf{u}_{k}+\frac{\theta_k }{\Vert {\mathbf{w}}_{k+1}\Vert}J(\mathbf{u}_{k})\Delta_k+\mathbf{E}_k\notag\\
&=&\mathbf{r}(\mathbf{u}_{k},\lambda_k)+\left(\frac{1}{\Vert {\mathbf{w}}_{k+1}\Vert}-1\right)\left[\mathbf{r}(\mathbf{u}_{k},\lambda_k)+2\Gamma \mathrm{diag} \left (\frac{\mathbf{u}_k^{[2]}}{(\mathbf{a}+\mathbf{u}_k^{[2]})^{[2]}}\right )\mathbf{u}_{k}\right] \notag\\
&+&\frac{\theta_k }{\Vert {\mathbf{w}}_{k+1}\Vert}\left[\delta_k\mathbf{u}_{k}- \mathbf{r}(\mathbf{u}_{k},\lambda_k)\right]+\mathbf{E}_k \notag\\
&=& \frac{1-\theta_k}{\Vert {\mathbf{w}}_{k+1}\Vert}\mathbf{r}(\mathbf{u}_k,\lambda_k)+\frac{\theta_k \delta_k\mathbf{u}_k}{\Vert {\mathbf{w}}_{k+1}\Vert} +\frac{1-\Vert {\mathbf{w}}_{k+1}\Vert}{\Vert {\mathbf{w}}_{k+1}\Vert}\left[\frac{2\Gamma\mathbf{u}_k^{[3]}}{(\mathbf{a}+\mathbf{u}_k^{[2]})^{[2]}} \right]+\mathbf{E}_k, \label{eq:ruk}\end{aligned}$$ where $\Vert\mathbf{E}_k\Vert \leq M_1\Vert \mathbf{u}_{k+1}-\mathbf{u}_{k}\Vert^2$.
Since $\mathbf{u}_{k}^{T}\Delta_k=0$ from Remark 1, we have $\Vert {\mathbf{w}}_{k+1}\Vert= \sqrt{1+\Vert \theta_k \Delta_k\Vert^2 }.$ Hence, the third term in the right-hand side of (\[eq:ruk\]) is bounded by $$\begin{aligned}
\Vert \frac{1-\Vert {\mathbf{w}}_{k+1}\Vert}{\Vert {\mathbf{w}}_{k+1}\Vert} \left[\frac{2\Gamma\mathbf{u}_k^{[3]}}{(\mathbf{a}+\mathbf{u}_k^{[2]})^{[2]}} \right]\Vert
&\le &\Vert \frac{1-\sqrt{1+\Vert \theta_k \Delta_k\Vert^2 }}{\sqrt{1+\Vert \theta_k \Delta_k\Vert^2 }} \Vert \Vert\frac{2\Gamma\mathbf{u}_k^{[3]}}{\mathbf{a}^{[2]}+2\mathbf{a}\mathbf{u}_k^{[2]}+\mathbf{u}_k^{[4]}} \Vert \notag \\
&\leq& \frac{\Gamma}{2\min{(\mathbf{a})}}\Vert \theta_k \Delta_k\Vert^2, \label{eq:wk1}\end{aligned}$$ and the upper bound of $\Vert\mathbf{E}_k\Vert $ can be re-estimated as follows: $$\begin{aligned}
\Vert\mathbf{E}_k\Vert
&\leq &M_1\Vert \mathbf{u}_{k+1}-\mathbf{u}_{k}\Vert^2 \notag \\
&=&M_1\Vert \left(\frac{1}{\Vert {\mathbf{w}}_{k+1}\Vert}-1\right)\mathbf{u}_{k}+\frac{\theta_k }{\Vert {\mathbf{w}}_{k+1}\Vert}\Delta_k\Vert^2 \notag \\
&\leq &M_1\Vert \left(\frac{1}{\Vert {\mathbf{w}}_{k+1}\Vert}-1\right)\mathbf{u}_{k}\Vert^2+\frac{M_1 }{\Vert {\mathbf{w}}_{k+1}\Vert}\Vert \theta_k\Delta_k\Vert^2 \notag \\
&\leq & \left(\frac{M_1}{2}+M_1\right)\Vert \theta_k \Delta_k\Vert^2. \label{eq:resi}\end{aligned}$$ From the above relation (\[eq:ruk\])-(\[eq:resi\]), we have $$\mathbf{h}_{k}(\theta_k )=\frac{1-\theta_k}{\Vert {\mathbf{w}}_{k+1}\Vert}\mathbf{r}(\mathbf{u}_k,\lambda_k)+\frac{\theta_k \delta_k}{\Vert {\mathbf{w}}_{k+1}\Vert} \mathbf{u}_k+\mathbf{R}(\theta_k \Delta_k),$$ where $$\mathbf{R}(\theta_k \Delta_k)=\frac{1-\Vert {\mathbf{w}}_{k+1}\Vert}{\Vert {\mathbf{w}}_{k+1}\Vert}\left[\frac{2\Gamma\mathbf{u}_k^{[3]}}{(\mathbf{a}+\mathbf{u}_k^{[2]})^{[2]}} \right]+\mathbf{E}_k$$ with $\Vert\mathbf{R}(\theta_k \Delta_k)\Vert \leq M\Vert \theta_k \Delta_k\Vert^2$ and $M=\frac{\Gamma}{2 \min(\mathbf{a})}+\frac{M_1}{2}+M_1$.
We next show that $\lambda _{k}$ is strictly increasing and bounded above for suitable $\theta_k$, unless $\mathbf{u}_{k}$ is an eigenvector of NAEP for some $k$, in which case NNI terminates with $\lambda _{k}$.
\[monotone\]Let $A$ be an irreducible M-matrix and $\eta >0$ a fixed constant. Given a unit vector $\mathbf{u}_{k}>0
$, suppose $\mathbf{u}_{k}\not=\mathbf{u}_{\ast}$ and $\theta _{k}$ in Algorithm \[alg1\] satisfies $$\theta _{k}=\left\{
\begin{array}{cl}
1 & \text{if }\mathbf{h}_{k}(1)\geq \frac{\delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert }\text{;} \\
\eta _{k} & \text{otherwise,}\end{array}\right. \label{eq:lowbdtheta}$$where for each $k$ with $\mathbf{h}_{k}(1)< \frac{\delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert }$, $$\eta_{k}=\frac{\eta \delta_k \min \left( \mathbf{u}_{k}\right)}{(1+\eta )M\Vert {\mathbf{w}}_{k+1}\Vert \left\Vert \Delta_{k}\right\Vert^2 }.$$ Then $0<\eta_k< 1$ whenever it is defined, and $$\lambda _{k}<\lambda _{k+1}<\Vert A\Vert+(1+n)\Gamma .
\label{eq:monolam}$$
By Lemma \[posiyk\], we have $$\lambda _{k+1}=\lambda _{k}+\min \left( \frac{\mathbf{h}_{k}(\theta _{k})}{\mathbf{u}_{k+1}}\right) .$$We need to prove $\mathbf{h}_{k}(\theta _{k})>0.$
From (\[eq:ftheta\]) and $\Vert\mathbf{R}(\theta_k \Delta_k)\Vert \leq M\Vert \theta_k \Delta_k\Vert^2$, we have $$\begin{aligned}
\mathbf{h}_{k}(\theta )&=&\frac{\theta \delta_k\mathbf{u}_{k}}{(1+\eta) \Vert {\mathbf{w}}_{k+1}\Vert}+\frac{\theta \eta \delta_k\mathbf{u}_{k}}{(1+\eta) \Vert {\mathbf{w}}_{k+1}\Vert} +\frac{1-\theta}{\Vert {\mathbf{w}}_{k+1}\Vert}\mathbf{r}(\mathbf{u}_k,\lambda_k)+\mathbf{R}(\theta \Delta_k) \notag \\
&>&\frac{\theta \delta_k\mathbf{u}_{k}}{(1+\eta) \Vert {\mathbf{w}}_{k+1}\Vert}+\frac{\theta \eta \delta_k\mathbf{u}_{k}}{(1+\eta) \Vert {\mathbf{w}}_{k+1}\Vert} -M\theta^2 \left\Vert \Delta_{k}\right\Vert^2 \mathbf{e}. \label{eq:ftheta3}\end{aligned}$$
If $\eta_k \ge 1$, then $$\eta \delta_k \min \left( \mathbf{u}_{k}\right)\ge (1+\eta )M\Vert {\mathbf{w}}_{k+1}\Vert \left\Vert \Delta_{k}\right\Vert^2,$$ and it follows $$\frac{\eta \delta_k\mathbf{u}_{k}}{(1+\eta) \Vert {\mathbf{w}}_{k+1}\Vert} \ge M \left\Vert \Delta_{k}\right\Vert^2 \mathbf{e}.$$ Thus $$\begin{aligned}
\mathbf{h}_{k}(1)&=&\frac{ \delta_k\mathbf{u}_{k}}{(1+\eta) \Vert {\mathbf{w}}_{k+1}\Vert}+\frac{ \eta \delta_k\mathbf{u}_{k}}{(1+\eta) \Vert {\mathbf{w}}_{k+1}\Vert} +\mathbf{R}(\Delta_k) \notag \\
&>&\frac{\delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert }+\frac{ \eta \delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert } -M \left\Vert \Delta_{k}\right\Vert^2 \mathbf{e}>0.\end{aligned}$$ If $\eta_k < 1$, we have $$\theta _{k}=\eta _{k}=\frac{\eta \delta_k \min \left( \mathbf{u}_{k}\right)}{(1+\eta )M\Vert {\mathbf{w}}_{k+1}\Vert \left\Vert \Delta_{k}\right\Vert^2 },
\label{eq: betheta1}$$which ensures the inequality $$\frac{\theta_k\eta \delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert } \ge \theta^2_k M \left\Vert \Delta_{k}\right\Vert^2 \mathbf{e}. \label{eq: dklow}$$Substituting (\[eq: dklow\]) into (\[eq:ftheta3\]), we obtain $$\begin{aligned}
\mathbf{h}_{k}(\theta _{k}) &=&\frac{\theta_k \delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert }+\frac{\theta_k \eta \delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert } +\frac{1-\theta_k}{\Vert {\mathbf{w}}_{k+1}\Vert}\mathbf{r}(\mathbf{u}_k,\lambda_k)+\mathbf{R}(\theta_k \Delta_k) \notag \\
&\ge&\frac{\theta_k \delta_k\mathbf{u}_{k}}{(1+\eta) \Vert {\mathbf{w}}_{k+1}\Vert}+\frac{\theta_k \eta \delta_k\mathbf{u}_{k}}{(1+\eta) \Vert {\mathbf{w}}_{k+1}\Vert} -M\theta_k^2 \left\Vert \Delta_{k}\right\Vert^2 \mathbf{e} \notag \\
&\ge&\frac{\theta_k \delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert }>0. \label{eq333}\end{aligned}$$Therefore, $$\lambda _{k+1}=\lambda _{k}+\min \left( \frac{\mathbf{h}_{k}(\theta _{k})}{\mathbf{u}_{k+1}}\right)>\lambda _{k} .$$ Next, we prove that the sequence $\left\{ \lambda _{k}\right\} $ is bounded above. Suppose, to the contrary, that $\left\{ \lambda _{k}\right\} $ is unbounded. Since $\mathcal{A}(\mathbf{u}_{k}){\bf u}_k \ge {\lambda}_{k}\mathbf{u}_{k}$, $\mathbf{u}_k>0$ and $\|\mathbf{u}_k\|=1$, we then have $$\begin{aligned}
{\lambda}_{k}&\le &|\mathbf{u}_{k}^{T}\mathcal{A}(\mathbf{u}_{k}){\bf u}_k|\\
&\le & |\mathbf{u}_{k}^{T}A{\bf u}_k| +\Gamma \left |\sum_{i=1}^{n} (1-\frac{1}{\mathbf{a}(i)+\mathbf{u}_k^{2}(i)})\mathbf{u}_k^{2}(i)\right | \\
&\le & \Vert A\Vert + \Gamma(1 +n) <\infty,\end{aligned}$$ which is a contradiction.
From (\[eq:lowbdtheta\]), we know that the inequality $\mathbf{h}_{k}(1)\geq \frac{\delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert }$ depends on the parameter $\eta$. Therefore, if $\eta$ is large enough, then we can choose $\theta_k = 1$, for which $\mathbf{h}_{k}(1)>0$ holds. By Theorem \[monotone\], we can indeed choose $\theta _{k} \in (0,1]$ in NNI such that the sequence $\{\lambda _{k}\}$ is strictly increasing. However, in practice it is difficult to determine $\eta _{k}$. Therefore, we determine $\theta _{k}$ by a repeated halving technique. More precisely, for each $k$ we first take $\theta _{k}=1$ and check whether $\mathbf{h}_{k}(\theta_k)>0$ holds. If not, we update $\theta _{k}$ using $\theta _{k}\leftarrow \theta_{k}/2$ and check again, until we obtain a $\theta _{k}$ for which $\mathbf{h}_{k}(\theta_k)>0$ holds. This process of repeated halving will be referred to as the halving procedure. It is legitimate as long as $\theta _{k}$ remains bounded below by a positive constant, which will be shown in the next section.
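A sketch of this halving procedure, in the same NumPy notation as the iteration sketch above, might look as follows; the bound max_halvings is only an illustrative safeguard and is not part of the procedure described here.

```python
import numpy as np

def choose_theta(A, a, Gamma, u, lam, Delta, max_halvings=30):
    # Start with theta = 1 and halve until h_k(theta) = r(u_{k+1}, lambda_k) > 0.
    theta = 1.0
    for _ in range(max_halvings):
        w = u + theta * Delta
        u_next = w / np.linalg.norm(w)
        Au = A + Gamma * np.diag(1.0 - 1.0 / (a + u_next**2))
        h = Au @ u_next - lam * u_next            # h_k(theta)
        if np.all(h > 0):
            return theta
        theta /= 2.0
    return theta
```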
Some basic properties of Newton-Noda iteration
===============================================
In this section, we prove a number of basic properties of NNI, which will be used to establish its convergence theory in Section 4.
\[yktox\]Let $A$ be an irreducible M-matrix. Assume that the sequence $\left\{ \lambda _{k},\mathbf{u}_{k}, \mathbf{w}_{k}\right\} $ is generated by Algorithm \[alg1\]. For any subsequence $\left\{ \mathbf{u}_{k_{j}}\right\} \subseteq
\left\{ \mathbf{u}_{k}\right\} ,$ we have the following results:
1. If $\mathbf{u}_{k_{j}}\rightarrow \mathbf{v}$ as $j\rightarrow
\infty ,$ then $\mathbf{v}>0.$
2. $\min (\mathbf{u}_{k}) \ge m$ for some positive constant $m$.
3. $\Vert \mathbf{w}_k \Vert \le \frac{1}{m}$.
(i). If $\lim_{j\rightarrow \infty }\mathbf{u}_{k_{j}}=\mathbf{v}$, then $\mathbf{v}\ge 0$. Let $S$ be the set of all indices $i$ such that $\lim_{j\to \infty} \mathbf{u}_{k_j}^{(i)}=\mathbf{v}^{(i)}= 0$. Since $\left\Vert \mathbf{u}_{k_{j}}\right\Vert =1$, $S$ is a proper subset of $\{1, 2, \ldots, n\}$. Suppose $S$ is nonempty. Then by the definition of $\lambda _{k},$ $$\lambda_{k_{j}}=\min \left( \frac{\mathcal{A}(\mathbf{u}_{k_j}){\bf u}_{k_j}}{\mathbf{u}_{k_j}}\right) \leq
\frac{\left(\mathcal{A}(\mathbf{u}_{k_j}){\bf u}_{k_j}\right)^{(i)}}{\mathbf{u}_{k_j}^{(i)}}<\infty \text{ for all }i=1, 2, \ldots, n.$$Since $\lim_{j\rightarrow \infty } \mathbf{u}_{k_j}^{(i)}= 0$ for $i\in S$, it holds that $\lim_{j\rightarrow \infty }\left(\mathcal{A}(\mathbf{u}_{k_j}){\bf u}_{k_j}\right)^{(i)}=\left(\mathcal{A}(\mathbf{v}){\bf v}\right)^{(i)}=0$ for $i\in S$. Thus, $\mathcal{A}(\mathbf{v})_{i,j}=0$ for all $i\in S$ and for all $j\notin S$, which contradicts the irreducibility of $\mathcal{A}(\mathbf{v})$. Therefore, $S$ is empty and thus $\mathbf{v}>0$.
(ii). Suppose $\min (\mathbf{u}_{k})$ is not bounded below by a positive constant. Then there exists a subsequence $\{k_j\}$ such that $\lim_{j\rightarrow \infty } \min (\mathbf{u}_{k_j}) =0$. Since $\|\mathbf{u}_{k_j}\|=1$, we may assume that $\lim_{j\to \infty} \mathbf{u}_{k_{j}} =\mathbf{v}$ exists. Then $\lim_{j\rightarrow \infty } \min (\mathbf{u}_{k_j}) =
\min(\mathbf{v}) = 0$. This is a contradiction since $\mathbf{v}>0$ by (i). Therefore, $\min (\mathbf{u}_{k})$ is bounded below by a positive constant. That is $\min (\mathbf{u}_{k}) \ge m$ for some positive constant $m$.
(iii). From Remark 1, we have $\mathbf{u}_{k}^{T}\mathbf{w}_{k+1}=1$ and then $$\Vert \mathbf{w}_{k+1} \Vert = \frac{\mathbf{u}_k^T\mathbf{w}_{k+1}}{\cos\angle (\mathbf{u}_k,\mathbf{u}_{k+1})}=\frac{1}{\cos\angle (\mathbf{u}_k,\mathbf{u}_{k+1})}.$$ Since $\mathbf{u}_k>0$ and $\mathbf{u}_{k+1}>0$ with $\Vert \mathbf{u}_k\Vert =\Vert \mathbf{u}_{k+1}\Vert =1$, we have $$\cos\angle (\mathbf{u_k},\mathbf{u_{k+1}}) =\mathbf{u}_k^T\mathbf{u}_{k+1}\geq \Vert \mathbf{u}_{k+1}\Vert_1 \min(\mathbf{u}_k)
>\Vert \mathbf{u}_{k+1}\Vert \min(\mathbf{u}_k)=\min(\mathbf{u}_k),$$ where $\Vert\cdot\Vert_1$ is the vector 1-norm. From (ii), $$\Vert \mathbf{w}_{k+1} \Vert =\frac{1}{\cos\angle (\mathbf{u}_k,\mathbf{u}_{k+1})} \le \frac{1}{\min(\mathbf{u}_k)}\le \frac{1}{m} < \infty.$$
\[thetak\]Assume that the sequence $\left\{ \Delta_k, \delta_k, \theta_k\right\} $ is generated by Algorithm \[alg1\]. We have the following results:
1. There exists a constant $\beta>0$ such that $\beta \Vert \Delta_k \Vert \le \delta_k$.
2. $\theta_k = 1$ if $\Vert \Delta_{k}\Vert \le \frac{\eta \beta }{(1+\eta )M } $ where $(\eta, M)$ is as in Theorem \[monotone\].
3. $\theta_k \ge \xi$ for some positive constant $\xi$.
(i). From the step 3 of Algorithm \[alg1\], we have $$\begin{aligned}
\label{Deltak}
\Vert \Delta_k\Vert &\le& \Vert J(\mathbf{u}_k)^{-1}\left(\delta_k \mathbf{u}_k - \mathbf{r}(\mathbf{u}_k,\lambda_k)\right) \Vert \notag \\
&\le& \delta_k \Vert J(\mathbf{u}_k)^{-1}\mathbf{u}_k \Vert + \Vert J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)\Vert .\end{aligned}$$ Since $\Vert J(\mathbf{u})^{-1}\mathbf{u} \Vert $ is a continuous function of $\mathbf{u}$ and a continuous function achieves its extreme values on a compact set, it follows that $$\max_{0<m\le \min\left(\mathbf{u}\right) \le 1, \Vert \mathbf{u}\Vert =1} \left(\Vert J(\mathbf{u})^{-1}\mathbf{u} \Vert \right) < \infty.$$ Therefore, $\Vert J(\mathbf{u})^{-1}\mathbf{u} \Vert \le M_2$ for some constant $M_2$.
On the other hand, from (ii) of Remark 1, we have $$\left(\mathbf{u}_{k}^{T}(J(\mathbf{u}_{k}))^{-1}\mathbf{u}_{k}\right)\delta_k = \mathbf{u}_{k}^{T}(J(\mathbf{u}_{k}))^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k).$$ Since $\mathbf{u}_k>0$ and $J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)>0$, arguing as in the proof of (iii) of Lemma \[yktox\], we have $$\cos\angle (\mathbf{u_k},\frac{J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)}{\Vert J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)\Vert}) >\min(\mathbf{u}_k)\ge m,$$ which implies $$\begin{aligned}
\Vert J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)\Vert &=& \mathbf{u}_{k}^{T}J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)\sec \angle (\mathbf{u_k},\frac{J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)}{\Vert J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)\Vert}) \\
&=& \left(\mathbf{u}_{k}^{T}J(\mathbf{u}_{k})^{-1}\mathbf{u}_{k}\right)\delta_k \sec \angle (\mathbf{u_k},\frac{J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)}{\Vert J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)\Vert}) \\
&\le& \frac{\delta_k}{m} \Vert J(\mathbf{u}_k)^{-1}\mathbf{u}_k \Vert \le \frac{M_2}{m}\delta_k.\end{aligned}$$ From (\[Deltak\]) and the above inequality, $$\begin{aligned}
\Vert \Delta_k\Vert &\le& \delta_k \Vert J(\mathbf{u}_k)^{-1}\mathbf{u}_k \Vert + \Vert J(\mathbf{u}_{k})^{-1}\mathbf{r}(\mathbf{u}_k,\lambda_k)\Vert \\
&\le& \left(M_2 +\frac{M_2}{m}\right)\delta_k:= \frac{1}{\beta}\delta_k.\end{aligned}$$
(ii). If $\Vert \Delta_{k}\Vert \le \frac{\eta \beta }{(1+\eta )M } $, then $$\begin{aligned}
\eta_{k}&=&\frac{\eta \delta_k \min \left( \mathbf{u}_{k}\right)}{(1+\eta )M\Vert {\mathbf{w}}_{k+1}\Vert\left\Vert \Delta_{k}\right\Vert^2 } \\
&=& \frac{\eta \delta_k \min \left( \mathbf{u}_{k}\right)}{(1+\eta )M\Vert {\mathbf{w}}_{k+1}\Vert} \frac{\delta_k}{\Vert \Delta_{k} \Vert} \frac{1}{\Vert \Delta_{k} \Vert}\\
&\ge& \frac{\eta \beta m}{(1+\eta )M\Vert {\mathbf{w}}_{k+1}\Vert} \frac{1}{\Vert \Delta_{k} \Vert} \ge 1.\end{aligned}$$ From the proof of Theorem \[monotone\], $\theta_k = 1$ when $\eta_k \ge 1.$
(iii). From (\[eq:lowbdtheta\]), we recall that $$\theta _{k}=\left\{
\begin{array}{cl}
1 & \text{if }\mathbf{h}_{k}(1)\geq \frac{\delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert }\text{;} \\
\eta _{k} & \text{otherwise,}\end{array}\right.$$where $\eta_{k}=\frac{\eta \delta_k \min \left( \mathbf{u}_{k}\right)}{(1+\eta )M\Vert {\mathbf{w}}_{k+1}\Vert\left\Vert \Delta_{k}\right\Vert^2 } .$ Suppose $\theta _{k}$ is not bounded below by $\xi >0$. Since $\mathbf{u}_{k}$ is bounded, we can find a subsequence $\{k_{j}\} $ such that $$\lim_{j\rightarrow \infty }\theta _{k_{j}}=0, \lim_{j\rightarrow \infty }\mathbf{u}_{k_{j}}=\mathbf{v}>0.$$ Note that $\mathbf{v}>0$ by Lemma \[yktox\].
From (\[eq:Fnon\]), $F^{\prime}(\mathbf{u}_{k_j}, \lambda_{k_j})$ is a nonsingular matrix, and the vector $ \left [\Delta_{k_j}^{T},\delta_{k_j} \right ]^{T}$ satisfies $$\left [\begin{array}{c}
\Delta_{k_j}\\
\delta_{k_j}
\end{array}\right ]=-F'(\mathbf{u}_{k_j},\lambda_{k_j} )^{-1}\left[
\begin{array}{c}
\mathcal{A}({\bf u}_{k_j}){\bf u}_{k_j}-\lambda_{k_j}{\bf u}_{k_j} \\
0
\end{array}
\right].$$ Since the sequence $\left\{ \lambda_{k}\right\} $ is monotonically increasing and bounded above, we have $\lim_{k\rightarrow
\infty }\lambda_{k}=\alpha $. Therefore, $$\begin{aligned}
\lim_{j\rightarrow \infty }\left [\begin{array}{c}
\Delta_{k_j}\\
\delta_{k_j}
\end{array}\right ]&=&\lim_{j\rightarrow \infty
}-F'(\mathbf{u}_{k_j},\lambda_{k_j} )^{-1}\left[
\begin{array}{c}
\mathbf{r}(\mathbf{u}_{k_j},\lambda_{k_j}) \\
0
\end{array}
\right]\\
&=&-F'(\mathbf{v},\alpha)^{-1}\left[
\begin{array}{c}
\mathbf{r}(\mathbf{v},\alpha) \\
0
\end{array}
\right]<\infty,\end{aligned}$$which means $\left\Vert \Delta_{k_j} \right\Vert $ is bounded. If $\eta_k$ is defined only on a finite subset of $\{k_j\}$, then $\theta_{k_j}=1$ except for a finite number of $j$ values, contradicting $\lim_{j\rightarrow \infty }\theta _{k_{j}}=0$. If $\eta_k$ is defined on an infinite subset $\{k_{j_i}\}$ of $\{k_j\}$, then $$\begin{aligned}
0=\lim_{i\rightarrow \infty }\eta _{k_{j_i}}&=& \lim_{i\rightarrow \infty } \frac{\eta \delta_{k_{j_i}} \min \left( \mathbf{u}_{k_{j_i}}\right)}{(1+\eta )M\Vert {\mathbf{w}}_{k_{j_i}+1}\Vert\left\Vert \Delta_{k_{j_i}}\right\Vert^2 } \\
&\ge& \lim_{i\rightarrow \infty } \frac{\eta \delta_{k_{j_i}} m}{(1+\eta )Mm\left\Vert \Delta_{k_{j_i}}\right\Vert^2 }
\\
&=& \lim_{i\rightarrow \infty } \frac{\eta \delta_{k_{j_i}} }{(1+\eta )M\left\Vert \Delta_{k_{j_i}}\right\Vert } \frac{1}{\Vert \Delta_{k_{j_i}}\Vert}\\
&\ge& \lim_{i\rightarrow \infty } \frac{\eta \beta }{(1+\eta )M } \frac{1}{\Vert \Delta_{k_{j_i}}\Vert}.\end{aligned}$$ It follows that $\lim_{i\rightarrow \infty } \Vert \Delta_{k_{j_i}}\Vert = \infty.$ This is contradictory to $\Vert \Delta_{k_j} \Vert < \infty$.
Convergence analysis
====================
In this section, we prove that the convergence of NNI is global and quadratic, assuming that $\mathbf{u}_{k}\neq \mathbf{u}_{\ast }$ for each $k$.
Global convergence of NNI
-------------------------
Theorem \[monotone\] shows that the sequence $\left\{\lambda_{k}\right\} $ is strictly increasing and bounded above by a constant and hence converges. We now show that the limit of $\lambda_{k}$ is precisely the eigenvalue $\lambda_{\ast}$ of NAEP (\[dnep\]).
\[main\]Let $A$ be an irreducible M-matrix and let the sequence $\left\{ \lambda_{k}\right\} $ be generated by Algorithm \[alg1\]. If $\mathbf{a}, \Gamma>0$, then the NAEP (\[dnep\]) has a positive eigenvector.
From (\[eq: recurrLam\]), (\[eq333\]) and Lemma \[thetak\], we have $$\begin{aligned}
\lambda_{k+1}-\lambda_{k}&=&\min \left( \frac{\mathbf{h}_{k}(\theta _{k})}{\mathbf{u}_{k+1}}\right) \geq \min \left(\frac{\theta_k \delta_k\mathbf{u}_{k}}{(1+\eta)\Vert {\mathbf{w}}_{k+1}\Vert \mathbf{u}_{k+1}}\right) \notag \\
&\geq &\min \left(\frac{\xi \delta_k\mathbf{u}_{k}}{(1+\eta) \mathbf{w}_{k+1}}\right). \label{eq: 3term}\end{aligned}$$ From (iii) of Lemma \[yktox\], we have $\|\mathbf{w}_{k+1}\|
\le \frac{1}{m} < \infty$. It follows from (\[eq: 3term\]) that $\lim_{k\rightarrow \infty } \delta_k \min (\mathbf{u}_{k})=0$. From (ii) of Lemma \[yktox\], $\min (\mathbf{u}_{k})$ is bounded below by a positive constant, and thus $\lim_{k\rightarrow \infty }\delta_k=0.$
Let $\mathbf{v}$ be any limit point of $\{\mathbf{u}_k\}$, with $\lim_{j\to \infty} \mathbf{u}_{k_j}=\mathbf{v}>0$. From Lemma \[equiThm\], we then have $\lim_{j\to \infty}\delta _{k_j}=0$ if and only if $\lim_{j\to \infty} \left(\mathcal{A}(\mathbf{u}_{k_j})\mathbf{u}_{k_j}-\lambda _{k_j}\mathbf{u}_{k_j}\right)=0$, which means $\mathcal{A}(\mathbf{v})\mathbf{v}=\lambda \mathbf{v}$. Therefore, $\mathbf{v}$ is a positive eigenvector of NAEP and $\lambda =\min \left( \frac{\mathcal{A}(\mathbf{v})\mathbf{v}}{\mathbf{v}}\right)$ is the corresponding eigenvalue, i.e., $\mathbf{u}_{\ast}=\mathbf{v}$ and $ \lambda_{\ast}=\lambda.$
The above theorem guarantees the global convergence of NNI and also proves the existence of positive eigenvectors of NAEP.
Quadratic convergence of NNI
----------------------------
In the previous subsection, we established the global convergence of NNI. We now analyze the rate of convergence by exploiting a connection between NNI and Newton’s method, and we start with the following result about Newton’s method.
\[eslonyk\] Suppose that $\left( \mathbf{u}_{k}, \lambda_{k}\right) $ from Algorithm \[alg1\] is sufficiently close to an eigenpair $\left( \mathbf{u}_{\ast}, \lambda_{\ast}\right) $ with $\mathbf{u}_{\ast} >0$ and $\Vert\mathbf{u}_{\ast}\Vert=1$. Let $\left\{ \widehat{\mathbf{u}}_{k+1},\widehat{\lambda }_{k+1}\right\}$ be obtained by Newton’s method from $\left( \mathbf{u}_{k}, \lambda_{k}\right) $, i.e., $$\widehat{\mathbf{u}}_{k+1} = \mathbf{u}_{k}+\Delta_k, \text{ } \widehat{\lambda }_{k+1}=\lambda_k+\delta_k,$$ where $\Delta_k$ and $\delta_k$ are as in Algorithm \[alg1\]. Then there is a constant $c$ such that for all $\left( \mathbf{u}_{k}, \lambda_{k}\right) $ sufficiently close to $\left( \mathbf{u}_{\ast }, \lambda_{\ast}\right)$ $$\left\Vert \left[
\begin{array}{c}
\widehat{\mathbf{u}}_{k+1} \\
\widehat{\lambda }_{k+1}\end{array}\right] -\left[
\begin{array}{c}
\mathbf{u}_{\ast } \\
\lambda_{\ast}\end{array}\right] \right\Vert \leq c \left\Vert \left[
\begin{array}{c}
\mathbf{u}_{k} \\
\lambda_{k}\end{array}\right] -\left[
\begin{array}{c}
\mathbf{u}_{\ast } \\
\lambda_{\ast}\end{array}\right] \right\Vert ^{2}, \label{eq: qudraNT}$$
We already know that $F^{\prime}\left( \mathbf{u}_{k}, \lambda_{k}\right) $ is nonsingular. It is also clear that $F^{\prime}\left( \mathbf{u}, \lambda \right) $ satisfies a Lipschitz condition at $\left( \mathbf{u}_{\ast}, \lambda_{\ast}\right)$ since its Fréchet derivative is continuous in a neighborhood of $\left( \mathbf{u}_{\ast}, \lambda_{\ast}\right)$. The inequality (\[eq: qudraNT\]) is then a basic result of Newton’s method.
\[relation\] Let $\left( \mathbf{u}_{\ast}, \lambda_{\ast}\right) $ be an eigenpair with $\mathbf{u}_{\ast} >0$ and $\Vert\mathbf{u}_{\ast}\Vert=1$. Let $\left\{ \mathbf{u}_{k}, \lambda_{k}\right\}$ be generated by NNI. Then there is a constant $c_2>0$ such that $ \left |\lambda _{k}-\lambda_{\ast}\right | \le
c_2 \|\mathbf{u}_{k}-\mathbf{u}_{\ast }\|$ for all $\mathbf{u}_{k}$ sufficiently close to $\mathbf{u}_{\ast}$.
We have $$\begin{aligned}
\left |\lambda _{k}-\lambda_{\ast}\right |
&=&\left |\min \left ( \frac{\mathcal{A}(\mathbf{u}_{k})\mathbf{u}_{k}}{\mathbf{u}_{k}}\right )-\min \left ( \frac{\mathcal{A}(\mathbf{u}_{\ast})\mathbf{u}_{\ast}}{\mathbf{u}_{\ast}}\right )\right |\le \left \| \frac{\mathcal{A}(\mathbf{u}_{k})\mathbf{u}_{k}}{\mathbf{u}_{k}}- \frac{\mathcal{A}(\mathbf{u}_{\ast})\mathbf{u}_{\ast}}{\mathbf{u}_{\ast}}\right \| .\end{aligned}$$ Since the Fréchet derivative of $\frac{\mathcal{A}(\mathbf{u})\mathbf{u}}{\mathbf{u}}$ is continuous in a neighborhood of $\mathbf{u}_{\ast}$, we have $ \left |\lambda _{k}-\lambda_{\ast}\right | \le
c_2 \|\mathbf{u}_{k}-\mathbf{u}_{\ast }\|$ for all $\mathbf{u}_{k}$ sufficiently close to $\mathbf{u}_{\ast}$.
We now prove the local quadratic convergence of Algorithm \[alg1\].
\[quadratic\] Assume that $\left\{ \mathbf{u}_{k}, \lambda_{k}\right\}$ is generated by NNI. Suppose that $\left( \mathbf{u}_{k_0}, \lambda_{k_0}\right) $ is sufficiently close to an eigenpair $\left( \mathbf{u}_{\ast}, \lambda_{\ast}\right) $ with $\mathbf{u}_{\ast} >0$ and $\Vert\mathbf{u}_{\ast}\Vert=1$. Then $\lambda_{k}$ converges to $\lambda_{\ast}$ quadratically and $\mathbf{u}_{k}$ converges to $\mathbf{u}_{\ast}$ quadratically.
For some $\delta\in (0, \min \mathbf{u}_{\ast})$, there are positive constants $c$, $c_2$ and $c_3$ such that $$\label{eq2.1}
\left\Vert \left[
\begin{array}{c}
\widehat{\mathbf{u}}_{k+1} \\
\widehat{\lambda }_{k+1}\end{array}\right] -\left[
\begin{array}{c}
\mathbf{u}_{\ast } \\
\lambda_{\ast}\end{array}\right] \right\Vert \leq c \left\Vert \left[
\begin{array}{c}
\mathbf{u}_{k} \\
{\lambda }_{k}\end{array}\right] -\left[
\begin{array}{c}
\mathbf{u}_{\ast } \\
\lambda_{\ast}\end{array}\right] \right\Vert^{2}$$ whenever $\left\Vert \left[
\begin{array}{c}
{\mathbf{u}}_{k} \\
{\lambda }_{k}\end{array}\right] -\left[
\begin{array}{c}
\mathbf{u}_{\ast } \\
\lambda_{\ast}\end{array}\right] \right\Vert <\delta$, $$\left |{\lambda }_{k}-\lambda_{\ast}\right | \le c_2 \|\mathbf{u}_{k}-\mathbf{u}_{\ast }\| \label{eq2.2}$$ whenever $\left\Vert {\mathbf{u}}_{k} -\mathbf{u}_{\ast } \right\Vert <\delta$, and $$\left \| F(\widehat{\mathbf{u}}_{k+1},\widehat{\lambda}_{k+1} ) - F(\mathbf{u}_{\ast},\lambda_{\ast} ) \right \| \le c_3 \left\Vert \left[
\begin{array}{c}
\widehat{\mathbf{u}}_{k+1} \\
\widehat{\lambda }_{k+1}\end{array}\right] -\left[
\begin{array}{c}
\mathbf{u}_{\ast } \\
\lambda_{\ast}\end{array}\right] \right\Vert \label{eq2.3}$$ whenever $\left\Vert \left[
\begin{array}{c}
\widehat{\mathbf{u}}_{k+1} \\
\widehat{\lambda }_{k+1}\end{array}\right] -\left[
\begin{array}{c}
\mathbf{u}_{\ast } \\
\lambda_{\ast}\end{array}\right] \right\Vert <\delta$. Note that $\mathbf{u}_k > 0$ is guaranteed, and that, by (ii) of Lemma \[thetak\], $\theta_k=1$ whenever $\Vert\Delta_k\Vert$ is sufficiently small, so that near the eigenpair $\mathbf{u}_{k+1}= \widehat{\mathbf{u}}_{k+1}/\Vert \widehat{\mathbf{u}}_{k+1}\Vert$. Now for all $\epsilon>0$ sufficiently small, assume that $\left\Vert \left[
\begin{array}{c}
{\mathbf{u}}_{k} \\
{\lambda }_{k}\end{array}\right] -\left[
\begin{array}{c}
\mathbf{u}_{\ast } \\
\lambda_{\ast}\end{array}\right] \right\Vert <\epsilon$ for $k=k_0$. By (\[eq2.1\]) and (\[eq2.2\]) we have (with $\epsilon \le \delta$) $$\|\widehat{\mathbf{u}}_{k+1} - \mathbf{u}_{\ast } \|
\le c(1+c_2)^2\|{\mathbf{u}}_{k} - \mathbf{u}_{\ast } \|^2 \le c(1+c_2)^2\epsilon^2.$$
By (\[eq2.1\]) and (\[eq2.3\]) we have (with $\epsilon \le \delta$, $c\epsilon^2 \le \delta$) $$\left |\frac{1}{2}\left( 1-\widehat{\mathbf{u}}_{k+1}^{T}\widehat{\mathbf{u}}_{k+1}\right) \right | \le c_3c(1+c_2)^2\|{\mathbf{u}}_{k} -
\mathbf{u}_{\ast } \|^2 \le c_3c(1+c_2)^2\epsilon^2.$$ Then $\Vert \widehat{\mathbf{u}}_{k+1} \Vert \ge \frac{1}{2}$ (with $c_3c(1+c_2)^2\epsilon^2 < \frac{3}{8}$). Now $$\left |\|\widehat{\mathbf{u}}_{k+1}\| -1 \right | \le \frac{1}{\left |\|\widehat{\mathbf{u}}_{k+1}\| +1 \right | }2c_3c(1+c_2)^2\|{\mathbf{u}}_{k} -
\mathbf{u}_{\ast } \|^2\le \frac{4}{3}c_3c(1+c_2)^2\|{\mathbf{u}}_{k} -
\mathbf{u}_{\ast } \|^2 .$$ Then $$\begin{aligned}
\|{\mathbf{u}}_{k+1} - \mathbf{u}_{\ast } \|&=&\| {\mathbf{u}}_{k+1} -
\widehat{\mathbf{u}}_{k+1} + \widehat{\mathbf{u}}_{k+1} - \mathbf{u}_{\ast }
\|\\
&\le &\| {\mathbf{u}}_{k+1} - \widehat{\mathbf{u}}_{k+1}\|+\| \widehat{\mathbf{u}}_{k+1} - \mathbf{u}_{\ast } \| \\
&=& \| \left ({\mathbf{u}}_{k+1} - \|\widehat{\mathbf{u}}_{k+1}\|{\mathbf{u}}_{k+1}\right ) \|+\| \widehat{\mathbf{u}}_{k+1} - \mathbf{u}_{\ast } \|
\\
&=&\left | \|\widehat{\mathbf{u}}_{k+1}\|-1\right | +\| \widehat{\mathbf{u}}_{k+1} - \mathbf{u}_{\ast } \| \\
&\le & \left (1+\frac{4}{3}c_3\right ) c(1+c_2)^2\|{\mathbf{u}}_{k} -
\mathbf{u}_{\ast } \|^2.\end{aligned}$$ For $\epsilon$ with $(1+\frac{4}{3}c_3) c(1+c_2)^2 \epsilon \le
\frac{1}{1+c_2}$, we have $\|{\mathbf{u}}_{k+1} - \mathbf{u}_{\ast } \| <
\frac{1}{1+c_2}\epsilon$ and thus $\left |{{\lambda }}_{k+1}-\lambda_{\ast}\right | \le c_2 \|\mathbf{u}_{k+1}-\mathbf{u}_{\ast
}\| < \frac{c_2}{1+c_2}\epsilon$. Therefore, $\left\Vert \left[
\begin{array}{c}
{\mathbf{u}}_{k+1} \\
{\lambda }_{k+1}\end{array}\right] -\left[
\begin{array}{c}
\mathbf{u}_{\ast } \\
\lambda_{\ast}\end{array}\right] \right\Vert = \|{\mathbf{u}}_{k+1} - \mathbf{u}_{\ast } \|+
\left |{\lambda }_{k+1}-\lambda_{\ast}\right | < \epsilon$. We can then repeat the above process to get $\|{\mathbf{u}}_{k+1} - \mathbf{u}_{\ast }
\| \le d \|{\mathbf{u}}_{k} - \mathbf{u}_{\ast } \|^2$ for all $k\ge k_0$ and $d = \left (1+\frac{4}{3}c_3\right ) c(1+c_2)^2$. Thus $\mathbf{u}_{k}$ converges to $\mathbf{u}_{\ast}$ quadratically and then ${\lambda }_{k}$ converges to $\lambda_{\ast}$ quadratically by (\[eq2.2\]).
Numerical experiments {#sec:exp}
=====================
In this section, we present some numerical results to support our theory for NNI and illustrate its effectiveness. All numerical tests were performed on a 4.2GHz quad-core Intel Core i7 with $32$ GB memory using Matlab R2018b with machine precision $\varepsilon =2.22\times 10^{-16}$ under macOS High Sierra. Throughout the experiments, the initial vector is $\mathbf{u}_{0}=\frac{1}{\sqrt{n}}[1,\ldots ,1]^{T}\in \mathbb{R}^{n}$. In the experiments, the stopping criterion for NNI is the relative residual $$\frac{\Vert \mathcal{A}(\mathbf{u}_k)\mathbf{u}_k-\lambda_k\mathbf{u}_k\Vert} {(\|\mathcal{A}(\mathbf{u}_k)\|_1\|\mathcal{A}(\mathbf{u}_k)\|_{\infty})^{1/2}} \le 10^{-12},$$ where we use the cheaply computable $(\|\mathcal{A}(\mathbf{u}_k)\|_1\|\mathcal{A}(\mathbf{u}_k)\|_{\infty})^{1/2}$ to estimate the 2-norm $\|\mathcal{A}(\mathbf{u}_k)\|$, which is more reasonable than the individual $\|\mathcal{A}(\mathbf{u}_k)\|_1$ or $\|\mathcal{A}(\mathbf{u}_k)\|_{\infty}$, with $\|\cdot\|_{\infty}$ the infinity norm of a matrix.
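A sketch of this stopping test, written with NumPy rather than Matlab, is given below; relative_residual is an illustrative name, and the matrix norms used are the induced $1$- and $\infty$-norms.

```python
import numpy as np

def relative_residual(A, a, Gamma, u, lam):
    Au = A + Gamma * np.diag(1.0 - 1.0 / (a + u**2))       # A(u_k)
    res = np.linalg.norm(Au @ u - lam * u)                  # ||A(u_k)u_k - lambda_k u_k||
    scale = np.sqrt(np.linalg.norm(Au, 1) * np.linalg.norm(Au, np.inf))
    return res / scale                                      # stop when <= 1e-12
```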
\[exp:FEM\] Consider the finite-difference discretization of the nonlinear eigenvalue problem (\[eq:NSLE\]) with Dirichlet boundary conditions on $[0,1]\times [0,1]$, i.e., $$\begin{aligned}
A\mathbf{u}+\Gamma \mathrm{diag} \left (\mathbf{e}-\frac{\mathbf{e}}{\mathbf{a}+\mathbf{u}^{[2]}}\right ) \mathbf{u}=\lambda \mathbf{u}, \end{aligned}$$where $A \in \mathbb{R}^{n \times n}$ is a negative 2D Laplacian matrix with Dirichlet boundary conditions.
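For instance, the matrix $A$ can be assembled from Kronecker products of the one-dimensional second-difference matrix. The sketch below uses SciPy, omits the $1/h^2$ scaling (which does not affect the M-matrix structure), and the particular grid size and the choices of $\mathbf{a}$ and $\Gamma$ are for illustration only, not necessarily those used in the experiments.

```python
import numpy as np
from scipy.sparse import diags, identity, kron

def laplacian_2d(m):
    # Five-point discretization of -Laplacian on an m x m interior grid
    # with Dirichlet boundary conditions; the result is an irreducible M-matrix.
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    I = identity(m)
    return (kron(I, T) + kron(T, I)).toarray()

m = 50                                   # n = m^2 = 2500 unknowns
A = laplacian_2d(m)
a = np.ones(m * m)                       # a >= 1 componentwise
Gamma = 10.0
u0 = np.ones(m * m) / np.sqrt(m * m)     # initial positive unit vector
```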
For Example \[exp:FEM\], Figure \[fig:rand\] depicts how the relative residual evolves versus the number of iterations for NNI. It shows that NNI uses $8$ iterations to achieve the required accuracy, clearly indicating its quadratic convergence.
Figure \[fig:gamma\] shows that the magnitude of the parameter $\Gamma$ affects the total number of iterations needed to achieve convergence. As we see, NNI requires more iterations to achieve convergence for larger $\Gamma$.
Table \[tbl:outer\] reports the results obtained by NNI. In the table, $n$ specifies the dimension and $\mathbf{a}$ is a parameter adjusting the diagonal elements of $\mathrm{diag} \left (\mathbf{e}-\frac{\mathbf{e}}{\mathbf{a}+\mathbf{u}^{[2]}}\right )$: “$\mathbf{a}\ge1$” denotes that each element of $\mathbf{a}$ is at least $1$, “$1>\mathbf{a}>0$” denotes that each element of $\mathbf{a}$ is between $0$ and $1$, and “$\mathbf{a}>0$” denotes that each element of $\mathbf{a}$ is larger than $0$. “Iter” denotes the number of iterations to achieve convergence, and “Residual” denotes the relative residual when NNI is terminated. From the table, we see that the number of iterations for NNI is at most $23$, clearly indicating its rapid convergence. For this example, $\mathbf{h}_{k}(\theta_k)>0$ holds with $\theta_k = 1$ at each iteration of NNI and the halving procedure is not used. These results indicate that our theory of NNI can be conservative.
------------ ------------------ -- ------ ---------- --
Parameters NNI
$n$ $\mathbf{a} $ Iter Residual
$2500$ $\mathbf{a}\ge1$ $6$ 2.52e-16
$10000$ $\mathbf{a}\ge1$ $6$ 2.38e-16
$40000$ $\mathbf{a}\ge1$ $6$ 2.79e-16
$2500$ $1>\mathbf{a}>0$ $13$ 2.15e-16
$10000$ $1>\mathbf{a}>0$ $16$ 2.25e-16
$40000$ $1>\mathbf{a}>0$ $23$ 2.50e-16
$2500$ $\mathbf{a}>0$ $13$ 1.98e-16
$10000$ $\mathbf{a}>0$ $15$ 1.34e-15
$40000$ $\mathbf{a}>0$ $21$ 7.04e-16
------------ ------------------ -- ------ ---------- --
: Numerical results for NNI[]{data-label="tbl:outer"}
Conclusion
==========
In this paper, we are concerned with the nonlinear algebraic eigenvalue problem (NAEP) generated by the discretization of the saturable nonlinear Schrödinger equation. Based on the ideas of Noda’s iteration and Newton’s method, we have proposed an effective method for computing the positive eigenvectors of NAEP, called the Newton–Noda iteration. It involves the selection of a positive parameter $\theta_k$ in the $k$th iteration. We have presented a halving procedure to determine the parameters $\theta_k$, starting with $\theta_k=1$ in each iteration, such that the sequence approximating the target eigenvalue $\lambda_{\ast}$ is strictly increasing and bounded above, and thus its global convergence is guaranteed for any initial positive unit vector. Another advantage of the presented method is its local convergence speed: we have shown that the parameter $\theta_k$ eventually equals $1$ and that local quadratic convergence is achieved. The numerical experiments have indicated that the halving procedure often returns $\theta_k=1$ (i.e., no halving is actually used) for each $k$, and that near convergence it always returns $\theta_k=1$. These results confirm our theory and indicate that our theoretical bounds can be conservative in practice.
This iterative method has several nice features. Structure preserving: it maintains positivity in its computation of positive ground state vectors, and its convergence is global and quadratic. Easy to implement: the structure of the new algorithm is still very simple, although its convergence analysis is rather involved for nonlinear algebraic eigenvalue problems. On the other hand, it gives an alternative approach to approximating the solution of the nonlinear Schrödinger equation by constructing a convergent sequence; this is precisely the argument we use to prove the existence of solutions of the discrete nonlinear Schrödinger equation.
Acknowledgements {#acknowledgements .unnumbered}
================
I am very grateful to the Ministry of Science and Technology in Taiwan for funding this research, and I would like to thank Chun-Hua Guo, Wen-Wei Lin and Tai-Chia Lin for their valuable comments on this paper.
[99]{}
H. Berestycki and P. L. Lions, *Nonlinear scalar field equations, I. Existence of a ground state*, Arch. Rational Mech. Anal. 82 (1983), no. 4, 313–345.

A. Berman and R. J. Plemmons, *Nonnegative Matrices in the Mathematical Sciences*, Vol. 9 of Classics in Applied Mathematics, SIAM, Philadelphia, PA (1994).

T. Cazenave and P. L. Lions, *Orbital stability of standing waves for some nonlinear Schrödinger equations*, Comm. Math. Phys. 85 (1982), 549–561.

L. Elsner, *Inverse iteration for calculating the spectral radius of a non-negative irreducible matrix*, Linear Algebra and Appl., 15 (1976), pp. 235–242.

S. Gatz and J. Herrmann, *Propagation of optical beams and the properties of two dimensional spatial solitons in media with a local saturable nonlinear refractive index*, J. Opt. Soc. Amer. B, 14 (1997), pp. 1795–1806.

R. A. Horn and C. R. Johnson, *Matrix Analysis*, Cambridge University Press, Cambridge, UK (1985).

M. Karlsson, *Optical beams in saturable self-focusing media*, Phys. Rev. A, 46 (1992), pp. 2726–2734.

P. L. Kelley, *Self-focusing of optical beams*, Phys. Rev. Lett. 15 (1965), 1005.

T.-C. Lin, X. Wang and Z.-Q. Wang, *Orbital stability and energy estimate of ground states of saturable nonlinear Schrödinger equations with intensity functions in $ \mathbb{R}^2$*, J. Differential Equations, 263 (2017), pp. 4750–4786.

L. A. Maia, E. Montefusco and B. Pellacci, *Weakly coupled nonlinear Schrödinger systems: the saturation effect*, Calculus of Variations and Partial Differential Equations 46 (2013), Issue 1-2, pp. 325–351.

J. H. Marburger and E. Dawes, *Dynamical formation of a small-scale filament*, Phys. Rev. Lett., 21(8), pp. 556–558 (1968).

I. M. Merhasin, B. A. Malomed, K. Senthilnathan, K. Nakkeeran, P. K. A. Wai and K. W. Chow, *Solitons in Bragg gratings with saturable nonlinearities*, J. Opt. Soc. Amer. B, Vol. 24 (2007), pp. 1458–1468.

T. Noda, *Note on the computation of the maximal eigenvalue of a non-negative irreducible matrix*, Numer. Math., 17 (1971), pp. 382–386.

P. H. Rabinowitz, *On a class of nonlinear Schrödinger equations*, Z. Angew. Math. Phys. 43 (1992), no. 2, 270–291.
---
abstract: |
In [@kkmz] the authors gave a valuation theoretic characterization for a real closed field to be $\kappa$-saturated, for a cardinal $\kappa
\geq \aleph_0$. In this paper, we generalize the result, giving necessary and sufficient conditions for certain o-minimal expansions of a real closed field to be $\kappa$-saturated.
address:
- 'FB Mathematik & Statistik, Universität Konstanz, Germany'
- 'Dipartimento di Matematica, Seconda Università di Napoli, Italy'
- 'FB Mathematik & Statistik, Universität Konstanz, Germany'
author:
- Annalisa Conversano
- 'Paola D’Aquino'
- Salma Kuhlmann
title: 'Saturated o-minimal expansions of real closed fields'
---
Introduction
============
A totally ordered structure ${\ensuremath{\mathcal{M}}}= \langle M, <, \dots \rangle$ (in a countable first order language containing $<$) is o-minimal if every subset of it which is definable with parameters in $M$ is a finite union of intervals in $M$. These structures have many interesting features. We focus here on the following: For $\alpha > 0$, ${\ensuremath{\mathcal{M}}}\mbox{ is }{\ensuremath{\aleph}}_{\alpha}\mbox{-saturated}$ if and only if the underlying order $ \langle M, < \rangle \mbox{ is }{\ensuremath{\aleph}}_{\alpha}\mbox{-saturated}$ as a linearly ordered set ([@ak]). If ${\ensuremath{\mathcal{M}}}$ is an o-minimal expansion of a divisible ordered abelian group (DOAG), then $ \langle M, < \rangle$ is a dense linear order without endpoints (DLOWEP). Now, ${\ensuremath{\aleph}}_{\alpha}$-saturated DLOWEP are well understood: they are Hausdorff’s $\eta_{\alpha}$-sets, see [@rosenstein]. The above equivalence therefore provides a characterization of ${\ensuremath{\aleph}}_{\alpha}$-saturation of such o-minimal expansions for $\alpha \not= 0$. We are reduced to characterising ${\ensuremath{\aleph}}_0$-saturation. This problem was solved in [@sgr] and in [@kkmz] for DOAG and for real closed fields, respectively. In this paper we generalize this result to power bounded o-minimal expansions of real closed fields, see Theorem \[expansions\]. Miller in [@miller1] proved a dichotomy theorem for o-minimal expansions of the real ordered field by showing that for any o-minimal expansion $\mathcal R$ of $\mathbb R$ that is not polynomially bounded, the exponential function is definable in $\mathcal R$. Later, Miller extended this result to any o-minimal expansion of a real closed field (see [@miller2]) by replacing [*polynomially bounded*]{} by [*power bounded*]{}.
In [@dks] it was shown that a countable real closed field is recursively saturated if and only if it has an integer part which is a model of Peano Arithmetic (see [@dks] for these notions). In a forthcoming paper, we give a valuation theoretic characterization of recursively saturated real closed fields (of arbitrary cardinality), and their o-minimal expansions.
Background on o-minimal structures {#back}
==================================
We recall some properties of o-minimal structures. Let $\mathcal L$ be a countable language containing $<$, and let ${\ensuremath{\mathcal{M}}}= \langle M, <, \dots \rangle$ be an o-minimal $\mathcal L$-structure. If $A\subset M$ then the algebraic closure $\operatorname{acl}(A)$ of $A$ is the union of the finite $A$-definable sets, and the definable closure $\operatorname{dcl}(A)$ is the union of the $A$-definable singletons. In general, $\operatorname{dcl}(A)\subseteq \operatorname{acl}(A)$, but in an o-minimal structure ${\ensuremath{\mathcal{M}}}$ they coincide. For example, if ${\ensuremath{\mathcal{M}}}$ is a divisible ordered abelian group and $A\subseteq M$ then the definable closure of $A$ coincides with the $\mathbb Q$-vector space generated by $A$, $\operatorname{dcl}(A)=\langle A \rangle_{\mathbb Q}$. If ${\ensuremath{\mathcal{M}}}$ is a real closed field then the definable closure of $A\subset M$ is the relative real closure of the field $\mathbb Q(A)$ in $M$, i.e. $\operatorname{dcl}(A)=\mathbb Q(A)^{rc}$.
Notice that over a countable language $\mathcal L$ the cardinality of the definable closure of a set $A$ is: $$\label{carddcl}
|\operatorname{dcl}(A)|=
\left\{ \begin{array}{ccc}
\aleph_0 & \mbox{ if } & |A|\leq \aleph_0\\
|A| & \mbox{ if } & |A| > \aleph_0
\end{array}\right.$$
In [@pillaysteinhorn] it is proved that in any o-minimal structure ${\ensuremath{\mathcal{M}}}$ the operator $dcl$ is a pregeometry, i.e. it satisfies the following properties:
1. for any $A\subseteq M$, $A\subseteq \operatorname{dcl}(A)$;
2. for any $A\subseteq M$, $\operatorname{dcl}(\operatorname{dcl}(A))= \operatorname{dcl}(A)$;
3. for any $A\subseteq M$, $\operatorname{dcl}(A)=\bigcup \{ \operatorname{dcl}(F): F\subseteq A, \mbox{ \hspace{.1in}$F$ finite}\}$
4. [*(Exchange Principle)*]{} for any $A\subseteq M$, $a, b\in M$ if $a\in \operatorname{dcl}(A\cup \{ b\} )- \operatorname{dcl}(A)$ then $b\in \operatorname{dcl}(A\cup \{ a\} )$.
The Exchange Principle guarantees that in any o-minimal structure ${\ensuremath{\mathcal{M}}}$ there is a good notion of independence:
A subset $A\subset M$ is [*independent*]{} if for all $a\in A$, $a\not\in \operatorname{dcl}(A-\{ a\} )$. If $B\subset M$ we say that $A$ is [*independent over $B$*]{} if for all $a\in A$, $a\not\in \operatorname{dcl}(B\cup (A-\{ a\}))$. A subset $A\subseteq M$ is said to generate $\mathcal M$ if $M=\operatorname{dcl}(A)$. An independent set $A$ that generates $\mathcal M$ is called a basis. The Exchange Principle guarantees that any independent subset of $M$ can be extended to a basis, and all bases for $\mathcal M$ have the same cardinality. So a basis for $\mathcal M$ is any maximal independent subset. The [*dimension*]{} of $\mathcal M$ is the cardinality of any basis. It is easy to extend the notion of a basis of $\mathcal M^{\prime}$ over $\mathcal M$ when $\mathcal M\preceq \mathcal M^{\prime}$. Note that if ${\ensuremath{\mathcal{M}}}^{\prime}$ is generated by a set $A$, then $$\label{ineqdim}
\dim ( {\ensuremath{\mathcal{M}}}^{\prime}) \leq |A|$$
We recall the notion of [*prime*]{} model of a theory $T$. Let $A\subseteq \mathcal M\models T$. The model $\mathcal M$ is said to be prime over $A$ if for any $\mathcal M'\models T$ with $A
\subseteq M' $ there is an elementary mapping $f:\mathcal M\rightarrow \mathcal M'$ which is the identity on $A$. For example, if $T$ is the theory of real closed fields, the real closure of an ordered field $F$ is prime over $F$. It is well known, see [@pillaysteinhorn], that if $\mathcal M$ is an o-minimal structure and $A\subseteq M$, then $Th(\mathcal M)$ has a prime model over $A$, and this is unique up to $A$-isomorphism. For any subset $A\subseteq M$ it coincides with $\operatorname{dcl}(A)$.
Let us notice that if ${\ensuremath{\mathcal{M}}}$ is a real closed field, then the dimension of ${\ensuremath{\mathcal{M}}}$ over the prime field coincides with the transcendence degree of ${\ensuremath{\mathcal{M}}}$ over ${\ensuremath{\mathbb{Q}}}$.
$\aleph_{\alpha}$-saturated divisible ordered abelian groups {#DOAG}
============================================================
We summarize the required background (see [@book] and [@sgr]). Let $(G, +, 0, <)$ be a divisible ordered abelian group. For any $x\in G$ let $|x|=\max \{x,-x\}$. For non-zero $x, y \in G$ we define $x\sim y$ if there exists $n \in {\ensuremath{\mathbb{N}}}$ such that $n|x| \geq |y| $ and $ n|y| \geq |x|. $ We write $x<<y$ if $n|x| < |y|$ for all $n \in {\ensuremath{\mathbb{N}}}$. Clearly, $\sim$ is an equivalence relation. Let $\Gamma := G-\{ 0\}/\sim = \{[x] : x \in G-\{ 0\} \}$. We can define an order $<_{\Gamma}$ on $\Gamma$ in terms of $<<$ as follows: $[y]\, <_{\Gamma} [x] $ if $x <<y$ (notice the reversed order).
[*(a)*]{} $\Gamma$ is a totally ordered set under $<_{\Gamma}$, and we will refer to it as the value set of $G$.
[*(b)*]{} The map $$\begin{aligned}
v \colon &G\ \longrightarrow\ \Gamma \cup \{\infty\} \\
&0\quad \mapsto\quad \infty \\
&x\quad \mapsto\quad [x] \quad (\mbox{if }x \neq 0)\end{aligned}$$ is a valuation on $G$ as a ${\ensuremath{\mathbb{Z}}}$-module, i.e. for every $x, y \in G$: $v(x) = \infty$ if and only if $x = 0$, $v(nx) = v(x)$ for all $n \in {\ensuremath{\mathbb{Z}}}$, $n \neq 0$, and $v(x+y) \geq \min\{v(x), v(y)\}$.
For every $\gamma \in \Gamma$ the Archimedean component associated to $\gamma$ is the maximal Archimedean subgroup of $G$ containing some $x\in \gamma$. We denote it by $A_{\gamma}$. For each $\gamma$, $A_{\gamma}\subseteq (\mathbb R, +,0,<).$
Let $\lambda$ be an infinite ordinal. A sequence $(a_{\rho})_{ \rho < \lambda}$ contained in $G$ is said to be [*pseudo Cauchy*]{} (or [*pseudo convergent*]{}) if for every $\rho < \sigma < \tau$ we have $$v(a_{\sigma} - a_{\rho})\ <\ v(a_{\tau} - a_{\sigma}).$$
If $(a_{\rho})_{\rho<\lambda}$ is a pseudo Cauchy sequence, then for all $\rho < \sigma$ we have $$v(a_{\sigma} - a_{\rho}) = v(a_{\rho + 1} - a_{\rho}).$$
Let $(a_{\rho})_{\rho < \lambda}$ be a pseudo Cauchy sequence in $G$. We say that $x \in G$ is a [*pseudo limit*]{} of $(a_{\rho})_{\rho < \lambda}$ if $$v(x - a_{\rho}) = v(a_{\sigma} - a_{\rho}) = v(a_{\rho + 1} - a_{\rho}) \quad \mbox{ for all } \rho < \sigma.$$
We now recall the characterization of ${\ensuremath{\aleph}}_{\alpha}$-saturation for divisible ordered abelian groups, see [@sgr].
[@sgr] \[doag\] Let $G$ be a divisible ordered abelian group, and let ${\ensuremath{\aleph}}_{\alpha}\geq {\ensuremath{\aleph}}_0$. Then $G$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated in the language of ordered groups if and only if
1. its value set is an $\eta_{\alpha}$-set
2. all its Archimedean components are isomorphic to ${\ensuremath{\mathbb{R}}}$
3. every pseudo Cauchy sequence in a divisible subgroup whose value set has cardinality $<{\ensuremath{\aleph}}_{\alpha}$ has a pseudo limit in $G$.
Notice that in the case of ${\ensuremath{\aleph}}_0$-saturation the necessary and sufficient conditions reduce only to (1) and (2), see [@sgr].
$\aleph_{\alpha}$-saturated real closed fields {#RCF}
===============================================
If $(R,+,\cdot ,0,1,<)$ is an ordered field then it has a natural valuation $v$, that is, the natural valuation associated to the ordered abelian group $(R,+ ,0,<)$. We will denote by $G$ the value group of $R$ with respect to $v$, i.e. $G=v(R)$. If $(R,+,\cdot ,0,1,<)$ is a real closed field then $G$ is divisible, and the rational rank of $G$, denoted $\operatorname{rk}(G)$, is the linear dimension of $G$ as a $\mathbb Q$-vector space.
For the natural valuation on $R$ we use the notations $\mathcal O_R=\{ r\in R:v(r)\geq 0\}$ and $\mu_R =\{ r\in R: v(r)>0\}$, for the valuation ring and the valuation ideal, respectively. The residue field $k$ is the quotient $\mathcal O_R/\mu_R$, and we recall that it is a subfield of $\mathbb R$. Notice that in the case of ordered fields there is a unique archimedean component up to isomorphism, and if the field is real closed the archimedean component is the residue field.
A notion of pseudo Cauchy sequence is easily extended to any ordered field as in the case of ordered abelian groups.
The following characterization of ${\ensuremath{\aleph}}_{\alpha}$-saturated real closed fields was obtained in [@kkmz].
[@kkmz 6.2] \[rcf\] Let $R$ be a real closed field, $v$ its natural valuation, $G$ its value group and $k$ its residue field. Let ${\ensuremath{\aleph}}_{\alpha} \geq {\ensuremath{\aleph}}_0$. Then $R$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated in the language of ordered fields if and only if
1. $G$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated,
2. $k \cong {\ensuremath{\mathbb{R}}}$,
3. every pseudo Cauchy sequence in a subfield of absolute transcendence degree less than ${\ensuremath{\aleph}}_{\alpha}$ has a pseudo limit in $R$.
In the proof of Theorem \[rcf\] the [*dimension inequality*]{} (see [@prestel]) is crucially used in the case of ${\ensuremath{\aleph}}_0$-saturation. This says that the rational rank of the value group of a finite transcendental extension of a real closed field is bounded by the transcendence degree of the extension.
$\aleph_{\alpha}$-saturated expansions of a real closed field {#main}
==============================================================
We show now a generalization of Theorem \[rcf\] to o-minimal expansions of a real closed field $\mathcal M=(M, +,\cdot ,0,1,<,\ldots )$.
The proof follows the lines of the previous characterizations. Also in this case some care is needed for ${\ensuremath{\aleph}}_0$-saturation. We need to bound the rational rank of the value group of a finite dimensional extension. (Recall from (\[carddcl\]) that the cardinality of the definable closure of a finite set is infinite.) Analogues of the dimension inequality have been proved by Wilkie and van den Dries in more general cases.
Let $T$ be the theory of an o-minimal expansion of $\mathbb R$ and assume $T$ is [*smooth*]{}, see [@wilkie]. In [@wilkie] Wilkie showed that if $\mathcal R$ is a model of $T$ and $\dim ({\mathcal R})$ is finite, then $\operatorname{rk}({\mathcal R}) \leq \dim({\mathcal R})$. This result has been further generalized by van den Dries in [@vdd] to [*power bounded*]{} o-minimal expansions of a real closed field. We recall that $\mathcal M$ is [*power bounded*]{} if for each definable function $f:\mathcal M \rightarrow \mathcal M $ there is $\lambda \in M$ such that $|f(x)|\leq x^{\lambda}$ for all sufficiently large $x>0$ in $M$.
[@vdd]\[inequalityvvd\] Suppose the dimension of $\mathcal M$ is finite. Then the rational rank of the value group $G$ of $\mathcal M$ is bounded by $\dim (\mathcal M)$.
\[expansions\] Let ${\ensuremath{\mathcal{M}}}= \langle M, <, +, \cdot, \dots \rangle$ be a power bounded o-minimal expansion of a real closed field, $v$ its natural valuation, $G$ its value group, $k$ its residue field, $\mathcal P \subseteq {\ensuremath{\mathcal{M}}}$ its prime model.
Then ${\ensuremath{\mathcal{M}}}$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated if and only if
1. $(G, +, 0,<)$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated,
2. $k \cong {\ensuremath{\mathbb{R}}}$,
3. for every substructure $\mathcal M^{\prime}$ with $\dim (\mathcal M^{\prime}/\mathcal P)<{\ensuremath{\aleph}}_{\alpha}$, every pseudo Cauchy sequence in $M^{\prime}$ has a pseudo limit in $M$.
We assume conditions (1), (2) and (3) and we show that ${\ensuremath{\mathcal{M}}}$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated.
Let $q$ be a complete $1$-type over ${\ensuremath{\mathcal{M}}}$ with parameters in $A \subset M$, with $|A|<{\ensuremath{\aleph}}_{\alpha}$. Let ${\ensuremath{\mathcal{M}}}_0$ be an elementary extension of ${\ensuremath{\mathcal{M}}}$ in which $q(x)$ is realized, and $x_0 \in M_0$ such that ${\ensuremath{\mathcal{M}}}_0 \models q(x_0)$.
To realize $q$ in ${\ensuremath{\mathcal{M}}}$ it is necessary and sufficient to realize the cut that $x_0$ makes in ${\ensuremath{\mathcal{M}}}^{\prime}= \operatorname{dcl}(A)\subseteq {\ensuremath{\mathcal{M}}}$: $$q^{\prime}(x) := \{b \leq x \, ; \, b \in M^{\prime}, \, q \vdash b \leq x \}
\cup \ \{x \leq c \, ; \ c \in M^{\prime}, \, q \vdash x \leq c \}.$$
As we will see, in realizing the cut $q^{\prime}$ instead of the type $q$, some care is needed in the case of ${\ensuremath{\aleph}}_0$-saturation. If $q^{\prime}(x)$ contains an equality, the result is obvious. So suppose that in $q^{\prime}(x)$ there are only strict inequalities.
Set $$B := \{b \in M^{\prime} \, ; \, q \vdash b < x\} \mbox{ and }
C := \{c \in M^{\prime} \, ; \, q \vdash x < c\}$$ and consider the following subset of $v(M_0)$:
$$\Delta = \{v(d - x_0) \, | \, d \in M^{\prime} \}.$$
There are three cases to consider:
1. [*Immediate transcendental case*]{}: $\Delta$ has no largest element.\
2. [ *Value transcendental case*]{}: $\Delta$ has a largest element $\gamma \not \in v(M^{\prime})$.\
3. [ *Residue transcendental case*]{}: $\Delta$ has a largest element $\gamma \in v(M^{\prime})$.\
$(a)$ $\Delta$ has no largest element. Then
$$\forall\, d \in M^{\prime}\ \exists\, d^{\prime} \in M^{\prime} : v(d^{\prime} - x_0) > v(d - x_0).$$
Let $\{v(d_{\lambda} - x_0)\}_{\lambda < \mu}$ be cofinal in $\Delta$; then $\{d_{\lambda}\}_{\lambda < \mu}$ is a pseudo Cauchy sequence in $M^{\prime}$ and $\dim ({\ensuremath{\mathcal{M}}}^{\prime}/P) \leq |A|<{\ensuremath{\aleph}}_{\alpha}$. Condition (3) implies the existence of a pseudo limit $a\in M$ of $\{d_{\lambda}\}_{\lambda < \mu}$. We claim that $a$ realizes $q^{\prime}(x)$ in ${\ensuremath{\mathcal{M}}}$. The ultrametric inequality gives $$v(a-x_0)=v(a-d_{\lambda}+d_{\lambda}-x_0)\geq \min \{ v(a-d_{\lambda}), v(d_{\lambda}-x_0)\}.$$ Moreover, from properties of pseudo Cauchy sequences we have $$v(a-d_{\lambda}) =v(d_{\lambda+1}-d_{\lambda})=v(x_0-d_{\lambda}),$$ which implies that for all $\lambda$, $v(a-x_0)\geq v(d_{\lambda}-x_0)$. Thus for all $d\in \mathcal M^{\prime}$, $v(a-x_0)>v(d-x_0)$. We want to show that $a$ fills the cut determined by $B$ and $C$, and so $a$ realizes $q^{\prime}$. Let $b\in B$; if $a\leq b$ then $a\leq b< x_0$, and this implies $v(a-x_0)\leq v(b-x_0)$, which is a contradiction. Hence $b<a$. In a similar way we can show that if $c\in C$ then $a<c$.\
$(b)$ $\Delta$ has a largest element $\gamma \not \in
v(M^{\prime})$. Fix $d_0 \in M^{\prime}$ such that $v(d_0 - x_0) =
\gamma$ is the maximum of $\Delta$. Assume $d_0 \in B$ (the case $d_0\in C$ is treated similarly). Let $\Delta_1=\{ v(c-d_0): c\in C\}$ and $\Delta_2=\{ v(b-d_0): b\in B, b>d_0 \}$.
[*Claim.*]{} $\Delta_1<\gamma <\Delta_2$.
Let $c\in C$. By the maximality of $\gamma$ we have $v(c-x_0)\leq\gamma$, and equality is impossible: since $d_0<x_0<c$, both summands in $c-d_0=(c-x_0)+(x_0-d_0)$ are positive, so equality would force $v(c-d_0)=\gamma\in v(M^{\prime})$, a contradiction. Hence $v(c-x_0)<\gamma$ for all $c\in C$, and thus $$v(c-d_0)=v(c-x_0+x_0-d_0)=\min\{ v(c-x_0),v(x_0-d_0)\}=v(c-x_0)<\gamma.$$ Let $b\in B$ with $b\geq d_0$; then $v(x_0-b)\geq v(x_0-d_0)=\gamma$, and by the maximality of $\gamma$ the equality must hold. Thus, $$v(b-d_0)=v(b-x_0+x_0-d_0)\geq \min\{ v(b-x_0),v(x_0-d_0)\}=\gamma.$$ Since $\gamma \not \in v(M^{\prime})$ we have $v(b-d_0)>\gamma$, which completes the proof of the Claim.
Consider the set of formulas $$t(y) = \{v(c - d_0) < y ; c \in C \} \cup
\{y < v(b - d_0); b \in B, b > d_0\}.$$
This is a type over $G$ with parameters in $v(M^{\prime})$. Let $G^{\prime}=v(M^{\prime})$. If ${\ensuremath{\aleph}}_{\alpha}>{\ensuremath{\aleph}}_0$ then $\operatorname{card}(G^{\prime})<{\ensuremath{\aleph}}_{\alpha}$ and by hypothesis $(1)$ we can realize $t(y)$ in $G$.
If ${\ensuremath{\aleph}}_{\alpha}={\ensuremath{\aleph}}_0$ then $\mathcal M^{\prime}$ has finite dimension over the prime model $\mathcal P$, and Theorem \[inequalityvvd\] implies that the rational rank of $G^{\prime}$ is bounded by the dimension of $\mathcal M^{\prime}$ over $\mathcal P$. So we can transform the type $t(y)$ into a type $t^{\prime}(y)$ whose parameters vary over a finite $\mathbb Q$-basis of $G^{\prime}$. Since $G$ is ${\ensuremath{\aleph}}_0$-saturated we can realize $t^{\prime}(y)$, and hence $t(y)$, in $G$. In either case, let $g\in G$ realize $t(y)$ and let $a \in M$, $a > 0$, be such that $v(a) = g$. We claim that $a + d_0 \in M$ realizes $q^{\prime}$. From the definition of the type $t(y)$, it follows that for all $c\in C$ and for all $b\in B$ such that $b>d_0$, $$v(c-d_0)<v(a)<v(b-d_0),$$ and since the natural valuation is compatible with the order we have, for all $c\in C$ and for all $b\in B$ such that $b>d_0$, $$b-d_0<a< c-d_0,$$ which implies, for all $c\in C$ and for all $b\in B$, $$b<a+d_0<c;$$ hence $a+d_0$ realizes the type $q^{\prime}$ in ${\ensuremath{\mathcal{M}}}$.\
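The translation between inequalities in the value group and inequalities in the field, used in the argument above (and again below), is simply the following property of the natural valuation, stated here for emphasis since it is immediate from the definitions: for $a,b\in M$ with $a,b>0$, $$v(a)<v(b)\ \Longleftrightarrow\ nb<a \mbox{ for all } n\in{\ensuremath{\mathbb{N}}}, \qquad\quad v(a)=v(b)\ \Longleftrightarrow\ \tfrac{1}{n}\,b<a<nb \mbox{ for some } n\in{\ensuremath{\mathbb{N}}}.$$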
$(c)$ $\Delta$ has a largest element $\gamma \in v(M^{\prime})$. Let $d_0\in M^{\prime}$ and $a\in M^{\prime}$ such that $v(d_0-x_0)=\gamma =v(a)$ (without loss of generality we may assume $a>0$). [*Claim.*]{} There exist $b_0\in B$ and $c_0\in C$ such that for all $b\in B$ with $b\geq b_0$ and for all $c\in C$ with $c\leq c_0$ we have $$v(b-d_0)=\gamma =v(a)= v(c-d_0).$$ From $v(d_0-x_0)=v(a)$ it follows that there exists $n\in {\ensuremath{\mathbb{N}}}$ such that $na>|x_0-d_0|>\frac{a}{n}$. We distinguish the two cases according to $d_0\in B$ and $d_0\in C$. Assume $d_0\in B$, and let $b_0= d_0+\frac{a}{n}$ and $c_0=d_0+na$. Clearly, $b_0<x_0$, so $b_0\in B$, and $x_0<c_0$, so $c_0\in C$. Moreover, $v(b_0-d_0)=v(\frac{a}{n})=v(a)=v(na)=v(c_0-d_0)$. If $b\in B$, $b>b_0$ and $c\in C$, $c<c_0$, then the following inequalities hold $d_0<b_0<b<c<c_0$. Thus, $v(b-d_0)\leq v(b_0-d_0)=\gamma= v(c_0-d_0)\leq v(b-d_0)$. Hence, $\gamma =v(b-d_0)$. Similarly, one shows that $\gamma=v(c_0-d_0)\leq v(c-d_0)\leq v(b_0-d_0)=\gamma$, and so $\gamma=v(c-d_0)$.
Assume $d_0\in C$, and let $b_0=d_0-na$ and $c_0=d_0-\frac{a}{n}$. Similar calculations show that $v(c-d_0)=\gamma=v(b-d_0)$ for $c\in C$, $c<c_0$, and $b\in B$, $b>b_0$.
Our aim is to show that there is an element $r\in M$ which realizes the cut $q^{\prime}(x).$ It is enough to show that there is $r^{{\prime}{\prime}}\in M$ realizing $$\label{type1}
\left \{ \frac{b-d_0}{a}<x ; b\in B, b\geq b_0\right \} \cup \left \{ x<\frac{c-d_0}{a} ; c\in C, c\leq c_0\right \}.$$ Indeed, $r^{\prime}=r^{{\prime}{\prime}}a \in M$ realizes $$\label{type2}
\left \{ b-d_0<x ; b\in B, b\geq b_0\right \} \cup \left \{ x<c-d_0 ; c\in C, c\leq c_0\right \}$$ and so $r= r^{\prime}+d_0\in M$ realizes $q^{\prime}(x).$ Assume $d_0\in B$. The claim implies that for all $b\in B$, $b\geq b_0$, and for all $c\in C$, $c\leq c_0$ we have $$v\left (\frac{b-d_0}{a}\right )=v\left (\frac{x_0-d_0}{a}\right )=v\left (\frac{c-d_0}{a}\right )=0,$$ and taking residues the following inequalities hold in ${\ensuremath{\mathbb{R}}}$, the residue field $$\overline{\frac{b-d_0}{a}}<\overline{\frac{x_0-d_0}{a}}<\overline{\frac{c-d_0}{a}}.$$ (Notice that the inequalities are strict because of the maximality of $v(a)$ in $\Delta$.) The cut in ${\ensuremath{\mathbb{R}}}$ $$\left \{ \overline{\frac{b-d_0}{a}}; b\in B,b\geq b_0\right \} \cup \left \{ \overline{\frac{c-d_0}{a}}; c\in C,c\leq c_0 \right \}$$ is realized in ${\ensuremath{\mathbb{R}}}$ by $\overline{\frac{x_0-d_0}{a}}$. If $r^{{\prime}{\prime}}\in M$ is such that $ \overline{r^{{\prime}{\prime}}}=\overline{\frac{x_0-d_0}{a}}$ then $r^{{\prime}{\prime}}$ realizes (\[type1\]) in ${\ensuremath{\mathcal{M}}}$. The proof in the case $d_0\in C$ is similar and we omit it.
We now assume that ${\ensuremath{\mathcal{M}}}$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated and we show that conditions (1),(2) and (3) hold.
\(1) Let $q(x)$ be a type with set of parameters $A \subset G$ such that $\operatorname{card}(A)< {\ensuremath{\aleph}}_{\alpha}$, e.g. suppose $A=\{ g_{\mu}: \mu <\lambda\}$, where $\lambda <{\ensuremath{\aleph}}_{\alpha}$. We have to show that $q(x)$ is realized in $G$. Without loss of generality we can assume that $q(x)$ is a complete type. Let $H$ be the divisible hull of $A$ in $G$. Notice that $\operatorname{card}(H)<{\ensuremath{\aleph}}_{\alpha}$ for ${\ensuremath{\aleph}}_{\alpha}>{\ensuremath{\aleph}}_0$.
It is enough to realize in $G$ the set $$\{g \leq x \, ; \, g \in H, q(x) \vdash g \leq x \} \cup \{x \leq g \ ; \ g \in H, q(x) \vdash x \leq g \}.$$
If the set contains an equality, we are done. So suppose that we only have strict inequalities.
For every ${\mu}\in \lambda$ fix an element $a_{\mu} \in M$, $a_{\mu}> 0$, such that $v(a_{\mu}) = g_{\mu} $. If $g \in H$ and $g = q_1 g_{i_1} + \dots + q_m g_{i_m}$ with $q_1, \dots, q_m \in {\ensuremath{\mathbb{Q}}}$, then $g = v(a_{i_1}^{q_1} \cdot \dots \cdot a_{i_m}^{q_m})$ where for simplicity we choose $a_{i_j}^{q_j} > 0$ for all $j \in \{1, \dots, m\}$. Let $$H_1 = \{g \in H ; q(x) \vdash g < x\} \mbox{ and }
H_2 = \{g \in H ; q(x) \vdash x < g\}$$
and consider $$q^{\prime} (x) = \{ n\, a_{i_1}^{q_1} \cdot \dots \cdot a_{i_m}^{q_m} < x \ ; \ n \in {\ensuremath{\mathbb{N}}}, v(a_{i_1}^{q_1} \cdot \dots \cdot a_{i_m}^{q_m}) \in H_2 \} \cup$$ $$\{n x < a_{i_1}^{q_1} \cdot \dots \cdot a_{i_m}^{q_m} \ ; \ n \in {\ensuremath{\mathbb{N}}}, v(a_{i_1}^{q_1} \cdot \dots \cdot a_{i_m}^{q_m}) \in H_1 \}.$$
Since ${\ensuremath{\mathcal{M}}}$ is a dense linear ordering without endpoints, $q^{\prime} (x)$ is finitely realizable in ${\ensuremath{\mathcal{M}}}$. Thus $q^{\prime}(x)$ is a type in the parameters $\{ a_{\mu} \}_{\mu<\lambda}.$ Since ${\ensuremath{\mathcal{M}}}$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated it follows that $q^{\prime}(x)$ is realized in ${\ensuremath{\mathcal{M}}}$, say by $a$. Then $v(a)$ realizes $q(x)$.\
(2) Since $(M, +,0,<)$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated, Theorem \[doag\] implies that all Archimedean components of $(M, +,0,<)$ are isomorphic to ${\ensuremath{\mathbb{R}}}$. The Archimedean component corresponding to $0\in G$ is the additive group of the residue field, so $k\cong \mathbb R$.\
(3) Let $(a_{\nu})_{\nu < \mu}$ be a pseudo Cauchy sequence in ${\ensuremath{\mathcal{M}}}^{\prime}$, where ${\ensuremath{\mathcal{M}}}^{\prime}$ is a substructure of ${\ensuremath{\mathcal{M}}}$ and $\dim (\mathcal M^{\prime}/\mathcal P)=\lambda <{\ensuremath{\aleph}}_{\alpha}$. Let $\{ b_{\iota} \, ;\, \iota <\lambda \}$ be a basis of ${\ensuremath{\mathcal{M}}}^{\prime}$ over the prime model $\mathcal P$. Then every element $a_{\nu}$ is definable in terms of finitely many elements of the basis and parameters in the prime model $\mathcal P$. Recall that the prime model $\mathcal P$ coincides with $\operatorname{dcl}(\emptyset )$, hence every element of $\mathcal P$ is definable by a formula without parameters. This is crucial in the case of ${\ensuremath{\aleph}}_0$-saturation. Let $$q_1(x) = \{n |x - a_{\nu + 1}| < |a_{\nu} - a_{\nu + 1}| \, ; \, \nu <\mu, n \in {\ensuremath{\mathbb{N}}}\}.$$ Then $q_1(x)$ is a set of formulas in at most $\lambda$ parameters (in the case of ${\ensuremath{\aleph}}_0$-saturation the parameters are only finitely many). Moreover, $q_1(x)$ is finitely satisfied in ${\ensuremath{\mathcal{M}}}$ since $(a_{\nu})_{\nu < \mu}$ is pseudo Cauchy. Hence $q_1(x)$ is a type, and a realization of $q_1(x)$ in ${\ensuremath{\mathcal{M}}}$ (which is ${\ensuremath{\aleph}}_{\alpha}$-saturated) is a pseudo limit of the sequence.
${\ensuremath{\aleph}}_{\alpha}$-saturated o-minimal expansions
================================================================
If we take any o-minimal expansion of a real closed field (not necessarily power bounded) we obtain the following analogue of Theorem \[rcf\].
\[genexpansions\] Let ${\ensuremath{\mathcal{M}}}= \langle M, <, +, \cdot, \dots \rangle$ be an o-minimal expansion of a real closed field, $v$ its natural valuation, $G$ its value group, $k$ its residue field, $\mathcal P \subset {\ensuremath{\mathcal{M}}}$ its prime model.
Then ${\ensuremath{\mathcal{M}}}$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated $\ \Longleftrightarrow\ $ for every substructure ${\ensuremath{\mathcal{M}}}^{\prime} \subset {\ensuremath{\mathcal{M}}}$ with $\dim ({\ensuremath{\mathcal{M}}}^{\prime}/\mathcal P)<{\ensuremath{\aleph}}_{\alpha}$ the following hold:
1. $(G, <, +, v({\ensuremath{\mathcal{M}}}^{\prime}))$ is ${\ensuremath{\aleph}}_{\alpha}$-saturated,
2. $k \cong {\ensuremath{\mathbb{R}}}$,
3. every pseudo Cauchy sequence in ${\ensuremath{\mathcal{M}}}^{\prime}$ has a pseudo limit in ${\ensuremath{\mathcal{M}}}$.
The proof is analogous to that of Theorem \[expansions\], and we omit it. We just point out that in the value transcendental case the expansion $(G, <, +, v({\ensuremath{\mathcal{M}}}^{\prime}))$ of the value group is needed for ${\ensuremath{\aleph}}_0$-saturation. In the power bounded case the valuation inequality allows us to get rid of the parameters in $v({\ensuremath{\mathcal{M}}}^{\prime})$. By Miller’s dichotomy (see [@miller2]) the exponential function is definable if we are not in the power bounded case. In a forthcoming paper we further analyze Theorem \[genexpansions\] in that particular case. Finally, note that if in Theorem \[expansions\] we assume ${\ensuremath{\mathcal{M}}}$ is just a real closed field, then we obtain exactly Theorem \[rcf\]: the prime model $\mathcal P$ is the field of real algebraic numbers, and ${\ensuremath{\mathcal{M}}}^{\prime}$ is a submodel of finite dimension over $\mathcal P$ if and only if it is of finite absolute transcendence degree.
[KKMZ02]{} N. L. Alling and S. Kuhlmann, On $\eta_{\alpha}$-Groups and Fields, [*Order*]{}, [**11**]{} (1994), pp. 85–92.\
P. D'Aquino, J. F. Knight and S. Starchenko, Real closed fields and models of Peano arithmetic, [*J. Symb. Logic*]{}, [**75(1)**]{} (2010), pp. 1–11.\
L. van den Dries, T-convexity and tame extensions II, [*J. Symb. Logic*]{}, [**62**]{} (1997), pp. 14–34.\
F.-V. Kuhlmann, S. Kuhlmann, M. Marshall and M. Zekavat, Embedding ordered fields in formal power series fields, [*J. Pure Appl. Algebra*]{}, [**169**]{} (2002), pp. 71–90.\
S. Kuhlmann, Groupes abéliens divisibles ordonnés, [*Séminaire sur les Structures Algébriques Ordonnées, Sélection d'exposés 1984–1987*]{}, [**Vol. 1**]{} (1990), pp. 3–14.\
S. Kuhlmann, [**Ordered Exponential Fields**]{}, The Fields Institute Monograph Series, vol. 12, Amer. Math. Soc., 2000.\
C. Miller, Exponentiation is hard to avoid, [*Proc. Amer. Math. Soc.*]{}, [**122**]{} (1994), pp. 257–259.\
C. Miller, A growth dichotomy for o-minimal expansions of ordered fields, in [*Logic: from foundations to applications*]{}, European Logic Colloquium 1993 (eds. W. Hodges et al.), Oxford University Press, 1993, pp. 385–399.\
A. Pillay and C. Steinhorn, Definable sets in ordered structures. I, [*Trans. Amer. Math. Soc.*]{}, [**295**]{} (1986), pp. 565–592.\
A. Prestel, [**Valued Fields**]{}, Springer, 2005.\
J. G. Rosenstein, [**Linear Orderings**]{}, Academic Press, 1982.\
A. J. Wilkie, Model completeness results for expansions of the ordered field of real numbers by restricted Pfaffian functions and the exponential function, [*J. Amer. Math. Soc.*]{}, [**9**]{} (1996), pp. 1051–1094.
---
author:
- Zhi Jiang
title: On varieties of maximal Albanese dimension
---
A smooth projective complex variety $X$ has [*maximal Albanese dimension*]{} if its Albanese map $ X\to \Alb(X)$ is generically finite onto its image. These varieties have recently attracted a lot of attention and have been shown to have very special geometric properties ([@cha], [@chb], [@asia], [@fuj], [@HAC1], [@par], [@pp]).
Assume for example that $f: X\to Y$ is a surjective morphism between smooth projective varieties of the same dimension. For each positive integer $m$, denote by $P_m(X):=h^0(X,\omega_X^{\otimes m})$ the $m$-th plurigenus of $X$. We have $P_m(X)\ge P_m(Y)$, but it is in general difficult to conclude anything about $f$ if equality holds. However, when $Y$ (hence also $X$) is of general type and has maximal Albanese dimension, Hacon and Pardini proved in [@HAC1], Theorem 3.2, that if $P_m(X)=P_m(Y)$ for some $m\geq 2$, then $f$ is birational. We give in §\[exa\] examples which show that this conclusion fails in general when $Y$ does not have maximal Albanese dimension.
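Let us recall, for the reader's convenience, where the inequality $P_m(X)\ge P_m(Y)$ mentioned above comes from (this is standard and not specific to the present setting): since $f$ is dominant and generically finite, we may write $K_X=f^*K_Y+R$ with $R$ the effective ramification divisor, and pulling back pluricanonical forms gives, for every $m\geq 1$, injections $$H^0(Y,\omega_Y^{\otimes m})\ \stackrel{f^*}{\hookrightarrow}\ H^0(X,f^*\omega_Y^{\otimes m})\ \hookrightarrow\ H^0\big(X,f^*\omega_Y^{\otimes m}\otimes\cO_X(mR)\big)=H^0(X,\omega_X^{\otimes m}).$$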
More generally, if $X\to I(X)$ and $Y\to I(Y)$ are the respective Iitaka fibrations of $X$ and $Y$, we may assume, taking appropriate birational models, that $f$ induces a morphism $I(f): I(X)\to I(Y)$. When $Y$ has maximal Albanese dimension, but is not necessarily of general type, Hacon and Pardini proved that if $P_m(X)=P_m(Y)$ for some $m\geq 2$, then $I(f)$ has connected fibers (since $I(Y)$ is birational to $Y$ when $Y$ is of general type, this implies the result quoted above).
But in their proof, Hacon and Pardini actually do not use the assumption that $Y$ has maximal Albanese dimension; all they need is that $P_m(X)=P_m(Y)>0$ and $I(Y)$ has maximal Albanese dimension (see section 1). However, under their assumption, we prove here a much stronger conclusion.
[**Theorem 1**]{}
*Let $f: X\to Y$ be a surjective morphism between smooth complex projective varieties of the same dimension. If $Y$ has maximal Albanese dimension and $P_m(X)=P_m(Y)$ for some $m\geq 2$, the induced map $I(f): I(X)\dra I(Y)$ between the respective Iitaka models of $X$ and $Y$ is birational.*
Moreover, $f$ is birationally equivalent to a quotient by a finite abelian group.
For more details on the last statement, we refer to Theorem \[thm7\].
In another direction, it was shown by Chen and Hacon ([@cha], Theorem 4) that if $X$ is a smooth projective variety of maximal Albanese dimension, the image of the 6-canonical map $\phi_{6K_X}$ has dimension equal to the Kodaira dimension $\kappa(X)$. If $X$ is moreover of general type, $\phi_{6K_X}$ is birational onto its image ([@asia], Corollary 4.3). We prove a common generalization of these results (Theorem \[iit\]):
[**Theorem 2**]{} [*If $X$ is a smooth complex projective variety with maximal Albanese dimension, $\phi_{5K_X}$ is a model of the Iitaka fibration of $X$.*]{}
The proof follows the ideas of [@pp2] and is based on a result from [@jiang]. We also prove that $\phi_{3K_X}$ is already a model of the Iitaka fibration of $X$ under a stronger assumption on $X$ (Theorem \[iit2\]).
The article is organized as follows. The first section is devoted to the proof of the birationality of $I(f)$. In the second section, we give a complete structure theorem for $f$ (Theorem \[thm7\]) which shows that the situation is quite restricted. In the third section, we present three examples showing that the conclusion of the above theorem can fail when the varieties do not have maximal Albanese dimension, and in the last section, we prove our results on pluricanonical maps of varieties of maximal Albanese dimension.
We work over the field of complex numbers.
Proof that $I(f)$ is birational
===============================
We begin with a general lemma (we refer to [@Laz], §11, for the definition and properties of the asymptotic multiplier ideal sheaf $\cJ(||D||)$ associated with a divisor $D$ on a smooth projective variety).
\[6\] Let $f: X\to Y$ be a surjective morphism between smooth projective varieties of the same dimension with $\kappa(Y)\geq 0$. For any $m\geq 2$, $$f_*(\cO_X(mK_X)\otimes\cJ(||(m-1)K_X||))\supset \cO_Y(mK_Y)\otimes\cJ(||(m-1)K_Y||).$$
Take $N>0$ and let $\t_{Y}: Y'\to Y$ be a log-resolution such that $$\t_Y^*|N(m-1)K_Y|=|L_1|+E_1,$$ where $|L_1|$ is base-point-free and $E_1$ is the fixed divisor. Then we take a log-resolution $\t_X: X'\to X$ such that we have a commutative diagram: $$\CD
X' @>f'>> Y' \\
@V \t_X VV @V \t_Y VV \\
X @>f>> Y
\endCD$$ and $\t_X^*|N( m-1)K_X|=|L_2|+E_2$ where $|L_2|$ is base-point-free and $E_2$ is the fixed divisor. Let $D\in |(m-1)K_{X/Y}|$. Then ${f'}^* E_1+N\t_X^*D\succeq E_2$. Hence $$\begin{aligned}
&&\cO_{X'}(K_{X'/X}+m\t_X^*K_X-\Big\lfloor\frac{1}{N}E_2 \Big\rfloor)\\&\supset&
\cO_{X'}(K_{X'/X}+m\t_X^*K_X-\t_X^*D-\Big\lfloor\frac{1}{N}{f'}^*E_1 \Big\rfloor)\\
&=&\cO_{X'}(K_{X'/X}+\t_X^*K_X+(m-1)\t_X^*f^*K_Y-\Big\lfloor\frac{1}{N}{f'}^*E_1 \Big\rfloor)\\
&=&\cO_{X'} \big(K_{X'/Y'}-\Big\lfloor\frac{1}{N}{f'}^*E_1 \Big\rfloor+{f'}^* \Big\lfloor\frac{1}{N}E_1 \Big\rfloor+{f'}^*(K_{Y'/Y}+m\t_Y^*K_Y-\Big\lfloor\frac{1}{N}E_1 \Big\rfloor) \big).\end{aligned}$$ We may assume that $N$ is sufficiently large and divisible. Then $$\t_{X*} \big(\cO_{X'}(K_{X'/X}+m\t_X^*K_X-\Big\lfloor\frac{1}{N}E_2 \Big\rfloor) \big)=\cO_X(mK_X)\otimes\cJ(||(m-1)K_X||).$$ By step 2 in the proof of [@HAC1], Theorem 3.2, we know that $K_{X'/Y'}-\big\lfloor\frac{1}{N}{f'}^*E_1 \big\rfloor+{f'}^* \big\lfloor\frac{1}{N}E_1 \big\rfloor$ is an effective divisor, hence $$\begin{aligned}
&& \t_{Y*}f'_* \Big(\cO_{X'} \big(K_{X'/Y'}-\Big\lfloor\frac{1}{N}{f'}^*E_1 \Big\rfloor +{f'}^* \Big\lfloor\frac{1}{N}E_1 \Big\rfloor\\
&&\hskip 4cm {}+
{f'}^*(K_{Y'/Y}+m\t_Y^*K_Y-\Big\lfloor\frac{1}{N}E_1 \Big\rfloor) \big) \Big)\\
& \supseteq&
\t_{Y*} \big(\cO_{Y'}(K_{Y'/Y}+m\t_Y^*K_Y-\Big\lfloor\frac{1}{N}E_1 \Big\rfloor) \big)\\
&=& \cO_Y(mK_Y)\otimes\cJ(||(m-1)K_Y||).\end{aligned}$$ This proves the lemma.
We now prove the first part of Theorem 1, stated in the introduction. We start from a surjective morphism $f: X\to Y$ between smooth projective varieties of the same dimension.
Changing the notation from the introduction, we let $ V$ and $ W$ be the respective Iitaka models of $X$ and $Y$, and we may assume, taking appropriate birational models, that we have a commutative diagram of [*morphisms*]{} $$\label{23}
\xymatrix{
X\ar[r]^{f}\ar[d]^{h_X}&Y\ar[d]^{h_Y}\ar[r]^{a_Y}& A\ar[d]^{\pi}\\
V\ar[r]^{g}& W\ar[r]^{a_W} &A/K
}$$ where $h_X$ and $h_Y$ are the respective Iitaka fibrations of $X$ and $Y$, $a_Y$ and $a_W$ are the respective Albanese morphisms of $Y$ and $W$, and $K$ is an abelian subvariety of $A:=\Alb(Y)$ (see [@HAC1], §2.1). We set $$\begin{aligned}
\cH_X:=h_{X*}(\cO_X(mK_X)\otimes\cJ(||(m-1)K_X||))&\quad&\cF_X:=a_{W*}g_*\cH_X
\\
\cH_Y:=h_{Y*}(\cO_Y(mK_Y)\otimes\cJ(||(m-1)K_Y||))& \quad&\cF_Y:=a_{W*} \cH_Y.\end{aligned}$$ When $m\geq 2$, we have $
\cF_Y\subset\cF_X$ by Lemma \[6\] and we denote by $\cQ$ the quotient sheaf $\cF_X/
\cF_Y$ on $A/K$.
[*Assume now*]{} $P_m(X)=P_m(Y)=M>0$. By Theorem 11.1.8 and Proposition 11.2.10 in [@Laz], we have $$\begin{aligned}
P_m(Y)&=&h^0(Y,
\cO_Y(mK_Y)\otimes\cJ(||mK_Y||))\\
&=&h^0(Y,
\cO_Y(mK_Y)\otimes\cJ(||(m-1)K_Y||))\\
&=&h^0(W,
\cH_Y)\\
&=&h^0(A/K, \cF_Y).\end{aligned}$$ Similarly, $$P_m(X)=h^0(V, \cH_X)=h^0(A/K, \cF_X).$$ Thus $\cH_Y\subset h_{Y*}(\cO_Y(mK_Y))$ is a nonzero torsion-free sheaf. Since $h_Y$ is a model of the Iitaka fibration of $Y$ whose general fibers are birationally isomorphic to abelian varieties ([@HAC1], Proposition 2.1), the latter sheaf has rank $1$. So the rank of $\cH_Y$ is also $1$. We have the same situation for $h_X$, hence the rank of $\cH_X$ is again $1$. On the other hand, we claim the following.
$\cQ=0$, hence $\cF_Y=\cF_X$.
In order to prove the Claim, we want to apply Proposition 2.3 in [@HAC1]. Namely, it is enough to prove $h^j(A/K,
\cF_Y\otimes P)=h^j(A/K, \cF_X\otimes P)$ for all $j\geq 0$ and all $P\in\Pic^0(A/K).$ We will first prove that when $j\geq 1$. By Lemma 2.1 in [@jiang], we have $$\label{26}
H^i(W, \cH_Y\otimes a_W^*P)=H^i(W, g_*\cH_X\otimes a_W^*P)=0,$$ for all $P\in \Pic^0(A/K)$ and all $i\geq 1$. We now prove $$\label{27}
R^ja_{W*}\cH_Y=R^ja_{W*}(g_*\cH_X)=0,$$ for all $j\geq 1$, as follows.
First we take a very ample line bundle $H$ on $A/K$ such that, for all $k\geq 1$ and $j\geq 0$, $$\label{28}H^k(A/K, R^ja_{W*}\cH_Y\otimes H)=H^k(A/K, R^ja_{W*}(g_*\cH_X)\otimes H)=0$$ and $R^ja_{W*}\cH_Y\otimes H$ and $R^ja_{W*}(g_*\cH_X)\otimes H$ are globally generated. Again by Lemma 2.1 in [@jiang], $$H^j(W, \cH_Y\otimes a_W^*H)=H^j(W, g_*\cH_X\otimes a_W^*H)=0,$$ for all $j\geq 1$. Therefore, by Leray’s spectral sequence and (\[28\]), we conclude that $$H^0(A/K, R^ja_{W*}\cH_Y\otimes H)=H^0(A/K, R^ja_{W*}(g_*\cH_X)\otimes H)=0$$ for $j\geq 1$. Since $R^ja_{W*}\cH_Y\otimes H$ and $R^ja_{W*}(g_*\cH_X)\otimes H$ are globally generated, we deduce that $R^ja_{W*}\cH_Y=R^ja_{W*}(g_*\cH_X)=0$, for all $j\geq 1$.
Applying the Leray spectral sequence to (\[26\]), we get, by (\[27\]), for all $i\geq 1$ and $P\in \Pic^0(A/K)$, $$H^i(A/K, \cF_Y\otimes P)=H^i(W, \cH_Y\otimes a_W^*P)=0,$$ and $$H^i(A/K, \cF_X\otimes P)=H^i(W, g_*\cH_X\otimes a_W^*P)=0.$$
Finally, for all $P\in\Pic^0(A/K)$, $$\begin{aligned}
h^0(A/K,
\cF_Y\otimes P)&=&\chi(A/K, \cF_Y\otimes P)=\chi(A/K,
\cF_Y)\\&=&h^0(A/K, \cF_Y)=M,\end{aligned}$$ and similarly, $$h^0(A/K, \cF_X\otimes P)=h^0(A/K, \cF_X)=M.$$
We have finished the proof of the Claim.
Since $W$ has maximal Albanese dimension, $a_W$ is generically finite onto its image $Z$; the rank of $\cH_Y$ is 1, and the rank of $\cF_X=\cF_Y=a_{W*}\cH_Y$ on $Z$ is $\deg(a_W)$. Consider the Stein factorization $$g:V\xrightarrow{p}U\xrightarrow{q}W,$$ where $p$ is an algebraic fiber space and $q$ is surjective and finite. Because $h^0(U, p_*\cH_X)=h^0(V, \cH_X)=M>0$, the nonzero torsion-free sheaf $p_*\cH_X$ has rank $\geq 1$. We can write $$\cF_X=a_{W*}g_*\cH_X=a_{W*}q_*(p_*\cH_X)$$ and conclude that the rank of $\cF_X$ on $Z$ is $\geq \deg(q)\cdot\deg(a_W)$. This implies $\deg(q)=1$, hence $g$ has connected fibers. Essentially, this is Hacon and Pardini’s proof of [@HAC1], Theorem 3.2.
We just saw that $g\circ h_X$ is an algebraic fiber space and we denote by $X_w$ a general fiber. The main ingredient is the following lemma.
\[ma\]In the above situation, the sheaf $$g_*\cH_X=(g\circ h_X)_*(\cO_X(mK_X)\otimes\cJ(||(m-1)K_X||))$$ has rank $P_m(X_w)>0$.
This lemma will be proved later. We first use it to finish the proof of the first part of Theorem 1.
Assume that $g$ is not birational. Since it is an algebraic fiber space, we have $\dim (W)<\dim (V)$. Hence by the easy addition formula ([@Mo], Corollary 1.7), we have $\dim (V)=\kappa(X)\leq
\kappa(X_w)+\dim (W)$, hence $\kappa(X_w)\geq 1$. Since $X$ is of maximal Albanese dimension, $X_w$ is also of maximal Albanese dimension, hence $P_{m}(X_w)\geq 2$ by Chen and Hacon’s characterization of abelian varieties ([@CH1], Theorem 3.2). Then, by Lemma \[ma\], the rank of $\cF_X$ on $Z$ is $\deg(a_W)\cdot P_m(X_w)\geq 2\deg(a_W)$, which contradicts the fact, established above, that $\cF_X$ has rank $\deg(a_W)$ on $Z$. This concludes the proof.
In order to prove Lemma \[ma\], we begin with an easy lemma.
\[5\]Let $X$ be a smooth projective variety, let $D_1$ be a divisor on $X$ with nonnegative Iitaka dimension, and let $D_2$ be an effective divisor on $X$. We have an inclusion $$\cJ(||D_1+D_2||)\supset
\cJ(||D_1||)\otimes\cO_X(-D_2).$$
Take $N>0$ such that $|ND_1|\neq\emptyset$. Choose a log-resolution $$\mu:
X'\to X$$ for $ND_1$, $ND_2$, and $N(D_1+D_2)$. Write $$\begin{aligned}
\mu^*(|ND_1|)&=&|W_1|+E_1\\
\mu^*(|ND_2|)&=&|W_2|+E_2\\
\mu^*(|N(D_1+D_2)|)&=&|W_3|+E_3,\end{aligned}$$ where $E_1$, $E_2$, and $E_3$ are the fixed divisors and $|W_1|$, $|W_2|$, and $|W_3|$ are free linear series. We have $$\begin{gathered}
N\mu^*D_2\succeq E_2\qquad \textmd{and} \qquad E_1+E_2\succeq E_3,\end{gathered}$$ hence $$\begin{aligned}
\mu_*(K_{X'/X}-\Big\lfloor\frac{1}{N}E_3 \Big\rfloor)&\supset&
\mu_*(K_{X'/X}-\Big\lfloor\frac{1}{N}(E_1+E_2) \Big\rfloor)\\
&\supset&\mu_*(K_{X'/X}-\Big\lfloor\frac{1}{N}(E_1+N\mu^*D_2) \Big\rfloor)\\
&=&\mu_*(K_{X'/X}-\Big\lfloor\frac{1}{N}E_1 \Big\rfloor)\otimes\cO_X(-D_2).\end{aligned}$$ By the definition of asymptotic multiplier ideal sheaves, this proves Lemma \[5\].
We will reduce Lemma \[ma\] to Proposition 3.6 in [@jiang]. Since $Y$ is of maximal Albanese dimension and $h_Y$ is a model of the Iitaka fibration of $Y$, by a theorem of Kawamata (see also Theorem 3.2 in [@jiang]), there exists an étale cover $\pi_Y:
\widetilde{Y}\to Y$ induced by an étale cover of $A$ and a commutative diagram: $$\xymatrix{
\widetilde{Y}\ar[r]^{\pi_Y}\ar@{-->}[d]_{h_{\aY}}& Y\ar[d]^{h_Y}\\
\WW\ar[r]^{b_{\WW}}&W,}$$ where $\WW$ is a smooth projective variety of general type, the rational map $h_{\aY}$ is a model of the Iitaka fibration of $\aY$, and $b_{\WW}$ is generically finite and surjective.
Let $\aX$ be a connected component of $X\times_Y\widetilde{Y}$, denote by $\pi_{\aX}$ the induced morphism $\aX\to X$, and denote by $f_{\aX}$ the induced morphism $\aX\to\aY$. Denote by $k$ and $k_{\aX}$ respectively the morphism $g\circ h_X=h_Y\circ
f$ and the map $h_{\aY}\circ f_{\aX}$. After birational modifications of $\aX$, we may suppose that $k_{\aX}$ is a morphism such that $k_{\aX}(E)$ is a proper subvariety of $\WW$, where $E$ is the $\pi_{\aX}$-exceptional divisor. All in all, we have the commutative diagram: $$\xymatrix@R=30pt@C=30pt@M=+5pt{\aX\ar[r]^{\pi_{\aX}}\ar[d]^{f_{\aX}}\ar@/_2pc/[dd]_{k_{\aX}}&X\ar[d]^f\ar@/^2pc/[dd]^{k}\\
\widetilde{Y}\ar[r]^{\pi_Y}\ar@{-->}[d]^{h_{\aY}}& Y\ar[d]^{h_Y}\\
\WW\ar[r]^{b_{\WW}}&W .}$$ We then take the Stein factorization: $$k_{\aX}:\aX\xrightarrow{k_1} W_1\xrightarrow{b_{W_1}}\WW.$$ The important point is that $W_1$ is still of general type. Again by taking birational modifications of $\aX$ and $W_1$, we may assume that $k_1: \aX\to W_1$ is an algebraic fiber space between smooth projective varieties. We can apply Proposition 3.6 in [@jiang] to the following diagram: $$\xymatrix{
\widetilde{X}\ar[rr]^{\pi_{\aX}}\ar[d]^{k_1}&&X\ar[d]^k\\
W_1\ar[rr]_{b_{\WW}\circ b_{W_1}}&&W.}$$ It follows that the sheaf $$k_*(\cO_X(mK_X)\otimes\cJ(||(m-1)K_{X/W}+k^*K_W||))\otimes\cO_W(-(m-2)K_W)$$ has rank $P_m(X_w)$. By Lemma 3.4 in [@jiang], the line bundle $(m-1)K_{X/W}+k^*K_W$ has nonnegative Iitaka dimension. By Lemma \[5\], $$\cJ(||(m-1)K_X||)\supset \cJ(||(m-1)K_{X/W}+k^*K_W||)\otimes
\cO_X(-(m-2)k^*K_{W}).$$ Therefore, $$\begin{aligned}
&&k_*\big(\cO_X(mK_X) \big)\\
&\supset& k_* \big(\cO_X(mK_X)\otimes\cJ(||(m-1)K_X||) \big)
\\
&\supset& k_* \big(\cO_X(mK_X)\otimes\cJ(||(m-1)K_{X/W}+k^*K_W||) \big)\otimes\cO_W \big(-(m-2)K_W \big).\end{aligned}$$ Since the ranks of the first and the third sheaves are both $P_m(X_w)$, so is the rank of the second.
A complete description of $f: X\to Y$
=====================================
By using Kawamata’s Theorem 13 in [@KA] (see also Theorem 3.2 in [@jiang]), we obtain the following complete description of $f$.
\[thm7\] Let $f: X\to Y$ be a surjective morphism of smooth projective varieties of the same dimension, with $Y$ of maximal Albanese dimension.
If $P_m(X)=P_m(Y)$ for some $m\geq 2$, there exist
- a normal projective variety $V_X$ of general type,
- an abelian variety $A_X$,
- a finite abelian group $G$ which acts faithfully on $V_X$ and on $A_X$ by translations,
- a subgroup $G_2$ of $G$,
such that
- $X$ is birational to $(A_X\times V_X)/G$, where $G$ acts diagonally on $A_X\times V_X$,
- $Y$ is birational to $(A_Y\times V_Y)/G_1$, where $V_Y=V_X/G_2$, $A_Y=A_X/G_2$, and $G_1:=G/G_2$ acts diagonally on $A_Y\times V_Y$,
- $f$ is birational to the quotient morphism $(A_X\times V_X)/G\to
(A_Y\times V_Y)/G_1$.
In the diagram (\[23\]), we already know that $g: V\to W$ is birational so we may assume that $V=W$ and $g$ is the identity. We then consider the diagram: $$\label{29}
\xymatrix{
X\ar[r]^{f}\ar[d]^{h_X}&Y\ar[r]^{a_Y}\ar[d]^{h_Y}&A\ar[d]\\
V\ar@{=}[r]& V\ar[r]^{a_V} & A/K. }$$ Taking the Stein factorizations for $f$ and $a_Y$, we may assume that $X$ and $Y$ are normal and $f$ and $a_Y$ are finite. Similarly we take the Stein factorization for $Y\xrightarrow{a_Y} A\to A/K$ and may assume that $V$ is normal and $a_V$ is finite.
By Poincaré reducibility, there exists an isogeny $B\to
A/K$ such that $A\times_{A/K}B\simeq K\times B$. We denote by $H$ the kernel of this isogeny. Apply the étale base change $B\to A/K$ to diagram (\[29\]) and get $$\label{30}
\xymatrix{
\ax\ar[r]^{\af}\ar[d]^{h_{\ax}}&\ay\ar[r]^-{a_{\ay}}\ar[d]^{h_{\ay}}& K\times B\ar[d]\\
\av\ar@{=}[r]& \av\ar[r]^{a_{\av}} & B, }$$ where
- $\av=V\times_{A/K}B$ and $\ay=Y\times_V\av$ (which are connected because $a_{Y}$ and $a_V$ are the Albanese maps),
- $\ax=X\times_Y\ay$ (which is also connected because $\ax=X\times_Y\ay=X\times_Y(Y\times_V\av)=X\times_V\av$),
- $h_X: X\to V$ is an algebraic fiber space.
Let $A_X$ and $A_Y$ be the respective general fibers of $h_{\ax}$ and $h_{\ay}$. We have the following induced diagram from (\[30\]): $$\xymatrix{ A_X\ar[r]_{\beta}\ar@/^1pc/[rr]^{\alpha_X}& A_Y\ar[r]_{\alpha_Y}& K
}$$ By Proposition 2.1 in [@HAC1], $A_X$ and $A_Y$ are birational to abelian varieties. Hence the morphisms $\alpha_X$ and $\alpha_Y$ are birationally equivalent to étale covers. Since $a_{\ay}$ and $\a_{\ay}\circ \af$ are finite, $\alpha_X$ and $\alpha_Y$ are also finite. Thus $\alpha_X$ and $\alpha_Y$ are isogenies of abelian varieties by Zariski’s Main Theorem. We denote by $\widetilde{G}$, $\widetilde{G}_1$, and $\widetilde{G}_2$ the abelian groups $\Ker(A_X\to K)$, $\Ker(A_Y\to K)$, and $\Ker(A_X\to A_Y)$ respectively. Then $\widetilde{G}_1=\widetilde{G}/\widetilde{G}_2$ and $A_Y=A_X/\widetilde{G}_2$. Let $k\in K$ be a general point, let $V_Y$ be the normal variety $a_{\ay}^{-1}(k\times B)$, and let $V_X$ be the normal variety $\af^{-1}a_{\ay}^{-1}(k\times B)$.
We know that $A_X$ and $A_Y$ respectively act on $\ax$ and $\ay$ in such a way that $\af$ is equivariant for the $A_X$-action on $\ax$ and the $A_Y$-action on $\ay$. Furthermore, the actions induce a faithful $\widetilde{G}$-action on $V_X$ and a faithful $\widetilde{G}_1$-action on $V_Y$, and we have an $A_X$-equivariant isomorphism $\ax\simeq(A_X\times
V_X)/\widetilde{G}$ and an $A_Y$-equivariant isomorphism $\ay\simeq
(A_Y\times V_Y)/\widetilde{G}_1$, where $\widetilde{G}$ acts on $A_X\times V_X$ diagonally and $\widetilde{G}_1$ acts on $A_Y\times
V_Y$ diagonally.
The induced morphism $$\xymatrix{
V_X\ar[r]^{\af\vert_{V_X}}\ar[d]^{h_{\ax}}& V_Y\ar[d]^{h_{\ay}}\\
V\ar@{=}[r]&V }$$ is equivariant for the $\widetilde{G}$-action on $V_X$ and the $\widetilde{G}_1$-action on $V_Y$. Thus $V_Y=V_X/\widetilde{G}_2$ and $\af\vert_{V_X}$ is the quotient morphism.
Thus we obtain $A_Y=A_X/\widetilde{G_2}$ and $V_Y=V_X/\widetilde{G_2}$, and $\af: \ax\to\ay$ is the quotient morphism $(A_X\times V_X)/\widetilde{G}\to (A_Y\times V_Y)/\widetilde{G}_1$, so $$\af: \ax=(A_X\times V_X)/\widetilde{G}\to \ay=(A_Y\times V_Y)/\widetilde{G}_1$$ is also the quotient morphism.
Let $G=\Ker(A_X\times B\to A)$ and $G_1=\Ker(A_Y\times B\to A)$. We have exact sequences of groups $$1\to \widetilde{G}\to G\to H\to 1
\quad \rm{and}\; \quad 1\to \widetilde{G}_1\to
G_1\to H\to 1.$$ Then $X=(A_X\times V_X)/G$ and $Y=(A_Y\times V_Y)/G_1$, and $f$ is the quotient map. This proves Theorem \[thm7\] with $G_2= \widetilde{G}_2\subset G$.
Examples {#exa}
========
In the next two examples, we see that the conclusion of our theorem does not hold in general, even for surfaces of general type.
\[e1\]Let $C_1$ and $C_2$ be smooth projective curves of genus $2$ with respective hyperelliptic involutions $i_1$ and $i_2$. Define $Y$ to be the minimal resolution of singularities of $(C_1\times C_2)/(i_1, i_2)$. Let $X$ be the blow-up of $C_1\times C_2$ at the $36$ fixed points of $(i_1, i_2)$. There is a 2-to-1 morphism $f: X\to Y$. We have $K_Y^2=\frac{1}{2}K_{C_1\times C_2}^2=4$ and $c_2(Y)=\frac{1}{2}(c_2(C_1\times C_2)-36)+72=56$. Since $Y$ is a minimal surface, we have $P_2(Y)=K_Y^2+\frac{1}{12}(K_Y^2+c_2(Y))=9$. We also have $P_2(X)=P_2(C_1\times C_2)=3\times 3=9$. Hence we have a nonbirational morphism $f: X\to Y$ between smooth projective surfaces of general type with $q(Y)=0$ (so that $Y$ does not have maximal Albanese dimension!) and $P_2(X)=P_2(Y)=9$.
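For the reader's convenience, here is the arithmetic behind these equalities (standard surface theory, together with $h^0(C,\omega_C^{\otimes 2})=3g-3=3$ for a curve $C$ of genus $g=2$): $$\chi(\cO_Y)=\tfrac{1}{12}\big(K_Y^2+c_2(Y)\big)=\tfrac{1}{12}(4+56)=5, \qquad P_2(Y)=K_Y^2+\chi(\cO_Y)=4+5=9,$$ $$P_2(C_1\times C_2)=h^0(C_1,\omega_{C_1}^{\otimes 2})\cdot h^0(C_2,\omega_{C_2}^{\otimes 2})=3\cdot 3=9.$$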
It turns out that the situation in the case of surfaces of general type can be completely worked out. More precisely, one can show that if $f: S\rightarrow T$ is a nonbirational morphism between smooth projective surfaces of general type such that $P_m(S)=P_m(T)$ for some $m\geq 2$, then $m=2$ and one of the following occurs:
- either $S$ is birational to the product of two smooth projective curves of genus 2, and $f$ is birationally equivalent to the quotient by the diagonal hyperelliptic involution (see Example \[e1\] above);
- or $S$ is birational to the theta divisor of the Jacobian of a smooth projective curve of genus 3 and $f$ is birationally equivalent to the bicanonical map of $S$;
- or $S$ is birational to a double cover of a principally polarized abelian surface branched along a divisor in $|2\Theta|$ having at most double points and $f$ is birationally equivalent to the bicanonical map of $S$.
In higher dimensions, we have many more examples.
\[e2\](Compare with [@K], Proposition 8.6.1) We denote by $\mathbf{P}(a_0^{s_0}, \ldots, a_k^{s_k})$ the weighted projective space with $s_i$ coordinates of weight $a_i$ (see [@D]). For any integer $k\geq 3$, denote by $P_X$ the weighted projective space $\mathbf{P}(1, (2k)^{4k+5}, (2k+1)^{4k-3})$ with coordinates $x_i$ and by $P_Y$ the weighted projective space $\mathbf{P}(2, (2k)^{4k+5}, (2k+1)^{4k-3})$ with coordinates $y_i$. As in the proof of Proposition 8.6.1 in [@K], one can check that $P_X$ and $P_Y$ both have canonical singularities. There is a natural degree-2 morphism $\varepsilon: P_X\to P_Y$ defined by $y_0=x_0^2$ and $y_i=x_i$ for $i\geq 1$.
Let $Y'$ be a general hypersurface of weighted degree $d=16k^2+8k$ in $P_Y$ and let $X'$ be the pull-back by $\varepsilon$ of $Y'$. Since $2k(2k+1)|d$ and $Y'$ is general, $X'$ is also general and both $X'$ and $Y'$ have canonical singularities. Take resolutions $X\to X'$ and $Y\to Y'$ such that $\varepsilon$ induces a degree-2 morphism $f: X\to Y$. The canonical sheaves are $\omega_{X'}=\cO_{X'}(2)$ and $\omega_{Y'}=\cO_{Y'}(1)$. Since both $X'$ and $Y'$ have canonical singularities, we have, for any integer $m\geq 0$, $$P_m(X)=h^0(X', \cO_{X'}(2m)) \quad \textrm{and}\quad P_m(Y)=h^0(Y', \cO_{Y'}(m)).$$ It follows from Theorem 1.4.1 in [@D] that for $m$ even and $<2k$, we have $P_m(X)=P_m(Y)=1$. By Theorem 4.2.2 and Corollary 2.3.6 in [@D], $q(X)=q(Y)=0$.
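As a quick consistency check on the canonical sheaves (using the standard adjunction formula $\omega_{Z}\cong\cO_{Z}(d-\sum_i a_i)$ for a quasi-smooth hypersurface of degree $d$ in $\mathbf{P}(a_0,\dots,a_N)$), the sums of the weights are $$2+(4k+5)\cdot 2k+(4k-3)(2k+1)=16k^2+8k-1 \qquad\mbox{and}\qquad 1+(4k+5)\cdot 2k+(4k-3)(2k+1)=16k^2+8k-2$$ for $P_Y$ and $P_X$ respectively; since $d=16k^2+8k$, this gives back $\omega_{Y'}=\cO_{Y'}(1)$ and $\omega_{X'}=\cO_{X'}(2)$.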
Under the assumptions of our theorem, one might expect $f: X\to Y$ to be birationally equivalent to an étale morphism. However, the example below (see also Example 1 in [@CH4]) shows that this is not the case in general.
Let $G=\mathbb{Z}_{rs}$ and let $G_2=s\mathbb{Z}_{rs}$ be the subgroup of $G$ generated by $s$, with $s\geq 2$ and $r\geq
2$. Let $G_1=G/G_2\simeq \mathbb{Z}_s$. Consider an elliptic curve $E$, let $B_1$ and $B_2$ be two points on $E$, and let $L$ be a line bundle of degree $1$ such that $B=(rs-a)B_1+aB_2\in |rsL|$ with $1\leq a\leq rs-2$ and $(a, rs)=1$. Taking the normalization of the $(rs)$-th root of $B$, we get a smooth curve $C$ and a Galois cover $\pi: C\to E$ with Galois group $G$. By construction, $\pi$ ramifies at two points, $B_1$ and $B_2$. Following [@Be] §VI.12, we have $h^0(C,
\omega_C^2)^G=2$.
Let $L^{(i)}$ be $L^i(-\lfloor\frac{iB}{rs}\rfloor)$ and denote $(L^{(i)})^{-1}$ by $L^{-(i)}$. Then, by Proposition 9.8 in [@K1], $$\pi_*\cO_C=\bigoplus_{i=0}^{rs-1}L^{-(i)}.$$ Let $C_1$ be the curve $$\underline{Spec}(\bigoplus_{i=0}^{s-1}L^{-(ri)}),$$ where $\bigoplus_{i=0}^{s-1}L^{-(ri)}$ has the subalgebra structure of $\pi_*\cO_C$. Consider the Stein factorization $$\pi:C\xrightarrow{g} C_1\xrightarrow{\pi_1} E.$$ Then $C_1=C/G_2$ and $\pi_1$ is a Galois cover with Galois group $G_1$ which also ramifies only at $B_1$ and $B_2$. Hence we again have $$h^0(C_1,
\omega_{C_1}^2)^{G_1}=2.$$
Finally we take an abelian variety $K$ such that $G$ acts freely on $K$ by translations and set $K_1=K/G_2$. Let $$\widetilde{X}=C\times
K\quad{\rm ,}\quad \widetilde{Y}=C_1\times K$$ and $$X=\widetilde{X}/G=(C\times
K)/G\quad{\rm ,}\quad Y=\widetilde{Y}/G=(C_1\times K_1)/G_1,$$ where $G$ and $G_1$ act diagonally. Hence $\widetilde{X}$ and $\widetilde{Y}$ are étale covers of $X$ and $Y$ respectively. There is a natural finite dominant morphism $f: X\to Y$ of degree $r$. Since its lift $\widetilde{f}: \widetilde{X}\to \widetilde{Y}$ is not étale, $f$ is not étale.
Since $$H^0(X, \omega_X^2)\simeq H^0(\widetilde{X}, \omega_{\widetilde{X}}^2)^G \simeq H^0(C, \omega_C^2)^G$$ and $$H^0(Y, \omega_Y^2)\simeq H^0(\widetilde{Y}, \omega_{\widetilde{Y}}^2)^{G_1} \simeq H^0(C_1, \omega_{C_1}^2)^{G_1},$$ we have $$P_2(X)=P_2(Y)=2.$$
Pluricanonical maps of varieties of maximal Albanese dimension {#last}
==============================================================
Let $X$ be a smooth projective variety of maximal Albanese dimension. As mentioned in the introduction, Chen and Hacon proved in [@cha] and [@asia] that $\phi_{6K_X}(X)$ has dimension $\kappa(X)$; if $X$ is moreover of general type, $\phi_{6K_X}$ is birational onto its image. They also showed that if $\chi(\omega_X)>0$, the map $\phi_{3K_X}$ is already birational onto its image. Pareschi and Popa provided in [@pp2], §6, a conceptual approach to these theorems based on their regularity and vanishing theorems.
We prove a unifying statement for varieties with maximal Albanese dimension which are not necessarily of general type. The proof is parallel to that of Pareschi and Popa.
In this section, we will always assume $f: X\rightarrow Y$ is a birational model of the Iitaka fibration of $X$.
\[iit\] If $X$ is a smooth projective variety with maximal Albanese dimension, the linear system $|\cO_X(5K_X)\otimes f^*P|$ induces the Iitaka fibration. In particular, $\phi_{5K_X}$ is a model of the Iitaka fibration of $X$.
We may as in (\[23\]) assume that we have a diagram $$\begin{aligned}
\xymatrix{ X\ar[d]_f\ar[r]^-{a_X}&\Alb(X)\ar[d]^{f_*}\\
Y\ar[r]^-{a_Y}&\Alb(Y)}\end{aligned}$$ where $f$ is the Iitaka fibration of $X$ and $a_X$ and $a_Y$ are the respective Albanese morphisms of $X$ and $Y$.
Since $f$ is a model of the Iitaka fibration of $X$, $f_*(\omega_X^2\otimes\cJ(||K_X||))$ is a torsion-free rank $1$ sheaf on $Y$. We now use the following lemma ([@jiang], Lemma 2.1):
\[mul\]Suppose that $f: X\rightarrow Y$ is a surjective morphism between smooth projective varieties, $L$ is a $\Q$-divisor on $X$, and the Iitaka model of $(X, L)$ dominates $Y$. Assume that $D$ is a nef $\Q$-divisor on $Y$ such that $L+f^*D$ is a divisor on $X$. Then we have$$H^i(Y,
R^jf_*(\cO_X(K_X+L+f^*D)\otimes \cJ(||L||)\otimes Q))=0,$$ for all $i\geq 1$, $j\geq 0$, and all $Q\in \Pic^0(X)$.
By Lemma \[mul\], we have $$H^i(Y, f_*(\cO_X(2K_X)\otimes\cJ(||K_X||)\otimes Q))=0$$ for all $i\geq 1$ and $Q\in\Pic^0(X)$. As in Lemma 2.6 in [@jiang], $R^ja_{Y*}(f_*(\cO_X(2K_X)\otimes\cJ(||K_X||)\otimes Q))=0$ for all $j\geq 1$. Hence $$\begin{aligned}
&&H^i(\Alb(Y), a_{Y*}f_*(\cO_X(2K_X)\otimes\cJ(||K_X||)\otimes Q))\\
&=&H^i(Y, f_*(\cO_X(2K_X)\otimes\cJ(||K_X||)\otimes Q))\\
&=&0,\end{aligned}$$ for all $i\geq 1$ and $Q\in\Pic^0(X)$. Thus for any $Q\in V_0(\omega_X)\subset \Pic^0(X)$, $a_{Y*}f_*(\cO_X(2K_X)\otimes\cJ(||K_X||)\otimes Q)$ is a nonzero IT-sheaf of index $0$ and in particular, it is $M$-regular. By [@pp2], Corollary 5.3, $a_{Y*}f_*(\cO_X(2K_X)\otimes\cJ(||K_X||)\otimes Q)$ is continuously globally generated. Since $a_Y$ is generically finite, the exceptional locus $Z_1$ of $a_Y$ is a proper closed subset of $Y$. Then $f_*(\cO_X(2K_X)\otimes\cJ(||K_X||)\otimes Q)$ is continuously globally generated away from $Z_1$. By definition, this means that for any open subset $V\subset\Pic^0(Y)$, the evaluation map $$\begin{gathered}
\bigoplus_{P\in V}H^0(Y, f_*(\cO_X(2K_X)\otimes\cJ(||K_X||)\otimes Q)\otimes P)\otimes P^{-1}
\\\rightarrow f_*(\cO_X(2K_X)\otimes\cJ(||K_X||)\otimes Q)\end{gathered}$$ is surjective away from $Z_1$.
Now we claim that there exists an open dense subset $U\subset Y-Z_1$ such that the sheaf $$a_{Y*}(\cI_y\otimes f_*(\cO_X(3K_X)\otimes \cJ(||2K_X||)))$$ is $M$-regular for any $y\in U$.
We first assume the claim and finish the proof of the theorem.
We conclude by the claim that $\cI_y\otimes f_*(\cO_X(3K_X)\otimes \cJ(||2K_X||))$ is continuously globally generated away from $Z_1$. Denote respectively by $\cL$ and $\cL_1$ the rank-$1$ torsion-free sheaves $f_*(\cO_X(2K_X)\otimes\cJ(||K_X||))$ and $f_*(\cO_X(3K_X)\otimes\cJ(||2K_X||))$. Let $U_1$ be a dense open subset of $Y-Z_1$ such that $\cL$ and $\cL_1$ are locally free on $U_1$. Then by [@pp3], Proposition 2.12, $\cL_1\otimes \cL$ is very ample over $U\cap U_1$. We have $\cL\otimes\cL_1\hookrightarrow f_*(\cO_X(5K_X))$, thus $f_*(\cO_X(5K_X))$ is very ample on a dense open subset of $Y$. This concludes the proof of the theorem.
For the claim, let $$U\subset U_1 \bigcap
\big(Y-\bigcup_{T_i}\bigcap_{Q\in T_i}\Bs(|f_*(\omega_X\otimes Q)|)\big)$$ be any dense open subset of $Y$, where $T_i$ runs through all the components of $V_0(\omega_X)$ and $\Bs(|f_*(\omega_X\otimes Q)|)$ denotes the locus where the evaluation map $$H^0(Y, f_*(\omega_X\otimes Q))\otimes\cO_Y\rightarrow f_*(\omega_X\otimes Q)$$ is not surjective. For each component $T_i$ of $V_0(\omega_X)$, we may write $T_i=P_i+f^*S_i$, where $S_i$ is a subtorus of $\Pic^0(Y)$ and $P_i\in\Pic^0(X)$ (see [@GL2]).
Again, by Lemma \[mul\], we have $$H^i(Y, \cL_1\otimes Q)=0$$ for all $i\geq 1$ and any $Q\in \Pic^0(Y)$. For $y\in U$, consider the exact sequence $$0\rightarrow \cI_y\otimes \cL_1\rightarrow \cL_1\rightarrow \mathbb{C}_y\rightarrow 0.$$ We push forward this short sequence to $\Alb(Y)$. Since $y\in U$, we have $$0\rightarrow a_{Y*}(\cI_y\otimes \cL_1)\rightarrow a_{Y*}\cL_1\rightarrow \mathbb{C}_{a_Y(y)}\rightarrow 0.$$
Hence $H^i(Y, a_{Y*}(\cI_y\otimes \cL_1)\otimes Q)=0$ for any $i\geq 2$ and $Q\in \Pic^0(Y)$. We now assume that $a_{Y*}(\cI_y\otimes \cL_1)$ is not $M$-regular. Then by definition of $M$-regularity, we have $$\begin{aligned}
\label{nMregular}\codim_{\Pic^0(Y)}V_1(a_{Y*}(\cI_y\otimes \cL_1))\leq 1.\end{aligned}$$
Hence $y$ is a base-point of all sections in $H^0(Y, \cL_1\otimes P_s)$, for all $s\in V_1(a_{Y*}(\cI_y\otimes \cL_1))$.
On the other hand, by [@jiang Lemma 2.2], $$\dim H^0(X, \cO_X(3K_X)\otimes f^*P)$$ is constant for $P\in\Pic^0(Y)$. Then, $$\begin{aligned}
\dim H^0(Y, \cL_1\otimes P)&=&\dim H^0(Y, \cL_1)=
\dim H^0(X, \cO_X(3K_X))\\&=&\dim H^0(X, \cO_X(3K_X)\otimes f^*P).\end{aligned}$$ Hence the inclusion $$H^0(X, \cO_X(3K_X)\otimes\cJ(||2K_X||)\otimes f^*P)\hookrightarrow H^0(X, \cO_X(3K_X)\otimes f^*P)$$ is an isomorphism. Therefore, $y\in \Bs(|f_*\cO_X(3K_X)\otimes P_s|)$, for all $s\in V_1(a_{Y*}(\cI_y\otimes \cL_1))$.
Since $y\in Y-\bigcup_{T_i}\bigcap_{Q\in T_i}\Bs(|f_*(\omega_X\otimes Q)|)$, let $V_i\subset S_i$ be a dense open subset such that $y\notin \Bs|f_*(\omega_X\otimes Q)|$, for any $Q\in P_i+f^*V_i$.
We may shrink $U$ so that $f_*(\cO_X(K_X)\otimes P_i)$ and $f_*(\cO_X(2K_X)\otimes P_i^{-1})$ are locally free on $U$ for all $i$. Moreover, we can require that, for each $i$, the multiplication $$f_*(\cO_X(K_X)\otimes P_i)\otimes f_*(\cO_X(2K_X)\otimes P_i^{-1})\rightarrow f_*\cO_X(3K_X)$$ is an isomorphism on $U$, since both sheaves are of rank $1$.
We then conclude that $y$ is a base point of all sections of $$H^0(Y, f_*(\cO_X(2K_X)\otimes P_i^{-1})\otimes Q^{'})$$ where $Q^{'}\in V_1(a_{Y*}(\cI_y\otimes \cL_1))-V_i$.
We may further shrink $U$ so that $$f_*(\cO_X(2K_X)\otimes \cJ(||K_X||)\otimes P_i^{-1})|_U=f_*(\cO_X(2K_X)\otimes P_i^{-1})|_U$$ is locally free for each $i$. Then $y\in U$ belongs to $$\Bs|f_*(\cO_X(2K_X)\otimes \cJ(||K_X||)\otimes P_i^{-1})\otimes Q^{'}|$$ for each $Q^{'}\in V_1(a_{Y*}(\cI_y\otimes \cL_1))-V_i$.
By [@cha], Theorem 1, the union of all the $S_i$ generates $\Pic^0(Y)$. Hence by (\[nMregular\]), for some $i$, $V_1(a_{Y*}(\cI_y\otimes \cL_1))-V_i$ contains an open subset of $\Pic^0(Y)$, and this contradicts the fact that $f_*(\cO_X(2K_X)\otimes \cJ(||K_X||)\otimes P_i^{-1})$ is continuously globally generated away from $Z_1$. This concludes the proof of the claim.
Our Theorem \[iit\] is just an analog of Theorem 6.7 in [@pp2]. The main point is just that $a_{Y*}f_*(\cO_X(2K_X)\otimes\cJ(||K_X||))$ is $M$-regular. On the other hand, if $X$ is of general type, of maximal Albanese dimension, and if moreover $a_X(X)$ is not ruled by tori, Pareschi and Popa proved that $a_{X*}\omega_X$ is $M$-regular, which is the main ingredient of the proof of Theorem 6.1 in [@pp2]. If $X$ is not of general type, $a_X(X)$ is always ruled by tori of dimension $n-\kappa(X)$. But we still have:
\[iit2\]If $X$ is a smooth projective variety with maximal Albanese dimension $n$, and if its Albanese image $a_X(X)$ is not ruled by tori of dimension $>n-\kappa(X)$, the map $\phi_{3K_X}$ is a model of the Iitaka fibration of $X$.
We just need to show that under our assumptions, and with the notation of the proof of Theorem \[iit\], $a_{Y*}f_*(\omega_X)$ is $M$-regular. The rest is the same as the proof of Theorem \[iit\]. By Kawamata’s theorem [@KA Theorem 13], we have the following commutative diagram:
$$\begin{aligned}
\xymatrix{
\widehat{Y}\times \widetilde{K}\ar[d]^{pr_1}& \widetilde{X}\ar[l]_(.4){\mu}\ar[r]^-{\pi_X}\ar[d]^{\widehat{f}}&X\ar[r]^(.4){a_X}\ar[d]^f& \Alb(X)\ar[d]^{f_*}\\
\widehat{Y}\ar@{=}[r]&\widehat{Y}\ar[r]^{b_Y}&Y\ar[r]^-{a_Y}&\Alb(Y),}\end{aligned}$$
where $\pi_X$ is birationally equivalent to a finite étale cover of $X$ induced by an isogeny of $\Alb(X)$, $\mu$ is a birational morphism, $\widetilde{K}$ is an abelian variety isogenous to $\ker f_*$, $\widehat{Y}$ is a smooth projective variety of general type, and $b_Y$ is generically finite. We set $g_Y=a_Y\circ b_Y$.
Since $a_X(X)$ is not ruled by tori of dimension $>n-\kappa(X)$, we conclude that $g_Y(\widehat{Y})=a_Y(Y)$ is not ruled by tori. We make the following:
[**Claim:**]{} $g_{Y*}\omega_{\widehat{Y}}$ is $M$-regular.
We first see how the Claim implies Theorem \[iit2\]. Since $\widetilde{K}$ is an abelian variety, we have obviously $pr_{1*}\omega_{\widehat{Y}\times\widetilde{K}}=\omega_{\widehat{Y}}$. Hence $$g_{Y*}pr_{1*}\omega_{\widehat{Y}\times\widetilde{K}}=g_{Y*}pr_{1*}\mu_*\omega_{\widetilde{X}}=
a_{Y*}f_*\pi_{X*}\omega_{\widetilde{X}}$$ is $M$-regular on $\Alb(Y)$. On the other hand, $\omega_X$ is a direct summand of $\pi_{X*}\omega_{\widetilde{X}}$ since $\pi_X$ is birationally equivalent to an étale cover. Therefore, $a_{Y*}f_*\omega_X$ is a direct summand of $g_{Y*}pr_{1*}\omega_{\widehat{Y}\times\widetilde{K}}$ and hence is $M$-regular.
We now prove the Claim.
We first define the following subset of $\Pic^0(Y)$ for any $i\geq 0$: $$V_i(\widehat{Y}, \Pic^0(Y)):=\{P\in \Pic^0(Y): H^i(\widehat{Y}, \omega_{\widehat{Y}}\otimes g_Y^*P)\neq 0\}.$$ Since the image of $g_Y: \widehat{Y}\rightarrow \Alb(Y)$ is not ruled by tori, the same argument as in the last part of the proof of Theorem 3 in [@EL] shows that $\codim_{\Pic^0(Y)}V_i(\widehat{Y}, \Pic^0(Y))>i$ for any $i\geq 1$. On the other hand, by Grauert-Riemenschneider vanishing, $R^ig_{Y*}\omega_{\widehat{Y}}=0$ for any $i\neq 0$. Thus $$H^i(\widehat{Y}, \omega_{\widehat{Y}}\otimes g_Y^*P)\simeq H^i(\Alb(Y), g_{Y*}\omega_{\widehat{Y}}\otimes P).$$ Hence we have $V_i(g_{Y*}\omega_{\widehat{Y}})=V_i(\widehat{Y}, \Pic^0(Y))$ as subsets of $\Pic^0(Y)$. This finishes the proof of the Claim.
[CH2]{} A. Beauville, Complex algebraic surfaces, [*London Math. Soc. Student Text*]{} [**34**]{}, Cambridge University Press, 1992.
J.A. Chen and C.D. Hacon, Pluricanonical maps of varieties of maximal Albanese dimension, [*Math. Ann.*]{} [**320**]{} (2001), 367–380.
J.A. Chen and C.D. Hacon, On algebraic fiber spaces over varieties of maximal Albanese dimension, [*Duke Math. J.*]{} [**111**]{} (2002), 159–175.
J.A. Chen and C.D. Hacon, Linear series of irregular varieties, [*Algebraic geometry in East Asia (Kyoto, 2001)*]{}, 143–153, World Scientific 2002.
J.A. Chen and C.D. Hacon, Characterization of abelian varieties, [*Invent. Math.*]{} [**143**]{} (2001), 435–447.
J.A. Chen and C.D. Hacon, Varieties with $P_3(X)=4$ and $q(X)=\dim(X)$, [*Ann. Sc. Norm. Super. Pisa*]{} (5) [**3**]{} (2004), 399–425.
I. Dolgachev, Weighted projective varieties, in [*Group actions and vector fields (Vancouver, B.C., 1981),*]{} 34–71, Lecture Notes in Math. [**956**]{}, Springer, Berlin, 1982.
L. Ein and R. Lazarsfeld, Singularities of theta divisors and the birational geometry of irregular varieties, [*J. Amer. Math. Soc.*]{} [**10**]{} (1997), 243–258.
O. Fujino, Algebraic fiber spaces whose general fibers are of maximal Albanese dimension, [*Nagoya Math. J.*]{} [**172**]{} (2003), 111–127.
M. Green and R. Lazarsfeld, Higher obstructions to deforming cohomology groups of line bundles, [*J. Amer. Math. Soc.*]{} [**4**]{} (1991), 87–103.
C.D. Hacon and R. Pardini, On the birational geometry of varieties of maximal Albanese dimension, [*J. Reine Angew. Math.*]{} [**546**]{} (2002), 177–199.
Z. Jiang, An effective version of a theorem of Kawamata on the Albanese map, to appear in [*Commun. Contemp. Math*]{}.
Y. Kawamata, Characterization of abelian varieties, [*Compos. Math.*]{} [**43**]{} (1983), 253–276.
J. Kollár, Shafarevich maps and plurigenera of algebraic varieties, [*Invent. Math.*]{} [**113**]{} (1993), 177–215.
J. Kollár, [*Shafarevich Maps and Automorphic Forms*]{}, Princeton University Press, 1995.
R. Lazarsfeld, [*Positivity in algebraic geometry I & II*]{}, Ergebnisse der Mathematik und ihrer Grenzgebiete [**48**]{} and [**49**]{}, Springer-Verlag, Heidelberg, 2004.
S. Mori, Classification of higher-dimensional varieties, [*Algebraic Geometry, Browdoin 1985*]{}, [Proc. Symp. Pure Math. Vol]{} [**46**]{} (1987), 269–331.
R. Pardini, The Severi inequality $K^2\geq 4\chi$ for surfaces of maximal Albanese dimension, [*Invent. Math.*]{} [**159**]{} (2005), 669–672.
G. Pareschi and M. Popa, Strong generic vanishing and a higher-dimensional Castelnuovo-de Franchis inequality, [*Duke Math. J.*]{} [**150**]{} (2009), 269–285.
G. Pareschi and M. Popa, Regularity on Abelian varieties III: relationship with generic vanishing and applications, to appear in the Proceedings of the Clay Mathematics Institute.
G. Pareschi and M. Popa, Regularity on abelian varieties. I, [*J. Amer. Math. Soc.*]{} [**16**]{} (2003), no. 2, 285–302.
---
abstract: abstract
author:
- |
Seth James Nielson, Caleb E. Spare, Dan S. Wallach\
[[email protected], [email protected], [email protected]]{}
bibliography:
- 'peer2peer.bib'
- 'proposal.bib'
- 'twngan.bib'
- 'attackstaxonomy.bib'
- 'prior\_work.bib'
date: 'Computer Science Department, Rice University'
title: Building Better Incentives for Robustness in BitTorrent
---
Introduction {#intro}
============
Background
==========
Incentives Design {#incentivesdesign}
=================
Methodology
===========
Evaluation
==========
Discussion and Future Work {#discussion}
==========================
Related Work {#related}
============
Conclusion
==========
Acknowledgements {#acknowledgements .unnumbered}
================
The authors wish to thank Johan Pouwelse for collecting and sharing his traces from many real BitTorrent swarms. We also acknowledge Ed Knightly, Eugene Ng, Dan Sandler, and Devika Subramanian for many helpful discussions on this paper. Scott Crosby offered incredible assistance in performance tuning our simulator. This research was supported, in part, by NSF grants CNS-0524211 and CNS-0509297.
---
abstract: 'We classify the simple quantum group modules with finite dimensional weight spaces when the quantum parameter $q$ is transcendental and the Lie algebra is not of type $G_2$. This is part 2 of the story, the first part being [@DHP1]. In [@DHP1] the classification is reduced to the classification of torsion free simple modules. In this paper we follow the procedures of [@Mathieu] to reduce the classification further to the classification of infinite dimensional admissible simple highest weight modules. We then classify the infinite dimensional admissible simple highest weight modules and show, among other things, that they only exist for types $A$ and $C$. Finally we classify the simple torsion free modules for types $A$ and $C$, thereby completing the classification of the simple torsion free modules.'
author:
- Dennis Hasselstrøm Pedersen
bibliography:
- 'lit.bib'
title: 'Irreducible quantum group modules with finite dimensional weight spaces. II'
---
Introduction {#sec:intr-notat}
============
This is part 2 of the classification of simple quantum group modules with finite dimensional weight spaces. In this paper we focus on the non-root-of-unity case. Let $\mathfrak{g}$ be a simple Lie algebra. Let $q\in {\mathbb{C}}$ be a complex number which is not a root of unity and let $U_q$ be the quantized enveloping algebra over ${\mathbb{C}}$ with $q$ as the quantum parameter (defined below). We want to classify all simple weight modules for $U_q$ with finite dimensional weight spaces. In the papers [@Fernando] and [@Mathieu] this is done for $\mathfrak{g}$-modules. Fernando proves in [@Fernando] that the classification of simple $\mathfrak{g}$-weight modules with finite dimensional weight spaces essentially boils down to classifying two classes of simple modules: finite dimensional simple modules over a reductive Lie algebra and so-called ’torsion free’ simple modules over a simple Lie algebra. The classification of finite dimensional modules is well known in the classical case (as well as the quantum group case), so the remaining problem is to classify the torsion free simple modules. O. Mathieu classifies all torsion free $\mathfrak{g}$-modules in [@Mathieu]. The classification uses the concept of a $\mathfrak{g}$-coherent family, i.e. a huge $\mathfrak{g}$-module with weight vectors for every possible weight, see [@Mathieu Section 4]. Mathieu shows that every torsion free simple module is a submodule of a unique irreducible semisimple coherent family [@Mathieu Proposition 4.8] and each of these irreducible semisimple coherent families contains an admissible simple highest weight module as well [@Mathieu Proposition 6.2 ii)]. This reduces the classification to the classification of admissible simple highest weight modules. In this paper we will follow closely the methods described in [@Mathieu]. We will focus only on the case when $q$ is not a root of unity. The root of unity case is studied in [@DHP1]. Some of the results of [@Mathieu] translate directly to the quantum group case but in several cases there are obstructions that need to be handled differently. In particular, the case-by-case classification in types $A$ and $C$ is done differently. This is because our analog of $\mathcal{EXT}(L)$ given an admissible simple infinite dimensional module $L$ is slightly different from the classical case, see e.g. Section \[sec:an-example-u\_qm\]. The proof when reducing to types $A$ and $C$ in [@Fernando] and [@Mathieu] uses some algebraic geometry to show that torsion free modules can only exist in types $A$ and $C$. In this paper we show that infinite dimensional admissible simple highest weight modules only exist in types $A$ and $C$ and use this fact to show that torsion free modules cannot exist in types other than $A$ and $C$. For this we have to restrict to transcendental $q$. Specifically, we use Theorem \[thm:integral\]. If this theorem is true for a general non-root-of-unity $q$ we can remove this restriction. The author is not aware of such a result in the literature.
Main results
------------
To classify simple weight modules with finite dimensional weight spaces we follow the procedures of S. Fernando and O. Mathieu in [@Fernando] and [@Mathieu]. The analog of the reduction done in [@Fernando] is taken care of in the quantum group case in [@DHP1], so what remains is to classify the torsion free modules. We will first recall some results from [@DHP1] and [@DHP-twist] concerning the reduction and some formulas for commuting root vectors. This is recalled in Section \[sec:nonroot-unity-case\] and Section \[sec:u\_a-calculations\]. In Section \[sec:class-tors-free-1\] we do some preliminary calculations concerning Ore localization and certain ’twists’ of modules necessary to define the ’Coherent families’ of Section \[sec:coherent-families\]. Here we do not define the concept of a general coherent family but instead directly define the analog of coherent irreducible semisimple extensions $\mathcal{EXT}(L)$ of an admissible simple infinite dimensional module $L$. In analogy with the classical case we show that for any admissible simple infinite dimensional module $L$, $\mathcal{EXT}(L)$ contains a submodule isomorphic to a simple highest weight module, see Theorem \[thm:EXT\_contains\_highest\_weight\]. We also prove a result in the other direction: if $\mathfrak{g}$ is such that there exists a simple infinite dimensional admissible module $L$ then there exists a torsion free $U_q(\mathfrak{g})$-module, see Theorem \[thm:existence\_of\_torsion\_free\_modules\]. So the existence of torsion free modules over the quantized enveloping algebra of a specific $\mathfrak{g}$ is equivalent to the existence of an admissible infinite dimensional highest weight simple module over $U_q(\mathfrak{g})$. Using this we show that torsion free modules exist only for types $A$ and $C$ in Sections \[sec:class-admiss-simple\], \[sec:rank-2-calculations\], \[sec:class-admiss-modul\], \[sec:quantum-shale-weyl\] and \[sec:class-admiss-modul-1\], where we also classify the admissible simple highest weight modules which are infinite dimensional. Finally, in Section \[sec:type-a\_n-calc\] and Section \[sec:type-c\_n-calc\] we complete the classification in types $A$ and $C$, respectively, by showing exactly which submodules of $\mathcal{EXT}(L(\lambda))$ are torsion free for a $\lambda$ of a specific form, see Theorem \[thm:clas\_of\_b\_such\_that\_twist\_is\_torsion\_free\] and Theorem \[thm:clas\_C\].
Acknowledgements
----------------
I would like to thank my advisor Henning H. Andersen for great supervision and many helpful comments and discussions, and Jacob Greenstein for introducing me to this problem when I was visiting him at UC Riverside in the fall of 2013. The author’s research was supported by the center of excellence grant ’Center for Quantum Geometry of Moduli Spaces’ from the Danish National Research Foundation (DNRF95).
Notation
--------
We will fix some notation: We denote by $\mathfrak{g}$ a fixed simple Lie algebra over the complex numbers ${\mathbb{C}}$. We assume $\mathfrak{g}$ is not of type $G_2$ to avoid unpleasant computations.
Fix a triangular decomposition of $\mathfrak{g}$: $\mathfrak{g} =
\mathfrak{g}^- \oplus \mathfrak{h} \oplus \mathfrak{g}^+$: Let $\mathfrak{h}$ be a maximal toral subalgebra and let $\Phi \subset
\mathfrak{h}^*$ be the roots of $\mathfrak{g}$ relative to $\mathfrak{h}$. Choose a simple system of roots $\Pi =
\{\alpha_1,\dots,\alpha_n\} \subset \Phi$. Let $\Phi^+$ (resp. $\Phi^-$) be the positive (resp. negative) roots. Let $\mathfrak{g}^{\pm}$ be the positive and negative part of $\mathfrak{g}$ corresponding to the simple system $\Pi$. Let $W$ be the Weyl group generated by the simple reflections $s_i :=
s_{\alpha_i}$. For $w\in W$ let $l(w)$ be the length of $w$, i.e. the smallest number of simple reflections such that $w=s_{i_1}\cdots
s_{i_{l(w)}}$. Let $(\cdot|\cdot)$ be a standard $W$-invariant bilinear form on $\mathfrak{h}^*$ and $\left<\alpha,\beta^\vee\right>
= \frac{2(\alpha|\beta)}{(\beta|\beta)}$. Since $(\cdot|\cdot)$ is standard we have $(\alpha|\alpha)=2$ for any short root $\alpha\in
\Phi$ and since $\mathfrak{g}$ is not of type $G_2$ we have $(\beta|\beta)=4$ for any long root $\beta\in \Phi$. Let $Q={\operatorname{span}_{{\mathbb{Z}}}\ensuremath{\left\{\alpha_1,\dots,\alpha_n\right\}}}$ denote the root lattice and $\Lambda={\operatorname{span}_{{\mathbb{Z}}}\ensuremath{\left\{\omega_1,\dots,\omega_n\right\}}}\subset \mathfrak{h}^*$ the integral lattice, where the $\omega_i\in \mathfrak{h}^*$ are the fundamental weights defined by $(\omega_i|\alpha_j)=\delta_{ij}$.
Let $U_v=U_v(\mathfrak{g})$ be the corresponding quantized enveloping algebra defined over $\mathbb{Q}(v)$, see e.g. [@Jantzen] with generators $E_\alpha,F_\alpha,K_\alpha^{\pm 1}$, $\alpha\in\Pi$ and certain relations which can be found in chapter 4 of [@Jantzen]. We define $v_\alpha = v^{(\alpha|\alpha)/2}$ (i.e. $v_\alpha = v$ if $\alpha$ is a short root and $v_\alpha = v^2$ if $\alpha$ is a long root) and for $n\in{\mathbb{Z}}$, $[n]_v
=\frac{v^n-v^{-n}}{v-v{^{-1}}}$. Let $[n]_\alpha := [n]_{v_\alpha} =
\frac{v_\alpha^n-v_\alpha^{-n}}{v_\alpha-v_\alpha{^{-1}}}$. We omit the subscripts when it is clear from the context. For later use we also define the quantum binomial coefficients: For $r\in {\mathbb{N}}$ and $a\in
{\mathbb{Z}}$: $${a \brack r}_v = \frac{[a][a-1]\cdots [a-r+1]}{[r]!}$$ where $[r]! := [r][r-1]\cdots [2][1]$. Let $A={\mathbb{Z}}[v,v{^{-1}}]$ and let $U_A$ be Lusztig's $A$-form, i.e. the $A$-subalgebra generated by the divided powers $E_\alpha^{(n)}:=\frac{1}{[n]_\alpha!}E_\alpha^{n}$, $F_\alpha^{(n)}:=\frac{1}{[n]_\alpha!}F_\alpha^{n}$ and $K_\alpha^{\pm
1}$, $\alpha\in\Pi$.
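To fix ideas, here are a few small instances of these quantities (they follow directly from the definitions and are not needed later): $[2]_v=v+v{^{-1}}$, $[3]_v=v^2+1+v^{-2}$, $${4 \brack 2}_v = \frac{[4][3]}{[2][1]} = v^4+v^2+2+v^{-2}+v^{-4},$$ and negative upper entries are allowed, e.g. ${-1 \brack r}_v = \frac{[-1][-2]\cdots[-r]}{[r]!} = (-1)^r$ for $r\in{\mathbb{N}}$.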
Let $q\in {\mathbb{C}}^*={\mathbb{C}}\backslash\{0\}$ be a nonzero complex number that is not a root of unity and set $U_q = U_A {\otimes}_A {\mathbb{C}}_q$ where ${\mathbb{C}}_q$ is the $A$-algebra ${\mathbb{C}}$ where $v$ is sent to $q$.
We have a triangular decomposition of Lusztigs $A$-form $U_A = U_A^-
{\otimes}U_A^0 {\otimes}U_A^+$ with $U_A^-=\left<
F_\alpha^{(n)}|\alpha\in \Pi,n\in {\mathbb{N}}\right>\subset U_A$, $U_A^+=\left< E_\alpha^{(n)}|\alpha\in \Pi,n\in {\mathbb{N}}\right>\subset U_A$ and $U_A^0 = \left< K_\alpha^{\pm 1}, { K_\alpha ; c \brack
r}|\alpha\in \Pi, c\in {\mathbb{Z}}, r\in {\mathbb{N}}\right>$ where $${K_\alpha ; c \brack r} := \prod_{j=1}^r \frac{ K_\alpha v_\alpha^{c-j+1}-K_\alpha{^{-1}}v_\alpha^{-c+j-1}}{v_\alpha^{j}-v_\alpha^{-j}}.$$ We have the corresponding triangular decomposition of $U_q$: $U_q =
U_q^- {\otimes}U_q^0 {\otimes}U_q^+$ with $U_q^{\pm} = U_A^{\pm}
{\otimes}_A {\mathbb{C}}_q$ and $U_q^0 = U_A^0 {\otimes}_A {\mathbb{C}}_q$.
For a $q\in {\mathbb{C}}^*$ define ${a \brack r}_q$ as the image of ${a \brack
r}_v$ in ${\mathbb{C}}_q$. We will omit the subscript from the notation when it is clear from the context. $q_\beta\in {\mathbb{C}}$ and $[n]_\beta\in
{\mathbb{C}}$ are defined as the image of $v_\beta\in A$ and $[n]_\beta\in
A$, respectively abusing notation. Similarly, we will abuse notation and write ${K_\alpha ; c \brack r}$ also for the image of ${K_\alpha ;
c \brack r}\in U_A$ in $U_q$. Define for $\mu\in Q$, $K_\mu =
\prod_{i=1}^n K_{\alpha_i}^{a_i}$ if $\mu = \sum_{i=1}^n a_i \alpha_i$ with $a_i\in {\mathbb{Z}}$.
There is a braid group action on $U_v$ which we will describe now. We use the definition from [@Jantzen Chapter 8]. The definition is slightly different from the original in [@MR1066560 Theorem 3.1] (see [@Jantzen Warning 8.14]). For each simple reflection $s_i$ there is a braid operator that we will denote by $T_{s_i}$ satisfying the following: $T_{s_i}:U_v\to U_v$ is a ${\mathbb{Q}}(v)$ automorphism. For $i\neq j \in \{1,\dots,n\}$ $$\begin{aligned}
T_{s_i}(K_\mu)=&K_{s_i(\mu)}
\\
T_{s_i}(E_{\alpha_i}) =& -F_{\alpha_i}K_{\alpha_i}
\\
T_{s_i}(F_{\alpha_i})=& - K_{\alpha_i}{^{-1}}E_{\alpha_i}
\\
T_{s_i}(E_{\alpha_j})=&
\sum_{k=0}^{r} (-1)^k
v_{\alpha_i}^{-k} E_{\alpha_i}^{(r-k)}E_{\alpha_j}E_{\alpha_i}^{(k)}
\\
T_{s_i}(F_{\alpha_j})=&
\sum_{k=0}^{r} (-1)^k
v_{\alpha_i}^{k} F_{\alpha_i}^{(k)}F_{\alpha_j}F_{\alpha_i}^{(r-k)}\end{aligned}$$ where $r=-\left<\alpha_j,\alpha_i^\vee\right>$. The inverse $T_{s_i}{^{-1}}$ is given by conjugating with the ${\mathbb{Q}}$-algebra anti-automorphism $\Psi$ from [@MR1066560 section 1.1] defined as follows: $$\begin{aligned}
\Psi(E_{\alpha_i}) = E_{\alpha_i}, \quad \Psi(F_{\alpha_i}) =
F_{\alpha_i}, \quad \Psi(K_{\alpha_i}) = K_{\alpha_i}{^{-1}}, \quad
\Psi(v) = v.\end{aligned}$$ The braid operators $T_{s_i}$ satisfy braid relations so we can define $T_w$ for any $w\in W$: Choose a reduced expression of $w$: $w=s_{i_1}\cdots s_{i_n}$. Then $T_w = T_{s_{i_1}}\cdots T_{s_{i_n}}$ and $T_w$ is independent of the chosen reduced expression, see e.g. [@MR1066560 Theorem 3.2]. We have $T_w(K_\mu)=K_{w(\mu)}$. The braid group operators restrict to automorphisms $U_A\to U_A$ and extend to automorphisms $U_q\to U_q$.
Let $M$ be a $U_q$-module and $\lambda: U_q^0 \to {\mathbb{C}}$ a character (i.e. an algebra homomorphism into ${\mathbb{C}}$). Then $$M_\lambda = \{ m\in M | \forall u\in U_q^0, u m = \lambda(u)m\}.$$ Let $X$ denote the set of characters $U_q^0\to {\mathbb{C}}$. Since $U_q^0{\cong}{\mathbb{C}}[X_1^{\pm 1},\dots,X_n^{\pm 1}]$ we can identify $X$ with $({\mathbb{C}}^*)^n$ by $X\ni \lambda \mapsto
(\lambda(K_{\alpha_1}),\dots,\lambda(K_{\alpha_n}))\in ({\mathbb{C}}^*)^n$.
Basic definitions
-----------------
\[sec:intr-notat-2\] Let $\operatorname{wt}M$ denote all the weights of $M$, i.e. $\operatorname{wt}M = \{
\lambda\in X | M_\lambda \neq 0 \}$.
For $\mu\in \Lambda$ and $b\in {\mathbb{C}}^*$ define the character $b^\mu$ by $b^\mu(K_\alpha) = b^{(\mu|\alpha)}$, $\alpha\in \Pi$. In particular for $b=q$ we get $q^\mu(K_\alpha)=q^{(\mu|\alpha)}$. We say that $M$ only has integral weights if $\lambda(K_\alpha)\in \pm
q_\alpha^{\mathbb{Z}}$ for all $\lambda \in \operatorname{wt}M$, $\alpha\in \Pi$.
There is an action of $W$ on $X$. For $\lambda\in X$ define $w\lambda$ by $$(w\lambda)(u) = \lambda(T_{w{^{-1}}}(u))$$ Note that $w q^\mu = q^{w(\mu)}$.
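Indeed, for $\alpha\in \Pi$ we have $(w q^\mu)(K_\alpha) = q^\mu(T_{w{^{-1}}}(K_\alpha)) = q^\mu(K_{w{^{-1}}(\alpha)}) = q^{(\mu|w{^{-1}}(\alpha))} = q^{(w(\mu)|\alpha)} = q^{w(\mu)}(K_\alpha)$, using $T_{w{^{-1}}}(K_\alpha) = K_{w{^{-1}}(\alpha)}$ and the $W$-invariance of $(\cdot|\cdot)$; since a character in $X$ is determined by its values on $K_{\alpha_1},\dots,K_{\alpha_n}$ this proves the claim.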
\[sec:intr-notat-1\] Let $M$ be a $U_q$-module and $w\in W$. Define the twisted module ${^w}M$ by the following:
As a vector space ${^w}M=M$ but the action is given by twisting with $w{^{-1}}$: For $m\in {^w}M$ and $u \in U_q$: $$u\cdot m = T_{w{^{-1}}}(u)m.$$
We also define ${^{{\overline}{w}}}M$ to be the inverse twist, i.e. for $m\in {^{{\overline}{w}}}M$, $u\in U_q$: $$u \cdot m = T_{w{^{-1}}}{^{-1}}(u) m.$$ Hence for any $U_q$-module ${^{{\overline}{w}}}(^{w}M) = M =
{^w}(^{{\overline}{w}}M)$.
Note that $\operatorname{wt}{^w}M = w(\operatorname{wt}M)$ and that ${^w}(^{w'}M){\cong}{^{ww'}}M$ for $w,w'\in W$ with $l(ww')=l(w)+l(w')$ because the braid operators $T_w$ satisfy braid relations. Also ${^{{\overline}{w}}}(^{{\overline}{w'}}M) {\cong}{^{{\overline}{w'w}}}M$.
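To verify the first of these claims: if $m\in M_\lambda$ then in ${^w}M$ we have $K_\alpha \cdot m = T_{w{^{-1}}}(K_\alpha)m = K_{w{^{-1}}(\alpha)}m = \lambda(K_{w{^{-1}}(\alpha)})m = (w\lambda)(K_\alpha)m$ for all $\alpha\in\Pi$, so $m\in ({^w}M)_{w\lambda}$.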
\[def:1\] We define the category $\mathcal{F}=\mathcal{F}(\mathfrak{g})$ as the full subcategory of $U_q-\operatorname{Mod}$ such that for every $M\in \mathcal{F}$ we have
1. $M$ is finitely generated as a $U_q$-module.
2. $M = \bigoplus_{\lambda\in X} M_\lambda$ and $\dim M_\lambda <
\infty$.
Note that the assignment $M\mapsto {^w}M$ is an endofunctor on $\mathcal{F}$ (in fact an auto-equivalence).
Let $w_0$ be the longest element in $W$ and let $s_{i_1}\cdots
s_{i_N}$ be a reduced expression of $w_0$. We define root vectors $E_\beta$ and $F_\beta$ for any $\beta\in \Phi^+$ by the following:
First of all set $$\beta_{j} = s_{i_1}\cdots s_{i_{j-1}}(\alpha_{i_j}), \, \text{ for } j=1,\dots,N$$ Then $\Phi^+ = \{\beta_1,\dots,\beta_N\}$. Set $$E_{\beta_j} = T_{s_{i_1}}\cdots T_{s_{i_{j-1}}}(E_{\alpha_{i_j}})$$ and $$F_{\beta_j} = T_{s_{i_1}}\cdots T_{s_{i_{j-1}}}(F_{\alpha_{i_j}}).$$ In this way we have defined root vectors for each $\beta\in\Phi^+$. These root vectors depend on the reduced expression chosen for $w_0$ above. For a different reduced expression we might get different root vectors. It is a fact that if $\beta\in\Pi$ then the root vectors $E_\beta$ and $F_\beta$ defined above are the same as the generators with the same notation (cf. e.g. [@Jantzen Proposition 8.20]) so the notation is not ambiguous in this case. By “Let $E_\beta$ be a root vector” we will just mean a root vector constructed as above for some reduced expression of $w_0$.
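For example, let $\mathfrak{g}$ be of type $A_2$ and choose the reduced expression $w_0=s_1s_2s_1$. Then $\beta_1=\alpha_1$, $\beta_2=s_1(\alpha_2)=\alpha_1+\alpha_2$ and $\beta_3=s_1s_2(\alpha_1)=\alpha_2$, and with the conventions for the braid operators fixed above one gets $F_{\beta_1}=F_{\alpha_1}$, $F_{\beta_3}=F_{\alpha_2}$ and $$F_{\beta_2}=T_{s_1}(F_{\alpha_2})=F_{\alpha_2}F_{\alpha_1}-vF_{\alpha_1}F_{\alpha_2}.$$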
Reductions {#sec:nonroot-unity-case}
==========
We recall the following results from [@DHP1].
\[prop:2\] Let $\beta$ be a positive root and $E_\beta,F_\beta$ root vectors corresponding to $\beta$. Let $M\in\mathcal{F}$. The sets $M^{[\beta]}=\{m\in M| \dim \left<E_\beta\right> m < \infty \}$ and $M^{[-\beta]}=\{m\in M| \dim \left<F_\beta\right> m < \infty \}$ are submodules of $M$ and independent of the chosen root vectors $E_\beta$, $F_\beta$.
This is shown for $E_\beta$ in Proposition 2.3 and Lemma 2.4 in [@DHP1] and the proofs are the same for $F_\beta$.
Let $M\in \mathcal{F}$. Let $\beta\in\Phi$. $M$ is called $\beta$-free if $M^{[\beta]}=0$ and $\beta$-finite if $M^{[\beta]}=M$.
Suppose $L\in \mathcal{F}$ is a simple module and $\beta$ a root. Then by Proposition \[prop:2\] $L$ is either $\beta$-finite or $\beta$-free.
Let $M\in \mathcal{F}$. Define $F_M = \{\beta \in \Phi| \text{$M$ is
$\beta$-finite}\}$ and $T_M = \{ \beta \in \Phi | \text{$M$ is
$\beta$-free} \}$. For later use we also define $F_M^s := F_M \cap
(-F_M)$ and $T_M^s := T_M \cap (-T_M)$ to be the symmetrical parts of $F_M$ and $T_M$.
Note that $\Phi = F_L \cup T_L$ for a simple module $L$ and this is a disjoint union.
A module $M$ is called torsion free if $T_M = \Phi$.
\[prop:3\] Let $L\in \mathcal{F}$ be a simple module and $\beta$ a root. $L$ is $\beta$-free if and only if $q^{{\mathbb{N}}\beta}\operatorname{wt}L \subset \operatorname{wt}L$.
Proposition 2.9 in [@DHP1].
\[prop:8\] Let $L\in \mathcal{F}$ be a simple module. $T_L$ and $F_L$ are closed subsets of $Q$. That is, if $\beta,\gamma\in F_L$ (resp. $\beta,\gamma\in T_L$) and $\beta+\gamma\in \Phi$ then $\beta+\gamma \in F_L$ (resp. $\beta+\gamma\in T_L$).
Proposition 2.10 and Proposition 2.11 in [@DHP1].
\[thm:Lemire\] Let $\lambda\in X$. There is a $1-1$ correspondence between simple weight $U_q$-modules having $\lambda$ as a weight and simple $(U_q)_0$-modules with weight $\lambda$, given as follows: for such a simple $U_q$-module $V$, the weight space $V_\lambda$ is the corresponding simple $(U_q)_0$-module.
Theorem 2.21 in [@DHP1].
\[thm:classification\] Let $L\in\mathcal{F}$ be a simple $U_q(\mathfrak{g})$-module. Then there exists a $w\in W$, subalgebras $U_q(\mathfrak{p}),U_q(\mathfrak{l}),U_q(\mathfrak{u}),U_q(\mathfrak{u}^-)$ of $U_q$ with $U_q = U_q(\mathfrak{u}^-) U_q(\mathfrak{p})$, $U_q(\mathfrak{p})= U_q(\mathfrak{l})U_q(\mathfrak{u})$ and a simple $U_q(\mathfrak{l})$-module $N$ such that ${^w}L$ is the unique simple quotient of $U_q{\otimes}_{U_q(\mathfrak{p})}N$ where $N$ is considered a $U_q(\mathfrak{p})$-module with $U_q({\mathfrak{u}})$ acting trivially.
Furthermore there exist subalgebras $U_{fr},U_{fin}$ of $U_q(\mathfrak{l})$ such that $U_q(\mathfrak{l}){\cong}U_{fr}{\otimes}U_{fin}$, and simple $U_{fr}$- and $U_{fin}$-modules $X_{fr}$ and $X_{fin}$, where $X_{fin}$ is finite dimensional and $X_{fr}$ is torsion free, such that $N {\cong}X_{fr}{\otimes}X_{fin}$ as a $U_{fr}{\otimes}U_{fin}$-module.
$U_{fr}$ is the quantized enveloping algebra of a semisimple Lie algebra $\mathfrak{t}=\mathfrak{t}_1\oplus \cdots \oplus
\mathfrak{t}_r$ where $\mathfrak{t}_1,\dots,\mathfrak{t}_r$ are some simple Lie algebras. There exist simple torsion free $U_q(\mathfrak{t}_i)$-modules $X_i$, $i=1,\dots,r$ such that $X_{fr}{\cong}X_1{\otimes}\cdots {\otimes}X_r$ as $U_q(\mathfrak{t}_1){\otimes}\cdots {\otimes}U_q(\mathfrak{t}_r)$-modules.
Theorem 2.23 in [@DHP1].
So the problem of classifying simple modules in $\mathcal{F}$ is reduced to the problem of classifying finite dimensional simple modules and classifying simple torsion free modules of $U_q(\mathfrak{t})$ where $\mathfrak{t}$ is a simple Lie algebra.
$U_A$ calculations {#sec:u_a-calculations}
==================
In this section we recall from [@DHP-twist] some formulas for commuting root vectors with each other that will be used later on. Recall that $A={\mathbb{Z}}[v,v{^{-1}}]$ where $v$ is an indeterminate and $U_A$ is Lusztig's $A$-form of $U_v$, i.e. the $A$-subalgebra generated by the divided powers $E_{\alpha}^{(n)}$, $F_\alpha^{(n)}$, $n\in{\mathbb{N}}$, and the $K_\alpha^{\pm 1}$.
\[sec:twisting-functors-2\] Let $x\in (U_q)_\mu$ and $y\in (U_q)_\gamma$ then we define $$[x,y]_q:=xy-q^{-(\mu|\gamma)}yx$$
\[thm:DP\] Suppose we have a reduced expression of $w_0 = s_{i_1}\cdots
s_{i_N}$ and define root vectors $F_{\beta_1},\dots,F_{\beta_N}$. Let $i<j$. Let $A={\mathbb{Z}}[q,q{^{-1}}]$ and let $A'$ be the localization of $A$ in $[2]$ if the Lie algebra contains any $B_n,C_n$ or $F_4$ part. Then $$[F_{\beta_j},F_{\beta_i}]_q=F_{\beta_j}F_{\beta_i}-q^{-(\beta_i|\beta_j)}F_{\beta_i}F_{\beta_j}\in {\operatorname{span}_{A'}\ensuremath{\left\{F_{\beta_{j-1}}^{a_{j-1}}\cdots F_{\beta_{i+1}}^{a_{i+1}}\right\}}}$$
[@Levendorski-Soibelman Proposition 5.5.2]. A proof following [@DP Theorem 9.3] can also be found in [@DHP-twist Theorem 2.9].
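To illustrate the statement, consider type $A_2$ with $w_0=s_1s_2s_1$ as in the example above, so $\beta_1=\alpha_1$, $\beta_2=\alpha_1+\alpha_2$, $\beta_3=\alpha_2$. For $j=3$, $i=1$ the theorem predicts $[F_{\beta_3},F_{\beta_1}]_q\in {\operatorname{span}_{A'}\ensuremath{\left\{F_{\beta_{2}}^{a_{2}}\right\}}}$, and indeed, since $(\beta_1|\beta_3)=(\alpha_1|\alpha_2)=-1$, $$[F_{\beta_3},F_{\beta_1}]_q = F_{\alpha_2}F_{\alpha_1}-qF_{\alpha_1}F_{\alpha_2} = F_{\beta_2}$$ in $U_q$.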
Let $u\in U_A$ and $\beta\in \Phi^+$. Define $\operatorname{ad}(F_\beta^i)(u)
:=[[\dots[u,F_\beta]_q\dots]_q,F_\beta]_q$ and ${\widetilde}{\operatorname{ad}}(F_\beta^i)(u)
:=[F_\beta,[\dots,[F_\beta,u]_q\dots]]_q$ where the commutator with $F_\beta$ is taken $i$ times, with $F_\beta$ placed on the right and on the left, respectively.
\[prop:17\] Let $a\in{\mathbb{N}}$, $u\in (U_A)_\mu$ and $r=\left<\mu,\beta^\vee\right>$. In $U_A$ we have the identities $$\begin{aligned}
u F_{\beta}^{a} =& \sum_{i=0}^a v_\beta^{(i-a)(r+i)} {a\brack
i}_\beta F_\beta^{a-i}\operatorname{ad}(F_\beta^{i})(u)
\\
=& \sum_{i=0}^{a} (-1)^i v_\beta^{a(r+i)-i} {a\brack i}_\beta
F_\beta^{a-i} {\widetilde}{\operatorname{ad}}(F_\beta^{i})(u)
\end{aligned}$$
Proposition 2.13 in [@DHP-twist].
Let $s_{i_1}\dots s_{i_N}$ be a reduced expression of $w_0$ and construct root vectors $F_{\beta_i}$, $i=1,\dots,N$. In the next lemma $F_{\beta_i}$ refers to the root vectors constructed as such. In particular we have an ordering of the root vectors.
\[lemma:22\] Let $n\in {\mathbb{N}}$. Let $1\leq j<k\leq N$.
$\operatorname{ad}(F_{\beta_j}^{i})(F_{\beta_k}^{n})=0$ and ${\widetilde}{\operatorname{ad}}(F_{\beta_k}^{i})(F_{\beta_j}^{n})=0$ for $i\gg 0$.
Lemma 2.16 in [@DHP-twist].
Ore localization and twists of localized modules {#sec:class-tors-free-1}
================================================
In this section we present some results towards classifying simple torsion free modules following [@Mathieu].
We need the equivalent of Lemma 3.3 in [@Mathieu]. The proofs are essentially the same but for completeness we include most of the proofs here.
A cone $C$ is a finitely generated submonoid of the root lattice $Q$ containing $0$. If $L$ is a simple module define the cone of $L$, $C(L)$, to be the submonoid of $Q$ generated by $T_L$.
\[lemma:11\] Let $L\in\mathcal{F}$ be an infinite dimensional simple module. Then the group generated by the submonoid $C(L)$ is $Q$.
Compare [@Mathieu] Lemma 3.1
First consider the case where $T_L \cap (-F_L) = \emptyset$. Then in this case we have $\Phi = T_L^s \cup F_L^s$. We claim that $T_L^s$ and $F_L^s$ correspond to different connected components of the Dynkin diagram: Suppose $\alpha \in F_L^s$ is a simple root and suppose $\alpha'\in \Pi$ is a simple root that is connected to $\alpha$ in the Dynkin diagram. So $\alpha+\alpha'$ is a root. There are two possibilities. Either $\alpha+\alpha' \in F_L$ or $\alpha+\alpha'\in T_L$. If $\alpha+\alpha'\in F_L$: Since $F_L^s$ is symmetric we have $-\alpha\in F_L^s$ and since $F_L$ is closed (Proposition \[prop:8\]) $\alpha' = \alpha+\alpha' + (-\alpha) \in
F_L$. If $\alpha+\alpha'\in T_L$ and $\alpha' \in T_L$ then we get similarly $\alpha \in T_L$ which is a contradiction. So $\alpha'\in
F_L$. We have shown that if $\alpha\in F_L$ then any simple root connected to $\alpha$ is in $F_L$ also. So $F_L$ and $T_L$ contain different connected components of the Dynkin diagram. Since $\mathfrak{g}$ is simple the Dynkin diagram is connected, so either $\Phi=F_L^s$ or $\Phi=T_L^s$; the first possibility would force $L$ to be finite dimensional, so since $L$ is infinite dimensional we must have $\Phi = T_L^s$ and therefore $C(L)=Q$.
Next assume $T_L \cap (-F_L) \neq \emptyset$. By Lemma 4.16 in [@Fernando] $P_L=T_L^s\cup F_L$ and $P_L^-=T_L\cup F_L^s$ are two opposite parabolic subsystems of $\Phi$. So we have that $T_L\cap (-F_L)$ and $(-T_L)\cap F_L$ must be the roots corresponding to the nilradicals $\mathfrak{v}^\pm$ of two opposite parabolic subalgebras $\mathfrak{p}^\pm$ of $\mathfrak{g}$. Since we have $\mathfrak{g}=\mathfrak{v}^+ + \mathfrak{v}^- +
[\mathfrak{v}^+,\mathfrak{v}^-]$ we get that $T_L\cap (-F_L)$ generates $Q$. Since $C(L)$ contains $T_L\cap (-F_L)$ it generates $Q$.
We define $\rho$ and $\delta$ like in [@Mathieu Section 3]:
Let $x\geq 0$ be a real number. Define $\rho(x) =
\operatorname{Card}B(x)$ where $B(x)=\{\mu\in Q| \sqrt{(\mu|\mu)}
\leq x\}$
Let $M$ be a weight module with support lying in a single $Q$-coset, say $q^Q \lambda:=\{q^\mu \lambda|\mu\in Q\}$. The density of $M$ is $\delta(M) = {\lim \inf}_{x\to \infty} \rho(x){^{-1}}\sum_{\mu\in
B(x)} \dim M_{q^{\mu}\lambda} $
For a cone $C$ we define $\delta(C) = {\lim \inf}_{x\to \infty}
\rho(x){^{-1}}\operatorname{Card}(C \cap B(x))$
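For example, for $\mathfrak{g}$ of type $A_1$ we have $Q={\mathbb{Z}}\alpha_1$ and $B(x)=\{k\alpha_1 | |k|\sqrt{2}\leq x\}$, so $\rho(x)=2\lfloor x/\sqrt{2}\rfloor+1$; the cone $C={\mathbb{N}}\alpha_1$ then satisfies $\operatorname{Card}(C\cap B(x))=\lfloor x/\sqrt{2}\rfloor+1$ and hence $\delta(C)=\frac{1}{2}$, while of course $\delta(Q)=1$.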
\[lemma:12\] There exists a real number ${\varepsilon}> 0$ such that $\delta(L)>{\varepsilon}$ for all infinite dimensional simple modules $L$.
Note that since $q^{C(L)} \lambda \subset \operatorname{wt}L$ for all $\lambda\in
\operatorname{wt}L$ we have $\delta(L) \geq \delta(C(L))$.
Since $C(L)$ is the cone generated by $T_L$ and $T_L\subset \Phi$ (a finite set) there can only be finitely many different cones.
Since there are only finitely many different cones attached to infinite dimensional simple modules and since any cone $C$ that generates $Q$ has $\delta(C)>0$ we conclude via Lemma \[lemma:11\] that there exists an ${\varepsilon}>0$ such that $\delta(L)>{\varepsilon}$ for all infinite dimensional simple modules.
A module $M\in \mathcal{F}$ is called admissible if its weights are contained in a single coset of $X / q^Q$ and if the dimensions of the weight spaces are uniformly bounded. $M$ is called admissible of degree $d$ if $d$ is the maximal dimension of the weight spaces in $M$.
Of course all finite dimensional simple modules are admissible but the interesting admissible modules are the infinite dimensional simple ones. In particular simple torsion free modules are admissible. We show later that each infinite dimensional admissible simple module $L$ gives rise to a ’coherent family’ $\mathcal{EXT}(L)$ containing at least one torsion free module and at least one simple highest weight module that is admissible of the same degree.
\[lemma:10\] Let $M\in \mathcal{F}$ be an admissible module. Then $M$ has finite Jordan-Hölder length.
The length of $M$ is bounded by $A+\delta(M)/ {\varepsilon}$ where $A=
\sum_{\lambda\in Y} \dim M_\lambda$ and $Y=\{\nu \in X | \, \nu =
\sigma q^\mu, |\left< \mu,\alpha^\vee\right>|\leq 1,
\sigma(K_{\alpha})\in\{\pm 1\}\, \text{ for all } \alpha\in \Pi\}$. Check [@Mathieu Lemma 3.3] for details. Here we use the fact that finite dimensional simple quantum group modules have the same character as their corresponding Lie algebra simple modules. This is proved for transcendental $q$ in [@Jantzen Theorem 5.15] and for general non-roots-of-unity in [@APW Corollary 7.7].
\[lemma:27\] Let $\beta$ be a positive root and let $F_\beta$ be a corresponding root vector. The set $\{F_\beta^{n}|n\in {\mathbb{N}}\}$ is an Ore subset of $U_q$.
A proof can be found in [@HHA-kvante] for $\beta$ a simple root. If $\beta$ is not simple then $F_\beta$ is defined as $T_w(F_\alpha)$ for some $w\in W$ and some $\alpha\in \Pi$. Since $S:= \{F_\alpha^{n}|n\in {\mathbb{N}}\}$ is an Ore subset of $U_q$ we get for any $n\in {\mathbb{N}}$ and $u\in U_q$ that $$F_\alpha^{n} U_q \cap u S \neq \emptyset.$$ Let $u'\in U_q$ and set $u= T_w{^{-1}}(u')$, then from the above $$\emptyset \neq T_w(F_\alpha^{n}) T_w(U_q) \cap T_w(u) T_w(S) = F_\beta^n U_q \cap u' T_w(S).$$ Since $T_w(S) = \{ F_\beta^n | n\in {\mathbb{N}}\}$ we have proved the lemma.
We denote the Ore localization of $U_q$ in the above set by $U_{q(F_\beta)}$.
\[lemma:37\] Let $p$ be a Laurent polynomial in $n$ variables. If $$p(q^{r_1},\dots,q^{r_n})=0$$ for all $r_1,\dots,r_n \in {\mathbb{N}}$ then $p=0$.
If $n=1$ we have a Laurent polynomial of one variable with infinitely many zero-points so $p=0$. Let $n>1$, then for constant $r_1 \in {\mathbb{N}}$, $p(q^{r_1},-,\dots,-)$ is a Laurent polynomial in $n-1$ variables equal to zero in $(q^{r_2},\dots,q^{r_n})$ for all $r_2,\dots,r_n\in{\mathbb{N}}$ so by induction $p(q^{r_1},c_2,\dots,c_n)=0$ for all $c_2,\dots,c_n$. Now for arbitrary $c_2,\dots,c_n\in
{\mathbb{C}}^*$ we get $p(-,c_2,\dots,c_n)$ is a Laurent polynomial in one variable that is zero for all $q^{r_1}$, $r_1\in {\mathbb{N}}$ hence $p(c_1,\dots,c_n)=0$ for all $c_1\in {\mathbb{C}}^*$.
The next lemma is crucial for the rest of the results in this paper; we will use it repeatedly.
\[lemma:9\] Let $\beta\in \Phi^+$ and let $F_\beta$ be a corresponding root vector. There exist automorphisms ${\varphi}_{F_{\beta}
,b}:U_{q(F_\beta)}\to U_{q(F_\beta)}$ for each $b\in {\mathbb{C}}^*$ such that ${\varphi}_{F_{\beta} ,q^i}(u)=F_\beta^{-i} u F_\beta^{i}$ for $i\in{\mathbb{Z}}$ and such that for $u\in U_{q(F_\beta)}$ the map ${\mathbb{C}}^*
\to U_{q(F_\beta)}$, $b\mapsto {\varphi}_{F_{\beta} ,b}(u)$ is of the form $b \mapsto p(b)$ for some Laurent polynomial $p \in
U_{q(F_\beta)} [X,X{^{-1}}]$. Furthermore for $b,b'\in {\mathbb{C}}^*$, ${\varphi}_{F_\beta,b}\circ {\varphi}_{F_\beta,b'} = {\varphi}_{F_\beta,bb'}$.
We can assume $\beta$ is simple since if $F_\beta=T_w(F_{\alpha'})$ for some $\alpha'\in\Pi$ then we can just define the homomorphism on $T_w(E_{\alpha}),T_w(K_\alpha^{\pm 1}),T_w(F_\alpha)$ for $\alpha\in\Pi$ i.e. in this case we define ${\varphi}_{F_{\beta} ,b}(u) =
T_w( {\varphi}_{F_{\alpha'},b}(T_w{^{-1}}(u)))$ where we extend $T_w$ to a homomorphism $T_w:U_{q(F_{\alpha'})}\to U_{q(F_\beta)}$ by $T_w(F_{\alpha'}{^{-1}}) = F_\beta{^{-1}}$.
So $\beta$ is assumed simple. For $b\in {\mathbb{C}}^*$ define $b_\beta=b^{(\beta|\beta)/2}$ i.e. $b_\beta=b$ if $\beta$ is short and $b_\beta=b^2$ when $\beta$ is long. We will define the map on the generators $E_\alpha,K_\alpha,F_\alpha$ for $\alpha\in \Pi$. If $\alpha=\beta$ the map is defined as follows: $$\begin{aligned}
{\varphi}_{F_{\beta} ,b}(F_\beta^{\pm 1}) =& F_\beta^{\pm 1}
\\
{\varphi}_{F_{\beta} ,b}(K_\beta^{\pm 1}) =& b_\beta^{\mp 2}
K_\beta^{\pm 1}
\\
{\varphi}_{F_{\beta} ,b}(E_\beta) =& E_\beta + F_\beta{^{-1}}\frac{(b_\beta-b_\beta{^{-1}})(q_\beta b_\beta{^{-1}}K_\beta -
q_\beta{^{-1}}b_\beta K_\beta{^{-1}})}{(q_\beta-q_\beta{^{-1}})^{2}}.
\end{aligned}$$
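As a consistency check for $\beta$ simple and $b=q^i$, $i\in{\mathbb{N}}$: using the standard relation $E_\beta F_\beta^{i}-F_\beta^{i}E_\beta = [i]_\beta F_\beta^{i-1}\frac{q_\beta^{1-i}K_\beta - q_\beta^{i-1}K_\beta{^{-1}}}{q_\beta-q_\beta{^{-1}}}$ (see e.g. [@Jantzen]) together with $b_\beta=q_\beta^{i}$, so that $b_\beta-b_\beta{^{-1}}=(q_\beta-q_\beta{^{-1}})[i]_\beta$, the formula above gives $${\varphi}_{F_{\beta},q^i}(E_\beta) = E_\beta + [i]_\beta F_\beta{^{-1}}\,\frac{q_\beta^{1-i}K_\beta - q_\beta^{i-1}K_\beta{^{-1}}}{q_\beta-q_\beta{^{-1}}} = F_\beta^{-i}E_\beta F_\beta^{i},$$ as it should.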
Assume $\alpha\neq \beta$. Let $r=\left<\alpha,\beta^\vee
\right>$. Note that $\operatorname{ad}(F_\beta^{-r+1})(F_\alpha)=0$ because this is one of the defining relations of $U_q$. We define the map as follows: $$\begin{aligned}
{\varphi}_{F_{\beta} ,b}(F_\alpha) =& \sum_{i=0}^{-r}
b_\beta^{-r-i}q_\beta^{i(i+r)} \prod_{t=1}^i \frac{b_\beta
q_{\beta}^{1-t}-b_\beta{^{-1}}q_{\beta}^{t-1}}{q_\beta^t -
q_\beta^{-t}} F_\beta^{-i}
\operatorname{ad}(F_\beta^i)(F_\alpha)
\\
{\varphi}_{F_{\beta} ,b}(K_\alpha) =& b_\beta^{-r}K_\alpha
=b^{-(\alpha|\beta)}K_\alpha
\\
{\varphi}_{F_{\beta} ,b}(E_\alpha) =& E_\alpha.
\end{aligned}$$ Note that if $b=q^{j}$ for some $j\in{\mathbb{Z}}$ then $\prod_{t=1}^i
\frac{b_\beta q_{\beta}^{1-t}-b_\beta{^{-1}}q_{\beta}^{t-1}}{q_\beta^t
- q_\beta^{-t}}={j \brack i}_\beta$. Since the map $b\mapsto
{\varphi}_{F_{\beta} ,b}(u)$ is of the form $b \mapsto \sum_{i=1}^r
p_i(b) u_i$ with $p_i$ Laurent polynomial in $b$ for each generator of $U_q$ it is of this form for all $u\in U_q$. It’s easy to check that ${\varphi}_{F_{\beta} ,b}(u) = F_\beta^{-i} u F_\beta^i$ when $b=
q^i$, $i \in {\mathbb{N}}$. So ${\varphi}_{F_{\beta} ,b}$ satisfies the generating relations of $U_q$ for $b= q^i$, $i \in {\mathbb{N}}$. By Lemma \[lemma:37\] ${\varphi}_{F_{\beta} ,b}$ must satisfy the generating relations for all $b\in {\mathbb{C}}^*$.
Consider the last claim of the lemma: Let $u\in U_q$, then by the above $b\mapsto {\varphi}_{F_\beta,b}(u)$ is a Laurent polynomial and so $b \mapsto {\varphi}_{F_\beta,bb'}(u)$ and ${\varphi}_{F_\beta,b}({\varphi}_{F_\beta,b'}(u))$ for a constant $b'\in
{\mathbb{C}}^*$ are Laurent polynomials in $b$ as well. Now we know from above that for $b'=q^j$ for some $j\in {\mathbb{Z}}$ and $i\in {\mathbb{Z}}$: $$\begin{aligned}
{\varphi}_{F_\beta,q^i} \circ {\varphi}_{F_\beta,b'}(u) =& F_\beta^{-i}
F_\beta^{-j} u F_\beta^j F_\beta^i
\\
=& F_\beta^{-i-j}u F_\beta^{i+j}
\\
=& {\varphi}_{F_\beta,q^i q^j}(u)
\end{aligned}$$ So ${\varphi}_{F_\beta,b}({\varphi}_{F_\beta,q^j}(u))={\varphi}_{F_\beta,b q^j}(u)$ for all $b\in {\mathbb{C}}^*$ since both sides are Laurent polynomials in $b$ and they are equal in infinitely many points. In the same way we get the result for all $b'\in {\mathbb{C}}^*$.
Note that if $\beta$ is long then the above automorphism is a Laurent polynomial in $b^2$. So if $b_1^2=b_2^2$ for $b_1,b_2\in {\mathbb{C}}^*$ then ${\varphi}_{F_\beta,b_1} = {\varphi}_{F_\beta,b_2}$. We could have defined another automorphism ${\varphi}'_{F_\beta,b} :=
{\varphi}_{F_\beta,b^{2/(\beta|\beta)}}$ and proved the lemma above with the modification that ${\varphi}'_{F_{\beta},q_\beta^i}(u) = F_{\beta}^{-i}
u F_{\beta}^i$. The author has chosen the first option to avoid having to write the $\beta$ in $q_\beta$ all the time in results like Lemma \[lemma:29\] and Corollary \[cor:6\]. On the other hand this choice means that we have to take some square roots sometimes when doing concrete calculations involving long roots, see e.g. the proof of Lemma \[lemma:30\]. The choice of square root doesn’t matter by the above.
We can use the formulas in Section \[sec:u\_a-calculations\] to find the value of ${\varphi}_{F_{\beta},b}(F_{\beta'})$ and ${\varphi}_{F_\beta,b}(E_{\beta'})$ for general root vectors $F_\beta$, $F_{\beta'}$ and $E_{\beta'}$, $\beta,\beta'\in \Phi^+$.
\[prop:25\] Let $s_{i_1}\dots s_{i_N}$ be a reduced expression of $w_0$ and define root vectors $F_{\beta_1},\dots,F_{\beta_N}$ and $E_{\beta_1},\dots,E_{\beta_N}$ using this expression (i.e. $F_{\beta_j}=T_{s_{i_1}}\dots
T_{s_{i_{j-1}}}(F_{\alpha_{i_j}})$ and $E_{\beta_j}=T_{s_{i_1}}\dots
T_{s_{i_{j-1}}}(E_{\alpha_{i_j}})$). Let $1\leq j<k\leq N$ and set $r=\left<\beta_k,\beta_j^\vee\right>$. $$\begin{aligned}
{\varphi}_{F_{\beta_j},b}(F_{\beta_k}^n) =& \sum_{i\geq 0}
q_{\beta_j}^{i(nr+i)} b_{\beta_j}^{-nr-i} \prod_{t=1}^i
\frac{q_{\beta_j}^{1-t}b_{\beta_j} -
q_{\beta_j}^{t-1}b_{\beta_j}{^{-1}}}{q_{\beta_j}^t-q_{\beta_j}^{-t}}
F_{\beta_j}^{-i} \operatorname{ad}(F_{\beta_j}^i)(F_{\beta_k}^n)
\\
{\varphi}_{F_{\beta_k},b}(F_{\beta_j}^n) =& \sum_{i\geq 0} (-1)^i
q_{\beta_k}^{-i} b_{\beta_k}^{nr+i} \prod_{t=1}^i
\frac{q_{\beta_k}^{1-t}b_{\beta_k} -
q_{\beta_k}^{t-1}b_{\beta_k}{^{-1}}}{q_{\beta_k}^t-q_{\beta_k}^{-t}}
F_{\beta_k}^{-i} {\widetilde}{\operatorname{ad}}(F_{\beta_k}^i)(F_{\beta_j}^n)
\\
{\varphi}_{F_{\beta_j},b}(E_{\beta_k}) =& \sum_{i\geq 0}
b_{\beta_j}^{-i} \prod_{t=1}^i \frac{q_{\beta_j}^{1-t}b_{\beta_j}
-
q_{\beta_j}^{t-1}b_{\beta_j}{^{-1}}}{q_{\beta_j}^t-q_{\beta_j}^{-t}}
F_{\beta_j}^{-i} u_i
\\
{\varphi}_{F_{\beta_k},b}(E_{\beta_j}) =& \sum_{i\geq 0}
b_{\beta_k}^{i} \prod_{t=1}^i \frac{q_{\beta_k}^{1-t}b_{\beta_k} -
q_{\beta_k}^{t-1}b_{\beta_k}{^{-1}}}{q_{\beta_k}^t-q_{\beta_k}^{-t}}
F_{\beta_k}^{-i} {\widetilde}{u_i}
\end{aligned}$$ for some $u_i,{\widetilde}{u_i}\in U_q$ (independent of $b$) such that $u_i={\widetilde}{u_i}=0$ for $i\gg 0$. In particular for any $j,k \in
\{1,\dots,N\}$: $$\begin{aligned}
{\varphi}_{F_{\beta_j},-1}(F_{\beta_k}) =
(-1)^{(\beta_j|\beta_k)}F_{\beta_k}
\\
{\varphi}_{F_{\beta_j},-1}(E_{\beta_k}) = E_{\beta_k}.
\end{aligned}$$
Note that the sums are finite because of Lemma \[lemma:22\].
By Proposition \[prop:17\] we have for any $a \in {\mathbb{N}}$ $$\begin{aligned}
F_{\beta_k}^{n} F_{\beta_j}^{a} =& \sum_{i=0}^a
q_{\beta_j}^{(i-a)(nr+i)} {a\brack i}_{\beta_j}
F_{\beta_j}^{a-i}\operatorname{ad}(F_{\beta_j}^{i})(F_{\beta_k}^n)
\\
=& \sum_{i=0}^\infty q_{\beta_j}^{i(nr+i)}q_{\beta_j}^{-a(nr+i)}
\prod_{t=1}^i \frac{q_{\beta_j}^{1-t}q_{\beta_j}^a -
q_{\beta_j}^{t-1}q_{\beta_j}^{-a}}{q_{\beta_j}^t-q_{\beta_j}^{-t}}
F_{\beta_j}^{a-i}\operatorname{ad}(F_{\beta_j}^{i})(F_{\beta_k}^n).
\end{aligned}$$ Here we use the fact that ${a \brack i}_{\beta_j} = 0$ for $i>a$. So $$\begin{aligned}
F_{\beta_j}^{-a}F_{\beta_k}^{n} F_{\beta_j}^{a} =& \sum_{i\geq 0}
q_{\beta_j}^{i(nr+i)}q_{\beta_j}^{-a(nr+i)} \prod_{t=1}^i
\frac{q_{\beta_j}^{1-t}q_{\beta_j}^a -
q_{\beta_j}^{t-1}q_{\beta_j}^{-a}}{q_{\beta_j}^t-q_{\beta_j}^{-t}}
F_{\beta_j}^{-i}\operatorname{ad}(F_{\beta_j}^{i})(F_{\beta_k}^n).
\end{aligned}$$ Now using the fact that ${\varphi}_{F_{\beta_j},q^{a}}(F_{\beta_k}^n)=
F_{\beta_j}^{-a}F_{\beta_k}^{n} F_{\beta_j}^{a}$, the fact that ${\varphi}_{F_{\beta_j},b}(F_{\beta_k}^n)$ is Laurent polynomial and Lemma \[lemma:37\] we get the first identity. The second identity is shown similarly by using the second identity in Proposition \[prop:17\].
To prove the last two identities we need to calculate $F_{\beta_j}^{-a}E_{\beta_k}^n F_{\beta_j}^a$ (resp. $F_{\beta_k}^{-a}E_{\beta_j}^n F_{\beta_k}^a$) for any $a\in{\mathbb{N}}$. Let $w=s_{i_1}\cdots s_{i_{j-1}}$ and $w'=s_{i_{j+1}}\cdots s_{i_{k-1}}$. Then $E_{\beta_j}=T_w(E_{\alpha_{i_j}})$ and $F_{\beta_k}=T_wT_{s_{i_j}}T_{w'}(F_{\alpha_{i_k}})$. $$\begin{aligned}
E_{\beta_j} F_{\beta_k}^a =& T_w \left(E_{\alpha_{i_j}}
T_{s_{i_j}}T_{w'}(F_{\alpha_{i_k}}^a)\right)
\\
=& T_wT_{s_{i_j}}\left( -K_{\alpha_{i_j}}{^{-1}}F_{\alpha_{i_j}}
T_{w'}(F_{\alpha_{i_k}}^a) \right).
\end{aligned}$$ Expand $s_{i_{j}}\cdots s_{i_N}$ from the right to a reduced expression $s_{i_j}\cdots s_{i_N}s_{m_1}\cdots s_{m_{j-1}}$ of $w_0$. Do the same with $s_{i_{j+1}}\cdots s_{i_N}s_{m_1}\cdots
s_{m_{j-1}}$ to get a reduced expression $s_{i_{j+1}}\cdots
s_{i_N}s_{m_1}\cdots s_{m_{j}}$. We claim that if we use the reduced expression $s_{i_{j+1}}\cdots s_{i_N}s_{m_1}\cdots s_{m_{j}}$ to construct roots $\beta_1'\dots,\beta_N'$ and root vectors $F_{\beta_j'}'$ then $F_{\beta_N'}' = T_{s_{i_{j+1}}}\cdots
T_{s_{i_N}}T_{s_{m_1}}\cdots T_{s_{m_{j-1}}}(F_{\alpha_{m_j}}) =
F_{\alpha_{i_j}}$. This is easy to see since $\beta_N'$ is positive but $s_{i_j}\beta_N' = w_0(\alpha_{m_j})<0$. We have $T_{w'}(F_{\alpha_{i_k}}^a) = F_{\beta_{k-j}'}^a$. Since $k-j<N$ we can use what we just calculated above: (set $d=k-j$) $$\begin{aligned}
F_{\beta_{d}'}'^{-a}F_{\beta_N'}' F_{\beta_{d}'}'^{a} =&
\sum_{i\geq 0} q_{\beta_{d}'}^{i(r+i)}q_{\beta_{d}'}^{-a(r+i)}
\prod_{t=1}^i \frac{q_{\beta_{d}'}^{1-t}q_{\beta_{d}'}^a -
q_{\beta_{d}'}^{t-1}q_{\beta_d'}^{-a}}{q_{\beta_d'}^t-q_{\beta_d'}^{-t}}
F_{\beta_d'}'^{-i}\operatorname{ad}(F_{\beta_d'}'^{i})(F_{\beta_N'}').
\end{aligned}$$ so $$\begin{aligned}
F_{\beta_k}^{-a} E_{\beta_j} F_{\beta_k}^a =& K_{\beta_j}T_w
T_{s_{i_j}}\left( \sum_{i\geq 0}
q_{\beta_{d}'}^{i(r+i)}q_{\beta_{d}'}^{-ai} \prod_{t=1}^i
\frac{q_{\beta_{d}'}^{1-t}q_{\beta_{d}'}^a -
q_{\beta_{d}'}^{t-1}q_{\beta_d'}^{-a}}{q_{\beta_d'}^t-q_{\beta_d'}^{-t}}
F_{\beta_d'}'^{-i}\operatorname{ad}(F_{\beta_d'}'^{i})(F_{\beta_N'}') \right).
\end{aligned}$$ This shows the third identity. The fourth is shown similarly.
Setting $b=-1$ in the above formulas we get the last claim of the proposition.
Let $M$ be a $U_{q(F_\beta)}$-module. We define a new module ${\varphi}_{F_{\beta} ,b}.M$ (with elements ${\varphi}_{F_{\beta} ,b}.m$, $m\in M$) where the module structure is given by composing with the above automorphism ${\varphi}_{F_{\beta} ,b}$, i.e. $u{\varphi}_{F_{\beta}
,b}.m = {\varphi}_{F_{\beta} ,b}.{\varphi}_{F_{\beta} ,b}(u)m$ for all $u\in
U_{q(F_\beta)}$, $m\in M$.
Note that $\operatorname{wt}{\varphi}_{F_{\beta} ,b}.M = b^{-\beta}\operatorname{wt}M$ where $b^{-\beta}$ is the character such that $b^{-\beta}(K_\alpha) =
b^{-\left( \alpha| \beta \right)}$ for $\alpha\in\Pi$.
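This follows from the formulas in the proof of Lemma \[lemma:9\]: we have ${\varphi}_{F_{\beta},b}(K_\alpha) = b^{-(\alpha|\beta)}K_\alpha$ for all $\alpha\in\Pi$, so if $m\in M_\lambda$ then $K_\alpha \cdot ({\varphi}_{F_{\beta},b}.m) = b^{-(\alpha|\beta)}\lambda(K_\alpha)\,{\varphi}_{F_{\beta},b}.m = (b^{-\beta}\lambda)(K_\alpha)\,{\varphi}_{F_{\beta},b}.m$.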
The automorphisms from Lemma \[lemma:9\] preserve degree so we can restrict to $(U_{q(F_\beta)})_0$ which we will do in the next lemma. The twist of a $(U_{q(F_\beta)})_0$-module is defined in the same way as the definition above. It is an important fact about these twists that they do not necessarily preserve simplicity of $U_q$-modules: If $L$ is a $U_{q(F_\beta)}$-module that is simple as a $U_q$-module then ${\varphi}_{F_\beta,b}.L$ can be nonsimple as a $U_q$-module for some $b\in {\mathbb{C}}^*$, see e.g. Lemma \[lemma:24\].
\[lemma:29\] Let $M$ be a $U_{q(F_\beta)}$-module. Let $i\in {\mathbb{Z}}$. Then $${\varphi}_{F_{\beta} , q^i}.M {\cong}M$$ as $U_{q(F_\beta)}$-modules. Furthermore for $\lambda\in \operatorname{wt}M$ we have an isomorphism of $(U_{q(F_\beta)})_0$-modules: $${\varphi}_{F_{\beta}, q^i}.M_\lambda {\cong}M_{q^{-i \beta} \lambda}.$$
The isomorphism in both cases is given by ${\varphi}_{F_{\beta} , q^i}.m
\mapsto F_\beta^im$, ${\varphi}_{F_{\beta} , q^i}.M\to M$. The inverse is given by multiplying by $ F_\beta^{-i}$. By Lemma \[lemma:9\]: For $u\in U_{q(F_\beta)}$, $m\in M$; ${\varphi}_{F_{\beta} ,q^i}(u) =
F_\beta^{-i}uF_\beta^{i}$ so $u {\varphi}_{F_{\beta} ,q^i}.m =
{\varphi}_{F_{\beta} ,q^i}.F_\beta^{-i}uF_\beta^{i}m \mapsto F_\beta^i
F_\beta^{-i}uF_\beta^{i}m = uF_\beta^i m$. Thus the given map is a homomorphism.
\[def:commuting\_roots\] Let $\Sigma\subset \Phi^+$. Then $\Sigma$ is called a set of commuting roots if there exists an ordering of the roots in $\Sigma$; $\Sigma=\{\beta_1,\dots,\beta_s\}$ such that for some reduced expression of $w_0$ and corresponding construction of the root vectors $F_\beta$ we have: $[F_{\beta_j},F_{\beta_i}]_q=0$ for $1\leq i < j \leq s$.
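For example, in type $A_2$ with the ordering coming from the reduced expression $w_0=s_1s_2s_1$ as before, $\Sigma=\{\beta_1,\beta_2\}=\{\alpha_1,\alpha_1+\alpha_2\}$ is a set of commuting roots: by Theorem \[thm:DP\] the $q$-commutator $[F_{\beta_2},F_{\beta_1}]_q$ lies in the span of monomials in root vectors strictly between $\beta_1$ and $\beta_2$ in the ordering, and since there are no such root vectors (and the commutator has nonzero weight) it must be $0$. Note that this $\Sigma$ is also a ${\mathbb{Z}}$-basis of $Q$.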
For any subset $I\subset \Pi$, let $Q_I$ be the subgroup of $Q$ generated by $I$, $\Phi_I$ the root system generated by $I$ , $\Phi_I^+=\Phi^+ \cap \Phi_I$ and $\Phi_I^- = -\Phi_I^+$.
The following three lemmas have exactly the same proofs as their counterparts ([@DHP1 Lemma 5.6], [@DHP1 Lemma 5.22] and [@DHP1 Lemma 5.23]) in the root of unity case in [@DHP1]. We include the proofs here as well for completeness.
We have the following equivalent of Lemma 4.1 in [@Mathieu]:
\[lemma:7\]
1. Let $I\subset \Pi$ and let $\alpha\in I$. There exists a set of commuting roots $\Sigma'\subset \Phi_I^+$ with $\alpha\in\Sigma'$ such that $\Sigma'$ is a basis of $Q_I$.
2. Let $J,F$ be subsets of $\Pi$ with $F\neq \Pi$. Let $\Sigma'
\subset \Phi_J^+ \backslash \Phi_{J\cap F}^+$ be a set of commuting roots which is a basis of $Q_J$. There exists a set of commuting roots $\Sigma$ which is a basis of $Q$ such that $\Sigma' \subset \Sigma \subset \Phi^+ \backslash \Phi_F^+$
The first part of the proof is just combinatorics of the root system so it is identical to the first part of the proof of Lemma 4.1 in [@Mathieu]: Let us first prove assertion $2.$: If $J$ is empty we can choose $\alpha\in \Pi\backslash F$ and replace $J$ and $\Sigma'$ by $\{\alpha\}$. So assume from now on that $J\neq
\emptyset$. Set $J' = J \backslash F$, $p= |J'|$, $q=|J|$. Let $J_1,\dots, J_k$ be the connected components of $J$ and set $J_i' =
J' \cap J_i$, $F_i = F \cap J_i$, and $\Sigma_i' = \Sigma'\cap
\Phi_{J_i}$, for any $1\leq i \leq k$. Since $\Sigma'\subset \Phi_J$ is a basis of $Q_J$, each $\Sigma_i'$ is a basis of $Q_{J_i}$. Since $\Sigma_i'$ lies in $\Phi_{J_i}^+ \backslash \Phi_{F_i}^+$, the set $J_i'=J_i\backslash F_i$ is not empty. Hence $J'$ meets every connected component of $J$. Therefore we can write $J=\{\alpha_1,\dots, \alpha_q\}$ in such a way that $J'=\{\alpha_1,\dots,\alpha_p\}$ and, for any $s$ with $p+1\leq
s\leq q$, $\alpha_s$ is connected to $\alpha_i$ for some $i<s$. Since $\Pi$ is connected we can write $\Pi \backslash J =
\{\alpha_{q+1},\dots, \alpha_n\}$ in such a way that for any $s\geq
q+1$, $\alpha_s$ is connected to $\alpha_i$ for some $i$ with $1\leq
i < s$. So $\Pi = \{\alpha_1,\dots, \alpha_n\}$ such that for $s> p$ we have that $\alpha_s$ is connected to some $\alpha_i$ with $1\leq
i < s$.
Let $\Sigma' = \{\beta_1,\dots,\beta_q\}$. We will define $\beta_{q+1},\dots,\beta_n$ inductively such that for each $s\geq
q$, $\{\beta_1,\dots,\beta_s\}$ is a commuting set of roots which is a basis of $\Phi_{\{\alpha_1,\dots,\alpha_s\}}$. So assume we have defined $\beta_1,\dots,\beta_s$. Let $w_s$ be the longest word in $s_{\alpha_1},\dots,s_{\alpha_s}$ and let $w_{s+1}$ be the longest word in $s_{\alpha_1},\dots,s_{\alpha_{s+1}}$. Choose a reduced expression of $w_s$ such that the corresponding root vectors $\{F_{\beta_k}\}_{k=1}^s$ satisfies $[F_{\beta_j},F_{\beta_i}]_q=0$ for $i<j$. Choose a reduced expression of $w_{s+1}=w_{s}w'$ starting with the above reduced expression of $w_s$. Let $N_s$ be the length of $w_s$ and $N_{s+1}$ be the length of $w_{s+1}$. So we get an ordering of the roots generated by $\{\alpha_1,\dots,\alpha_{s+1}\}$: $\Phi_{\{\alpha_1,\dots,\alpha_{s+1}\}}^+=\{\gamma_1,\dots,\gamma_{N_s},\gamma_{N_s+1},\dots,\gamma_{N_{s+1}}\}$ with $\Phi_{\{\alpha_1,\dots,\alpha_{s}\}}^+=\{\gamma_1,\dots,\gamma_{N_s}\}$. Consider $\gamma_{N_s+1}=w_s(\alpha_{s+1})$. Since $w_s$ only consists of the simple reflections corresponding to $\alpha_1,\dots,\alpha_s$ we must have that $\gamma_{N_s+1}=\alpha_{s+1}+\sum_{i=1}^s m_i
\alpha_i$ for some coefficients $m_i\in {\mathbb{N}}$. So $\{\beta_1,\dots,\beta_s,\gamma_{N_s+1}\}$ is a basis of $\Phi_{\{\alpha_1,\dots,\alpha_{s+1}\}}$. From Theorem \[thm:DP\] we get for $1\leq i \leq s$ $$[F_{\gamma_{N_s+1}},F_{\beta_i}]_q \in {\operatorname{span}_{{\mathbb{C}}}\ensuremath{\left\{F_{\gamma_{N_s}}^{a_{N_s}}\cdots F_{\gamma_{2}}^{a_2}|a_i\in {\mathbb{N}}\right\}}}$$ But since $\{\gamma_1,\dots,\gamma_{N_s}\} =
\Phi^+_{\{\alpha_1,\dots,\alpha_s\}}$ and since $\gamma_{N_s+1}=\alpha_{s+1}+\sum_{i=1}^s m_i \alpha_i$ we get $[F_{\gamma_{N_s+1}},F_{\beta_i}]_q=0$.
All that is left is to show that $\gamma_{N_s+1}\not \in \Phi_F$. By the above we must have that $\alpha_{s+1}$ is connected to some $\alpha_i\in J'$. We will show that the coefficient of $\alpha_i$ in $\gamma_{N_s+1}$ is nonzero. Otherwise $(\gamma_{N_s+1}|\alpha_i)<0$ and so $\gamma_{N_s+1}+\alpha_i\in
\Phi_{\{\alpha_1,\dots,\alpha_{s+1}\}}$ and by Theorem 1 in [@Papi], $\gamma_{N_s+1}+\alpha_i = \gamma_{j}$ for some $1 <
j \leq s$. This is impossible since $\gamma_{N_s+1}+\alpha_i\not \in
\Phi_{\{\alpha_1,\dots,\alpha_{s}\}}$. So we can set $\beta_{s+1} =
\gamma_{N_s+1}$ and the induction step is finished.
To prove assertion $1.$ it can be assumed that $I=\Pi$. Thus assertion $1.$ follows from assertion $2.$ with $J=\{\alpha\}$ and $F=\emptyset$.
\[lemma:6\] Let $L\in \mathcal{F}$ be a simple module. Then there exists a $w\in
W$ such that $w(F_L\backslash F_L^s) \subset \Phi^+$ and $w(T_L\backslash T_L^s) \subset \Phi^-$.
Since $L$ is simple we have $\Phi= F_L\cup T_L$. By Proposition \[prop:8\] $F_L$ and $T_L$ are closed subsets. Then Lemma 4.16 in [@Fernando] tells us that there exists a basis $B$ of the root system $\Phi$ such that the antisymmetrical part of $F_L$ is contained in the positive roots $\Phi_B^+$ corresponding to the basis $B$ and the antisymmetrical part of $T_L$ is contained in the negative roots $\Phi_B^-$ corresponding to the basis. Since all bases of a root system are $W$-conjugate the claim follows.
\[lemma:26\] Let $L$ be an infinite dimensional admissible simple module. Let $w\in W$ be such that $w(F_L\backslash F_L^s) \subset \Phi^+$. Let $\alpha\in \Pi$ be such that $-\alpha\in w(T_L)$ (such an $\alpha$ always exists). Then there exists a commuting set of roots $\Sigma$ with $\alpha\in \Sigma$ which is a basis of $Q$ such that $ -\Sigma
\subset w(T_L)$.
Set $L' = {^w}L$. Since $w(T_L) = T_{{^{w}}L}=T_{L'}$ we will just work with $L'$. Then $F_{L'}\backslash F_{L'}^s \subset \Phi^+$.
Note that it is always possible to choose a simple root $\alpha\in
-T_{L'}$ since $L'$ is infinite dimensional: If this were not possible we would have $\Phi^- \subset F_{L'}$. But since $F_{L'}\backslash F_{L'}^s \subset \Phi^+$ this would imply $F_{L'} = \Phi$, contradicting that $L'$ is infinite dimensional.
Set $F = F_{L'}^s \cap \Pi$. Since $L'$ is infinite dimensional $F\neq \Pi$. By Lemma \[lemma:7\] $2.$ applied with $J=\{\alpha\}=\Sigma'$ there exists a commuting set of roots $\Sigma$ that is a basis of $Q$ such that $\Sigma \subset \Phi^+
\backslash \Phi^+_F$. Since $F_{L'}\backslash F_{L'}^s \subset
\Phi^+$ we have $\Phi^- = T_{L'}^- \cup (F_{L'}^s)^-$. To show $-\Sigma \subset T_{L'}$ we show $\left( \Phi^- \backslash \Phi^-_F
\right)\cap F_{L'}^s=\emptyset$ or equivalently $(F_{L'}^s)^-
\subset \Phi_F^-$.
Assume $\beta\in F_{L'}^s\cap \Phi^+$, $\beta = \sum_{\alpha\in \Pi}
a_\alpha \alpha$, $a_\alpha\in{\mathbb{N}}$. The height of $\beta$ is the sum $\sum_{\alpha\in \Pi} a_\alpha$. We will show by induction on the height of $\beta$ that $-\beta\in \Phi_F^-$. If the height of $\beta$ is $1$ then $\beta$ is a simple root and so $\beta\in
F$. Clearly $-\beta\in \Phi_F^-$ in this case. Assume the height of $\beta$ is greater than $1$. Let $\alpha'\in \Pi$ be a simple root such that $\beta-\alpha'$ is a root. There are two possibilities: $-\alpha' \in T_{L'}$ or $\pm \alpha' \in F_{L'}^s$.
In the first case where $-\alpha'\in T_{L'}$ we must have $-\beta +
\alpha' \in F_{L'}^s$ since if $-\beta + \alpha' \in T_{L'}$ then $-\beta = (-\beta + \alpha') - \alpha' \in T_{L'}$. So $\beta-\alpha' \in F_{L'}^s$ and $\beta \in F_{L'}^s$. Since $F_{L'}$ is closed (Proposition \[prop:8\]) we get $-\alpha' =
(\beta-\alpha') - \beta \in F_{L'}$ which is a contradiction. So the first case ($-\alpha' \in T_{L'}$) is impossible.
In the second case since $F_{L'}$ is closed we get $\pm (\beta -
\alpha') \in F_{L'}$ i.e. $\beta- \alpha' \in F_{L'}^s$. By the induction $-(\beta-\alpha') \in \Phi_F^-$ and since $-\beta =
-(\beta-\alpha') - \alpha'$ we are done.
\[prop:11\] Let $\Sigma = \{\beta_1,\dots,\beta_r\}$ be a set of commuting roots. The set $\{q^a F_{\beta_1}^{a_1}\cdots
F_{\beta_r}^{a_r}|a_i\in {\mathbb{N}},a\in {\mathbb{Z}}\}$ is an Ore subset of $U_q$.
We will prove it by induction over $r$. $r=1$ is Lemma \[lemma:27\].
Let $S_r = \{q^a F_{\beta_1}^{a_1}\cdots F_{\beta_r}^{a_r}|a_i\in
{\mathbb{N}}, a\in {\mathbb{Z}}\}$. Let $a_1,\dots,a_r \in {\mathbb{N}}$, $a\in {\mathbb{Z}}$ and $u\in U_q$, then we need to show that $$\label{eq:2}
q^a F_{\beta_1}^{a_1}\cdots F_{\beta_{r}}^{a_{r}} U_q \cap u S_{r} \neq \emptyset.$$ and $$\label{eq:3}
U_q q^a F_{\beta_1}^{a_1}\cdots F_{\beta_{r}}^{a_{r}} \cap S_{r} u \neq \emptyset.$$ By Lemma \[lemma:27\] there exists ${\widetilde}{u}\in U_q$ and $b \in
{\mathbb{N}}$ such that $$\label{eq:1}
F_{\beta_r}^{a_r}{\widetilde}{u} = u
F_{\beta_r}^{b}.$$ By induction $$q^a F_{\beta_1}^{a_1}\cdots F_{\beta_{r-1}}^{a_{r-1}} U_q \cap {\widetilde}{u} S_{r-1} \neq \emptyset$$ so $$q^a F_{\beta_r}^{a_r} F_{\beta_1}^{a_1}\cdots F_{\beta_{r-1}}^{a_{r-1}} U_q \cap F_{\beta_r}^{a_r}{\widetilde}{u} S_{r-1} \neq \emptyset$$ Since $\Sigma$ is a set of commuting roots $F_{\beta_r}^{a_r}
F_{\beta_1}^{a_1}\cdots F_{\beta_{r-1}}^{a_{r-1}} = q^k
F_{\beta_1}^{a_1}\cdots F_{\beta_{r-1}}^{a_{r-1}}F_{\beta_r}^{a_r}$ for some $k\in {\mathbb{Z}}$. Using this and \[eq:1\] we get $$\emptyset \neq q^{a+k}F_{\beta_1}^{a_1}\cdots F_{\beta_{r}}^{a_{r}} U_q \cap u F_{\beta_r}^{b}S_{r-1} \subset q^a F_{\beta_1}^{a_1}\cdots F_{\beta_{r}}^{a_{r}} U_q \cap u S_{r}$$ where $F_{\beta_r}^bS_{r-1}\subset S_r$ because $F_{\beta_r}$ q-commutes with all the other root vectors.
Equation \[eq:3\] is shown similarly.
\[lemma:16\] Let $\nu \in X$ and let $\Sigma=\{\beta_1,\dots,\beta_n\}$ be a basis of $Q$. Then there exists $\mathbf{b}=(b_1,\dots,b_n)\in
({\mathbb{C}}^*)^n$ such that $$\nu = b_1^{\beta_1}b_2^{\beta_2}\cdots b_n^{\beta_n}$$ and there are only finitely many different $\mathbf{b}\in
({\mathbb{C}}^*)^n$ satisfying this.
If $\gamma_1,\gamma_2\in X$ satisfy $\gamma_1(K_{\beta_i})=
\gamma_2(K_{\beta_i})$ for $i=1,\dots,n$ then $\gamma_1=\gamma_2$ because $\{\beta_1,\dots,\beta_n\}$ is a basis of $Q$. Since for $a_1,\dots,a_n\in{\mathbb{C}}^*$, $a_1^{\beta_1}a_2^{\beta_2}\cdots
a_n^{\beta_n}(K_{\beta_i}) =
a_1^{\left(\beta_i|\beta_1\right)}a_2^{\left(\beta_i|\beta_2\right)}\cdots
a_n^{\left(\beta_i|\beta_n \right)}$ we have to solve the system in $n$ unknown variables $x_1,\dots,x_n$: $$\begin{aligned}
x_1^{\left(\beta_1|\beta_1\right)}x_2^{\left(\beta_1|\beta_2\right)}\cdots
x_n^{\left(\beta_1|\beta_n \right)} =& \nu(K_{\beta_1})
\\
x_1^{\left(\beta_2|\beta_1 \right)}x_2^{\left(\beta_2|\beta_2
\right)}\cdots x_n^{\left(\beta_2|\beta_n \right)} =&
\nu(K_{\beta_2})
\\
\vdots&
\\
x_1^{\left(\beta_n|\beta_1\right)}x_2^{\left( \beta_n| \beta_2
\right)}\cdots x_n^{\left(\beta_n|\beta_n \right)} =&
\nu(K_{\beta_n}).
\end{aligned}$$ Let $c_j\in {\mathbb{C}}$, $j=1,\dots,n$ be such that $\nu(K_{\beta_j})=e^{c_j}$. There is a choice here since any $c_j+2k\pi i$, $k\in{\mathbb{Z}}$ could be chosen instead. Consider the linear system in $n$ unknowns $X_1,\dots,X_n$ $$\begin{aligned}
\left(\beta_1|\beta_1 \right) X_1+\left( \beta_1| \beta_2 \right)
X_2 \cdots \left( \beta_1 | \beta_n \right)X_n =& c_1
\\
\left(\beta_2|\beta_1 \right) X_1+\left( \beta_2 | \beta_2
\right)X_2 \cdots \left( \beta_2| \beta_n \right)X_n =& c_2
\\
\vdots&
\\
\left(\beta_n|\beta_1\right) X_1+\left( \beta_n| \beta_2
\right)X_2 \cdots \left( \beta_n| \beta_n \right)X_n =& c_n.
\end{aligned}$$ This system has a unique solution $a_1,\dots,a_n\in {\mathbb{C}}$ since the matrix $(\left(\beta_i|\beta_j\right))_{i,j}$ is invertible. So $x_i
= e^{a_i}$ is a solution to the above system. Any other solution to the original system corresponds to making a different choice when taking the logarithm of $\nu(K_{\beta_i})$. So another solution would be of the form $x_i = e^{a_i + a'_i}$ where $a'_i$, $i=1,\dots,n$ is a solution to a system of the form: $$\begin{aligned}
\left(\beta_1|\beta_1 \right) X_1+\left( \beta_1| \beta_2 \right)
X_2 \cdots \left( \beta_1 | \beta_n \right)X_n =& 2k_1\pi i
\\
\left(\beta_2|\beta_1 \right) X_1+\left( \beta_2 | \beta_2
\right)X_2 \cdots \left( \beta_2| \beta_n \right)X_n =& 2k_2 \pi i
\\
\vdots&
\\
\left(\beta_n|\beta_1\right) X_1+\left( \beta_n| \beta_2
\right)X_2 \cdots \left( \beta_n| \beta_n \right)X_n =& 2k_n \pi
i.
\end{aligned}$$ for some $k_1,\dots,k_n\in {\mathbb{Z}}$. Since $A= (\left(\beta_i|\beta_j
\right))_{i,j}$ is a matrix with only integer coefficients we have $A{^{-1}}= \frac{1}{\det A} {\widetilde}{A}$ for some ${\widetilde}{A}$ with only integer coefficients. So each $a'_i$ is an integer linear combination of $\frac{2k_j \pi i}{\det A}$, $j=1,\dots,n$, and hence each $e^{a'_i}$ is a $(\det A)$-th root of unity. Therefore the set $\{(e^{a'_1},\dots,e^{a'_n})|(a'_1,\dots, a'_n) \text{ is a
solution to the above system} \}$ has at most $(\det A)^n$ elements so it is a finite set.
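For example, if $\mathfrak{g}$ is of type $A_1$ then $n=1$, $\Sigma=\{\alpha_1\}$ and the system reduces to the single equation $x_1^{(\alpha_1|\alpha_1)}=x_1^2=\nu(K_{\alpha_1})$, so there are exactly two possible $\mathbf{b}$, namely $b_1=\pm\sqrt{\nu(K_{\alpha_1})}$; in particular $\mathbf{b}$ is in general not unique.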
In the next definition we would like to compose the ${\varphi}$’s for different $\beta$. In particular let $\Sigma=\{\beta_1,\dots,\beta_n\}$ be a set of commuting roots and $F_{\beta_1},\dots,F_{\beta_n}$ corresponding root vectors. Let $F_\Sigma:=\{q^a F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}|a_i\in
{\mathbb{N}},a\in{\mathbb{Z}}\}$ and let $U_{q(F_\Sigma)}$ be the Ore localization in $F_\Sigma$. For $i<j$ we have $$F_{\beta_i}^{-k} F_{\beta_j} F_{\beta_i}^{k} = q^{-k(\beta_i|\beta_j)} F_{\beta_j}$$ or equivalently ${\varphi}_{F_{\beta_i},q^k}(F_{\beta_j}) =
\left(q^k\right)^{-(\beta_i|\beta_j)} F_{\beta_j}$. This implies ${\varphi}_{F_{\beta_i},b}(F_{\beta_j})=b^{-(\beta_i|\beta_j)} F_{\beta_j}$ for $b\in {\mathbb{C}}^*$ because $b\mapsto
{\varphi}_{F_{\beta_i},b}(F_{\beta_j})$ is Laurent polynomial. Similarly ${\varphi}_{F_{\beta_j},b}(F_{\beta_i})=b^{(\beta_i|\beta_j)}
F_{\beta_i}$. This shows that we can define ${\varphi}_{F_\beta,b}(F_{\beta'}{^{-1}}) = {\varphi}_{F_\beta,b}(F_{\beta'}){^{-1}}$ for $\beta,\beta'\in\Sigma$ extending ${\varphi}_{F_\beta,b}$ to a homomorphism $U_{q(F_\Sigma)}\to U_{q(F_\Sigma)}$. Also note that the ${\varphi}$’s commute because $$\begin{aligned}
F_{\beta_i}^{-k_1}F_{\beta_j}^{-k_2} u F_{\beta_j}^{k_2}
F_{\beta_i}^{k_1} =& q^{k_1k_2(\beta_i|\beta_j)}
F_{\beta_j}^{-k_2}F_{\beta_i}^{-k_1} u
q^{-k_1k_2(\beta_i|\beta_j)}F_{\beta_i}^{k_1}F_{\beta_j}^{k_2}
\\
=& F_{\beta_j}^{-k_2}F_{\beta_i}^{-k_1} u
F_{\beta_i}^{k_1}F_{\beta_j}^{k_2}\end{aligned}$$
\[def:twist\_by\_weight\] Let $\Sigma=\{\beta_1,\dots,\beta_r\}$ be a set of commuting roots and let $F_{\beta_1},\dots,F_{\beta_r}$ be corresponding root vectors such that $[F_{\beta_j},F_{\beta_i}]_q=0$ for $i<j$. Let $U_{q(F_\Sigma)}$ denote the Ore localization of $U_q$ in the Ore set $F_\Sigma:=\{q^a F_{\beta_1}^{a_1}\cdots
F_{\beta_r}^{a_r}|a_i\in {\mathbb{N}},a\in{\mathbb{Z}}\}$. In words, we invert $F_\beta$ for all $\beta\in \Sigma$.
Let $M$ be a $U_{q}$-module. We define $M_{F_\Sigma}$ to be the $U_{q(F_\Sigma)}$-module $U_{q(F_\Sigma)} {\otimes}_{U_q} M$. Let $\mathbf{b} = (b_1,\dots,b_r)\in ({\mathbb{C}}^*)^r$. Then for a $U_{q(F_\Sigma)}$-module $N$ we define ${\varphi}_{F_{\Sigma},\mathbf{b}}.N$ to be the twist of the module by ${\varphi}_{F_{\beta_1},b_1}\circ \dots \circ {\varphi}_{F_{\beta_r},b_r}$.
For $\mathbf{i}=(i_1,\dots,i_r)\in {\mathbb{Z}}^r$ define $q^{\mathbf{i}} =
(q^{i_1},\dots,q^{i_r})\in ({\mathbb{C}}^*)^r$ and $q^{{\mathbb{Z}}^r}=\{q^{\mathbf{i}}|\mathbf{i}\in{\mathbb{Z}}^r\}\subset
({\mathbb{C}}^*)^r$.
For $\mathbf{b}=(b_1,\dots,b_r)\in ({\mathbb{C}}^*)^r$ we set $\mathbf{b}^\Sigma := b_1^{\beta_1}\cdots b_r^{\beta_r}\in X$. If $\Sigma$ is a basis of $Q$ then the map $\mathbf{b} \mapsto
\mathbf{b}^\Sigma$ is surjective by Lemma \[lemma:16\] but not necessarily injective.
\[cor:6\] Let $\Sigma$ be a set of commuting roots that is a ${\mathbb{Z}}$ basis of $Q$, let $F_\Sigma$ be an Ore subset corresponding to $\Sigma$, let $M$ be a $U_{q(F_\Sigma)}$-module and let $\mathbf{i}=(i_1,\dots,i_n)\in {\mathbb{Z}}^n$. Then $${\varphi}_{F_{\Sigma} ,q^{\mathbf{i}}}.M {\cong}M$$ as $U_{q(F_\Sigma)}$-modules. Furthermore for $\lambda\in \operatorname{wt}M$ we have an isomorphism of $(U_{q(F_\Sigma)})_0$-modules: $${\varphi}_{F_\Sigma,q^{\mathbf{i}}}.M_\lambda {\cong}M_{\left(q^{-\mathbf{i}}\right)^\Sigma \lambda} = M_{q^{-\mu}\lambda}$$ where $\mu = \sum_{j=1}^n i_j \beta_j$.
The corollary follows from Lemma \[lemma:29\] because $\Sigma$ is a ${\mathbb{Z}}$ basis of $Q$.
Let $L$ be an admissible module of degree $d$. The essential support of $L$ is defined as $${\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L) := \{ \lambda \in \operatorname{wt}L | \dim L_\lambda = d\}$$
\[lemma:13\] Let $M$ be an admissible module. Let $\Sigma\subset \Phi^+$ be a set of commuting roots and $F_\Sigma$ a corresponding Ore subset. Assume $-\Sigma\subset T_M$. Then for $\lambda \in X$: $$\dim (M_{F_\Sigma})_\lambda = \max_{\mu\in {\mathbb{Z}}\Sigma}\{\dim M_{q^\mu\lambda}\}$$ and if $\dim M_\lambda = \max_{\mu\in {\mathbb{Z}}\Sigma}\{\dim
M_{q^\mu\lambda}\}$ then $(M_{F_\Sigma})_\lambda {\cong}M_\lambda$ as $(U_q)_0$-modules.
In particular if $\Sigma\subset T_M$ as well then $M_{F_\Sigma}{\cong}M$ as $U_q$-modules.
Compare to Lemma 4.4(ii) in [@Mathieu].
We have $\Sigma=\{\beta_1,\dots,\beta_r\}$ for some $\beta_1,\dots,\beta_r\in \Phi^+$ and corresponding root vectors $F_{\beta_1},\dots,F_{\beta_r}$. Let $\lambda\in X$ and set $d=\max_{\mu\in {\mathbb{Z}}\Sigma}\{\dim M_{q^\mu\lambda}\}$. Let $V$ be a finite dimensional subspace of $(M_{F_\Sigma})_\lambda$. Then there exists a homogeneous element $s\in F_\Sigma$ such that $sV\subset M$. Let $\nu\in {\mathbb{Z}}\Sigma$ be the degree of $s$. So $sV
\subset M_{q^\nu \lambda}$ hence $\dim sV \leq d$. Since $s$ acts injectively on $M_{F_\Sigma}$ we have $\dim V \leq d$. Now the first claim follows because $F_\beta^{\pm 1}$ acts injectively on $M_{F_\Sigma}$ for all $\beta\in \Sigma$.
We have an injective $U_q$-homomorphism from $M$ to $M_{F_\Sigma}$ sending $m\in M$ to $1{\otimes}m\in M_{F_\Sigma}$ that restricts to a $(U_q)_0$-homomorphism from $M_\lambda$ to $(M_{F_\Sigma})_\lambda$. If $\dim M_\lambda = d$ then this is surjective as well. So it is an isomorphism. The last claim follows because $\pm \Sigma \subset T_M$ implies $\dim M_\lambda = \dim
M_{q^\mu\lambda}$ for any $\mu\in {\mathbb{Z}}\Sigma$; so $M_\lambda {\cong}(M_{F_\Sigma})_\lambda$ for any $\lambda\in X$. Since $M$ is a weight module this implies that $M {\cong}M_{F_\Sigma}$ as $U_q$-modules.
\[lemma:24\] Let $L$ be a simple infinite dimensional admissible module. Let $\beta\in (T_L^s)^+$. Then there exists a $b\in {\mathbb{C}}^*$ such that ${\varphi}_{F_\beta,b}.L_{F_\beta}$ contains a simple admissible $U_q$-submodule $L'$ with $T_{L'}\subset T_L$ and $\beta\not \in
T_{L'}$.
Since $\beta\in T_L^s$ we have $L{\cong}L_{F_\beta}$ as $U_q$-modules by Lemma \[lemma:13\]. So we will consider $L$ as a $U_{q(F_\beta)}$-module via this isomorphism when taking twist etc.
Let $E_\beta$ and $F_\beta$ be root vectors corresponding to $\beta$. Let $\lambda\in \operatorname{wt}L$. Consider $F_\beta E_\beta$ as a linear operator on $L_\lambda$. Since ${\mathbb{C}}$ is algebraically closed $F_\beta E_\beta$ must have an eigenvalue $c_\beta$ and an eigenvector $v\in L_\lambda$. By (the proof of) Lemma \[lemma:9\] $$F_\beta E_\beta {\varphi}_{F_{\beta} ,b} . v = {\varphi}_{F_{\beta} ,b}.\left( \left(c_\beta - (q_\beta-q_\beta{^{-1}})^{-2}(b_\beta-b_\beta{^{-1}})(q_\beta b_\beta{^{-1}}\lambda(K_\beta) - q_\beta{^{-1}}b_\beta \lambda(K_\beta){^{-1}})\right) v\right).$$ The Laurent polynomial in $b$, $c_\beta - (q_\beta-q_\beta{^{-1}})^{-2}(b_\beta-b_\beta{^{-1}})(q_\beta b_\beta{^{-1}}\lambda(K_\beta) - q_\beta{^{-1}}b_\beta \lambda(K_\beta){^{-1}})$, is nonconstant (as a Laurent polynomial in $b_\beta$) and therefore has a zero point $c\in {\mathbb{C}}^*$.
Thus ${\varphi}_{F_{\beta} ,c}.L$ contains an element $v'$ such that $F_\beta E_\beta v'=0$ and since $F_\beta$ acts injectively on ${\varphi}_{F_{\beta} ,c}.L$, we have $E_\beta v'=0$. Set $V=\{m\in
{\varphi}_{F_{\beta} ,c}.L| E_\beta^N m = 0 \text{ for } N\gg 0\}=({\varphi}_{F_{\beta}
,c}.L)^{[\beta]}$. By Proposition \[prop:2\] this is a $U_q$-submodule of the $U_q$-module ${\varphi}_{F_{\beta} ,c}.L$. It is nonzero since $v'\in V$. By Lemma \[lemma:10\] $V$ has a simple $U_q$-submodule $L'$.
We want to show that $T_{L'} \subset T_L$. Assume $\gamma \in
T_{L'}$. Then $q^{{\mathbb{N}}\gamma}\operatorname{wt}L' \subset \operatorname{wt}L'$. But since $\operatorname{wt}L' \subset c^{-\beta} \operatorname{wt}L$ we get for some $\nu \in \operatorname{wt}L$, $q^{{\mathbb{N}}\gamma}c^{-\beta} \nu \subset c^{-\beta} \operatorname{wt}L$ or equivalently $q^{{\mathbb{N}}\gamma} \nu \subset \operatorname{wt}L$. But this shows that $\gamma \not \in F_L$ and since $L$ is a simple $U_q$-module this implies that $\gamma \in T_L$. By construction we have $\beta\not \in T_{L'}$.
Coherent families {#sec:coherent-families}
=================
For a $U_q$-module $M\in\mathcal{F}$ define $\operatorname{Tr}^M: X \times (U_q)_0 \to
{\mathbb{C}}$ by $\operatorname{Tr}^M(\lambda,u) = \operatorname{Tr}u|_{M_\lambda}$.
\[lemma:32\] Let $M,N\in \mathcal{F}$ be semisimple $U_q$-modules. If $\operatorname{Tr}^M =
\operatorname{Tr}^N$ then $M{\cong}N$.
Theorem 7.19 in [@Lam] states that this is true for modules over a *finite dimensional* algebra. So we will reduce to the case of modules over a finite dimensional algebra. Let $L$ be a composition factor of $M$ and $\lambda$ a weight of $L$. Then the multiplicity of the $U_q$-composition factor $L$ in $M$ is the multiplicity of the $(U_q)_0$-composition factor $L_\lambda$ in $M_\lambda$ by Theorem \[thm:Lemire\]. $M_\lambda$ and $N_\lambda$ are finite dimensional $(U_q)_0$-modules. Let $I$ be the kernel of the homomorphism $(U_q)_0 \to \operatorname{End}_{{\mathbb{C}}}(M_\lambda \oplus N_\lambda)$ given by the action of $(U_q)_0$. Then $(U_q)_0 / I$ is a finite dimensional ${\mathbb{C}}$-algebra and $M_\lambda$ and $N_\lambda$ are modules over $(U_q)_0/I$. Furthermore, since every $u\in I$ acts as zero on $M_\lambda$ and on $N_\lambda$, the trace of an element $u\in (U_q)_0$ is the same as the trace of $u+I \in (U_q)_0/I$ on $M_\lambda$, respectively $N_\lambda$, as a $(U_q)_0/I$-module. So if $\operatorname{Tr}^M = \operatorname{Tr}^N$ then the multiplicities of $L_\lambda$ in $M_\lambda$ and in $N_\lambda$ are the same and hence the multiplicity of $L$ in $M$ is the same as in $N$.
We will use the Zariski topology on $({\mathbb{C}}^*)^n$: a set $V$ is closed if it is the set of common zeros of a family of Laurent polynomials $p\in {\mathbb{C}}[X_1^{\pm 1},\dots,X_n^{\pm 1}]$.
\[prop:24\] Let $L$ be an infinite dimensional admissible simple module of degree $d$. Let $\Sigma$ be a set of commuting roots that is a basis of $Q$ and $w\in W$ such that $-\Sigma\subset w(T_L)$. Let $F_\Sigma$ be a corresponding Ore subset. Let $\lambda\in {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L)$. The set $$\{ \mathbf{b}\in ({\mathbb{C}}^*)^n |\, \, {^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,\mathbf{b}}.\left( \left( {^w}L \right)_{F_\Sigma} \right)_{w(\lambda)} \right) \text{ is a simple $(U_q)_0$-module} \}$$ is a Zariski open set of $({\mathbb{C}}^*)^n$.
The $(U_q)_0$-module $V:={^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,\mathbf{b}}.\left( \left( {^w}L \right)_{F_\Sigma} \right)_{w(\lambda)} \right)$ is simple if and only if the bilinear map $B_{\mathbf{b}}: (U_q)_0 \times (U_q)_0 \to {\mathbb{C}}$, $(u,v) \mapsto \operatorname{Tr}\left( uv|_{V}\right)$ has maximal rank $d^2$: The map factors through $\operatorname{End}_{{\mathbb{C}}}(V)\times \operatorname{End}_{{\mathbb{C}}}(V)$ via the representation $(U_q)_0\to \operatorname{End}_{{\mathbb{C}}}(V)$ on $V$. $B_{\mathbf{b}}$ has maximal rank $d^2$ if and only if the representation is surjective onto $\operatorname{End}_{{\mathbb{C}}}(V)$ which is equivalent to $V$ being simple.
For any finite dimensional subspace $E\subset (U_q)_0$, the set $\Omega_E$ of all $\mathbf{b}$ such that $B_{\mathbf{b}}|_E$ has rank $d^2$ is either empty or the non-zero points of the Laurent polynomial $\det M$ for some $d^2\times d^2$ minor $M$ of the matrix $\left( B_{\mathbf{b}}(e_i,e_j) \right)_{i,j}$ where $\{e_i\}$ is a basis of $E$. Therefore $\Omega = \cup_E \Omega_E$ is open.
For a module $M$ that is a direct sum of modules of finite length we define $M^{ss}$ to be the unique (up to isomorphism) semisimple module with the same composition factors as $M$.
\[lemma:1\] Let $L$ be an infinite dimensional simple admissible $U_q$-module of degree $d$, $w\in W$ and $\Sigma=\{\beta_1,\dots,\beta_n\}\subset
\Phi^+$ a set of commuting roots that is a basis of $Q$ such that $-\Sigma\subset w(T_L)$. Let $F_{\Sigma}$ be a corresponding Ore subset to $\Sigma$. Let $\mathbf{c}\in ({\mathbb{C}}^*)^n$ and let $L'$ be another infinite dimensional $U_q$-module such that $L'$ is contained in ${^{{\overline}{w}}}\left({\varphi}_{F_{\Sigma},\mathbf{c}}.({^w}L)_{F_{\Sigma}}\right)^{ss}$ (i.e. $L'$ is a composition factor of ${^{{\overline}{w}}}\left({\varphi}_{F_{\Sigma},\mathbf{c}}.({^w}L)_{F_\Sigma}\right)$). Assume that $\Sigma'=\{\beta_1',\dots,\beta_n'\} \subset \Phi^+$ is another set of commuting roots that is a basis of $Q$ and $w' \in W$ is such that $-\Sigma' \subset w'(T_{L'})$. Let $F_{\Sigma'}$ be a corresponding Ore subset.
Define $a_{i,j}\in {\mathbb{Z}}$ by $w(w'){^{-1}}(\beta_i') = \sum_{j=1}^n
a_{i,j} \beta_j$ and define $f:({\mathbb{C}}^*)^n \to ({\mathbb{C}}^*)^n$ by $$f(b_1,\dots,b_n)= \left( \prod_{i=1}^n b_i^{a_{i,1}},\dots,\prod_{i=1}^{n}b_i^{a_{i,n}}\right).$$ Then $L'$ is admissible of degree $d$ and $${^{{\overline}{w'}}}\left( {\varphi}_{F_{\Sigma'},\mathbf{b}}.({^{w'}}L')_{F_{\Sigma'}}\right)^{ss} {\cong}{^{{\overline}{w}}}\left( {\varphi}_{F_{\Sigma},f(\mathbf{b})\mathbf{c}}.({^w}L)_{F_{\Sigma}}\right)^{ss}$$
We will show that $\operatorname{Tr}^{{^{{\overline}{w'}}}\left(
{\varphi}_{F_{\Sigma'},\mathbf{b}}.({^{w'}}L')_{F_{\Sigma'}}\right)^{ss}}
= \operatorname{Tr}^{{^{{\overline}{w}}}\left(
{\varphi}_{F_{\Sigma},f(\mathbf{b})\mathbf{c}}.({^w}L)_{F_{\Sigma}}\right)^{ss}}$.
Let $\lambda\in {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L)$. Then $w(\lambda)\in
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}({^w}L)$. As a $(U_q)_0$-module we have $\left(
{^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,\mathbf{c}}.({^w}L)_{F_\Sigma}
\right) \right)^{ss}{\cong}\bigoplus_{\mathbf{i}\in {\mathbb{Z}}^n}
{^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,q^{\mathbf{i}}\mathbf{c}}.\left(
\left( {^w}L \right)_{F_\Sigma} \right)_{w(\lambda)}
\right)^{ss}$ (Corollary \[cor:6\]). Let $\lambda'\in{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L')$. Then $L'_{\lambda'}$ is a $(U_q)_0$-submodule of ${^{{\overline}{w}}}\left(
{\varphi}_{F_\Sigma,q^{\mathbf{j}}\mathbf{c}}.\left( \left( {^w}L
\right)_{F_\Sigma} \right)_{w(\lambda)} \right)^{ss}$ for some $\mathbf{j}\in{\mathbb{Z}}^n$. We can assume $\mathbf{j}=0$ by replacing $\mathbf{c}$ with $q^{\mathbf{j}}\mathbf{c}$ (note that we have then $(\mathbf{c}{^{-1}})^\Sigma = w\left( \lambda' \lambda{^{-1}}\right)$). So $L'_{\lambda'}$ is a $(U_q)_0$-submodule of ${^{{\overline}{w}}}\left(
{\varphi}_{F_\Sigma,\mathbf{c}}.\left( \left( {^w}L \right)_{F_\Sigma}
\right)_{w(\lambda)} \right)^{ss}$. For any other $\mu\in
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L')$ there is a unique $\mathbf{j}_\mu'\in {\mathbb{Z}}^n$ such that $\mu = (w'){^{-1}}\left(\left(
q^{-\mathbf{j}_\mu'}\right)^{\Sigma'}\right) \lambda'$ and a unique $\mathbf{j}_\mu\in {\mathbb{Z}}^n$ such that $w{^{-1}}\left(\left(
q^{-\mathbf{j}_\mu}\mathbf{c}{^{-1}}\right)^\Sigma\right)
\lambda=\mu$. For such $\mathbf{j}_\mu$, $L_\mu'$ is a submodule of ${^{{\overline}{w}}}\left(
{\varphi}_{F_\Sigma,q^{\mathbf{j}_\mu}\mathbf{c}}.\left( \left( {^w}L
\right)_{F_\Sigma} \right)_{w(\lambda)}
\right)^{ss}$.
$f$ is bijective, $f(q^{{\mathbb{Z}}^n})=q^{{\mathbb{Z}}^n}$, $f(\mathbf{b})^\Sigma = w(w'){^{-1}}\left(\mathbf{b}^{\Sigma'}\right)$ for all $\mathbf{b}\in ({\mathbb{C}}^*)^n$ and for any $\mu\in
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L')$, $f(q^{\mathbf{j}'_\mu})=q^{\mathbf{j}_\mu}$. For a Laurent polynomial $p$, $p\circ f$ is a Laurent polynomial as well. Since $q^{{\mathbb{N}}^n}$ is Zariski dense in $({\mathbb{C}}^*)^n$ (Lemma \[lemma:37\]) and $f$ is a Laurent polynomial the set $D=\{q^{\mathbf{j}_\mu}\mathbf{c}\in ({\mathbb{C}}^*)^n|\mu\in
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L')\}$ is Zariski dense. By Proposition \[prop:24\] the $(U_q)_0$-module ${^{{\overline}{w}}}\left(
{\varphi}_{F_\Sigma,\mathbf{b}}.\left( \left( {^w}L \right)_{F_\Sigma}
\right)_{w(\lambda)} \right)$ is simple for all $\mathbf{b}\in
\Omega$ for some Zariski open set $\Omega$ of $({\mathbb{C}}^*)^n$. Since $D$ is dense and $\Omega$ is open $D\cap \Omega$ is nonempty. So there exists a $\mu_0 \in {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L')$ such that ${^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,q^{\mathbf{j}_{\mu_0}}
\mathbf{c}}.\left( \left( {^w}L \right)_{F_\Sigma}
\right)_{w(\lambda)} \right)$ is simple and contains the nonzero simple $(U_q)_0$-module $L'_{\mu_0}$ as a submodule. Thus $L'_{\mu_0} {\cong}{^{{\overline}{w}}}\left(
{\varphi}_{F_\Sigma,q^{\mathbf{j}_{\mu_0}} \mathbf{c}}.\left( \left(
{^w}L \right)_{F_\Sigma} \right)_{w(\lambda)} \right)$. We get now from Lemma \[lemma:13\] that $L'$ is admissible of degree $d$ and that for every $\mu \in {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L')$, $$\begin{aligned}
L'_\mu {\cong}& {^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,q^{\mathbf{j}_\mu}
\mathbf{c}}.\left( \left( {^w}L \right)_{F_\Sigma}
\right)_{w(\lambda)} \right)
\\
{\cong}& {^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,f(q^{\mathbf{j}'_\mu})
\mathbf{c}}.\left( \left( {^w}L \right)_{F_\Sigma}
\right)_{w(\lambda)} \right).
\end{aligned}$$
By Lemma \[lemma:13\], Corollary \[cor:6\] and the definition of $\mathbf{j}'_\mu$ we have for any $\mu\in {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L')$ $$\begin{aligned}
{^{{\overline}{w'}}}\left(
{\varphi}_{F_{\Sigma'},q^{\mathbf{j}'_\mu}}.\left(({^{w'}}L')_{F_{\Sigma'}}\right)_{w'(\lambda')}
\right) {\cong}L'_{\mu}.
\end{aligned}$$ Let $u\in (U_q)_0$. We see that for $\mathbf{b} =
q^{\mathbf{j}'_\mu}$ $$\begin{aligned}
\operatorname{Tr}u|_{{^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,f(\mathbf{b})
\mathbf{c}}.\left( \left( {^w}L \right)_{F_\Sigma}
\right)_{w(\lambda)} \right)} = \operatorname{Tr}u|_{L'_\mu} = \operatorname{Tr}u|_{{^{{\overline}{w'}}}\left(
{\varphi}_{F_{\Sigma'},\mathbf{b}}.\left(({^{w'}}L')_{F_{\Sigma'}}\right)_{w'(\lambda')}
\right)}.
\end{aligned}$$ Since $\mathbf{b}\mapsto \operatorname{Tr}u|_{{^{{\overline}{w}}}\left(
{\varphi}_{F_\Sigma,f(\mathbf{b}) \mathbf{c}}.\left( \left( {^w}L
\right)_{F_\Sigma} \right)_{w(\lambda)} \right)^{ss}}$ and $\mathbf{b}\mapsto \operatorname{Tr}u|_{{^{{\overline}{w'}}}\left(
{\varphi}_{F_{\Sigma'},\mathbf{b}}.\left(({^{w'}}L')_{F_{\Sigma'}}\right)_{w'(\lambda')}
\right)^{ss}}$ are both Laurent polynomials and equal on the Zariski dense subset $\{q^{\mathbf{j}'_\mu}|\mu\in {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L')\}$ they are equal for all $\mathbf{b}\in ({\mathbb{C}}^*)^n$. Thus by Lemma \[lemma:32\] $$\begin{aligned}
{^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,f(\mathbf{b}) \mathbf{c}}.\left(
\left( {^w}L \right)_{F_\Sigma} \right)_{w(\lambda)}
\right)^{ss} {\cong}{^{{\overline}{w'}}}\left(
{\varphi}_{F_{\Sigma'},\mathbf{b}}.\left(({^{w'}}L')_{F_{\Sigma'}}\right)_{w'(\lambda')}
\right)^{ss}
\end{aligned}$$ as $(U_q)_0$-modules. Since (by Corollary \[cor:6\]) $$\begin{aligned}
{^{{\overline}{w'}}}\left(
{\varphi}_{F_{\Sigma'},\mathbf{b}}.({^{w'}}L')_{F_{\Sigma'}}\right)^{ss}
{\cong}\bigoplus_{\mathbf{i}\in {\mathbb{Z}}^n} {^{{\overline}{w'}}}\left(
{\varphi}_{F_{\Sigma'},q^{\mathbf{i}}\mathbf{b}}.\left(({^{w'}}L')_{F_{\Sigma'}}\right)_{w'(\lambda')}\right)^{ss}
\end{aligned}$$ and $$\begin{aligned}
{^{{\overline}{w}}}\left(
{\varphi}_{F_{\Sigma},f(\mathbf{b})\mathbf{c}}.({^w}L)_{F_{\Sigma}}\right)^{ss}
{\cong}\bigoplus_{\mathbf{i}\in{\mathbb{Z}}^n}{^{{\overline}{w}}}\left(
{\varphi}_{F_{\Sigma},q^{\mathbf{i}}
f(\mathbf{b})\mathbf{c}}.\left(({^w}L)_{F_{\Sigma}}\right)_{w(\lambda)}\right)^{ss}
\end{aligned}$$ we get $$\begin{aligned}
{^{{\overline}{w'}}}\left(
{\varphi}_{F_{\Sigma'},\mathbf{b}}.({^{w'}}L')_{F_{\Sigma'}}\right)^{ss}
{\cong}& \bigoplus_{\mathbf{i}\in {\mathbb{Z}}^n} {^{{\overline}{w'}}}\left(
{\varphi}_{F_{\Sigma'},q^{\mathbf{i}}\mathbf{b}}.\left(({^{w'}}L')_{F_{\Sigma'}}\right)_{w'(\lambda')}\right)^{ss}
\\
{\cong}& \bigoplus_{\mathbf{i}\in{\mathbb{Z}}^n}{^{{\overline}{w}}}\left(
{\varphi}_{F_{\Sigma}, f(q^{\mathbf{i}}
\mathbf{b})\mathbf{c}}.\left(({^w}L)_{F_{\Sigma}}\right)_{w(\lambda)}\right)^{ss}
\\
{\cong}& \bigoplus_{\mathbf{i}\in{\mathbb{Z}}^n}{^{{\overline}{w}}}\left(
{\varphi}_{F_{\Sigma}, q^{\mathbf{i}} f(
\mathbf{b})\mathbf{c}}.\left(({^w}L)_{F_{\Sigma}}\right)_{w(\lambda)}\right)^{ss}
\\
{\cong}& {^{{\overline}{w}}}\left(
{\varphi}_{F_{\Sigma},f(\mathbf{b})\mathbf{c}}.({^w}L)_{F_{\Sigma}}\right)^{ss}
\end{aligned}$$ as $(U_q)_0$-modules. By Theorem \[thm:Lemire\] this implies they are isomorphic as $U_q$-modules as well.
Corollary \[cor:6\] tells us that twisting with an element of the form $q^{\mathbf{i}}$ gives us a module isomorphic to the original module. Thus it makes sense to write ${\varphi}_{F_\Sigma,t}.M$ for a $t
\in ({\mathbb{C}}^*)^n / q^{{\mathbb{Z}}^n}$ and a $U_{q(F_\Sigma)}$-module $M$. Just choose a representative for $t$. Any representative gives the same $U_{q(F_\Sigma)}$-module up to isomorphism.
Let $L$ be an admissible simple module. Assume for a $w\in W$ that $\Sigma\subset -w(T_L)$ is a set of commuting roots that is a basis of $Q$ (it is always possible to find such $w$ and $\Sigma$ by Lemma \[lemma:6\] and Lemma \[lemma:26\]) and let $F_\Sigma$ be a corresponding Ore subset. Let $\nu \in X$. The $U_q$-module $${^{{\overline}{w}}}\left( \bigoplus_{\mathbf{b}\in ({\mathbb{C}}^*)^n: \, \mathbf{b}^{\Sigma}=\nu} {\varphi}_{F_\Sigma,\mathbf{b}}. \left( {^w}L \right)_{F_\Sigma} \right)$$ has finite length by Lemma \[lemma:16\], Lemma \[lemma:13\] and Lemma \[lemma:10\].
We define $$\mathcal{EXT}(L) = \left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}} {^{{\overline}{w}}} \left( {\varphi}_{F_\Sigma,t}.\left( {^w}L \right)_{F_\Sigma} \right) \right)^{ss}.$$ The definition is independent (up to isomorphism) of the chosen $w$, $\Sigma$ and $F_\Sigma$ as suggested by the notation:
\[lemma:20\] Let $L$ be a simple admissible module. Let $w,w'\in W$ and assume $\Sigma\subset -w(T_L),\Sigma'\subset -w'(T_{L})$ are sets of commuting roots that are both a basis of $Q$. Let $F_\Sigma,F'_{\Sigma'}$ be corresponding Ore subsets. Then $$\left( \bigoplus_{ t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n} } {^{{\overline}{w}}} \left( {\varphi}_{F_\Sigma,t}.\left( {^w}L \right)_{F_\Sigma} \right) \right)^{ss} {\cong}\left( \bigoplus_{ t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n} } {^{{\overline}{w'}}} \left( {\varphi}_{F'_{\Sigma'},t}.\left( {^{w'}}L \right)_{F'_{\Sigma'}} \right) \right)^{ss}$$ as $U_q$-modules.
Obviously $L$ is a submodule of $\left( {^{{\overline}{w}}} \left(
{\varphi}_{F_\Sigma,\mathbf{1}}.\left( {^w}L \right)_{F_\Sigma}
\right) \right)^{ss}$ where $\mathbf{1}=(1,\dots,1)$. By Lemma \[lemma:1\] this implies that for $\mathbf{b}\in
({\mathbb{C}}^*)^n$ $$\begin{aligned}
\left( {^{{\overline}{w'}}} \left( {\varphi}_{F_{\Sigma'},\mathbf{b}}.\left(
{^{w'}}L \right)_{F_{\Sigma'}} \right) \right)^{ss} {\cong}\left( {^{{\overline}{w}}} \left( {\varphi}_{F_\Sigma,f(\mathbf{b})}.\left(
{^w}L \right)_{F_\Sigma} \right) \right)^{ss}
\end{aligned}$$ for some $f$ with the property that $f(q^{{\mathbb{Z}}^n})=q^{{\mathbb{Z}}^n}$. So it makes sense to write $f(t)$ for $t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}$. Thus $$\begin{aligned}
\left( \bigoplus_{ t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n} } {^{{\overline}{w'}}}
\left( {\varphi}_{F'_{\Sigma'},t}.\left( {^{w'}}L
\right)_{F'_{\Sigma'}} \right) \right)^{ss} {\cong}& \left(
\bigoplus_{ t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n} } {^{{\overline}{w}}} \left(
{\varphi}_{F_\Sigma,f(t)}.\left( {^w}L \right)_{F_\Sigma} \right)
\right)^{ss}
\\
{\cong}& \left( \bigoplus_{ t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n} }
{^{{\overline}{w}}} \left( {\varphi}_{F_\Sigma,t}.\left( {^w}L
\right)_{F_\Sigma} \right) \right)^{ss}
\end{aligned}$$ since $f$ is bijective.
\[prop:19\] Let $L$ be a simple infinite dimensional admissible module. For $x\in W$: $$\mathcal{EXT}({^{x}}L) {\cong}{^{x}}\left(\mathcal{EXT}(L)\right)$$ and $$\mathcal{EXT}({^{{\overline}{x}}}L) {\cong}{^{{\overline}{x}}}\left(\mathcal{EXT}(L)\right).$$
Let $w\in W$ be such that $w(F_L \backslash F_L^s) \subset \Phi^+$ (exists by Lemma \[lemma:6\]). Let $\Sigma$ be a set of commuting roots that is a basis of $Q$ such that $-\Sigma \subset w(T_L)$ (exists by Lemma \[lemma:26\]) and let $F_\Sigma$ be a corresponding Ore subset. First we will define $\mathcal{EXT}'(L) =
\left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/ q^{{\mathbb{Z}}^n}} {^{w{^{-1}}}}\left(
{\varphi}_{F_{\Sigma} ,t}.({^{{\overline}{w{^{-1}}}}} L)_{F_\Sigma}\right)
\right)^{ss}$ and show that $\mathcal{EXT}'(L) {\cong}\mathcal{EXT}(L)$ as $U_q$-modules: Going through the proof of Lemma \[lemma:1\] and Lemma \[lemma:20\] and replacing $T_{w{^{-1}}}$ and $T_{w{^{-1}}}{^{-1}}$ with $T_{w}{^{-1}}$ and $T_{w}$ respectively we get that $\operatorname{Tr}^{\mathcal{EXT}'(L)}=\operatorname{Tr}^{\mathcal{EXT}(L)}$ so they are isomorphic by Lemma \[lemma:32\].
We will show for any $\alpha\in \Pi$ that $$\mathcal{EXT}({^{s_\alpha}}L) {\cong}{^{s_\alpha}}\left(\mathcal{EXT}(L)\right)$$ which implies the claim by induction over the length $l(x)$ of $x$ (where $l(x)$ is the smallest number of simple reflections needed to write $x$, i.e. there is a reduced expression $x=s_{i_1}\cdots
s_{i_{l(x)}}$).
So let $\alpha\in \Pi$ and let $w$ and $\Sigma$ be defined as above. Let $w'=ws_\alpha$. Note that $w' (F_{{^{s_\alpha}}L}\backslash F_{{^{s_\alpha}}L}^s) \subset \Phi^+$ and $-\Sigma \subset w'(T_{{^{s_\alpha}}L})$. We split into two cases: If $l( w' )<l(w)$ then $$\begin{aligned}
{^{s_\alpha}}\left( \mathcal{EXT}(L) \right)=& {^{s_\alpha}}\left(
\left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}} {^{{\overline}{w' s_\alpha}}}
\left( {\varphi}_{F_\Sigma ,t}. ({^{w' s_\alpha}} L
)_{F_\Sigma} \right) \right)^{ss} \right)
\\
{\cong}& {^{s_\alpha}} \left( \left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}}
{^{ {\overline}{s_\alpha} }}\left({^{ {\overline}{w'} }} \left(
{\varphi}_{F_\Sigma,t}.(^{w'}(^{s_\alpha}L))_{F_\Sigma}
\right) \right) \right)^{ss} \right)
\\
{\cong}& \left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}} {^{ {\overline}{w'} }} \left(
{\varphi}_{F_\Sigma,t}.(^{w'}(^{s_\alpha}L))_{F_\Sigma}
\right) \right)^{ss}
\\
=& \mathcal{EXT}({^{s_\alpha}}L).
\end{aligned}$$
If $l(w' ) > l(w)$ we get $$\begin{aligned}
{^{s_\alpha}}\left( \mathcal{EXT}(L) \right) {\cong}&
{^{s_\alpha}}\left( \mathcal{EXT}'(L) \right)
\\
=& {^{s_\alpha}}\left( \left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}}
{^{w{^{-1}}}} \left( {\varphi}_{F_\Sigma
,t}. ({^{{\overline}{w{^{-1}}}}} L )_{F_\Sigma} \right)
\right)^{ss} \right)
\\
{\cong}& \left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}} {^{ (w' ){^{-1}}}} \left(
{\varphi}_{F_\Sigma ,t}. ({^{{\overline}{ (w') {^{-1}}}}}
(^{s_\alpha}L) )_{F_\Sigma} \right) \right)^{ss}
\\
=& \mathcal{EXT}({^{s_\alpha}}L).
\end{aligned}$$ The second claim is shown similarly.
\[prop:15\] Let $L$ be an infinite dimensional admissible simple module of degree $d$. If $L'$ is an infinite dimensional simple submodule of $\mathcal{EXT}(L)$ then $L'$ is admissible of degree $d$ and $\mathcal{EXT}(L){\cong}\mathcal{EXT}(L')$.
Let $w\in W$ and let $\Sigma$ be a set of commuting roots that is a basis of $Q$ such that $\Sigma\subset -w(T_L)$ (possible by Lemma \[lemma:6\] and Lemma \[lemma:26\]). Then by definition $$\begin{aligned}
\mathcal{EXT}(L)=\left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}}
{^{{\overline}{w}}} \left( {\varphi}_{F_\Sigma,t}.\left( {^w}L
\right)_{F_\Sigma} \right) \right)^{ss}.
\end{aligned}$$ $L'$ being a submodule of $\mathcal{EXT}(L)$ implies that $L'$ must be a submodule of $$\begin{aligned}
\left( {^{{\overline}{w}}} \left( {\varphi}_{F_\Sigma,\mathbf{c}}.\left( {^w}L
\right)_{F_\Sigma} \right) \right)^{ss}
\end{aligned}$$ for some $\mathbf{c}\in ({\mathbb{C}}^*)^n$. Let $w'\in W$ and let $\Sigma'$ be a set of commuting roots that is a basis of $Q$ such that $\Sigma' \subset -w'(T_{L'})$. By Lemma \[lemma:1\] $L'$ is admissible of degree $d$ and there exists a bijective map $f:({\mathbb{C}}^*)^n \to ({\mathbb{C}}^*)^n$ such that $f(q^{{\mathbb{Z}}^n})=q^{{\mathbb{Z}}^n}$ and $$\begin{aligned}
\left( {^{{\overline}{w'}}} \left( {\varphi}_{F_{\Sigma'},\mathbf{b}}.\left(
{^{w'}}L' \right)_{F_{\Sigma'}} \right) \right)^{ss} {\cong}\left( {^{{\overline}{w}}} \left(
{\varphi}_{F_\Sigma,f(\mathbf{b})\mathbf{c}}.\left( {^w}L
\right)_{F_\Sigma} \right) \right)^{ss}.
\end{aligned}$$ Since $f(q^{{\mathbb{Z}}^n})=q^{{\mathbb{Z}}^n}$ it makes sense to write $f(t)$ for $t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}$. So writing $t_\mathbf{c}=q^{{\mathbb{Z}}^n} \mathbf{c}\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}$ we get $$\begin{aligned}
\mathcal{EXT}(L')=&\left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}}
{^{{\overline}{w'}}} \left( {\varphi}_{F_\Sigma,t}.\left( {^{w'}}(L')
\right)_{F_{\Sigma'}} \right) \right)^{ss}
\\
{\cong}& \left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}} {^{{\overline}{w}}}
\left( {\varphi}_{F_\Sigma,f(t)t_{\mathbf{c}}}.\left( {^w}L
\right)_{F_\Sigma} \right) \right)^{ss}
\\
{\cong}& \left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}} {^{{\overline}{w}}}
\left( {\varphi}_{F_\Sigma,t}.\left( {^w}L \right)_{F_\Sigma} \right)
\right)^{ss}
\\
=& \mathcal{EXT}(L)
\end{aligned}$$ since the assignment $t\mapsto f(t)t_{\mathbf{c}}$ is bijective.
\[lemma:15\] Let $f\in {\mathbb{C}}[X_1^{\pm 1},\dots,X_n^{\pm 1}]$ be a nonzero Laurent polynomial. There exist $b_1,\dots,b_n\in {\mathbb{C}}^*$ such that for all $i_1,\dots,i_n\in {\mathbb{Z}}$ $$f(q^{i_1}b_1,\dots,q^{i_n}b_n) \neq 0.$$
Assume $f = X_1^{-N_1}\cdots X_n^{-N_n}g$ with $g\in {\mathbb{C}}[X_1,\dots,X_n]$. $g$ has coefficients in some finitely generated (over ${\mathbb{Q}}$) subfield $k$ of ${\mathbb{C}}$; enlarging $k$ if necessary we can assume $q\in k$. Let $b_1,\dots,b_n$ be generators of $n$ disjoint extensions of $k$ of degree $>\deg g$. The monomials $b_1^{m_1}\cdots b_n^{m_n}$, $0\leq m_i \leq \deg g$ are all linearly independent over $k$. Since $q^i \neq 0$ for $i\in{\mathbb{Z}}$ the same is true for the monomials $(q^{i_1}b_1)^{m_1}\cdots (q^{i_n}b_n)^{m_n}$. So $g(q^{i_1}b_1,\dots,q^{i_n}b_n)\neq 0$, hence $f(q^{i_1}b_1,\dots,q^{i_n}b_n)\neq 0$.
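For instance, in the simplest case $n=1$ and $f=X_1-1$, any $b_1\not\in q^{{\mathbb{Z}}}$ works: $f(q^{i}b_1)=q^{i}b_1-1=0$ would force $b_1=q^{-i}$.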
\[thm:existence\_of\_torsion\_free\_modules\] Let $L$ be an infinite dimensional admissible simple module of degree $d$. Then $\mathcal{EXT}(L)$ contains at least one simple torsion free module.
Let $\lambda \in w(\operatorname{wt}L)$. Then as a $(U_q)_0$-module $$\mathcal{EXT}(L) = \left( {^{{\overline}{w}}}\left( \bigoplus_{\mathbf{b} \in ({\mathbb{C}}^*)^n} {\varphi}_{F_\Sigma,\mathbf{b}}.\left(\left({^w}L\right)_{F_\Sigma}\right)_{\lambda} \right) \right)^{ss}$$ for some $w\in W$ and some Ore subset $F_{\Sigma}$ corresponding to a set of commuting roots $\Sigma$ that is a basis of $Q$. Let $u\in
(U_q)_0$. Then the map $\mathbf{b} \mapsto \det
u|_{{^{{\overline}{w}}}\left({\varphi}_{F_\Sigma,\mathbf{b}}.\left(\left({^w}L\right)_{F_\Sigma}\right)_\lambda\right)}=
\det
{\varphi}_{F_\Sigma,\mathbf{b}}(T_w{^{-1}}(u))|_{\left(\left({^w}L\right)_{F_\Sigma}\right)_{\lambda}}$ is Laurent polynomial. Let $p(\mathbf{b}) = \prod_{\beta\in \Sigma}
\det
E_{\beta}F_{\beta}|_{{^{{\overline}{w}}}\left({\varphi}_{F_\Sigma,\mathbf{b}}.\left(\left({^w}L\right)_{F_\Sigma}\right)_\lambda\right)}$. $p$ is a Laurent polynomial by the above. By Lemma \[lemma:15\] there exists a $\mathbf{c}\in ({\mathbb{C}}^*)^n$ such that $p(\mathbf{b})\neq 0$ for all $\mathbf{b} \in q^{{\mathbb{Z}}^n}\mathbf{c}$ which implies that $E_{\beta}F_{\beta}$ acts injectively on the module $L':={^{{\overline}{w}}}\left(
{\varphi}_{F_\Sigma,\mathbf{c}}.({^w}L)_{F_\Sigma}\right)$ for all $\beta\in \Sigma$. Since $F_{\beta}$ acts injectively on the module by construction this implies that $E_\beta$ acts injectively as well. So we have $\pm \Sigma \subset T_{L'}$. Any simple submodule $V$ of $L'$ is admissible of degree $d$ by Lemma \[lemma:1\] and since $F_\beta$ and $E_\beta$ act injectively we get $\dim
V_\lambda = d = \dim L'_\lambda$ for any $\lambda\in \operatorname{wt}L'$ thus $V
= L'$. So $L'$ is a simple module. Using Proposition \[prop:3\] it is easy to see that $L'$ is torsion free since $\pm \Sigma \subset
T_{L'}$ and $\Sigma$ is a basis of $Q$.
\[prop:23\] Let $L$ be an infinite dimensional admissible simple module. Let $\beta\in \Phi^+$. If $-\beta \in T_L$ then $\mathcal{EXT}(L)$ contains $\left( \bigoplus_{t \in {\mathbb{C}}^*/q^{{\mathbb{Z}}}}
{\varphi}_{F_\beta,t}.L_{F_\beta}\right)^{ss}$ as a $U_q$-submodule.
Let $w\in W$ and $\Sigma=\{\beta_1,\dots,\beta_n\}$ be such that $\Sigma$ is a set of commuting roots that is a basis of $Q$ and $-\Sigma \subset w(T_L)$ and $F_\Sigma$ a corresponding Ore subset (always possible by Lemma \[lemma:6\] and Lemma \[lemma:26\]).
We have $w(\beta) = \sum_{i=1}^n a_i \beta_i$ for some $a_i\in
{\mathbb{Z}}$. Set $x =
F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}\in U_{q(F_\Sigma)}$. Let $U_{q(x)}$ be the $U_q$-subalgebra generated by $x$ in $U_{q(F_\Sigma)}$. $x$ is playing the role of $F_\beta$ and that is why the notation resembles the notation for Ore localization. The Ore localization of $U_q$ in $x$ does not necessarily make sense though because $x$ is not necessarily an element of $U_q$.
Let $V$ be the $U_{q(x)}$-submodule of $({^w}L)_{F_\Sigma}$ generated by $1{\otimes}{^w}L$. For any $t\in {\mathbb{C}}^*/q^{{\mathbb{Z}}}$ $${^{{\overline}{w}}}\left({\varphi}_{F_\Sigma,(t^{a_1},\dots,t^{a_n})}.V\right):=\left\{ {\varphi}_{F_\Sigma,(t^{a_1},\dots,t^{a_n})}.v \in {^{{\overline}{w}}}\left({\varphi}_{F_\Sigma,(t^{a_1},\dots,t^{a_n})}.({^w}L)_{F_{\Sigma}}\right) | v \in V\right\}$$ is a $U_{q(x)}$-submodule of ${^{{\overline}{w}}}\left({\varphi}_{F_\Sigma,(t^{a_1},\dots,t^{a_n})}.({^w}L)_{F_{\Sigma}}\right)$: To show this we show that for $u\in U_{q(x)}$ and $c\in {\mathbb{C}}^*$, ${\varphi}_{F_\Sigma,(c^{a_1},\dots,c^{a_n})}(u)\in U_{q(x)}$. We know that ${\varphi}_{F_\Sigma,(c^{a_1},\dots,c^{a_n})}(u) \in
U_{q(F_\Sigma)}[c^{\pm 1}]$ and we also see by construction that for $c=q^i$, $i\in{\mathbb{Z}}$, we have ${\varphi}_{F_\Sigma,(c^{a_1},\dots,c^{a_n})}(u)=x^{-i}ux^i \in
U_{q(x)}$. Choose a vector space basis of $U_{q(x)}$, $\{u_i\}_{i\in
I}$ and extend to a basis $\{u_i,u_j'\}_{i\in I, j\in J}$ of $U_{q(F_\Sigma)}$ where $I$ and $J$ are some index sets. Then for $u\in U_{q(x)}$ we have ${\varphi}_{F_\Sigma,(c^{a_1},\dots,c^{a_n})}(u)
= \sum_{i\in I'} u_i p_i(c) + \sum_{j\in J'} u_j'p_j'(c)$ for some finite $I'\subset I$ and $J'\subset J$ and some $p_i,p_j'\in
{\mathbb{C}}[X^{\pm 1}]$. We see that for $j\in J'$, $p_j'(q^i)=0$ for all $i\in{\mathbb{Z}}$ so $p_j'=0$. Hence ${\varphi}_{F_\Sigma,(c^{a_1},\dots,c^{a_n})}(u) = \sum_{i\in I'} u_i
p_i(c)\in U_{q(x)}$. This shows that ${^{{\overline}{w}}}\left(
{\varphi}_{F_\Sigma,(t^{a_1},\dots,t^{a_n})}.V \right)$ is a submodule of ${^{{\overline}{w}}}\left({\varphi}_{F_\Sigma,(t^{a_1},\dots,t^{a_n})}.({^w}L)_{F_{\Sigma}}\right)$. Set $$\mathcal{V} =\left(
\bigoplus_{t\in {\mathbb{C}}^*/q^{{\mathbb{Z}}}} {^{{\overline}{w}}}\left(
{\varphi}_{F_\Sigma,(t^{a_1},\dots,t^{a_n})}.V\right) \right)^{ss}.$$ Clearly $\mathcal{V}$ is a $U_q$-submodule of $\mathcal{EXT}(L)$. We claim that $\mathcal{V}{\cong}\left( \bigoplus_{t \in
{\mathbb{C}}^*/q^{{\mathbb{Z}}}} {\varphi}_{F_\beta,t}.L_{F_\beta}\right)^{ss}$ as $U_q$-modules. We will show this using Lemma \[lemma:32\].
Note that for $\lambda\in \operatorname{wt}V$ and $i\in{\mathbb{Z}}$ we have $${^{{\overline}{w}}}\left({\varphi}_{F_\Sigma,((q^i)^{a_1},\dots,(q^i)^{a_n})}.V_\lambda\right) {\cong}{^{{\overline}{w}}}\left( V_{q^{-i\sum_{k=1}^n a_k \beta_k}\lambda}\right)$$ as a $(U_q)_0$-module by Corollary \[cor:6\].
We have $\operatorname{wt}\mathcal{V} = ({\mathbb{C}}^*)^\beta \operatorname{wt}L = \operatorname{wt}\left(
\bigoplus_{t \in {\mathbb{C}}^*/q^{{\mathbb{Z}}}}
{\varphi}_{F_\beta,t}.L_{F_\beta}\right)^{ss}$. Let $\lambda \in \operatorname{wt}L$ be such that $\dim L_\lambda = \max_{i\in {\mathbb{Z}}}\{ \dim
L_{q^{i\beta}\lambda}\}$ then $V_{w(\lambda)} {\cong}({^w}L)_{w(\lambda)}{\cong}{^w}(L_\lambda)$ as a $(U_q)_0$-module by Lemma \[lemma:13\] and we have for $\nu \in ({\mathbb{C}}^*)^\beta
\lambda$: $$\mathcal{V}_\nu = \left( \bigoplus_{c\in {\mathbb{C}}^*: c^{w(\beta)}=w(\nu{^{-1}}\lambda)} {^{{\overline}{w}}}\left( {\varphi}_{F_\Sigma,(c^{a_1},\dots,c^{a_n})}.V_{w(\lambda)}\right)\right)^{ss}$$ so for $u\in (U_q)_0$: $$\begin{aligned}
\operatorname{Tr}u|_{\mathcal{V}_\nu} =& \sum_{c\in {\mathbb{C}}^*: c^{\beta}=\nu{^{-1}}\lambda} \operatorname{Tr}\left({\varphi}_{F_{\Sigma},(c^{a_1},\dots,c^{a_n})}(T_w{^{-1}}(u))\right)|_{V_{w(\lambda)}}
\end{aligned}$$ (note that $c^{w(\beta)}=w(\nu{^{-1}}\lambda)$ if and only if $c^\beta
= \nu{^{-1}}\lambda$ since $c^{w(\beta)}=w(c^\beta)$).
Set $p(c) = \operatorname{Tr}\left({\varphi}_{F_{\Sigma},(c^{a_1},\dots,c^{a_n})}(T_w{^{-1}}(u))\right)|_{V_{w(\lambda)}}$. $p$ is a Laurent polynomial in $c$ and $p(q^{i}) = \operatorname{Tr}u|_{L_{q^{-i\beta}\lambda}}$ for $i\in {\mathbb{N}}$.
On the other hand we can show similarly that $$\begin{aligned}
\operatorname{Tr}&u|_{\left(\left( \bigoplus_{t \in {\mathbb{C}}^*/q^{{\mathbb{Z}}}}
{\varphi}_{F_\beta,t}.L_{F_\beta}\right)^{ss}\right)_\nu}
\\
=& \sum_{c\in {\mathbb{C}}^*: c^\beta=\nu{^{-1}}\lambda} \operatorname{Tr}\left({\varphi}_{F_{\beta},c}(u)\right)|_{(L_{F_\beta})_{\lambda}}.
\end{aligned}$$ Similarly $\operatorname{Tr}\left({\varphi}_{F_{\beta},c}(u)\right)|_{(L_{F_\beta})_{\lambda}}$ is a Laurent polynomial in $c$ and equal to $\operatorname{Tr}u|_{L_{q^{-i\beta}\lambda}}$ for $c=q^{i}$, $i\in{\mathbb{N}}$. So $\operatorname{Tr}\left({\varphi}_{F_{\beta},c}(u)\right)|_{(L_{F_\beta})_{\lambda}}= p(c)$. We conclude that $\operatorname{Tr}^{\mathcal{V}} = \operatorname{Tr}^{\left( \bigoplus_{t \in {\mathbb{C}}^*/q^{{\mathbb{Z}}}} {\varphi}_{F_\beta,t}.L_{F_\beta}\right)^{ss}}$ so $\mathcal{V}{\cong}\left( \bigoplus_{t \in {\mathbb{C}}^*/q^{{\mathbb{Z}}}} {\varphi}_{F_\beta,t}.L_{F_\beta}\right)^{ss}$ as $U_q$-modules by Lemma \[lemma:32\].
For any $\lambda\in X$ there is a unique simple highest weight module of highest weight $\lambda$, which we call $L(\lambda)$. It is the unique simple quotient of the Verma module $M(\lambda) := U_q {\otimes}_{U_q^{\geq 0}}
{\mathbb{C}}_{\lambda}$ where ${\mathbb{C}}_{\lambda}$ is the $1$-dimensional $U_q^{\geq 0}$-module with $U_q^{+}$ acting trivially and $U_q^0$ acting like $\lambda$. Let $\rho = \frac{1}{2}\sum_{\beta\in \Phi^+}
\beta$. In the following we use the dot action on $X$. For $w\in W$, $w.\lambda := q^{- \rho}
w(q^\rho\lambda)$.
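For orientation, and assuming the conventions that appear to be used in the computations below (namely $(q^\mu\lambda)(K_\nu)=q^{(\mu|\nu)}\lambda(K_\nu)$ and $(w\lambda)(K_\mu)=\lambda(K_{w{^{-1}}(\mu)})$), evaluating at $K_\alpha$ for a simple root $\alpha$ gives $$(s_\alpha.\lambda)(K_\alpha) = q^{-(\rho|\alpha)}\,\bigl((q^\rho\lambda)(K_\alpha)\bigr){^{-1}} = q^{-2(\rho|\alpha)}\lambda(K_\alpha){^{-1}} = q_\alpha^{-2}\lambda(K_\alpha){^{-1}},$$ the multiplicative analogue of the usual dot action $s_\alpha.\mu=s_\alpha(\mu+\rho)-\rho$.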
\[prop:9\] Let $\lambda\in X$ be such that $L(\lambda)$ is admissible. Let $\alpha\in \Pi$. Assume $\lambda(K_{\alpha})\not \in \pm q_\alpha^{{\mathbb{N}}}$. Let $a=\frac{2}{(\alpha|\alpha)}$. If $a=\frac{1}{2}$ choose a square root $\lambda(K_\alpha)^{\frac{1}{2}}$ of $\lambda(K_\alpha)$. Then
- $-\alpha\in T_{L(\lambda)}$.
- $L(s_\alpha.\lambda)$ is admissible.
- ${^{s_\alpha}}L(s_\alpha.\lambda)$ is a subquotient of the $U_q$-module $L(\lambda)_{F_\alpha}$.
- $L(s_\alpha.\lambda)$ and ${^{s_\alpha}}L(\lambda)$ are subquotients of the $U_q$-module ${\varphi}_{F_\alpha,\lambda(K_{\alpha})^a}.L(\lambda)_{F_\alpha}$.
$\lambda(K_\alpha)\not \in \pm q_\alpha^{{\mathbb{N}}}$ implies that $-\alpha\in T_{L(\lambda)}$ since for $i\in {\mathbb{N}}$: $$E_\alpha^{(i)} F_\alpha^{(i)}v_\lambda = \prod_{j=1}^i \frac{q_\alpha^{j-1}\lambda(K_\alpha)-q_\alpha^{1-j}\lambda(K_\alpha){^{-1}}}{q_\alpha^j - q_\alpha^{-j}} v_\lambda.$$ This is only zero for an $i\in{\mathbb{N}}$ when $\lambda(K_\alpha)\in \pm
q_\alpha^{{\mathbb{N}}}$.
Let $v_\lambda\in L(\lambda)$ be a highest weight vector. Denote the vector ${\varphi}_{F_\alpha,\lambda(K_\alpha)^a}.F_\alpha v_\lambda \in {\varphi}_{F_\alpha,\lambda(K_\alpha)^a}.L(\lambda)_{F_\alpha}$ by $v_{s_\alpha.\lambda}$. This is a highest weight vector of weight $s_\alpha.\lambda$: For $\mu\in Q$: $$\begin{aligned}
K_\mu v_{s_\alpha.\lambda} =& K_\mu {\varphi}_{F_\alpha,
\lambda(K_\alpha)^a}.F_\alpha v_\lambda
\\
=& {\varphi}_{F_\alpha,q \lambda(K_\alpha)^a}.\left( \left(q
\lambda(K_\alpha)^a \right)^{-\left(\mu|\alpha \right)}
\lambda(K_\mu) F_\alpha v_\lambda \right)
\\
=& q^{-\left(\mu|\alpha \right)} \lambda\left(
K_\alpha^{-\left<\mu,\alpha^\vee\right>} K_\mu \right)
{\varphi}_{F_\alpha,q_\alpha \lambda(K_\alpha)}.F_\alpha v_\lambda
\\
=& q^{-(\mu|\alpha)}(s_\alpha \lambda)(K_\mu) v_{s_\alpha.\lambda}
\\
=& s_\alpha.\lambda(K_\mu) v_{s_\alpha.\lambda}.
\end{aligned}$$ For $\alpha'\in \Pi\backslash\{\alpha\}$ $$E_{\alpha'} {\varphi}_{F_\alpha, \lambda(K_\alpha)^a}.v_\lambda = {\varphi}_{F_\alpha, \lambda(K_\alpha)^a}. E_{\alpha'} v_\lambda$$ and for $\alpha'=\alpha$ we have by the formula in the proof of Lemma \[lemma:9\] $$\begin{aligned}
E_\alpha {\varphi}_{F_\alpha, \lambda(K_\alpha)^a}.&F_\alpha v_\lambda
\\
=& {\varphi}_{F_\alpha, \lambda(K_\alpha)^a}.F_\alpha {\varphi}_{F_\alpha,q \lambda(K_\alpha)^a}(E_\alpha) v_\lambda
\\
=& {\varphi}_{F_\alpha, \lambda(K_\alpha)^a}. F_\alpha \left( E_\alpha +
F_\alpha{^{-1}}\frac{q_\alpha (q_\alpha\lambda(K_\alpha)){^{-1}}K_\alpha - q_\alpha{^{-1}}q_\alpha\lambda(K_\alpha)K_\alpha{^{-1}}}{(q_\alpha-q_\alpha{^{-1}})^2}
\right) v_\lambda
\\
=& 0.
\end{aligned}$$ So $v_{s_\alpha.\lambda}$ is a highest weight vector of weight $s_\alpha.\lambda$ hence $L(s_\alpha.\lambda)$ is a subquotient of ${\varphi}_{F_\alpha,\lambda(K_\alpha)^a}.L(\lambda)_{F_\alpha}$. Since $L(s_\alpha.\lambda)$ is a subquotient of ${\varphi}_{F_\alpha,\lambda(K_\alpha)^a}.L(\lambda)_{F_\alpha}$ it is admissible by Lemma \[lemma:13\].
Consider ${^{{\overline}{s_\alpha}}}\left({\varphi}_{F_\alpha,\lambda(K_\alpha)^a}.L(\lambda)_{F_\alpha}/(U_qv_{s_\alpha.\lambda})\right)$ and the vector $$v' = F_{\alpha}{^{-1}}v_{s_\alpha.\lambda} + U_q v_{s_\alpha.\lambda} \in {^{{\overline}{s_\alpha}}}\left( {\varphi}_{F_\alpha,\lambda(K_\alpha)^a}.L(\lambda)_{F_\alpha} /(U_q v_{s_\alpha.\lambda})\right).$$ Then $E_\beta v'=0$ for all $\beta\in \Pi$: First of all $$\begin{aligned}
E_\alpha \cdot v' =& T_{s_\alpha}{^{-1}}(E_\alpha) v'
\\
=& -K_\alpha F_\alpha v'
\\
=& - K_\alpha v_{s_\alpha.\lambda} + U_q v_{s_\alpha.\lambda}
\\
=& 0.
\end{aligned}$$ For $\beta \in \Pi\backslash\{ \alpha \}$ $$\begin{aligned}
E_\beta \cdot v' =& T_{s_\alpha}{^{-1}}(E_\beta) v'.
\\
=& \sum_{i=0}^{-\left<\beta,\alpha^\vee\right>} (-1)^i
q_\alpha^{-i} E_\alpha^{(i)}E_\beta
E_\alpha^{(-\left<\beta,\alpha^\vee\right>-i)}v'
\\
=& (-1)^{\left<\beta,\alpha^\vee\right>}
q_\alpha^{\left<\beta,\alpha^\vee\right>}E_{\alpha}^{\left(-\left<\beta,\alpha^\vee\right>\right)}
E_{\beta} v'
\\
=& (-1)^{\left<\beta,\alpha^\vee\right>}
q_\alpha^{\left<\beta,\alpha^\vee\right>}E_{\alpha}^{\left(-\left<\beta,\alpha^\vee\right>\right)}
F_{\alpha}{^{-1}}E_{\beta} v_{s_\alpha.\lambda} + U_q
v_{s_\alpha.\lambda}
\\
=& 0
\end{aligned}$$ since $E_{\alpha}v'=0$ and $E_{\beta}v_{s_\alpha.\lambda}=0$ by the above.
So $v'$ is a highest weight vector and $v'$ has weight $\lambda$: For $\mu\in Q$: $$\begin{aligned}
K_\mu \cdot v' =& K_{s_\alpha \mu } v'
\\
=& K_{s_\alpha \mu } F_{\alpha}{^{-1}}v_{s_\alpha.\lambda} + U_q
v_{s_\alpha.\lambda}
\\
=& q^{(s_\alpha(\mu)|\alpha)} s_{\alpha}.\lambda(K_{s_\alpha \mu})
F_{\alpha}{^{-1}}v_{s_\alpha.\lambda} + U_q v_{s_\alpha.\lambda}
\\
=& \lambda(K_{\mu}) F_{\alpha}{^{-1}}v_{s_\alpha.\lambda} + U_q
v_{s_\alpha.\lambda}.
\end{aligned}$$
So $L(\lambda)$ is a subquotient of ${^{{\overline}{s_\alpha}}}({\varphi}_{F_\alpha,\lambda(K_\alpha)^a}.L(\lambda)_{F_\alpha})$ hence ${^{s_\alpha}}L(\lambda)$ is a subquotient of ${\varphi}_{F_\alpha,\lambda(K_\alpha)^a}.L(\lambda)_{F_\alpha}$. Consider the vector $$v''=F_\alpha^{-1} v_\lambda + U_q v_{\lambda}\in
{^{{\overline}{s_\alpha}}}\left(L(\lambda)_{F_\alpha}/(U_q v_{\lambda} )\right).$$ By an argument analogous to the one above we get $E_\beta \cdot v'' = 0$ for all $\beta\in \Pi \backslash\{ \alpha\}$ since $E_\beta$ and $F_\alpha{^{-1}}$ commute and $v_\lambda$ is a highest weight vector. We get $E_\alpha\cdot v'' = 0$ by the following: $$\begin{aligned}
E_\alpha \cdot v'' =& T_{s_\alpha}{^{-1}}(E_\alpha) v''
\\
=& -K_\alpha F_\alpha v''
\\
=& - q^{-2} F_\alpha K_\alpha F_\alpha{^{-1}}v_\lambda + U_q
v_\lambda
\\
=& 0.
\end{aligned}$$
So $v''$ is a highest weight vector in ${^{{\overline}{s_\alpha}}}\left( L(\lambda)_{F_\alpha}/(U_q v_{\lambda}
)\right)$. $v''$ has weight $s_\alpha. \lambda$: For $\mu\in Q$: $$\begin{aligned}
K_\mu \cdot v'' =& K_{s_\alpha \mu} v''
\\
=& K_{s_\alpha \mu} F_\alpha{^{-1}}v_\lambda + U_q v_\lambda
\\
=& q^{(s_\alpha(\mu)|\alpha)} \lambda(K_{s_\alpha \mu}) v''
\\
=& (q^{-\alpha}s_\alpha \lambda ) (K_{\mu}) v''.
\end{aligned}$$
Hence $L(s_\alpha. \lambda)$ is a subquotient of ${^{{\overline}{s_\alpha}}}L(\lambda)_{F_\alpha}$ and therefore ${^{s_\alpha}}L(s_\alpha. \lambda)$ is a subquotient of $L(\lambda)_{F_\alpha}$.
\[lemma:30\] Let $\lambda\in X$ be such that $L(\lambda)$ is an infinite dimensional admissible module of degree $d$. Let $\alpha\in
\Pi$. Then $$\mathcal{EXT}(L(\lambda)){\cong}\mathcal{EXT}({^{s_\alpha}}L(\lambda))$$ and if $\lambda(K_\alpha) \not \in \pm q_\alpha^{{\mathbb{N}}}$ then $\mathcal{EXT}(L(\lambda))$ contains $L(s_\alpha.\lambda)$ and ${^{s_\alpha}}L(s_\alpha.\lambda)$ as $U_q$-submodules, where $s_\alpha.\lambda := q^{- \rho}
s_\alpha(q^\rho\lambda)=q^{-\alpha}s_\alpha \lambda$.
Assume first that $\lambda(K_\alpha) \not \in \pm
q_\alpha^{{\mathbb{N}}}$. By Proposition \[prop:9\] the $U_q$-module $\bigoplus_{t\in {\mathbb{C}}^*/q^{{\mathbb{Z}}}} {\varphi}_{F_{\alpha} ,t}.
L(\lambda)_{F_\alpha}$ contains $L(s_\alpha.\lambda)$, ${^{s_\alpha}}L(\lambda)$ and ${^{s_\alpha}}L(s_\alpha.\lambda)$ as subquotients. By Proposition \[prop:23\] and Proposition \[prop:15\] this finishes the proof of the claim when $\lambda(K_\alpha) \not \in \pm q_\alpha^{{\mathbb{N}}}$.
Assume now that $\lambda(K_\alpha) = \pm q_\alpha^{k}$ for some $k\in {\mathbb{N}}$: If $\lambda(K_\alpha)= q_\alpha^{k}$ it is easy to prove that $L(\lambda) {\cong}{^{s_\alpha}}L(\lambda)$. Assume from now on that $\lambda(K_\alpha)=-q_\alpha^k$. We have $$\mathcal{EXT}(L(\lambda)) = \left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}} {\varphi}_{F_\Sigma,t}.L(\lambda)_{F_\Sigma} \right)^{ss}$$ for some set of commuting roots $\Sigma=\{\beta_1,\dots,\beta_n\}$ that is a basis of $Q$ with $-\Sigma\subset T_{L(\lambda)}$. Since $\Sigma$ is a basis of $Q$ there exist $a_1,\dots,a_n\in {\mathbb{Z}}$ such that $\alpha = \sum_{i=1}^n a_i \beta_i$. Let $v_\lambda$ be a highest weight vector in $L(\lambda)$. We will show that $v_0:={\varphi}_{F_{\Sigma},((-1)^{a_1'},\dots,(-1)^{a_n'})}.F_\alpha^k v_\lambda\in
{^{{\overline}{s_\alpha}}}\mathcal{EXT}(L(\lambda))$ is a highest weight vector of weight $\lambda$ where $a_i'=\frac{2a_i}{(\alpha|\alpha)}$. This will imply $\mathcal{EXT}({^{s_\alpha}}L(\lambda)) {\cong}\mathcal{EXT}(L(\lambda))$ by Proposition \[prop:15\]. The weight of $v_0$: Let $\mu\in Q$: $$\begin{aligned}
K_\mu \cdot v_0 =&
K_{s_\alpha(\mu)}{\varphi}_{F_\Sigma,((-1)^{a_1'},\dots,(-1)^{a_n'})}.F_\alpha^k
v_\lambda
\\
=& (-1)^{\left(\sum_{i=1}^n a_i'
\beta_i|\mu\right)}q^{k(\alpha|\mu)} \lambda(K_\mu
K_{\alpha}^{-\left< \mu,\alpha^\vee\right>})
{\varphi}_{F_\Sigma,((-1)^{a_1'},\dots,(-1)^{a_n'})}.F_{\alpha}^k
v_\lambda
\\
=& (-1)^{\left<\mu,\alpha^\vee\right>}q_\alpha^{k\left<
\mu,\alpha^\vee\right>} (- q_\alpha^k)^{-
\left<\mu,\alpha^\vee\right>} \lambda(K_\mu) v_0
\\
=& \lambda(K_\mu) v_0.
\end{aligned}$$ By Proposition \[prop:25\] ${\varphi}_{F_\beta,(-1)^{\frac{2}{(\beta|\beta)}}}(E_{\alpha'})
=E_{\alpha'}$ and ${\varphi}_{F_\beta,(-1)^{\frac{2}{(\beta|\beta)}}}(F_{\alpha'}) = \pm
F_{\alpha'}$ for any $\alpha'\in \Pi$ and any $\beta\in \Phi^+$. So ${\varphi}_{F_\Sigma,((-1)^{a_1'},\dots,(-1)^{a_n'})}(E_\beta)$, $\beta\in \Pi\backslash\{\alpha\}$, and ${\varphi}_{F_\Sigma,((-1)^{a_1'},\dots,(-1)^{a_n'})}(F_\alpha)$ kill $F_{\alpha}^k v_\lambda\in L(\lambda)$ because $E_\beta$ and $F_\alpha$ do. Hence $E_\beta$, $\beta\in \Pi$, kills $v_0$ by the same argument as in the proof of Proposition \[prop:9\] when proving that $v'$ is a highest weight vector.
\[thm:EXT\_contains\_highest\_weight\] Let $L$ be an infinite dimensional admissible simple module of degree $d$. Then the $U_q$-module $\mathcal{EXT}(L)$ contains an infinite dimensional admissible simple highest weight module $L(\lambda)$ of degree $d$ for some weight $\lambda\in
X$. Furthermore for any $x\in W$: $${^x}\mathcal{EXT}(L) {\cong}\mathcal{EXT}(L).$$
Let $w\in W$ be such that $w(F_L\backslash F_L^s) \subset \Phi^+$ and $w(T_L\backslash T_L^s)\subset \Phi^-$. Set $L'={^{{\overline}{w{^{-1}}}}}L$ (then ${^{w{^{-1}}}}L'= L$). We will show the result first for $L'$ by induction on $|T_{L'}^+|$. If $|T_{L'}^+|=0$ then $L'$ is itself a highest weight module. Assume $|T_{L'}^+|>0$. Let $\beta\in T_{L'}^+$. Then $\beta \in T_{L'}^s$ since $T_{L'}\backslash T_{L'}^s \subset \Phi^-$. So $-\beta \in
T_{L'}$. Then by Lemma \[lemma:24\] there exists a $b\in {\mathbb{C}}^*$ such that ${\varphi}_{F_\beta,b}.L'_{F_\beta}$ contains a $U_q$-submodule $L''$ with $T_{L''}\subset T_{L'}$ and $\beta\not \in T_{L''}$. By Proposition \[prop:23\] and Proposition \[prop:15\] $\mathcal{EXT}(L'){\cong}\mathcal{EXT}(L'')$ as $U_q$-modules. By induction $\mathcal{EXT}(L'')$ contains an infinite dimensional admissible simple highest weight module $L(\lambda)$ for some $\lambda$. So $\mathcal{EXT}(L'){\cong}\mathcal{EXT}(L(\lambda))$ by Proposition \[prop:15\]. Choose a reduced expression $s_{i_r}\cdots s_{i_1}$ for $w{^{-1}}$. By Proposition \[prop:19\] and Lemma \[lemma:30\] $$\begin{aligned}
\mathcal{EXT}(L) {\cong}& \mathcal{EXT}({^{w{^{-1}}}}L')
\\
{\cong}& {^{w{^{-1}}}}\mathcal{EXT}(L')
\\
{\cong}& {^{ w{^{-1}}}}\mathcal{EXT}(L(\lambda))
\\
{\cong}& {^{s_{i_r}\cdots
s_{i_{2}}}}\mathcal{EXT}({^{s_{i_1}}}L(\lambda))
\\
{\cong}& {^{s_{i_r}\cdots s_{i_{2}}}}\mathcal{EXT}(L(\lambda))
\\
\vdots&
\\
{\cong}& \mathcal{EXT}(L(\lambda)).
\end{aligned}$$ So $\mathcal{EXT}(L)$ contains a simple highest weight module $L(\lambda)$. For any $x\in W$ we can do as above to show ${^x}\mathcal{EXT}(L){\cong}\mathcal{EXT}({^x}L(\lambda)){\cong}\mathcal{EXT}(L(\lambda)){\cong}\mathcal{EXT}(L)$.
\[cor:1\] Let $L$ be a simple torsion free module. Then there exists a set of commuting roots $\Sigma$ that is a basis of $Q$ with corresponding Ore subset $F_\Sigma$, a $\lambda\in X$ and $\mathbf{b}\in
({\mathbb{C}}^*)^n$ such that $-\Sigma\subset T_{L(\lambda)}$ and $L{\cong}{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_{\Sigma}}$.
By Theorem \[thm:EXT\_contains\_highest\_weight\] $\mathcal{EXT}(L){\cong}\mathcal{EXT}(L(\lambda))$ for some $\lambda\in X$. So $L$ is a $U_q$-submodule of $\mathcal{EXT}(L(\lambda))$. Let $\Sigma$ be a set of commuting roots such that $-\Sigma \subset T_{L(\lambda)}$ (exists by Lemma \[lemma:26\] by setting $w=e$, the neutral element in $W$) then $$\mathcal{EXT}(L(\lambda)) = \left( \bigoplus_{t\in ({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}} {\varphi}_{F_\Sigma,t}.L(\lambda)_{F_\Sigma} \right)^{ss}.$$ Since $L$ is simple we must have that $L$ is a submodule of ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ for some $\mathbf{b}\in({\mathbb{C}}^*)^n$. By Proposition \[prop:15\] and Lemma \[lemma:13\] $\dim
\left({\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)_\mu = \dim L_\mu$ for all $\mu \in \operatorname{wt}L$ so we have $L{\cong}{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$.
So to classify torsion free simple modules we need to classify the admissible infinite dimensional simple highest weight modules $L(\lambda)$ and then we need to determine the $t\in
({\mathbb{C}}^*)^n/q^{{\mathbb{Z}}^n}$ such that ${\varphi}_{F_\Sigma,t}.L(\lambda)_{F_\Sigma}$ is simple. Furthermore we have that if there exists an admissible infinite dimensional simple module then there exists a torsion free simple module. In the classical case torsion free modules only exist if $\mathfrak{g}$ is of type $A$ or $C$, so we expect the same to be true in the quantum group case. We show this in Section \[sec:class-admiss-modul-1\].
Classification of simple torsion free $U_q(\mathfrak{sl}_2)$-modules {#sec:A_classification_sl2}
====================================================================
In this section let $\mathfrak{g}=\mathfrak{sl}_2$. In this case there is a single simple root $\alpha$. It is natural to identify $X$ with ${\mathbb{C}}^*$ via $\lambda \mapsto \lambda(K_\alpha)$. We define $F =
F_\alpha$, $E=E_\alpha$ and $K^{\pm 1}= K_\alpha^{\pm 1}$. Let $\lambda\in {\mathbb{C}}^*\backslash\{\pm q^{{\mathbb{N}}}\}$ and consider the simple highest weight module $L(\lambda)$. Let $0\neq v_0\in
L(\lambda)_{\lambda}$. $\operatorname{wt}L(\lambda) = q^{- 2{\mathbb{N}}}\lambda$ so $L(\lambda)$ is an admissible infinite dimensional highest weight module. Thus $\mathcal{EXT}(L(\lambda))$ contains a torsion free module by Theorem \[thm:existence\_of\_torsion\_free\_modules\]. Let $b\in{\mathbb{C}}^*$. We will describe the action on the module ${\varphi}_{F,b}.L(\lambda)_{(F)}$ and determine exactly for which $b$ the module ${\varphi}_{F,b}.L(\lambda)_{(F)}$ is torsion free.
Let $v_i = F^i {\varphi}_{F,b}.v_0$ for all $i\in{\mathbb{Z}}$. Then we have for $i\in {\mathbb{Z}}$ $$\begin{aligned}
F v_i =& v_{i+1}
\\
K^{\pm 1} v_i =& q^{\mp 2i} b^{\mp 2} \lambda^{\pm 1} v_i
\\
E v_i =& \frac{(q^{i}b-q^{-i}b{^{-1}})(q^{1-i} b{^{-1}}\lambda - q^{i-1}b
\lambda{^{-1}})}{(q-q{^{-1}})^2} v_{i-1}.\end{aligned}$$ We see that ${\varphi}_{F,b}.L(\lambda)_{(F)}$ is torsion free unless $b=\pm q^{i}$ or $b=\pm q^{i}\lambda$ for some $i\in{\mathbb{Z}}$. Note also that ${\varphi}_{F,-b}={\varphi}_{F,b}$ since for all $u\in U_q(\mathfrak{sl}_2)$, ${\varphi}_{F,b}(u)$ is a Laurent polynomial in $b^2$.
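As a quick consistency check of the formulas above (not needed in what follows), one can verify the relation $EF-FE=\frac{K-K{^{-1}}}{q-q{^{-1}}}$ directly on the basis $\{v_i\}$: $$\begin{aligned}
(EF-FE) v_i =& \frac{(q^{i+1}b-q^{-i-1}b{^{-1}})(q^{-i} b{^{-1}}\lambda - q^{i}b \lambda{^{-1}}) - (q^{i}b-q^{-i}b{^{-1}})(q^{1-i} b{^{-1}}\lambda - q^{i-1}b \lambda{^{-1}})}{(q-q{^{-1}})^2} v_i
\\
=& \frac{q^{-2i}b^{-2}\lambda - q^{2i}b^{2}\lambda{^{-1}}}{q-q{^{-1}}} v_i = \frac{K-K{^{-1}}}{q-q{^{-1}}} v_i.
\end{aligned}$$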
So in this case $\mathcal{EXT}(L(\lambda))$ contains a maximum of four different simple submodules which are *not* torsion free: We have $({\varphi}_{F,\pm q^{i}}.L(\lambda)_{(F)})^{ss} {\cong}(L(\lambda)_{(F)})^{ss}{\cong}L(\lambda)\oplus
{^{s_\alpha}}L(s_\alpha.\lambda)$ (which can be seen directly from the calculations but also follows from Corollary \[cor:6\] and the fact that ${\varphi}_{F,-b}={\varphi}_{F,b}$) and $({\varphi}_{F,\pm
q^{i}\lambda}.L(\lambda)_{(F)})^{ss} {\cong}(L(s_\alpha.\lambda)_{(F)})^{ss}{\cong}L(s_\alpha.\lambda)\oplus
{^{s_\alpha}}L(\lambda)$ if $\lambda \not \in \pm q^{{\mathbb{Z}}}$.
The weights of ${\varphi}_{F,b}.L(\lambda)_{(F)}$ are $b^{-\alpha} \operatorname{wt}L(\lambda)_{(F)} = q^{2{\mathbb{Z}}} b^{-2} \lambda$. Suppose we want to find a torsion free $U_q(\mathfrak{sl}_2)$-module with integral weights. Then we just need to find $\lambda,b\in {\mathbb{C}}^*$ such that $\lambda\not \in \pm q^{{\mathbb{Z}}_{\geq 0}}$, $b\not \in \pm q^{{\mathbb{Z}}}$, $b \not \in \pm q^{{\mathbb{Z}}} \lambda$ and $b^{-2} \lambda \in q^{{\mathbb{Z}}}$. For example choose a square root $q^{1/2}$ of $q$ and set $\lambda = q^{-1}$ and $b=q^{1/2}$. Then we have a torsion free module $L={\operatorname{span}_{{\mathbb{C}}}\ensuremath{\left\{v_i|i\in {\mathbb{Z}}\right\}}}$ with action given by: $$\begin{aligned}
F v_i =& v_{i+1}
\\
K v_i =& q^{- 2i-2} v_i
\\
E v_i =& \frac{(q^{1/2+i}-q^{-1/2-i})(q^{-1/2-i} -
q^{i+1/2})}{(q-q{^{-1}})^2} v_{i-1}
\\
=& -\frac{q(q^{-i-1}-q^i)^2}{(q-q{^{-1}})^2} v_{i-1}.\end{aligned}$$ In this paper we only focus on quantized enveloping algebras over ${\mathbb{C}}$ but note that we can define, for a general field $\mathbb{F}$ with $q\in \mathbb{F}\backslash\{0\}$ a non-root of unity, a simple torsion free $U_{\mathbb{F}}(\mathfrak{sl}_2)$-module with integral weights by the above formulas (here $U_{\mathbb{F}}(\mathfrak{sl}_2)=U_A{\otimes}_A \mathbb{F}$ where $\mathbb{F}$ is considered an $A$-algebra by sending $v$ to $q$).
An example for $U_q(\mathfrak{sl}_3)$ {#sec:an-example-u_qm}
=====================================
In this section we will show how we can construct a specific torsion free simple module for $U_q(\mathfrak{sl}_3)$. In Section \[sec:type-a\_n-calc\] we classify all torsion free $U_q(\mathfrak{sl}_n)$-modules with $n\geq 3$, so this example is of course included there. If you are only interested in the general classification you can skip this section, but the calculations in this section give a taste of the calculations needed in the general case in Section \[sec:type-a\_n-calc\] and they show a phenomenon that does not happen in the classical case.
Let $\alpha_1$ and $\alpha_2$ be the two simple roots of the root system. We will consider the set of commuting roots $\Sigma = \{
\beta_1,\beta_2\}$ where $\beta_1 = \alpha_1$ and $\beta_2 =
\alpha_1+\alpha_2$. Set $F_{\beta_1}:=F_{\alpha_1}$ and $F_{\beta_2}:=
T_{s_1}(F_{\alpha_2})=F_{\alpha_2}F_{\alpha_1}-q
F_{\alpha_1}F_{\alpha_2}=[F_{\alpha_2},F_{\alpha_1}]_q$. We have $(\beta_1|\beta_2)=1$ and $0=[F_{\beta_2},F_{\beta_1}]_q =
F_{\beta_2}F_{\beta_1}-q^{-1} F_{\beta_1}F_{\beta_2}$ or equivalently $F_{\beta_1}F_{\beta_2} = q F_{\beta_2}F_{\beta_1}$. Let $\lambda\in
X$ be determined by $\lambda(K_{\alpha_1})=q^{-1}$ and $\lambda(K_{\alpha_2})=1$. Then $M(s_{\alpha_2}.\lambda)$ is a submodule of $M(\lambda)$ and $L(\lambda)=M(\lambda)/M(s_{\alpha_2}.\lambda)=M(\lambda)/M(q^{-\alpha_2}\lambda)$ is admissible of degree $1$. Let $\xi=e^{2\pi i/3}$. We will show that ${\varphi}_{F_\Sigma,(\xi,\xi)}.L(\lambda)_{F_\Sigma}$ is a torsion free module. We have here a phenomenon that does not happen in the classical case: $\operatorname{wt}L(\lambda)_{F_\Sigma}=\operatorname{wt}{\varphi}_{F_\Sigma,(\xi,\xi)}.L(\lambda)_{F_\Sigma}$ but $L(\lambda)_{F_\Sigma} \not {\cong}{\varphi}_{F_\Sigma,(\xi,\xi)}.L(\lambda)_{F_\Sigma}$ as $U_q$-modules since one is simple and torsion free and the other isn’t (compare to [@Mathieu Section 10] where Mathieu classifies the torsion free simple modules by determining for a coherent family $\mathcal{M}$ for which cosets $t\in \mathfrak{h}^*/Q$, $\mathcal{M}[t]$ is torsion free).
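For the reader's convenience, here is one way to check the $q$-commutation relation $F_{\beta_1}F_{\beta_2}=qF_{\beta_2}F_{\beta_1}$ stated above, using only the quantum Serre relation $F_{\alpha_1}^2F_{\alpha_2}-(q+q{^{-1}})F_{\alpha_1}F_{\alpha_2}F_{\alpha_1}+F_{\alpha_2}F_{\alpha_1}^2=0$: $$\begin{aligned}
F_{\alpha_1}F_{\beta_2}-qF_{\beta_2}F_{\alpha_1} =& F_{\alpha_1}(F_{\alpha_2}F_{\alpha_1}-qF_{\alpha_1}F_{\alpha_2}) - q(F_{\alpha_2}F_{\alpha_1}-qF_{\alpha_1}F_{\alpha_2})F_{\alpha_1}
\\
=& -q\left( F_{\alpha_1}^2F_{\alpha_2}-(q+q{^{-1}})F_{\alpha_1}F_{\alpha_2}F_{\alpha_1}+F_{\alpha_2}F_{\alpha_1}^2 \right) = 0.
\end{aligned}$$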
We will show that $E_{\alpha_1}$ and $E_{\alpha_2}$ act injectively on the module ${\varphi}_{F_\Sigma,(\xi,\xi)}.L(\lambda)_{F_\Sigma}$. So we need to calculate ${\varphi}_{F_\Sigma,(\xi,\xi)}(E_{\alpha_1})$ and ${\varphi}_{F_\Sigma,(\xi,\xi)}(E_{\alpha_2})$. ${\varphi}_{F_\Sigma,(\xi,\xi)}={\varphi}_{F_{\beta_1},\xi}\circ
{\varphi}_{F_{\beta_2},\xi}$. We have $$\begin{aligned}
[E_{\alpha_1},F_{\beta_2}] =&
F_{\alpha_2}[E_{\alpha_1},F_{\alpha_1}] - q
[E_{\alpha_1},F_{\alpha_1}] F_{\alpha_2}
\\
=& F_{\alpha_2} \frac{K_{\alpha_1} - K_{\alpha_1}{^{-1}}}{q-q{^{-1}}} - q
F_{\alpha_2} \frac{q K_{\alpha_1} - q{^{-1}}K_{\alpha_1}{^{-1}}}{q-q{^{-1}}}
\\
=& F_{\alpha_2} \frac{K_{\alpha_1} - q^2 K_{\alpha_1}}{q-q{^{-1}}}
\\
=& -F_{\alpha_2} q \frac{q-q{^{-1}}}{q-q{^{-1}}}K_{\alpha_1}
\\
=& -q F_{\alpha_2} K_{\alpha_1}.\end{aligned}$$ We can show by induction that $$\begin{aligned}
[E_{\alpha_1}, F_{\beta_2}^j] =& - q^{2-j} [j] F_{\beta_2}^{j-1}
F_{\alpha_2}K_{\alpha_1}\end{aligned}$$ for any $j\in {\mathbb{N}}$. Using that ${\varphi}_{F_{\beta_2},b}(E_{\alpha_1})$ is Laurent polynomial and equal to $F_{\beta_2}^{-j} E_{\alpha_1}
F_{\beta_2}^j$ for $b=q^j$ we get $$\begin{aligned}
{\varphi}_{F_{\beta_2},b}(E_{\alpha_1}) =& E_{\alpha_1} - q^2 b{^{-1}}\frac{b - b{^{-1}}}{q-q{^{-1}}} F_{\beta_2}{^{-1}}F_{\alpha_2} K_{\alpha_1}.\end{aligned}$$ We have $F_{\beta_2}F_{\beta_1}=q{^{-1}}F_{\beta_1}F_{\beta_2}$ so $F_{\beta_1}^{-i}F_{\beta_2}F_{\beta_1}^i=q^{-i}F_{\beta_2}$ thus ${\varphi}_{F_{\beta_1},b}(F_{\beta_2}^{-1}) = b F_{\beta_2}^{- 1}$. We have $$\begin{aligned}
{\varphi}_{F_{\alpha_1},b}(F_{\alpha_2}) =& b F_{\alpha_2} - \frac{b -
b{^{-1}}}{q - q{^{-1}}} F_{\alpha_1}{^{-1}}( q F_{\alpha_1} F_{\alpha_2} -
F_{\alpha_2} F_{\alpha_1}) \\=& b F_{\alpha_2} +
\frac{b-b{^{-1}}}{q-q{^{-1}}} F_{\alpha_1}{^{-1}}F_{\beta_2}\end{aligned}$$ and $$\begin{aligned}
{\varphi}_{F_{\beta_1},b_1}&( {\varphi}_{F_{\beta_2},b_2}(E_{\alpha_1}))
\\
=& {\varphi}_{F_{\alpha_1},b_1}\left( E_{\alpha_1} - q^2b_2{^{-1}}\frac{b_2- b_2{^{-1}}}{q-q{^{-1}}} F_{\beta_2}{^{-1}}F_{\alpha_2}
K_{\alpha_1} \right)
\\
=& E_{\alpha_1} + F_{\alpha_1}{^{-1}}\frac{(b_1-b_1{^{-1}})(qb_1{^{-1}}K_{\alpha_1} - q{^{-1}}b_1 K_{\alpha_1}{^{-1}})}{(q-q{^{-1}})^2}
\\
&- q^2 b_2{^{-1}}\frac{b_2 - b_2{^{-1}}}{q-q{^{-1}}} b_1 F_{\beta_2}{^{-1}}\left( b_1 F_{\alpha_2} + \frac{b_1-b_1{^{-1}}}{q-q{^{-1}}}
F_{\alpha_1}{^{-1}}F_{\beta_2} \right) b_1^{-2} K_{\alpha_1}
\\
=& E_{\alpha_1} + F_{\alpha_1}{^{-1}}\frac{(b_1-b_1{^{-1}})(qb_1{^{-1}}K_{\alpha_1} - q{^{-1}}b_1 K_{\alpha_1}{^{-1}})}{(q-q{^{-1}})^2}
\\
&- q^2 b_2{^{-1}}\frac{b_2 - b_2{^{-1}}}{q-q{^{-1}}} F_{\beta_2}{^{-1}}F_{\alpha_2} K_{\alpha_1}
\\
& - q b_1{^{-1}}b_2{^{-1}}\frac{(b_2 - b_2{^{-1}})(b_1-b_1{^{-1}})}{(q-q{^{-1}})^2} F_{\alpha_1}{^{-1}}K_{\alpha_1}
\\
=& E_{\alpha_1} + b_2{^{-1}}F_{\alpha_1}{^{-1}}\frac{(b_1-b_1{^{-1}})(q
b_1{^{-1}}b_2^{-1}K_{\alpha_1} - q{^{-1}}b_1b_2
K_{\alpha_1}{^{-1}})}{(q-q{^{-1}})^2}
\\
&- q^2 b_2{^{-1}}\frac{b_2 - b_2{^{-1}}}{q-q{^{-1}}} F_{\beta_2}{^{-1}}F_{\alpha_2} K_{\alpha_1}.\end{aligned}$$
Let $v_\lambda'$ be a highest weight vector in $L(\lambda)$ and set $v_\lambda = 1{\otimes}v_\lambda'\in L(\lambda)_{F_\Sigma}$. We have $F_{\alpha_2}v_\lambda = 0$ by construction so we have $$\begin{aligned}
{\varphi}_{F_\Sigma,(b_1,b_2)}(E_{\alpha_1})v_\lambda =& b_2{^{-1}}\frac{(b_1-b_1{^{-1}})(b_1{^{-1}}b_2{^{-1}}- b_1 b_2)}{(q-q{^{-1}})^2}
F_{\alpha_1}{^{-1}}v_\lambda.\end{aligned}$$
${\varphi}_{F_\Sigma,(c_1,c_2)}.L(\lambda)_{F_\Sigma}$ is spanned by the elements $F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(c_1,c_2)}.v_\lambda$, $i,j\in {\mathbb{Z}}$ because every weight space is one-dimensional and $F_{\beta_1}^iF_{\beta_2}^j$ acts injectively. Since $$\begin{aligned}
F_{\beta_2}^{-j} F_{\beta_1}^{-i} E_{\alpha_1} F_{\beta_1}^i
F_{\beta_2}^j =& F_{\beta_1}^{-i} F_{\beta_2}^{-j} E_{\alpha_1}
F_{\beta_2}^j F_{\beta_1}^{i}
\\
=&{\varphi}_{F_{\beta_1},q^i}({\varphi}_{F_{\beta_2},q^j}(E_{\alpha_1}))
\\
=& {\varphi}_{F_\Sigma,(q^i,q^j)}(E_{\alpha_1})\end{aligned}$$ we have $$\begin{aligned}
E_{\alpha_1} F_{\beta_1}^i
F_{\beta_2}^j&{\varphi}_{F_\Sigma,(c_1,c_2)}.v_\lambda
\\
=& F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(q^i,q^j)}(E_{\alpha_1})
{\varphi}_{F_\Sigma,(c_1,c_2)}.v_\lambda
\\
=& F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(c_1,c_2)}. {\varphi}_{F_\Sigma,(q^i c_1,q^j
c_2)}(E_{\alpha_1}) v_\lambda
\\
=& q^{-j}c_2{^{-1}}\frac{(q^ic_1-q^{-i}c_1{^{-1}})(q^{-i-j}c_1{^{-1}}c_2{^{-1}}- q^{i+j}c_1 c_2)}{(q-q{^{-1}})^2}F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(c_1,c_2)}. F_{\alpha_1}{^{-1}}v_\lambda
\\
=& \frac{(q^ic_1-q^{-i}c_1{^{-1}})(q^{-i-j}c_1{^{-1}}c_2{^{-1}}- q^{i+j}c_1
c_2)}{(q-q{^{-1}})^2}F_{\beta_1}^{i-1} F_{\beta_2}^j
{\varphi}_{F_\Sigma,(c_1,c_2)}. v_\lambda.\end{aligned}$$ This is only zero when $c_1 = \pm q^{-i}$ or $c_1c_2 = \pm
q^{-i-j}$. Set $c_1=c_2=e^{2\pi i/3}=:\xi$. Then we have shown that $E_{\alpha_1}$ acts injectively on ${\varphi}_{F_\Sigma,(\xi,\xi)}.L(\lambda)_{F_\Sigma}$.
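To see that these conditions can never hold for $c_1=c_2=\xi$, note that $\xi$ and $\xi^2$ are roots of unity different from $\pm 1$, whereas $\pm q^{m}$ is a root of unity only when $m=0$ (since $q$ is not a root of unity). The same observation applies to the condition $c_2=\pm q^{-j}$ appearing below.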
Now we will show that $E_{\alpha_2}$ acts injectively on ${\varphi}_{F_\Sigma,(\xi,\xi)}.L(\lambda)_{F_\Sigma}$. We can show by induction that $$\begin{aligned}
[E_{\alpha_2},F_{\beta_2}^j] =& [j]
F_{\alpha_1}F_{\beta_2}^{j-1}K_{\alpha_2}{^{-1}}\end{aligned}$$ so ${\varphi}_{F_{\beta_2},b}(E_{\alpha_2}) = E_{\alpha_2} + b
\frac{b-b{^{-1}}}{q-q{^{-1}}} F_{\alpha_1} F_{\beta_2}{^{-1}}K_{\alpha_2}{^{-1}}$ and $$\begin{aligned}
{\varphi}_{F_\Sigma,(b_1,b_2)}(E_{\alpha_2}) =&
{\varphi}_{F_{\beta_1},b_1}({\varphi}_{F_{\beta_2},b_2}(E_{\alpha_2}))
\\
=& E_{\alpha_2} + b_2 \frac{b_2-b_2{^{-1}}}{q-q{^{-1}}} F_{\alpha_1}
F_{\beta_2}{^{-1}}K_{\alpha_2}{^{-1}}.\end{aligned}$$ Thus $$\begin{aligned}
E_{\alpha_2} F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(c_1,c_2)}.v_\lambda =& F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(q^i,q^j)}(E_{\alpha_2})
{\varphi}_{F_\Sigma,(c_1,c_2)}.v_\lambda
\\
=& F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(c_1,c_2)}. {\varphi}_{F_\Sigma,(q^i c_1,q^j
c_2)}(E_{\alpha_2})v_\lambda
\\
=&F_{\beta_1}^i F_{\beta_2}^j {\varphi}_{F_\Sigma,(c_1,c_2)}. c_2
\frac{q^jc_2-q^{-j}c_2{^{-1}}}{q-q{^{-1}}} F_{\alpha_1}F_{\beta_2}{^{-1}}K_{\alpha_2}{^{-1}}v_\lambda
\\
=& q^{-j-1} c_2 \frac{q^jc_2-q^{-j}c_2{^{-1}}}{q-q{^{-1}}}
F_{\beta_1}^{i+1} F_{\beta_2}^{j-1} {\varphi}_{F_\Sigma,(c_1,c_2)}.
v_\lambda.\end{aligned}$$ We see that this is zero only if $c_2 = \pm q^{-j}$, so again setting $c_1=c_2=\xi$ ensures that it is nonzero.
We have shown that the $U_q$-module ${\varphi}_{F_\Sigma,(\xi,\xi)}.L(\lambda)_{F_\Sigma}$ has a basis $F_{\beta_1}^i F_{\beta_2}^j {\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda$, $i,j\in {\mathbb{Z}}$ and we have $$\begin{aligned}
F_{\beta_1} F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda =& F_{\beta_1}^{i+1}
F_{\beta_2}^{j-1} {\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda
\\
F_{\beta_2} F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda =& q^{-j}F_{\beta_1}^{i}
F_{\beta_2}^{j+1} {\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda
\\
E_{\alpha_1} F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda =& C_1 F_{\beta_1}^{i-1}
F_{\beta_2}^{j} {\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda
\\
E_{\alpha_1} E_{\alpha_2} F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda =& C_2 F_{\beta_1}^{i}
F_{\beta_2}^{j-1} {\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda\end{aligned}$$ for some nonzero constants $C_1,C_2\in {\mathbb{C}}^*$. We see that any of the basis vectors $F_{\beta_1}^i F_{\beta_2}^j
{\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda$ can be mapped injectively to any other basis vector $F_{\beta_1}^{i'} F_{\beta_2}^{j'}
{\varphi}_{F_\Sigma,(\xi,\xi)}.v_\lambda$ by elements of $U_q$ so ${\varphi}_{F_\Sigma,(\xi,\xi)}.L(\lambda)_{F_\Sigma}$ is a simple module. The module is torsion free by Proposition \[prop:3\].
Classification of admissible simple highest weight modules
==========================================================
Preliminaries {#sec:class-admiss-simple}
-------------
In this section we prove some preliminary results with the goal of classifying all admissible simple highest weight modules. We only focus on non-integral weights since we have the following theorem from [@CatO]:
\[thm:integral\] Assume $q\in {\mathbb{C}}\backslash\{0\}$ is transcendental. Let $\lambda:U_q^0 \to {\mathbb{C}}$ be a weight such that $\lambda(K_\alpha)=
q_\alpha^i$ for some $i\in{\mathbb{Z}}$ for every $\alpha\in \Pi$ - i.e. $\lambda \in q^Q$. Say $\lambda= q^\mu$, $\mu\in Q$. Let $L_{{\mathbb{C}}}(\mu)$ denote the simple highest weight $\mathfrak{g}$-module of highest weight $\mu$. Then the characters of $L(\lambda)$ and $L_{\mathbb{C}}(\mu)$ are equal - i.e. for any $\nu\in Q$, $\dim L(\lambda)_{q^\nu\lambda} = \dim L_{{\mathbb{C}}}(\mu)_{\nu+\mu}$.
[@CatO Corollary 6.3].
Extending to modules which are not of type 1 is done in the usual way (cf. e.g. [@Jantzen Section 5.1–5.2]). The above theorem implies that, when $q$ is transcendental, the integral admissible simple highest weight modules can be classified from the classification of the classical admissible simple highest weight modules. Hence in this case we only need to consider weights $\lambda\in X$ such that $\lambda(K_\alpha) \not
\in \pm q^{{\mathbb{Z}}}$ for at least one $\alpha\in \Pi$. *So in the rest of the paper we will restrict our attention to the case when $q$ is transcendental*. If a similar theorem holds for an arbitrary $q$ that is not a root of unity, then the results in this paper extend to all such $q$, but the author is not aware of such a result.
\[thm:Jantzen\_filtration\] Let $\lambda\in X$. Then there exists a filtration of $M(\lambda)$, $M(\lambda) \supset M_1 \supset \dots \supset M_r$ such that $M_1$ is the unique maximal submodule of $M(\lambda)$ and $$\sum_{i=1}^r \operatorname{ch}M_i = \sum_{\substack{\beta\in \Phi^+ \\q^\rho\lambda(K_\beta)\in \pm q_\beta^{{\mathbb{Z}}_{>0}}}} \operatorname{ch}M(s_{\beta}.\lambda)$$
The filtration is called the Jantzen filtration and the formula is called the Jantzen sum formula.
This is proved in [@Joseph Section 4.1.2-4.1.3]. A proof using twisting functors can also be found in [@DHP-twist Theorem 6.3].
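For orientation, consider the rank one case $\mathfrak{g}=\mathfrak{sl}_2$, where $\Phi^+=\{\alpha\}$: the right hand side of the sum formula is either empty or equal to $\operatorname{ch}M(s_\alpha.\lambda)$. In particular, if $q^\rho\lambda(K_\alpha)\not\in \pm q_\alpha^{{\mathbb{Z}}_{>0}}$ then $\sum_{i=1}^r \operatorname{ch}M_i=0$, so $M_1=0$ and $M(\lambda)=L(\lambda)$ is simple. This is how the sum formula is used repeatedly below.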
Let $\lambda\in X$. $$A(\lambda) = \{ \alpha \in \Pi | \lambda(K_\alpha) \not \in \pm q_{\alpha}^{{\mathbb{N}}} \}.$$ Let $\gamma\in \Pi$. $$D(\gamma) = \{\beta \in \Phi^+|\beta=\sum_{\alpha\in\Pi} m_\alpha \alpha, \, m_\gamma>0\}.$$
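For instance, for $\mathfrak{g}=\mathfrak{sl}_3$ with simple roots $\alpha_1,\alpha_2$ we have $\Phi^+=\{\alpha_1,\alpha_2,\alpha_1+\alpha_2\}$ and $D(\alpha_1)=\{\alpha_1,\alpha_1+\alpha_2\}$, which already generates $Q$; Lemma \[lemma:38\] below shows that this is a general phenomenon.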
\[lemma:18\] Let $\lambda\in X$. Let $\gamma\in \Pi$ be such that $\gamma\in
A(\lambda)$. Then $-D(\gamma)\subset T_{L(\lambda)}$.
Let $\beta=\sum_{\alpha\in \Pi}m_\alpha \alpha\in D(\gamma)$. We prove by induction over $\operatorname{ht}\beta = \sum_{\alpha\in \Pi}
m_\alpha$ that $-\beta \in T_{L(\lambda)}$. If $\operatorname{ht}\beta=1$ then $\beta=\gamma$ and $-\gamma\in T_{L(\lambda)}$ by Proposition \[prop:9\].
Assume $\operatorname{ht}\beta >1$. Then $\beta-\alpha \in \Phi^+$ for some $\alpha\in \Pi$. We have either $\alpha=\gamma$ or $\beta-\alpha\in
D(\gamma)$. In either case we get $\beta = \beta'+\beta''$ for some $\beta',\beta''\in \Phi^+$ with $\beta'\in D(\gamma)$ and $\operatorname{ht}\beta' < \operatorname{ht}\beta$. By induction $-\beta'\in
T_{L(\lambda)}$. If $-\beta \in F_{L(\lambda)}$ then $-\beta' =
-\beta + \beta'' \in F_{L(\lambda)}$ since $\Phi^+\subset
F_{L(\lambda)}$ and $F_{L(\lambda)}$ is closed (Proposition \[prop:8\]). A contradiction. So $-\beta\in
T_{L(\lambda)}$.
\[lemma:38\] Let $\gamma\in \Pi$. $D(\gamma)$ generates $Q$.
Let $\left< D(\gamma)\right>$ be the subgroup of $Q$ generated by $D(\gamma)$. Assume $\Pi\cap \left<D(\gamma)\right>\neq \Pi$. Let $\alpha\not \in \left< D(\gamma) \right>$ be a simple root that is connected to an $\alpha'\in \left< D(\gamma) \right>$ (possible since the Dynkin diagram of a simple Lie algebra is connected). Then $\alpha+\alpha'\in \left<D(\gamma)\right>$. But then $\alpha =
\alpha+\alpha'-\alpha' \in \left<D(\gamma)\right>$. A contradiction. So $\left<D(\gamma)\right>=Q$.
\[lemma:34\] Let $\lambda\in X$ be a non-integral weight. Assume that $L(\lambda)$ is admissible. Then $A(\lambda)$ is connected and $|A(\lambda)|\leq 2$.
Assume $|A(\lambda)|\geq 2$. Let $\alpha,\alpha'\in A(\lambda)$ be two distinct elements. We will show that $\alpha$ and $\alpha'$ are connected. So assume $(\alpha|\alpha')=0$ to reach a contradiction. $L(\lambda)$ is admissible of some degree $d$. By Lemma \[lemma:30\] and Proposition \[prop:15\] ${^{s_\alpha}}L(s_\alpha.\lambda)$ is admissible of the same degree $d$ ($L(s_\alpha.\lambda)$ is infinite dimensional since $s_\alpha.\lambda(K_{\alpha'})=\lambda(K_{\alpha'})\not \in \pm
q_{\alpha'}^{{\mathbb{N}}}$). Let $\Sigma$ be a set of commuting roots that is a basis of $Q$ such that $\alpha\in \Sigma$ and $-\Sigma\subset
T_{L(\lambda)}$ (Lemma \[lemma:26\]). By Proposition \[prop:9\] ${^{s_\alpha}}L(s_\alpha.\lambda)$ is a subquotient of $L(\lambda)_{F_\Sigma}$. We claim that ${\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L(\lambda))\cap
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}({^{s_\alpha}}L(s_\alpha.\lambda))\neq \emptyset$. If this is true then we have for $\nu \in {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L(\lambda))\cap
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}({^{s_\alpha}}L(s_\alpha.\lambda))$, $L(\lambda)_\nu {\cong}(L(\lambda)_{F_\Sigma})_\nu {\cong}({^{s_\alpha}}L(s_\alpha.\lambda))_\nu$ as $(U_q)_0$-modules by Lemma \[lemma:13\]. But then by Theorem \[thm:Lemire\] $L(\lambda){\cong}{^{s_{\alpha}}}L(s_\alpha.\lambda)$ which is clearly a contradiction by looking at the weights of the modules. So we will prove the claim that ${\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L(\lambda))\cap
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}({^{s_\alpha}}L(s_\alpha.\lambda))\neq \emptyset$:
We have $-D(\alpha')\subset T_{L(\lambda)}$ and $-D(\alpha') \subset
T_{{^{s_\alpha}}L(s_\alpha.\lambda)}=s_\alpha(T_{L(s_\alpha.\lambda)})$ by Lemma \[lemma:18\] and the fact that $(\alpha|\alpha')=0$. So $-D(\alpha')\subset C(L(\lambda)) \cap
C({^{s_\alpha}}L(s_\alpha.\lambda))$ thus $C(L(\lambda)) \cap
C({^{s_\alpha}}L(s_\alpha.\lambda))$ generate $Q$ by Lemma \[lemma:38\]. This implies that $C(L(\lambda))-C({^{s_\alpha}}L(s_\alpha.\lambda))=Q$. The weights of $L(\lambda)$ and ${^{s_\alpha}}L(s_\alpha.\lambda)$ are contained in $q^Q \lambda$ so a weight in the essential support of $L(\lambda)$ (resp. ${^{s_\alpha}}L(s_\alpha.\lambda)$) is of the form $q^{\mu_1}\lambda$ (resp. $q^{\mu_2}\lambda$) for some $\mu_1,\mu_2 \in Q$. By the above $q^{C(L(\lambda))+\mu_1}\lambda
\cap q^{C({^{s_\alpha}}L(s_\alpha.\lambda))+\mu_2}\lambda\neq
\emptyset$. Since $q^{C(L(\lambda))+\mu_1}\lambda \subset
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L(\lambda))$ and $q^{C({^{s_\alpha}}L(s_\alpha.\lambda))+\mu_2}\lambda \subset
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}({^{s_\alpha}}L(s_\alpha.\lambda))$ we have proved the claim.
So we have proved that any two roots of $A(\lambda)$ are connected. Since there are no cycles in the Dynkin diagram of a simple Lie algebra we get $|A(\lambda)|\leq 2$.
Rank $2$ calculations {#sec:rank-2-calculations}
---------------------
Following the procedure in [@Mathieu Section 7] we classify admissible simple highest weight modules in rank $2$ in order to classify the modules in higher ranks. We only consider non-integral weights because of Theorem \[thm:integral\]. We assume that $q$ is transcendental over ${\mathbb{Q}}$.
\[lemma:8\] Assume $\mathfrak{g}=\mathfrak{sl}_3$. Let $\lambda\in X$ be a non-integral weight. The module $L(\lambda)$ is admissible if and only if $q^\rho\lambda(K_{\beta})\in \pm q^{{\mathbb{Z}}_{>0}}$ for at least one root $\beta\in \Phi^+$.
It is easy to show that the Verma module $M(\lambda)$ is not admissible. So $q^\rho\lambda(K_{\beta})\in \pm q^{{\mathbb{Z}}_{>0}}$ for at least one root $\beta\in \Phi^+$ by Theorem \[thm:Jantzen\_filtration\]. On the other hand suppose $q^\rho\lambda(K_{\beta})\in \pm q^{{\mathbb{Z}}_{>0}}$ for at least one root $\beta\in \Phi^+$. If $q^\rho\lambda(K_{\alpha})\in \pm
q^{{\mathbb{Z}}_{>0}}$ for a simple root $\alpha\in \Pi$ then by easy calculations we see that $M(s_\alpha.\lambda)$ is a submodule of $M(\lambda)$. If $q^\rho\lambda(K_{\alpha})\not\in \pm
q^{{\mathbb{Z}}_{>0}}$ for both simple roots $\alpha\in \Pi$ then we get that $M(s_\beta.\lambda)$ is a submodule by Theorem \[thm:Jantzen\_filtration\]. So in both cases we have a submodule $M(s_\beta.\lambda)$ of $M(\lambda)$. Since $L(\lambda)$ is the unique simple quotient of $M(\lambda)$, $L(\lambda)$ is a subquotient of $M(\lambda)/M(s_\beta.\lambda)$. Since $M(\lambda)/M(s_\beta.\lambda)$ is admissible we see that $L(\lambda)$ is admissible as well.
\[lemma:14\] Assume $\mathfrak{g}$ is of type $C_2$ (i.e. $\mathfrak{g}=\mathfrak{sp}(4)$). Let $\Pi=\{\alpha_1,\alpha_2\}$ where $\alpha_1$ is short and $\alpha_2$ is long. Let $\lambda\in X$ be a non-integral weight. The module $L(\lambda)$ is infinite dimensional and admissible if and only if $q^\rho\lambda(K_{\alpha_1}),q^\rho\lambda(K_{\alpha_1+\alpha_2})\in
\pm q^{{\mathbb{Z}}_{>0}}$ and $\lambda(K_{\alpha_2}),\lambda(K_{2\alpha_1+\alpha_2})\in \pm
q^{1+2{\mathbb{Z}}} (= \pm q_{\alpha_2}^{1/2+{\mathbb{Z}}}=\pm
q_{2\alpha_1+\alpha_2}^{1/2+{\mathbb{Z}}})$.
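For the reader's convenience we recall that, with $\alpha_1$ short and $\alpha_2$ long, the positive roots of $C_2$ are $\alpha_1$, $\alpha_2$, $\alpha_1+\alpha_2$ and $2\alpha_1+\alpha_2$, of which $\alpha_1$ and $\alpha_1+\alpha_2$ are short while $\alpha_2$ and $2\alpha_1+\alpha_2$ are long. The lemma thus prescribes the behaviour of $\lambda$ on all four positive roots, and the proof below passes between them using $K_{\alpha_1+\alpha_2}=K_{\alpha_1}K_{\alpha_2}$ and $K_{2\alpha_1+\alpha_2}=K_{\alpha_1}^2K_{\alpha_2}$.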
Theorem \[thm:Jantzen\_filtration\] implies that $q^\rho
\lambda(K_{\beta}) \in q_{\beta}^{{\mathbb{Z}}_{>0}}$ for at least two $\beta\in \Phi^+$ because otherwise $L(\lambda) =
M(\lambda)/M(s_\beta.\lambda)$ for some $\beta\in \Phi^+$. But $M(\lambda)/M(s_\beta.\lambda)$ is not admissible. Since $\lambda$ is not integral we know $q^\rho \lambda(K_\alpha)\not \in
q_{\alpha}^{{\mathbb{Z}}}$ for some $\alpha\in \Pi$. Suppose $\lambda(K_{\alpha_1})\not \in \pm q_{\alpha_1}^{{\mathbb{Z}}}$. We split into cases and arrive at a contradiction in both cases: If $\lambda(K_{\alpha_2})\not \in \pm q_{\alpha_2}^{{\mathbb{Z}}_{>0}}$ then by the above $q^\rho \lambda(K_{\alpha_1+\alpha_2})\in \pm
q_{\alpha_1+\alpha_2}^{{\mathbb{Z}}_{>0}}=\pm q^{{\mathbb{Z}}_{>0}}$ and $q^\rho
\lambda(K_{2\alpha_1+\alpha_2}) \in \pm
q_{2\alpha_1+\alpha_2}^{{\mathbb{Z}}_{>0}}=\pm q^{2{\mathbb{Z}}_{>0}}$ which implies that $q^\rho \lambda(K_{\alpha_1}) = q^\rho \lambda(
K_{2\alpha_1+\alpha_2} K_{\alpha_1+\alpha_2}{^{-1}}) \in \pm
q^{{\mathbb{Z}}}=\pm q_{\alpha_1}^{{\mathbb{Z}}}$. A contradiction.
The other case is $q^\rho \lambda(K_{\alpha_2}) \in \pm
q_{\alpha_2}^{{\mathbb{Z}}_{>0}}=\pm q^{2{\mathbb{Z}}_{>0}}$: In this case we get $\lambda(K_{\alpha_1+\alpha_2}) \not \in \pm q^{{\mathbb{Z}}} = \pm
q_{\alpha_1+\alpha_2}^{{\mathbb{Z}}}$ so the last root, $2\alpha_1+\alpha_2$, must satisfy that $q^\rho\lambda(K_{2\alpha_1+\alpha_2}) \in \pm
q_{2\alpha_1+\alpha_2}^{{\mathbb{Z}}_{>0}}=\pm q^{2{\mathbb{Z}}_{>0}}$. But this implies that $\lambda(K_{\alpha_1})^2 =
\lambda(K_{2\alpha_1+\alpha_2}K_{\alpha_2}{^{-1}})\in \pm q^{2{\mathbb{Z}}}$ which implies that $\lambda(K_{\alpha_1}) \in \pm q^{{\mathbb{Z}}}$. A contradiction.
So $\lambda(K_{\alpha_1})\in \pm q^{{\mathbb{Z}}}$. Since $\lambda$ is not integral we get $\lambda(K_{\alpha_2})\not \in \pm
q_{\alpha_2}^{{\mathbb{Z}}}=\pm q^{2{\mathbb{Z}}}$. This implies that $\lambda(K_{2\alpha_1+\alpha_2})\not \in \pm q^{2{\mathbb{Z}}} = \pm
q_{2\alpha_1+\alpha_2}^{{\mathbb{Z}}}$. Since $q^\rho \lambda(K_{\beta})
\in \pm q_{\beta}^{{\mathbb{Z}}_{>0}}$ for at least two $\beta\in \Phi^+$ we get $q^{\rho}\lambda(K_{\alpha_1}) \in \pm q^{{\mathbb{Z}}_{>0}}$ and $q^{\rho}\lambda(K_{\alpha_1+\alpha_2}) \in \pm
q^{{\mathbb{Z}}_{>0}}$. This in turn implies that $\lambda(K_{\alpha_2}) =
\lambda(K_{\alpha_1+\alpha_2}K_{\alpha_1}{^{-1}}) \in \pm
q^{{\mathbb{Z}}}$. Since $\lambda(K_{\alpha_2})\not \in \pm q^{2{\mathbb{Z}}}$ we get $\lambda(K_{\alpha_2}) \in \pm q^{1+2{\mathbb{Z}}}$. Similarly $\lambda(K_{2\alpha_1+\alpha_2})=\lambda(K_{\alpha_1+\alpha_2}K_{\alpha_1})\in
\pm q^{1+2{\mathbb{Z}}}$. So we have shown the only if part.
Assume $\lambda$ is as required in the lemma. We will show that $L(\lambda)$ is admissible. By Theorem \[thm:Jantzen\_filtration\] we see that the composition factors of $M(s_{\alpha_1}.\lambda)$ are $L(s_{\alpha_1}.\lambda)$ and $L(s_{\alpha_1+\alpha_2}s_{\alpha_1}.\lambda) = M(w_0.\lambda)$ and the composition factors of $M(s_{\alpha_1+\alpha_2}.\lambda)$ are $L(s_{\alpha_1+\alpha_2}.\lambda)$ and $L(s_{\alpha_1}s_{\alpha_1+\alpha_2}.\lambda)=M(w_0.\lambda)$. So $$\sum_{\substack{\beta\in \Phi^+ \\q^\rho\lambda(K_\beta)\in \pm q_\beta^{{\mathbb{Z}}_{>0}}}} \operatorname{ch}M(s_{\beta}.\lambda) = \operatorname{ch}L(s_{\alpha_1}.\lambda)+\operatorname{ch}L(s_{\alpha_1+\alpha_2}.\lambda) + 2 \operatorname{ch}L(w_0.\lambda).$$ So the composition factors of the maximal submodule of $M(\lambda)$ are $L(s_{\alpha_1}.\lambda)$, $L(s_{\alpha_1+\alpha_2}.\lambda)$ and $L(w_0.\lambda)$. The worst case scenario is that each occurs with multiplicity one. In this case the character of $L(\lambda)$ is $$\begin{aligned}
\operatorname{ch}M(\lambda)& - \operatorname{ch}L(s_{\alpha_1}.\lambda) - \operatorname{ch}L(s_{\alpha_1+\alpha_2}.\lambda) - \operatorname{ch}L(w_0.\lambda)
\\
=& \operatorname{ch}M(\lambda) - \operatorname{ch}M(s_{\alpha_1}.\lambda) - \operatorname{ch}M(s_{\alpha_1+\alpha_2}.\lambda) + \operatorname{ch}M(w_0.\lambda)
\end{aligned}$$ The characters of Verma modules are known and by an easy calculation it is seen that this would imply $L(\lambda)$ is admissible (cf. the proof of Lemma 7.2 in [@Mathieu]).
Type A, D, E {#sec:class-admiss-modul}
------------
In this section we complete the classification of all simple admissible highest weight modules when the Dynkin diagram of $\mathfrak{g}$ is simply laced. In particular we show that $\mathfrak{g}$ does not admit infinite dimensional simple admissible modules when $\mathfrak{g}$ is of type D and E. In Section \[sec:class-admiss-modul-1\] we show that the same is the case when $\mathfrak{g}$ is of type $B$ or $F$. Combining this and Section \[sec:class-admiss-modul-1\] we get that $\mathfrak{g}$ admits infinite dimensional simple admissible modules if and only if $\mathfrak{g}$ is of type $A$ or $C$. Remember that we restrict our attention to transcendental $q$ and to non-integral weights because of Theorem \[thm:integral\].
Let $\lambda:U_q^0 \to {\mathbb{C}}$ be a weight. In the Dynkin diagram of $\mathfrak{g}$ let any node corresponding to $\alpha \in \Pi\cap
A(\lambda)$ be written as $\circ$ and every other as $\bullet$. e.g. if $\mathfrak{g}=\mathfrak{sl}_3$ and $|A(\lambda)|=1$ then the graph corresponding to $\lambda$ would look like this: $$\xymatrix{ \bullet \ar@{-}[r] & \circ}$$
We call this the colored Dynkin diagram corresponding to $\lambda$.
In this way we get a ’coloring’ of the Dynkin diagram for every $\lambda$.
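For instance, if $\mathfrak{g}=\mathfrak{sl}_4$ and $A(\lambda)=\{\alpha_2,\alpha_3\}$ then the colored Dynkin diagram of $\lambda$ is $$\xymatrix{ \bullet \ar@{-}[r] & \circ \ar@{-}[r] & \circ}$$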
\[lemma:39\] Let $\lambda\in X$ be a non-integral weight such that $L(\lambda)$ is admissible. If the colored Dynkin diagram of $\lambda$ contains $$\xymatrix{ \stackrel{\alpha'}{\circ} \ar@{-}[r] & \stackrel{\alpha}{\circ}}$$ as a subdiagram then $q^\rho\lambda(K_{\alpha+\alpha'})\in \pm
q_{\alpha'+\alpha}^{{\mathbb{Z}}_{>0}}$.
Let $v_\lambda$ be a highest weight vector of $L(\lambda)$. Let $\mathfrak{s}$ be the Lie algebra $\mathfrak{sl}_3$ with $\alpha$ and $\alpha'$ as simple roots. Let $U$ be the subalgebra of $U_q$ generated by $F_{\alpha},F_{\alpha'},K_{\alpha}^{\pm
1},K_{\alpha'}^{\pm 1},E_{\alpha},E_{\alpha'}$. Then $U {\cong}U_{q_{\alpha}}(\mathfrak{s})$ as algebras and $U v_\lambda$ contains the simple highest weight $U_{q_\alpha}(\mathfrak{s})$-module $L(\lambda,\mathfrak{s})$ of highest weight $\lambda$ (restricted to $U_{q_\alpha}^0(\mathfrak{s})$) as a subquotient. Since $L(\lambda)$ is admissible so is $U v_\lambda$ hence $L(\lambda,\mathfrak{s})$ is admissible. Then Lemma \[lemma:8\] implies that $q^\rho
\lambda(K_{\alpha+\alpha'})\in \pm q_\alpha^{{\mathbb{Z}}_{>0}}$.
\[lemma:33\] Let $\lambda\in X$ be a non-integral weight such that $L(\lambda)$ is admissible. If the colored Dynkin diagram of $\lambda$ contains $$\xymatrix{ \stackrel{\alpha'}{\circ} \ar@{-}[r] & \stackrel{\alpha}{\circ} \ar@{-}[r] & \stackrel{\alpha''}{\bullet}}$$ as a subdiagram then $L(s_\alpha.\lambda)$ is admissible and the colored Dynkin diagram corresponding to $s_\alpha.\lambda$ contains $$\xymatrix{ \stackrel{\alpha'}{\bullet} \ar@{-}[r] & \stackrel{\alpha}{\circ} \ar@{-}[r] & \stackrel{\alpha''}{\circ}}$$ i.e. we can ’move’ $\xymatrix{\circ \ar@{-}[r] &\circ}$ and still get an admissible module.
$L(s_\alpha.\lambda)$ is admissible by Proposition \[prop:9\]. It is easy to see that $q^\rho s_\alpha.\lambda(K_\alpha)\not \in \pm
q^{{\mathbb{Z}}}$ (follows by Lemma \[lemma:39\] since $\lambda$ is non-integral), that $q^\rho s_{\alpha}.\lambda(K_{\alpha''})\not \in
\pm q^{{\mathbb{Z}}}$ and that $q^\rho s_\alpha.\lambda(K_{\alpha'}) \in
\pm q^{{\mathbb{Z}}_{>0}}$ (by Lemma \[lemma:39\]).
\[lemma:41\] Assume $\mathfrak{g}\neq \mathfrak{sl}_2$. Let $\lambda\in X$ be a non-integral weight such that $L(\lambda)$ is admissible.
If $A(\lambda)=\{\alpha\}$ then $\alpha$ is only connected to one other simple root $\alpha'$, $L(s_\alpha.\lambda)$ is admissible and the corresponding colored Dynkin diagram of $s_\alpha.\lambda$ contains $$\xymatrix{ \stackrel{\alpha}{\circ} \ar@{-}[r] &\stackrel{\alpha'}{\circ} }$$ as a subdiagram.
On the other hand if the colored Dynkin diagram of $\lambda$ contains $$\xymatrix{ \stackrel{\alpha}{\circ} \ar@{-}[r] &\stackrel{\alpha'}{\circ} }$$ and $\alpha'$ is the only root connected to $\alpha$ then the colored Dynkin diagram of $s_\alpha.\lambda$ contains $$\xymatrix{ \stackrel{\alpha}{\circ} \ar@{-}[r] &\stackrel{\alpha'}{\bullet} }$$ as a subdiagram.
Since $\alpha\in A(\lambda)$, $L(s_\alpha.\lambda)$ is admissible by Proposition \[prop:9\]. First assume $A(\lambda)=\{\alpha\}$. If $\alpha$ is connected to two distinct roots $\alpha'$ and $\alpha''$ then it is easily seen that $\alpha',\alpha''\in
A(s_\alpha.\lambda)$ contradicting the fact that $A(s_\alpha.\lambda)$ is connected (Lemma \[lemma:34\]). It is easily seen that $q^\rho s_\alpha.\lambda(K_{\alpha})\not \in \pm
q^{{\mathbb{Z}}_{>0}}$ (since $\lambda$ is non-integral) and $q^\rho
s_\alpha.\lambda(K_{\alpha'}) \not \in q^{{\mathbb{Z}}_{>0}}$.
On the other hand if $A(\lambda)=\{\alpha,\alpha'\}$ then $q^\rho
s_\alpha.\lambda(K_{\alpha'}) = q^\rho \lambda(K_{\alpha+\alpha'})
\in \pm q^{{\mathbb{Z}}_{>0}}$ by Lemma \[lemma:39\].
Now we can eliminate the types that are not type $A$ by the following theorem:
\[thm:simply\_laced\_exists\_only\_type\_A\] Assume $\mathfrak{g}$ is a simple Lie algebra of simply laced type. If there exists an infinite dimensional admissible simple module then $\mathfrak{g}$ is of type $A$.
Suppose there exists an infinite dimensional admissible simple module. Then by Theorem \[thm:EXT\_contains\_highest\_weight\] there exists a $\lambda\in X$ such that $L(\lambda)$ is an infinite dimensional admissible simple highest weight module. By Theorem \[thm:integral\] and the classification in [@Mathieu] there exist no simple admissible highest weight modules with integral weights unless $\mathfrak{g}$ is of type $A$. We need to show the same for non-integral weights.
If the Dynkin diagram is simply laced and not of type $A$ then the Dynkin diagram contains $$\xymatrix{ & \stackrel{\alpha}{\bullet} \ar@{-}[d] & \\ \stackrel{\alpha'}{\bullet} \ar@{-}[r] & \stackrel{\gamma}{\bullet} \ar@{-}[r] &\stackrel{\alpha''}{\bullet}}$$ as a subdiagram.
By Lemma \[lemma:41\] we can assume without loss of generality that $|A(\lambda)|=2$ and by Lemma \[lemma:33\] we can assume that the colored Dynkin diagram corresponding to $\lambda$ contains the following: $$\xymatrix{ & \stackrel{\alpha}{\bullet} \ar@{-}[d] & \\ \stackrel{\alpha'}{\circ} \ar@{-}[r] & \stackrel{\gamma}{\circ} \ar@{-}[r] &\stackrel{\alpha''}{\bullet}}$$ But then $L(s_\gamma.\lambda)$ is admissible as well by Proposition \[prop:9\] and the colored Dynkin diagram for $s_\gamma.\lambda$ contains $$\xymatrix{ & \stackrel{\alpha}{\circ} \ar@{-}[d] & \\ \stackrel{\alpha'}{\bullet} \ar@{-}[r] & \stackrel{\gamma}{\circ} \ar@{-}[r] &\stackrel{\alpha''}{\circ}}$$ contradicting the fact that $A(\lambda)$ is connected.
Combining all the above results we get
\[thm:Classification\_of\_adm\_modules\_simply\_laced\] Let $\mathfrak{g}=\mathfrak{sl}_{n+1}$, $n\geq 2$ with simple roots $\alpha_1,\dots,\alpha_n$ such that $(\alpha_i|\alpha_{i+1})=-1$, $i=1,\dots,n-1$. Let $\lambda \in X$ be a non-integral weight.
$L(\lambda)$ is admissible if and only if the colored Dynkin diagram of $\lambda$ is of one of the following types: $$\begin{aligned}
\xymatrix{ \stackrel{\alpha_1}{\circ} \ar@{-}[r] &
\stackrel{\alpha_2}{\bullet} \ar@{-}[r] &
\stackrel{\alpha_3}{\bullet} \ar@{.}[r] &
\stackrel{\alpha_n}{\bullet} }
\\
\xymatrix{ \stackrel{\alpha_1}{\circ} \ar@{-}[r] &
\stackrel{\alpha_2}{\circ} \ar@{-}[r] &
\stackrel{\alpha_3}{\bullet} \ar@{.}[r] &
\stackrel{\alpha_n}{\bullet} }
\\
\xymatrix{ \stackrel{\alpha_1}{\bullet} \ar@{-}[r] &
\stackrel{\alpha_2}{\circ} \ar@{-}[r] &
\stackrel{\alpha_3}{\circ} \ar@{.}[r] &
\stackrel{\alpha_n}{\bullet} }
\\
&\vdots
\\
\xymatrix{ \stackrel{\alpha_1}{\bullet} \ar@{-}[r] &
\stackrel{\alpha_2}{\bullet} \ar@{-}[r] &
\stackrel{\alpha_3}{\bullet} \ar@{.}[r] &
\stackrel{\alpha_n}{\circ} }
\end{aligned}$$
By the above results these are the only possibilities. To show that $L(\lambda)$ is admissible when the colored Dynkin diagram is of the above form, use the fact that by Lemma \[lemma:33\] and Lemma \[lemma:41\] we can assume $\lambda$ has colored Dynkin diagram as follows: $$\xymatrix{ \stackrel{\alpha_1}{\circ} \ar@{-}[r] &
\stackrel{\alpha_2}{\bullet} \ar@{-}[r] &
\stackrel{\alpha_3}{\bullet} \ar@{.}[r] &
\stackrel{\alpha_n}{\bullet} }.$$ Let $\beta_i=\alpha_1+\alpha_2+\dots + \alpha_i$, $i=1,\dots,n$. We see easily that $T_{L(\lambda)}=-\{\beta_1,\beta_2,\dots,\beta_n\}$ and $F_{L(\lambda)} = \Phi^+ \cup \Phi_{\{\alpha_2,\dots,\alpha_n\}}$. Let $\mathfrak{l}$, $\mathfrak{u}$, $\mathfrak{p}$ etc. be defined as in Section 2 of [@DHP1]. By [@DHP1 Theorem 2.23] $N:=L(\lambda)^{\mathfrak{u}}$ is a simple finite dimensional $U_q(\mathfrak{l})$-module and $L(\lambda)$ is the unique simple quotient of $\mathcal{M}(N) = U_q {\otimes}_{U_q(\mathfrak{p})}
N$. Since the vectors $\beta_1,\dots,\beta_n$ are linearly independent $\mathcal{M}(N)$ is admissible. This implies that $L(\lambda)$ is admissible since it is a quotient of $\mathcal{M}(N)$.
We can now make Corollary \[cor:1\] more specific in type A:
\[cor:2\] Let $\mathfrak{g}=\mathfrak{sl}_{n+1}$, $n\geq 2$ with simple roots $\alpha_1,\dots,\alpha_n$ such that $(\alpha_i|\alpha_{i+1})=-1$, $i=1,\dots,n-1$. Let $\beta_j=\alpha_1+\cdots+\alpha_j$, $j=1,\dots,n$ and $\Sigma=\{\beta_1,\dots,\beta_n\}$. Let $F_{\beta_j}=T_{s_1}\cdots T_{s_{j-1}}(F_{\alpha_{j}})$ and let $F_{\Sigma}=\{q^a F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}|a_i\in
{\mathbb{N}},a\in {\mathbb{Z}}\}$ be the corresponding Ore subset. Then $\Sigma$ is a set of commuting roots that is a basis of $Q$ with corresponding Ore subset $F_\Sigma$.
Let $\beta_j'=\alpha_n+\cdots+\alpha_{n-j+1}$, $j=1,\dots,n$ and $\Sigma'=\{\beta_1',\dots,\beta_n'\}$. Let $F'_{\beta_j'}=T_{s_n}\cdots T_{s_{n-j+2}}(F_{\alpha_{n-j+1}})$ and let $F_{\Sigma'}=\{q^a (F'_{\beta_1'})^{a_1}\cdots
(F'_{\beta_n'})^{a_n}|a_i\in {\mathbb{N}},a\in {\mathbb{Z}}\}$ be the corresponding Ore subset. Then $\Sigma'$ is a set of commuting roots that is a basis of $Q$ with corresponding Ore subset $F_{\Sigma'}$.
Let $L$ be a simple torsion free module. Then one of the two following claims holds:
- There exists a $\lambda\in X$ with $\lambda(K_{\alpha_1})\not
\in \pm q^{{\mathbb{N}}}$, $\lambda(K_{\alpha_i})\in \pm q^{{\mathbb{N}}}$, $i=2,\dots,n$ and $\mathbf{b}\in ({\mathbb{C}}^*)^n$ such that $$\begin{aligned}
L {\cong}{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}.
\end{aligned}$$
- There exists a $\lambda\in X$ with $\lambda(K_{\alpha_n})\not
\in \pm q^{{\mathbb{N}}}$, $\lambda(K_{\alpha_i})\in \pm q^{{\mathbb{N}}}$, $i=1,\dots,n-1$ and $\mathbf{b}\in ({\mathbb{C}}^*)^n$ such that $$\begin{aligned}
L {\cong}{\varphi}_{F_{\Sigma'},\mathbf{b}}.L(\lambda)_{F_{\Sigma'}}.
\end{aligned}$$
By Theorem \[thm:EXT\_contains\_highest\_weight\] $\mathcal{EXT}(L){\cong}\mathcal{EXT}(L(\lambda'))$ for some $\lambda'\in X$. If $\lambda'$ is non-integral then by Theorem \[thm:Classification\_of\_adm\_modules\_simply\_laced\], Lemma \[lemma:33\], Lemma \[lemma:30\] and Proposition \[prop:15\] there exists a $\lambda$ such that $\lambda(K_{\alpha_1})\not \in \pm q^{{\mathbb{N}}}$, $\lambda(K_{\alpha_i})\in\pm q^{{\mathbb{N}}}$, $i=2,\dots,n$ and such that $\mathcal{EXT}(L(\lambda')){\cong}\mathcal{EXT}(L(\lambda))$. By Lemma \[lemma:20\] we can choose $\Sigma$ as the commuting set of roots that is used in the definition of $\mathcal{EXT}(L(\lambda))$.
If $\lambda'$ is integral we see by Theorem \[thm:integral\], Lemma \[lemma:30\], Proposition \[prop:15\] and the classification in [@Mathieu Section 8] that $\mathcal{EXT}(L(\lambda')){\cong}\mathcal{EXT}(L(\lambda))$ for a $\lambda$ such that $A(\lambda)=\{\alpha_1\}$ or $A(\lambda)=\{\alpha_n\}$ (cf. e.g. [@Mathieu Proposition 8.5]).
Now the result follows just like in the proof of Corollary \[cor:1\].
In Section \[sec:type-a\_n-calc\] we determine all $\mathbf{b}\in
({\mathbb{C}}^*)^n$ such that ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is torsion free with $\Sigma$ as above in Corollary \[cor:2\] and $\lambda$ such that $\lambda(K_{\alpha_1})\not \in \pm q^{{\mathbb{N}}}$, $\lambda(K_{\alpha_i})\in\pm q^{{\mathbb{N}}}$, $i=2,\dots,n$. By symmetry of the Dynkin diagram and Corollary \[cor:2\] this classifies all simple torsion free modules.
Quantum Shale-Weil representation {#sec:quantum-shale-weyl}
---------------------------------
In this section we assume $\mathfrak{g}$ is of type $C_n$. Let $\alpha_1,\dots,\alpha_n$ be the simple roots such that $\alpha_i$ is connected to $\alpha_{i+1}$ and $\alpha_1$ is long. We will describe a specific admissible module $V$ and show that $V=L(\omega^+)\oplus
L(\omega^-)$ for some weights $\omega^{\pm}$ with the purpose of classifying the admissible simple highest weight modules, see Theorem \[thm:existence\_adm\_mod\_type\_C\]. Let $V={\mathbb{C}}[X_1,\dots,X_n]$. We describe an action of the simple root vectors on $V$: For $i\in \{2,\dots,n\}$ $$\begin{aligned}
E_{\alpha_1} X_1^{a_1}X_2^{a_2}\cdots X_n^{a_n} =&
-\frac{[a_1][a_1-1]}{[2]} X_1^{a_1-2}X_2^{a_2}\cdots X_n^{a_n}
\\
F_{\alpha_1} X_1^{a_1}X_2^{a_2}\cdots X_n^{a_n} =& \frac{1}{[2]}
X_1^{a_1+2}X_2^{a_2}\cdots X_n^{a_n}
\\
E_{\alpha_i} X_1^{a_1}\cdots X_n^{a_n} =& [a_i] X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_i-1}\cdots X_n^{a_n}
\\
F_{\alpha_i} X_1^{a_1}\cdots X_n^{a_n} =& [a_{i-1}] X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}-1}X_i^{a_i+1}\cdots X_n^{a_n}
\\
K_{\alpha_1}^{\pm 1} X_1^{a_1}X_2^{a_2}\cdots X_n^{a_n} =& q^{\mp
(2a_1+1)} X_1^{a_1}X_2^{a_2}\cdots X_n^{a_n}
\\
K_{\alpha_i}^{\pm 1} X_1^{a_1}X_2^{a_2}\cdots X_n^{a_n} =& q^{\pm
(a_{i-1}-a_i)}X_1^{a_1}X_2^{a_2}\cdots X_n^{a_n}.\end{aligned}$$
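To illustrate the formulas, take $n=2$, i.e. $\mathfrak{g}=\mathfrak{sp}(4)$. Specializing the exponents $(a_1,a_2)$ above gives, for example, $$F_{\alpha_1}\cdot 1 = \tfrac{1}{[2]}X_1^2,\qquad F_{\alpha_2}\cdot X_1 = X_2,\qquad E_{\alpha_2}\cdot X_2 = X_1,\qquad E_{\alpha_1}\cdot X_1^2 = -\tfrac{[2][1]}{[2]}\cdot 1 = -1.$$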
We check that this is an action of $U_q$ by checking the generating relations. These are tedious but completely direct calculations. We refer to the generating relations as (R1) to (R6) as in [@Jantzen Section 4.3].
(R1) is clear. (R2) and (R3): Let $j\in\{1,\dots,n\}$ $$\begin{aligned}
K_{\alpha_j} E_{\alpha_1} X_1^{a_1}\cdots X_n^{a_n} =&
\begin{cases}
-q^{-2a_1+3} \frac{[a_1][a_1-1]}{[2]} X_1^{a_1-2}X_2^{a_2}\cdots
X_n^{a_n} &\text{ if } j=1
\\
-q^{a_{1}-2-a_2} \frac{[a_1][a_1-1]}{[2]}
X_1^{a_1-2}X_2^{a_2}\cdots X_n^{a_n} &\text{ if } j=2
\\
-q^{a_{j-1}-a_j} \frac{[a_1][a_1-1]}{[2]}
X_1^{a_1-2}X_2^{a_2}\cdots X_n^{a_n} &\text{ if } j>2
\end{cases}
\\
=& q^{(\alpha_1|\alpha_j)}E_{\alpha_1} K_{\alpha_j}
X_1^{a_1}X_2^{a_2}\cdots X_n^{a_n}.\end{aligned}$$ Similar for $K_{\alpha_j}F_{\alpha_1}$. For $i\in\{2,\dots,n\}$ $$\begin{aligned}
K_{\alpha_j} E_{\alpha_i} X_1^{a_1}\cdots X_n^{a_n} =&
\begin{cases}
q^{a_{j-1}-a_j}[a_i] X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_i-1}\cdots X_n^{a_n} &\text{ if }
|j-i|>1
\\
q^{a_{j-1}-a_j-1}[a_i] X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_i-1}\cdots X_n^{a_n} &\text{ if } j=i-1
\\
q^{a_{j-1}+1-a_j+1}[a_i] X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_i-1}\cdots X_n^{a_n} &\text{ if } j=i
\\
q^{a_{j-1}-1-a_j}[a_i] X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_i-1}\cdots X_n^{a_n} &\text{ if } j=i+1
\end{cases}
\\
=& q^{(\alpha_i|\alpha_j)}E_{\alpha_i} K_{\alpha_j}
X_1^{a_1}X_2^{a_2}\cdots X_n^{a_n}.\end{aligned}$$ Similarly for $K_{\alpha_j}F_{\alpha_i}$.
(R4): $$\begin{aligned}
[E_{\alpha_1},F_{\alpha_1}] X_1^{a_1}X_2^{a_2}\cdots X_n^{a_n} =&
E_{\alpha_1} \frac{1}{[2]} X_1^{a_1+2}X_2^{a_2}\cdots X_n^{a_n} +
F_{\alpha_1} \frac{[a_1][a_1-1]}{[2]} X_1^{a_1-2}X_2^{a_2}\cdots
X_n^{a_n}
\\
=& \left(-\frac{[a_1+2][a_1+1]}{[2][2]} +
\frac{[a_1][a_1-1]}{[2][2]}\right) X_1^{a_1}\cdots X_n^{a_n}
\\
=& \frac{q^{-2a_1-1}-q^{2a_1+1}}{q^2-q^{-2}} X_1^{a_1}\cdots
X_n^{a_n}
\\
=& \frac{K_{\alpha_1}-K_{\alpha_1}{^{-1}}}{q^2-q^{-2}} X_1^{a_1}\cdots
X_n^{a_n}.\end{aligned}$$
$$\begin{aligned}
[E_{\alpha_1},F_{\alpha_2}] X_1^{a_1}\cdots X_n^{a_n} =&
[a_1]E_{\alpha_1} X_1^{a_1-1}X_2^{a_2+1}\cdots X_n^{a_n} +
\frac{[a_1][a_1-1]}{[2]} F_{\alpha_2} X_1^{a_1-2}X_2^{a_2}\cdots
X_n^{a_n}
\\
=& -\frac{[a_1][a_1-1][a_1-2]}{[2]} X_1^{a_1-3}X_2^{a_2+1}\cdots
X_n^{a_n}
\\
&+ \frac{[a_1][a_1-1][a_1-2]}{[2]} X_1^{a_1-3}X_2^{a_2+1}\cdots
X_n^{a_n}
\\
=&0.\end{aligned}$$
For $i>2$ clearly $[E_{\alpha_1},F_{\alpha_i}]X_1^{a_1}\cdots
X_n^{a_n}=0$. For $i,j\in\{2,\dots,n\}$: If $|i-j|>1$ clearly $[E_{\alpha_i},F_{\alpha_j}]X_1^{a_1}\cdots X_n^{a_n}=0$. $$\begin{aligned}
[E_{\alpha_i},F_{\alpha_{i+1}}] X_1^{a_1}\cdots X_n^{a_n} =&
[a_{i}]E_{\alpha_i} X_1^{a_1}\cdots X_{i}^{a_{i}-1}X_{i+1}^{a_{i+1}
+1}\cdots X_n^{a_n}
\\
&- [a_i]F_{\alpha_{i+1}}X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_i-1}\cdots X_n^{a_n}
\\
=& [a_{i}][a_i-1] X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_{i}^{a_{i}-2}X_{i+1}^{a_{i+1} +1}\cdots
X_n^{a_n}
\\
&- [a_{i}][a_i-1]X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_i-2}X_{i+1}^{a_{i+1}+1}\cdots X_n^{a_n}
\\
=& 0.\end{aligned}$$
$$\begin{aligned}
[E_{\alpha_2},F_{\alpha_1}] X_1^{a_1}\cdots X_n^{a_n} =&
E_{\alpha_2} \frac{1}{[2]} X_1^{a_1+2}X_2^{a_2}\cdots X_n^{a_n} -
[a_2]F_{\alpha_1} X_1^{a_1+1}X_2^{a_2-1}\cdots X_n^{a_n}
\\
=& \frac{[a_2]}{[2]} X_1^{a_1+3}X_2^{a_2-1}\cdots X_n^{a_n} -
\frac{[a_2]}{[2]}X_1^{a_1+3}X_2^{a_2-1}\cdots X_n^{a_n}
\\
=&0.\end{aligned}$$
For $i>2$: $$\begin{aligned}
[E_{\alpha_i},F_{\alpha_{i-1}}] X_1^{a_1}\cdots X_n^{a_n} =&
[a_{i-2}]E_{\alpha_i} X_1^{a_1}\cdots
X_{i-2}^{a_{i-2}-1}X_{i-1}^{a_{i-1} +1}\cdots X_n^{a_n}
\\
&- [a_i]F_{\alpha_{i-1}}X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_i-1}\cdots X_n^{a_n}
\\
=&[a_{i-2}][a_i] X_1^{a_1}\cdots X_{i-2}^{a_{i-2}-1}X_{i-1}^{a_{i-1}
+2}X_i^{a_i-1}\cdots X_n^{a_n}
\\
&- [a_i][a_{i-2}]X_1^{a_1}\cdots
X_{i-2}^{a_{i-2}-1}X_{i-1}^{a_{i-1}+2}X_i^{a_i-1}\cdots X_n^{a_n}
\\
=& 0.\end{aligned}$$
For $i>1$: $$\begin{aligned}
[E_{\alpha_i},F_{\alpha_i}]X_1^{a_1}\cdots X_n^{a_n} =&
[a_{i-1}]E_{\alpha_i}X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}-1}X_i^{a_i+1}\cdots X_n^{a_n}
\\
&- [a_i]F_{\alpha_i}X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_{i}-1}\cdots X_n^{a_n}
\\
=& ([a_{i-1}][a_i+1]-[a_i][a_{i-1}+1])X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}}X_i^{a_i}\cdots X_n^{a_n}
\\
=& [a_{i-1}-a_i] X_1^{a_1}\cdots X_n^{a_n}
\\
=& \frac{K_{\alpha_i}-K_{\alpha_i}{^{-1}}}{q-q{^{-1}}}X_1^{a_1}\cdots
X_n^{a_n}.\end{aligned}$$
Finally we have the relations (R5) and (R6): Clearly $[E_{\alpha_i},E_{\alpha_j}]X_1^{a_1}\cdots X_n^{a_n}=0$ and $[F_{\alpha_i},F_{\alpha_j}]X_1^{a_1}\cdots X_n^{a_n}=0$ when $|j-i|>1$.
$$\begin{aligned}
(E_{\alpha_2}^3 E_{\alpha_1}& - [3]
E_{\alpha_2}^2E_{\alpha_1}E_{\alpha_2} +
[3]E_{\alpha_2}E_{\alpha_1}E_{\alpha_2}^2 -
E_{\alpha_1}E_{\alpha_2}^3)X_1^{a_1}\cdots X_n^{a_n}
\\
=& \frac{1}{[2]}\Big(-[a_1][a_1-1][a_2][a_2-1][a_2-2]
\\
&+[3][a_1+1][a_1][a_2][a_2-1][a_2-2]
\\
&-[3][a_1+2][a_1+1][a_2][a_2-1][a_2-2]
\\
&+[a_1+3][a_1+2][a_2][a_2-1][a_2-2]\Big)X_1^{a_1+1}X_2^{a_2-3}\cdots
X_n^{a_n}
\\
=& \frac{[a_2][a_2-1][a_2-2]}{[2]}\Big(
-[a_1][a_1-1]+[3][a_1+1][a_1]
\\
&-[3][a_1+2][a_1+1]+[a_1+3][a_1+2]\Big) X_1^{a_1+1}X_2^{a_2-3}\cdots
X_n^{a_n}
\\
=&0.\end{aligned}$$
$$\begin{aligned}
(E_{\alpha_1}^2 E_{\alpha_2}-[2]_{\alpha_1}
&E_{\alpha_1}E_{\alpha_2}E_{\alpha_1} +
E_{\alpha_2}E_{\alpha_1}^2)X_1^{a_1}\cdots X_n^{a_n}
\\
=& \frac{[a_2]}{[2][2]}\big( [a_1+1][a_1][a_1-1][a_1-2]
\\
&- [2]_{\alpha_1} [a_1][a_1-1][a_1-1][a_1-2]
\\
&+ [a_1][a_1-1][a_1-2][a_1-3]\Big) X_1^{a_1-3}X_2^{a_2-1}\cdots
X_n^{a_n}
\\
=& \frac{[a_2][a_1][a_1-1][a_1-2]}{[2][2]} \Big(
[a_1+1]-[2]_{\alpha_1}[a_1-1]
\\
&+[a_1-3]\Big) X_1^{a_1-3}X_2^{a_2-1}\cdots X_n^{a_n}
\\
=& 0.\end{aligned}$$
For $i>1$: $$\begin{aligned}
(E_{\alpha_i}^2
E_{\alpha_{i+1}}-&[2]E_{\alpha_i}E_{\alpha_{i+1}}E_{\alpha_i}+E_{\alpha_{i+1}}E_{\alpha_i}^2)X_1^{a_1}\cdots
X_n^{a_n}
\\
=& [a_{i+1}][a_i]([a_i+1]-[2][a_i]+[a_i-1])X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+2}X_i^{a_i-1}X_{i+1}^{a_{i+1}-1}\cdots X_n^{a_n}
\\
=& 0.\end{aligned}$$
$$\begin{aligned}
(E_{\alpha_{i+1}}^2
E_{\alpha_{i}}-&[2]E_{\alpha_{i+1}}E_{\alpha_{i}}E_{\alpha_{i+1}}+E_{\alpha_{i}}E_{\alpha_{i+1}}^2)X_1^{a_1}\cdots
X_n^{a_n}
\\
=& [a_{i+1}][a_{i+1}-1]([a_i]-[2][a_i+1]+[a_i+2])X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}+1}X_i^{a_i+1}X_{i+1}^{a_{i+1}-2}\cdots X_n^{a_n}
\\
=& 0.\end{aligned}$$
$$\begin{aligned}
(F_{\alpha_1}^2 F_{\alpha_2}-[2]_{\alpha_1}
&F_{\alpha_1}F_{\alpha_2}F_{\alpha_1} +
F_{\alpha_2}F_{\alpha_1}^2)X_1^{a_1}\cdots X_n^{a_n}
\\
=&
\frac{1}{[2][2]}([a_1]-[2]_{\alpha_1}[a_1+2]+[a_1+4])X_1^{a_1+3}X_2^{a_2+1}\cdots
X_n^{a_n}
\\
=&0.\end{aligned}$$
$$\begin{aligned}
(F_{\alpha_2}^3 F_{\alpha_1}& - [3]
F_{\alpha_2}^2F_{\alpha_1}F_{\alpha_2} +
[3]F_{\alpha_2}F_{\alpha_1}F_{\alpha_2}^2 -
F_{\alpha_1}F_{\alpha_2}^3)X_1^{a_1}\cdots X_n^{a_n}
\\
=& \frac{1}{[2]}\Big( [a_1+2][a_1+1][a_1] - [3][a_1][a_1+1][a_1]
\\
&+ [3][a_1][a_1-1][a_1]
\\
&-[a_1][a_1-1][a_1-2] \Big) X_1^{a_1-1}X_2^{a_2+3}\cdots X_n^{a_n}
\\
=& \frac{[a_1]}{[2]}\Big( [a_1+2][a_1+1] - [3][a_1+1][a_1]
\\
&+ [3][a_1][a_1-1]
\\
&-[a_1-1][a_1-2] \Big) X_1^{a_1-1}X_2^{a_2+3}\cdots X_n^{a_n}
\\
=& 0.\end{aligned}$$
For $i>1$: $$\begin{aligned}
(F_{\alpha_i}^2
F_{\alpha_{i+1}}-&[2]F_{\alpha_i}F_{\alpha_{i+1}}F_{\alpha_i}+F_{\alpha_{i+1}}F_{\alpha_i}^2)X_1^{a_1}\cdots
X_n^{a_n}
\\
=& [a_{i-1}][a_{i-1}-1]([a_i]-[2][a_i+1]+[a_i+2])X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}-2}X_i^{a_i+1}X_{i+1}^{a_{i+1}+1}\cdots X_n^{a_n}
\\
=& 0.\end{aligned}$$
$$\begin{aligned}
(F_{\alpha_{i+1}}^2
F_{\alpha_{i}}-&[2]F_{\alpha_{i+1}}F_{\alpha_{i}}F_{\alpha_{i+1}}+F_{\alpha_{i}}F_{\alpha_{i+1}}^2)X_1^{a_1}\cdots
X_n^{a_n}
\\
=& [a_{i-1}][a_{i}]([a_i+1]-[2][a_i]+[a_i-1])X_1^{a_1}\cdots
X_{i-1}^{a_{i-1}-1}X_i^{a_i-1}X_{i+1}^{a_{i+1}+2}\cdots X_n^{a_n}
\\
=& 0.\end{aligned}$$
So we have shown that $V$ is a $U_q(\mathfrak{g})$-module. Note that $V$ is admissible of degree $1$ and $V=V^{even}\oplus V^{odd}$ where $V^{even}$ is the subspace of even degree polynomials and $V^{odd}$ the subspace of odd degree polynomials. Furthermore we see that $V^{even}=L(\omega^+)$ and $V^{odd}=L(\omega^-)$ where $\omega^{\pm}$ are the weights defined by $\omega^+(K_{\alpha_1})=q{^{-1}}$, $\omega^+(K_{\alpha_i})=1$, $i>1$ and $\omega^-(K_{\alpha_1})=q^{-3}$, $\omega^-(K_{\alpha_2})=q$, $\omega^-(K_{\alpha_i})=1$, $i>2$. $V^{even}$ is generated by $1$ and $V^{odd}$ is generated by $X_1$. We will use the fact that $L(\omega^+)$ is admissible in Theorem \[thm:existence\_adm\_mod\_type\_C\] in the next section.
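One checks directly from the formulas defining the action that $1$ and $X_1$ are indeed highest weight vectors: $E_{\alpha_1}\cdot 1=E_{\alpha_1}\cdot X_1=0$ since the coefficients $[0][-1]$ and $[1][0]$ vanish, and $E_{\alpha_i}\cdot 1=E_{\alpha_i}\cdot X_1=0$ for $i>1$ since $[0]=0$; moreover $K_{\alpha_1}\cdot 1=q^{-1}\cdot 1$, $K_{\alpha_i}\cdot 1=1$ for $i>1$, while $K_{\alpha_1}\cdot X_1=q^{-3}X_1$, $K_{\alpha_2}\cdot X_1=qX_1$ and $K_{\alpha_i}\cdot X_1=X_1$ for $i>2$, giving the weights $\omega^{\pm}$ above.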
Type B, C, F {#sec:class-admiss-modul-1}
------------
In this section we classify the simple highest weight admissible modules when $\mathfrak{g}$ is of type $B$, $C$ or $F$. Remember that we have assumed that $q$ is transcendental.
\[thm:clas\_of\_adm\_modules\_type\_B\_C\_F\] Let $\mathfrak{g}$ be a simple Lie algebra not of type $G_2$. Suppose there exists an infinite dimensional admissible simple $U_q(\mathfrak{g})$-module. Then $\mathfrak{g}$ is of type $A$ or $C$.
If $\mathfrak{g}$ is simply laced then Theorem \[thm:Classification\_of\_adm\_modules\_simply\_laced\] gives that $\mathfrak{g}$ is of type $A$. So assume $\mathfrak{g}$ is not of simply laced type. Theorem \[thm:integral\] and the classification in the classical case tell us that no admissible infinite dimensional simple highest weight modules exist with integral weights when $\mathfrak{g}$ is not simply laced (cf. [@Mathieu Lemma 9.1]).
We have assumed that $\mathfrak{g}$ is not of type $G_2$ so the remaining non-simply laced types are $B$, $C$ or $F$. We will show that the Dynkin diagram of $\mathfrak{g}$ cannot contain the subdiagram $$\xymatrix{ \stackrel{\alpha_1}{\bullet} \ar@{<=}[r] & \stackrel{\alpha_2}{\bullet} \ar@{-}[r] & \stackrel{\alpha_3}{\bullet}}.$$
Assume the Dynkin diagram contains the above as a subdiagram. If there exists a simple admissible infinite dimensional module $L$ then there exists a non-integral $\lambda\in X$ such that $L(\lambda)$ is infinite dimensional and admissible (Theorem \[thm:EXT\_contains\_highest\_weight\]). Let $\lambda\in X$ be a non-integral weight such that $L(\lambda)$ is admissible. Then by Lemma \[lemma:14\], $q^\rho\lambda(K_{\alpha_1}) \in \pm
q_{\alpha_1}^{{\mathbb{Z}}}= \pm q^{{\mathbb{Z}}}$. By Lemma \[lemma:33\] and Lemma \[lemma:41\] we can assume without loss of generality that the colored Dynkin diagram of $\lambda$ is of the form $$\xymatrix{ \stackrel{\alpha_1}{\bullet} \ar@{<=}[r] & \stackrel{\alpha_2}{\circ} \ar@{-}[r] & \stackrel{\alpha_3}{\circ}}.$$ Let $\mathfrak{s}$ be the simple rank $3$ Lie algebra of type $B_3$. Let $U$ be the subalgebra of $U_q$ generated by $E_{\alpha_i},F_{\alpha_i},K_{\alpha_i}^{\pm 1}$, $i=1,2,3$. Then $U{\cong}U_q(\mathfrak{s})$. Let $Q_{\mathfrak{s}}:= {\mathbb{Z}}\{\alpha_1,\alpha_2,\alpha_3\}\subset Q$. Let $v_\lambda$ be a highest weight vector of $L(\lambda)$. Then $Uv_\lambda$ contains the simple highest weight $U_q(\mathfrak{s})$-module $L(\lambda,\mathfrak{s})$ of highest weight $\lambda$ (restricted to $U_q^0(\mathfrak{s})$) as a subquotient. Since $L(\lambda)$ is admissible so is $L(\lambda,\mathfrak{s})$.
Like in the proof of Lemma \[lemma:34\] we get a contradiction if we can show that $T_{L(\lambda,\mathfrak{s})}\cap
T_{{^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s})}$ generates $Q_{\mathfrak{s}}$. It is easily seen that $\{-\alpha_1-\alpha_2,-\alpha_3,-2\alpha_1-\alpha_2\} \subset
T_{L(\lambda,\mathfrak{s})}\cap
T_{{^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s})}$, so $T_{L(\lambda,\mathfrak{s})}\cap
T_{{^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s})}$ generates $Q_{\mathfrak{s}}$. So $C(L(\lambda,\mathfrak{s})) \cap
C({^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s}))$ generates $Q_{\mathfrak{s}}$. Therefore $C(L(\lambda,\mathfrak{s}))-
C({^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s}))=Q_{\mathfrak{s}}$. The weights of $L(\lambda,\mathfrak{s})$ and ${^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s})$ are contained in $q^{Q_{\mathfrak{s}}} \lambda$ so a weight in the essential support of $L(\lambda,\mathfrak{s})$ (resp. ${^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s})$) is of the form $q^{\mu_1}\lambda$ (resp. $q^{\mu_2}\lambda$) for some $\mu_1,\mu_2 \in Q_{\mathfrak{s}}$. By the above $q^{C(L(\lambda,\mathfrak{s}))+\mu_1}\lambda \cap
q^{C({^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s}))+\mu_2}\lambda\neq
\emptyset$. Since $q^{C(L(\lambda,\mathfrak{s}))+\mu_1}\lambda
\subset {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L(\lambda))$ and $q^{C({^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s}))+\mu_2}\lambda
\subset
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}({^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s}))$ we have proved that ${\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}({^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s}))\cap
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L(\lambda,\mathfrak{s}))\neq \emptyset$. By Proposition \[prop:9\] $L(\lambda,\mathfrak{s})$ and ${^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s})$ are subquotients of $L(\lambda,\mathfrak{s})_{F_{\alpha_2}}$. Let $\nu\in
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}({^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s}))\cap
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L(\lambda,\mathfrak{s}))$. Then by Lemma \[lemma:13\] $L(\lambda,\mathfrak{s})_{\nu} {\cong}(L(\lambda,\mathfrak{s})_{F_{\alpha_2}})_\nu {\cong}({^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s}))_\nu$ so by Theorem \[thm:Lemire\] $L(\lambda,\mathfrak{s}){\cong}{^{s_{\alpha_2}}}L(s_{\alpha_2}.\lambda,\mathfrak{s})$. This is a contradiction by looking at weights of the modules.
\[thm:existence\_adm\_mod\_type\_C\] Let $\mathfrak{g}$ be a simple Lie algebra of type $C_n$ (i.e. $\mathfrak{g}=\mathfrak{sp}(2n)$). Let $\alpha_1,\dots,\alpha_n$ be the simple roots such that $\alpha_i$ is connected to $\alpha_{i+1}$ and $\alpha_1$ is long – i.e. the Dynkin diagram of $C_n$ is $$\xymatrix{ \stackrel{\alpha_1}{\bullet} \ar@{=>}[r] & \stackrel{\alpha_2}{\bullet} \ar@{.}[r] & \stackrel{\alpha_{n-1}}{\bullet} \ar@{-}[r] & \stackrel{\alpha_n}{\bullet}}.$$ Let $\lambda\in X$. $L(\lambda)$ is infinite dimensional and admissible if and only if
- $\lambda(K_{\alpha_i})\in \pm q^{{\mathbb{N}}}$ for $1< i \leq n$
- $\lambda(K_{\alpha_1})\in \pm q_{\alpha_1}^{1/2 + {\mathbb{Z}}}=\pm
q^{1+2{\mathbb{Z}}}$
- $\lambda(K_{\alpha_{1}+\alpha_2})\in \pm q^{{\mathbb{Z}}_{\geq -2}}$
or equivalently $q^\rho\lambda(K_\beta)\in \pm q^{{\mathbb{Z}}_{>0}}$ for every short root $\beta\in \Phi^+$ and $\lambda(K_{\beta'})\in \pm
q^{1+2{\mathbb{Z}}}$ for every long root $\beta'\in \Phi^+$.
Assume $\lambda(K_{\alpha_i})\not \in \pm q^{{\mathbb{N}}}$ for some $i>1$. Then by Lemma \[lemma:33\] there exists a $\lambda'$ such that $L(\lambda')$ is admissible and such that $\lambda'(K_{\alpha_{2}})\not \in q^{{\mathbb{N}}}$. Let $\mathfrak{s}$ be the Lie algebra $\mathfrak{sp}(4)$ with simple roots $\alpha_{2}$ and $\alpha_1$. Let $U$ be the subalgebra of $U_q$ generated by $F_{\alpha_{1}},F_{\alpha_{2}},K_{\alpha_1},K_{\alpha_{2}},E_{\alpha_1},E_{\alpha_{2}}$. Then $U {\cong}U_{q}(\mathfrak{s})$ as algebras and $U v_{\lambda'}$ contains the simple highest weight $U_{q}(\mathfrak{s})$-module $L(\lambda',\mathfrak{s})$ of highest weight $\lambda'$ (restricted to $U_{q}^0(\mathfrak{s})$) as a subquotient. Since $L(\lambda')$ is admissible, so is $U v_{\lambda'}$, hence $L(\lambda',\mathfrak{s})$ is admissible. So $\lambda'(K_{\alpha_{2}})\in \pm q^{{\mathbb{N}}}$ by Lemma \[lemma:14\]. A contradiction. So we have proven that $\lambda(K_{\alpha_i})\in \pm q^{{\mathbb{N}}}$ for $1< i \leq n$ is a necessary condition. We get also from Lemma \[lemma:14\] that $\lambda(K_{\alpha_1})\in q^{1+2{\mathbb{Z}}}$ and $q^3\lambda(K_{\alpha_{1}+\alpha_2})=
q^\rho\lambda(K_{\alpha_{1}+\alpha_2}) \in \pm q^{{\mathbb{Z}}_{>0}}$ which shows that the two other conditions are necessary.
Now assume we have a weight $\lambda\in X$ that satisfies the above. So $\lambda(K_{\alpha_1})=q^{-1+r}$ for some $r\in
2{\mathbb{Z}}$. We can assume $r\in {\mathbb{N}}$ by Lemma \[lemma:30\] and Proposition \[prop:15\] (if $r<0$ replace $\lambda$ with $s_1.\lambda$, $L(\lambda)$ is admissible if and only if $L(s_1.\lambda)$ is). We have $\lambda= \omega^+ \lambda_0$ for some dominant integral weight $\lambda_0$ and $L(\lambda)$ is a subquotient of $L(\omega^+){\otimes}L(\lambda_0)$. Since $L(\omega^+)$ is admissible and $L(\lambda_0)$ is finite dimensional $L(\omega^+){\otimes}L(\lambda_0)$ is admissible and since $L(\lambda)$ is a subquotient of $L(\omega^+){\otimes}L(\lambda_0)$, $L(\lambda)$ is admissible as well.
\[cor:3\] Let $\mathfrak{g}$ be a simple Lie algebra of type $C_n$ (i.e. $\mathfrak{g}=\mathfrak{sp}(2n)$). Let $\alpha_1,\dots,\alpha_n$ be the simple roots such that $\alpha_i$ is connected to $\alpha_{i+1}$ and $\alpha_1$ is long.
Let $\beta_j=\alpha_1+\cdots+\alpha_j$, $j=1,\dots,n$ and $\Sigma=\{\beta_1,\dots,\beta_n\}$. Let $F_{\beta_j}=T_{s_1}\cdots
T_{s_{j-1}}(F_{\alpha_{j}})$ and let $F_{\Sigma}=\{q^a
F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}|a_i\in {\mathbb{N}},a\in {\mathbb{Z}}\}$ be the corresponding Ore subset. Then $\Sigma$ is a set of commuting roots that is a basis of $Q$ with corresponding Ore subset $F_\Sigma$.
Let $L$ be a simple torsion free module. Then there exists a $\lambda\in X$ with $\lambda(K_{\beta})\in \pm q^{\mathbb{N}}$ for all short $\beta\in \Phi^+$ and $\lambda(K_{\gamma}) \in \pm
q^{1+2{\mathbb{Z}}}$ for all long $\gamma \in \Phi^+$ and a $\mathbf{b}\in
({\mathbb{C}}^*)^n$ such that $$\begin{aligned}
L{\cong}{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}
\end{aligned}$$
By Theorem \[thm:EXT\_contains\_highest\_weight\] there exists a $\lambda \in X$ such that $\mathcal{EXT}(L){\cong}\mathcal{EXT}(L(\lambda))$. By Proposition \[prop:15\] $L(\lambda)$ is admissible and by Theorem \[thm:existence\_adm\_mod\_type\_C\] $\lambda$ is as described in the statement of the corollary. Now the result follows just like in the proof of Corollary \[cor:1\].
In Section \[sec:type-c\_n-calc\] we determine all $\mathbf{b}\in
({\mathbb{C}}^*)^n$ such that ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is torsion free (with $\Sigma$ and $\lambda$ as above in Corollary \[cor:3\]). By the corollary this classifies all simple torsion free modules for type $C$.
Classification of simple torsion free modules. Type A. {#sec:type-a_n-calc}
======================================================
In this section we assume $\mathfrak{g}=\mathfrak{sl}_{n+1}$ with $n\geq 2$. Let $\Pi=\{\alpha_1,\dots,\alpha_n\}$ denote the simple roots such that $(\alpha_i|\alpha_{i+1})=-1$, $i=1,\dots,n-1$. Set $\beta_j = s_1\cdots s_{j-1}(\alpha_j)=\alpha_1+\dots+\alpha_j$, then $\Sigma=\{\beta_1,\dots,\beta_n\}$ is a set of commuting roots with corresponding root vectors $F_{\beta_j} = T_{s_1}\cdots
T_{s_{j-1}}(F_{\alpha_j})$. We will show some commutation formulas and use these to calculate ${\varphi}_{F_\Sigma,\mathbf{b}}$ on all simple root vectors. This will allow us to determine exactly for which $\mathbf{b}\in ({\mathbb{C}}^*)^n$, ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is torsion free, see Theorem \[thm:clas\_of\_b\_such\_that\_twist\_is\_torsion\_free\].
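For example, for $n=2$ we get $\beta_1=\alpha_1$, $\beta_2=\alpha_1+\alpha_2$, $F_{\beta_1}=F_{\alpha_1}$ and $F_{\beta_2}=T_{s_1}(F_{\alpha_2})=F_{\alpha_2}F_{\alpha_1}-qF_{\alpha_1}F_{\alpha_2}$, and $\Sigma=\{\alpha_1,\alpha_1+\alpha_2\}$ is a basis of $Q$.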
Choose a reduced expression of $w_0$ starting with $s_1\cdots s_n$ and define roots $\gamma_1,\dots,\gamma_N$ and root vectors $F_{\gamma_1},\dots,F_{\gamma_N}$ from this expression. Note that $F_{\beta_i}=F_{\gamma_i}$ for $i=1,\dots, n$.
\[prop:26\] Let $i\in \{2,\dots,n\}$ and $j\in \{1,\dots,n\}$. $$[F_{\alpha_i},F_{\beta_j}]_q =
\begin{cases}
F_{\beta_i}, &\text{ if } j = i-1
\\
0, &\text{ otherwise}
\end{cases}$$ and $$[E_{\alpha_i},F_{\beta_j}] =
\begin{cases}
F_{\beta_{i-1}} K_{\alpha_i}{^{-1}}, &\text{ if } j = i
\\
0, &\text{ otherwise.}
\end{cases}$$
We will show the proposition for the $F$’s first and then for the $E$’s.
Assume first that $j<i-1$. Then clearly $[F_{\alpha_i},F_{\beta_j}]_q = [F_{\alpha_i},F_{\beta_j}]=0$ since $\alpha_i$ is not connected to any of the simple roots $\alpha_1,\dots,\alpha_j$ appearing in $\beta_j$.
Then assume $j\geq i$. We must have $\alpha_i = \gamma_k$ for some $k>n$ since $\{\gamma_1,\dots,\gamma_N\}=\Phi^+$. By Theorem \[thm:DP\] $[F_{\alpha_i},F_{\beta_j}]_q$ is a linear combination of monomials of the form $F_{\gamma_{j+1}}^{a_{j+1}}\cdots F_{\gamma_{k-1}}^{a_{k-1}}$. For a monomial of this form to appear with nonzero coefficient we must have $$\sum_{h=j+1}^{k-1} a_h \gamma_h = \alpha_i + \beta_j = \alpha_1+\dots + \alpha_{i-1}+2\alpha_i +\alpha_{i+1}+\dots \alpha_j.$$ For this to be possible one of the positive roots $\gamma_s$, $j<s<k$ must be equal to $\alpha_1+\alpha_2+\dots+\alpha_m$ for some $m\leq j$ but $\alpha_1+\alpha_2+\dots+\alpha_m=\gamma_m$ by construction and $m\leq j<s$ so $m\neq s$. We conclude that this is not possible.
Finally we investigate the case when $j=i-1$. We have $$\begin{aligned}
[F_{\alpha_i},F_{\beta_{i-1}}]_q =& [T_{s_1}\cdots
T_{s_{i-2}}(F_{\alpha_i}),T_{s_1}\cdots
T_{s_{i-2}}(F_{\alpha_{i-1}})]_q
\\
=& T_{s_1}\cdots T_{s_{i-2}}
\left([F_{\alpha_i},F_{\alpha_{i-1}}]_q\right)
\\
=& T_{s_1}\cdots T_{s_{i-2}} T_{s_{i-1}}(F_{\alpha_i})
\\
=& F_{\beta_i}.
\end{aligned}$$
For the $E$’s: Assume first $j<i$: Since $F_{\beta_j}$ is a polynomial in $F_{\alpha_1},\dots,F_{\alpha_j}$, $E_{\alpha_i}$ commutes with $F_{\beta_j}$ when $j<i$.
Assume then $j=i$: We have by the above $$F_{\beta_i} = [F_{\alpha_i},F_{\beta_{i-1}}]_q$$ so $$\begin{aligned}
[E_{\alpha_i},F_{\beta_i}] =&
[E_{\alpha_i},(F_{\alpha_i}F_{\beta_{i-1}}-q^{-(\beta_{i-1}|\alpha_i)}F_{\beta_{i-1}}F_{\alpha_i})]
\\
=& [E_{\alpha_i},F_{\alpha_i}] F_{\beta_{i-1}} - q F_{\beta_{i-1}}
[E_{\alpha_i},F_{\alpha_i}]
\\
=& \frac{ K_{\alpha_i}-K_{\alpha_i}{^{-1}}}{q-q{^{-1}}} F_{\beta_{i-1}}
- q F_{\beta_{i-1}} \frac{K_{\alpha_i}-K_{\alpha_i}{^{-1}}}{q-q{^{-1}}}
\\
=& F_{\beta_{i-1}} \frac{ q K_{\alpha_i} - q{^{-1}}K_{\alpha_i}{^{-1}}- q K_{\alpha_i} + q K_{\alpha_i}{^{-1}}}{q-q{^{-1}}}
\\
=& F_{\beta_{i-1}}K_{\alpha_{i}}{^{-1}}.
\end{aligned}$$
Finally assume $j>i$: Observe first that we have $$T_{s_{i+1}}\cdots T_{s_{j-1}}F_{\alpha_j} = \sum_{s=1}^m u_s F_{\alpha_{i+1}} u_s'$$ for some $m\in {\mathbb{N}}$ and some $u_s,u_s'$ that are polynomials in $F_{\alpha_{i+2}},\dots F_{\alpha_{j}}$. Note that $T_{s_i}(u_s)=u_s$ and $T_{s_i}(u_s')=u_s'$ for all $s$ since $\alpha_i$ is not connected to any of the simple roots $\alpha_{i+2},\dots \alpha_j$. So $$\begin{aligned}
T_{s_i}T_{s_{i+1}}\cdots T_{s_{j-1}}F_{\alpha_j} =& T_{s_i}\left(
\sum_{s=1}^m u_s F_{\alpha_{i+1}} u_s'\right)
\\
=& \sum_{s=1}^m u_s T_{s_i}(F_{\alpha_{i+1}}) u_s'
\\
=& \sum_{s=1}^m u_s
(F_{\alpha_{i+1}}F_{\alpha_i}-qF_{\alpha_i}F_{\alpha_{i+1}}) u_s'
\\
=& \sum_{s=1}^m u_s F_{\alpha_{i+1}} u_s' F_{\alpha_i} - q
F_{\alpha_i}\sum_{s=1}^m u_s F_{\alpha_{i+1}} u_s'
\\
=& T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}) F_{\alpha_i} - q
F_{\alpha_i} T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}).
\end{aligned}$$ Thus we see that $$\begin{aligned}
F_{\beta_j} =& T_{s_1}\dots T_{s_i}\cdots
T_{s_{j-1}}(F_{\alpha_j})
\\
=&T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}) T_{s_{1}}\cdots
T_{s_{i-1}}(F_{\alpha_i}) - qT_{s_{1}}\cdots T_{s_{i-1}}(
F_{\alpha_i}) T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j})
\\
=& T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}) F_{\beta_i} - q
F_{\beta_i} T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j})
\end{aligned}$$ and therefore $$\begin{aligned}
[E_{\alpha_i},F_{\beta_j}] =& T_{s_{i+1}}\cdots
T_{s_{j-1}}(F_{\alpha_j}) [E_{\alpha_i},F_{\beta_i}] - q
[E_{\alpha_i},F_{\beta_i}]T_{s_{i+1}}\cdots
T_{s_{j-1}}(F_{\alpha_j})
\\
=& T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j})
F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}- q F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j})
\\
=& F_{\beta_{i-1}} T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j})
K_{\alpha_i}{^{-1}}- F_{\beta_{i-1}} T_{s_{i+1}}\cdots
T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_i}{^{-1}}\\
=& 0.
\end{aligned}$$
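In the smallest case $\mathfrak{g}=\mathfrak{sl}_3$ the proposition says that $[F_{\alpha_2},F_{\beta_1}]_q=F_{\beta_2}$, i.e. $F_{\alpha_2}F_{\alpha_1}-qF_{\alpha_1}F_{\alpha_2}=F_{\beta_2}$, that $F_{\alpha_2}$ $q$-commutes with $F_{\beta_2}$, that $E_{\alpha_2}$ commutes with $F_{\beta_1}$, and that $[E_{\alpha_2},F_{\beta_2}]=F_{\beta_1}K_{\alpha_2}{^{-1}}$.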
\[prop:27\] Let $i\in \{2,\dots,n\}$. Let $a\in {\mathbb{Z}}_{>0}$. Then $$[F_{\alpha_i},F_{\beta_{i-1}}^a]_q = [a] F_{\beta_{i-1}}^{a-1}F_{\beta_i}$$ and for $b\in {\mathbb{C}}^*$ $${\varphi}_{F_{\beta_{i-1}},b}(F_{\alpha_i}) = b F_{\alpha_i}+ \frac{b-b{^{-1}}}{q-q{^{-1}}} F_{\beta_{i-1}}{^{-1}}F_{\beta_i}.$$
The first claim is proved by induction over $a$. $a=1$ is shown in Proposition \[prop:26\]. The induction step: $$\begin{aligned}
F_{\alpha_i}F_{\beta_{i-1}}^{a+1} =& \left( q^a F_{\beta_{i-1}}^a
F_{\alpha_i} + [a] F_{\beta_{i-1}}^{a-1}F_{\beta_i}\right)
F_{\beta_{i-1}}
\\
=& q^{a+1}F_{\beta_{i-1}}^{a+1}F_{\alpha_i} +q^a F_{\beta_{i-1}}^a
F_{\beta_i} + q{^{-1}}[a] F_{\beta_{i-1}}^a F_{\beta_i}
\\
=& q^{a+1}F_{\beta_{i-1}}^{a+1}F_{\alpha_i} +
[a+1]F_{\beta_{i-1}}^a F_{\beta_i}.
\end{aligned}$$ So we have proved the first claim. We get then for $a\in {\mathbb{Z}}_{>0}$ $${\varphi}_{F_{\beta_{i-1}},q^a}(F_{\alpha_i}) = F_{\beta_{i-1}}^{-a} F_{\alpha_i} F_{\beta_{i-1}}^a = q^a F_{\alpha_i} + \frac{q^a-q^{-a}}{q-q{^{-1}}} F_{\beta_{i-1}}{^{-1}}F_{\beta_i}.$$ Using the fact that ${\varphi}_{F_{\beta_{i-1}},b}(F_{\alpha_i})$ is Laurent polynomial in $b$ we get the second claim of the proposition.
\[prop:28\] Let $i\in \{2,\dots,n\}$. Let $a\in {\mathbb{Z}}_{>0}$. Then $$[E_{\alpha_i},F_{\beta_i}^a] = q^{a-1}[a] F_{\beta_i}^{a-1}F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}$$ and for $b\in {\mathbb{C}}^*$ $${\varphi}_{F_{\beta_{i}},b}(E_{\alpha_i}) = E_{\alpha_i}+ q{^{-1}}b \frac{b-b{^{-1}}}{q-q{^{-1}}} F_{\beta_i}{^{-1}}F_{\beta_{i-1}} K_{\alpha_i}{^{-1}}.$$
The first claim is proved by induction over $a$. $a=1$ is shown in Proposition \[prop:26\]. The induction step: $$\begin{aligned}
E_{\alpha_i} F_{\beta_i}^{a+1} =& \left( F_{\beta_i}^a
E_{\alpha_i}+ q^{a-1} [a] F_{\beta_i}^{a-1}F_{\beta_{i-1}}
K_{\alpha_i}{^{-1}}\right) F_{\beta_i}
\\
=& F_{\beta_i}^{a+1} E_{\alpha_i} + F_{\beta_i}^a F_{\beta_{i-1}}
K_{\alpha_i}{^{-1}}+ q^{a+1} [a] F_{\beta_i}^a
F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}\\
=& F_{\beta_i}^{a+1} E_{\alpha_i} + q^{a} (q^{-a} +
q[a])F_{\beta_i}^a F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}\\
=& F_{\beta_i}^{a+1} E_{\alpha_i} + q^{a} [a+1]F_{\beta_i}^a
F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}.
\end{aligned}$$ This proves the first claim. We get then for $a\in {\mathbb{Z}}_{>0}$ $${\varphi}_{F_{\beta_i},q^a}(E_{\alpha_i}) = F_{\beta_i}^{-a} E_{\alpha_i} F_{\beta_i}^a = E_{\alpha_i} + q{^{-1}}q^a \frac{q^a - q^{-a}}{q-q{^{-1}}} F_{\beta_i}{^{-1}}F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}.$$ Using the fact that ${\varphi}_{F_{\beta_{i}},b}(E_{\alpha_i})$ is Laurent polynomial in $b$ we get the second claim of the proposition.
In our classification we do not need to calculate ${\varphi}_{F_\Sigma,\mathbf{b}}(E_{\alpha_1})$, but for completeness we compute it in Proposition \[prop:30\]. To do this we need the following proposition:
\[prop:29\] Let $j\in \{2,\dots,n\}$. Then $$[E_{\alpha_1},F_{\beta_j}] = -q T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j})K_{\alpha_1},$$ for $a\in {\mathbb{Z}}_{>0}$: $$[E_{\alpha_1},F_{\beta_j}^a] = -q^{2-a}[a]F_{\beta_j}^{a-1} T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j})K_{\alpha_1}$$ and for $b\in {\mathbb{C}}^*$: $${\varphi}_{F_{\beta_j},b}(E_{\alpha_1}) = E_{\alpha_1} - q^2 b \frac{b-b{^{-1}}}{q-q{^{-1}}} F_{\beta_j}{^{-1}}T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_1}.$$
Like in the proof of Proposition \[prop:26\] we see that $$T_{s_{2}}\cdots T_{s_{j-1}}F_{\alpha_j} = \sum_{s=1}^m u_s F_{\alpha_{2}} u_s'$$ for some $m\in {\mathbb{N}}$ and some $u_s,u_s'$ that are polynomials in $F_{\alpha_{3}},\dots F_{\alpha_{j}}$. Note that $T_{s_1}(u_s)=u_s$ and $T_{s_1}(u_s')=u_s'$ for all $s$ since $\alpha_1$ is not connected to any of the simple roots $\alpha_{3},\dots \alpha_j$. So $$\begin{aligned}
T_{s_1}T_{s_{2}}\cdots T_{s_{j-1}}F_{\alpha_j} =& T_{s_1}\left(
\sum_{s=1}^m u_s F_{\alpha_{2}} u_s'\right)
\\
=& \sum_{s=1}^m u_s T_{s_1}(F_{\alpha_{2}}) u_s'
\\
=& \sum_{s=1}^m u_s (F_{\alpha_{2}}F_{\alpha_1}-q
F_{\alpha_1}F_{\alpha_{2}}) u_s'
\\
=& \sum_{s=1}^m u_s F_{\alpha_{2}} u_s' F_{\alpha_1} - q
F_{\alpha_1}\sum_{s=1}^m u_s F_{\alpha_{2}} u_s'
\\
=& T_{s_{2}}\cdots T_{s_{j-1}}(F_{\alpha_j}) F_{\alpha_1} - q
F_{\alpha_1} T_{s_{2}}\cdots T_{s_{j-1}}(F_{\alpha_j}).
\end{aligned}$$ Thus $$\begin{aligned}
[E_{\alpha_1},F_{\beta_j}] =& T_{s_2}\cdots
T_{s_{j-1}}(F_{\alpha_j}) [E_{\alpha_1},F_{\alpha_1}] - q
[E_{\alpha_1},F_{\alpha_1}] T_{s_2}\cdots
T_{s_{j-1}}(F_{\alpha_j})
\\
=& T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j})
\frac{K_{\alpha_1}-K_{\alpha_1}{^{-1}}}{q-q{^{-1}}} - q
\frac{K_{\alpha_1}-K_{\alpha_1}{^{-1}}}{q-q{^{-1}}} T_{s_2}\cdots
T_{s_{j-1}}(F_{\alpha_j})
\\
=& T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j}) \frac{
K_{\alpha_1}-K_{\alpha_1}{^{-1}}- q^2 K_{\alpha_1} +
K_{\alpha_1}{^{-1}}}{q-q{^{-1}}}
\\
=& -q T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_1}.
\end{aligned}$$ Note that $T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j})$ is a polynomial in $F_{\alpha_2},\dots,F_{\alpha_j}$. By Proposition \[prop:26\] $[F_{\alpha_i},F_{\beta_j}]_q = [F_{\alpha_i},F_{\beta_j}] = 0$ for $1<i<j$ and $[F_{\alpha_j},F_{\beta_j}]_q =
F_{\alpha_j}F_{\beta_j}-q{^{-1}}F_{\beta_j}F_{\alpha_j}=0$ so $$\begin{aligned}
T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j}) F_{\beta_j} - q{^{-1}}F_{\beta_j}T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j}) =&
[T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j}),F_{\beta_j}]_q
\\
=& 0.
\end{aligned}$$
The second claim is by induction on $a$: $$\begin{aligned}
E_{\alpha_1} F_{\beta_j}^{a+1} =& \left( F_{\beta_j}^a
E_{\alpha_1} - q^{2-a}[a] F_{\beta_j}^{a-1} T_{s_2}\cdots
T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_1} \right) F_{\beta_j}
\\
=& F_{\beta_j}^{a+1} E_{\alpha_1} - q F_{\beta_j}^a T_{s_2}\cdots
T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_1}
\\
&- q^{-a} [a] F_{\beta_j}^a T_{s_2}\cdots
T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_1}
\\
=& F_{\beta_j}^{a+1} E_{\alpha_1} - q^{1-a}\left( q^{a} + q{^{-1}}[a]\right)F_{\beta_j}^a T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j})
K_{\alpha_1}
\\
=& F_{\beta_j}^{a+1} E_{\alpha_1} - q^{1-a}[a+1] F_{\beta_j}^a
T_{s_2}\cdots T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_1}.
\end{aligned}$$
So we get for $a\in {\mathbb{Z}}_{>0}$: $${\varphi}_{F_{\beta_j},q^a}(E_{\alpha_1}) = F_{\beta_j}^{-a} E_{\alpha_1} F_{\beta_j}^a = E_{\alpha_1} - q^2 q^{-a} \frac{q^a - q^{-a}}{q-q{^{-1}}} T_{s_2}\cdots
T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_1}.$$ Using the fact that ${\varphi}_{F_{\beta_{j}},b}(E_{\alpha_1})$ is a Laurent polynomial in $b$ we get the third claim of the proposition.
We can combine the above propositions into the following proposition:
\[prop:30\] Let $i\in \{2,\dots,n\}$. For $\mathbf{b}=(b_1,\dots,b_n)\in
({\mathbb{C}}^*)^n$ $$\begin{aligned}
{\varphi}_{F_\Sigma,\mathbf{b}}(F_{\alpha_{i}}) =& b_i^{-1}b_{i+1}{^{-1}}\cdots b_n{^{-1}}{\varphi}_{F_{\beta_{i-1},b_{i-1}}}(F_{\alpha_i})
\\
=& b_i^{-1}b_{i+1}{^{-1}}\cdots b_n{^{-1}}(b_{i-1}F_{\alpha_i}+
\frac{b_{i-1}-b_{i-1}{^{-1}}}{q-q{^{-1}}} F_{\beta_{i-1}}{^{-1}}F_{\beta_i})
\\
{\varphi}_{F_\Sigma,\mathbf{b}}(E_{\alpha_i}) =&
{\varphi}_{F_{\beta_{i},b_{i}}}(E_{\alpha_i}) = E_{\alpha_i}+ q{^{-1}}b_i
\frac{b_i-b_i{^{-1}}}{q-q{^{-1}}} F_{\beta_i}{^{-1}}F_{\beta_{i-1}}
K_{\alpha_i}{^{-1}}.
\end{aligned}$$ Furthermore $${\varphi}_{F_\Sigma,\mathbf{b}}(F_{\alpha_1}) = b_2\cdots b_n
F_{\alpha_1}$$ and $$\begin{aligned}
{\varphi}_{F_{\Sigma,\mathbf{b}}}(E_{\alpha_1}) =& E_{\alpha_1} - q^2
\sum_{j=2}^n b_jb_{j+1}{^{-1}}\cdots b_n{^{-1}}\frac{b_j-b_j{^{-1}}}{q-q{^{-1}}}F_{\beta_j}{^{-1}}T_{s_2}\cdots
T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_1}
\\
&+ b_2{^{-1}}\cdots b_n{^{-1}}F_{\beta_1}{^{-1}}\frac{(b_1-b_1{^{-1}})(qb_1{^{-1}}\cdots b_n{^{-1}}K_{\alpha_1} - q{^{-1}}b_1\cdots b_n K_{\alpha_1}{^{-1}})}{(q-q{^{-1}})^2}.
\end{aligned}$$
The first two equations follow from Proposition \[prop:26\], Proposition \[prop:27\] and Proposition \[prop:28\]. The third follows because $F_{\alpha_1}=F_{\beta_1}$ q-commutes with all the other root vectors $F_{\beta_2},\dots,F_{\beta_n}$ (see also the discussion before Definition \[def:twist\_by\_weight\]). For the last equation we use Proposition \[prop:29\]: $$\begin{aligned}
{\varphi}_{F_\Sigma,\mathbf{b}}(E_{\alpha_1})=&
{\varphi}_{F_{\beta_n},b_n}\circ \cdots \circ
{\varphi}_{F_{\beta_1},b_1}(E_{\alpha_1})
\\
=& {\varphi}_{F_{\beta_n},b_n}\circ \cdots \circ
{\varphi}_{F_{\beta_2},b_2}\left( E_{\alpha_1} - F_{\beta_1}{^{-1}}\frac{(b_1-b_1{^{-1}})(qb_1{^{-1}}K_{\alpha_1}-q{^{-1}}b_1
K_{\alpha_1}{^{-1}})}{(q-q{^{-1}})^2}\right)
\\
=& {\varphi}_{F_{\beta_n},b_n}\circ \cdots \circ
{\varphi}_{F_{\beta_3},b_3}( E_{\alpha_1} - q^2
b_2\frac{b_2-b_2{^{-1}}}{q-q{^{-1}}} F_{\beta_2}{^{-1}}F_{\alpha_2}
K_{\alpha_1}
\\
&- b_2{^{-1}}F_{\beta_1}{^{-1}}\frac{(b_1-b_1{^{-1}})(qb_1{^{-1}}b_2{^{-1}}K_{\alpha_1}-q{^{-1}}b_1b_2 K_{\alpha_1}{^{-1}})}{(q-q{^{-1}})^2})
\\
&\vdots
\\
=& E_{\alpha_1} - q^2 \sum_{j=2}^n b_jb_{j+1}{^{-1}}\cdots b_n{^{-1}}\frac{b_j-b_j{^{-1}}}{q-q{^{-1}}}F_{\beta_j}{^{-1}}T_{s_2}\cdots
T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_1}
\\
&- b_2{^{-1}}\cdots b_n{^{-1}}F_{\beta_1}{^{-1}}\frac{(b_1-b_1{^{-1}})(qb_1{^{-1}}\cdots b_n{^{-1}}K_{\alpha_1} - q{^{-1}}b_1\cdots b_n K_{\alpha_1}{^{-1}})}{(q-q{^{-1}})^2}
\end{aligned}$$
\[prop:4\] Let $\lambda$ be a weight such that $\lambda(K_{\alpha_i})\in \pm
q^{\mathbb{N}}$ for $i=2,\dots,n$ and $\lambda(K_{\alpha_1})\not \in \pm
q^{\mathbb{N}}$. Let $\mathbf{b}=(b_1,\dots,b_n)\in ({\mathbb{C}}^*)^n$. Let $i\in\{2,\dots,n\}$. Then $E_{\alpha_i}$ acts injectively on the $U_q$-module ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ if and only if $b_i \not \in \pm q^{{\mathbb{Z}}}$ and $F_{\alpha_i}$ acts injectively on ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ if and only if $b_{i-1}\not \in \pm q^{{\mathbb{Z}}}$.
By Proposition \[prop:25\] and Corollary \[cor:6\] a root vector acts injectively on the $U_q$-module $${\varphi}_{F_\Sigma,(b_1,\dots,b_n)}.L(\lambda)_{F_\Sigma}$$ if and only if it acts injectively on $${\varphi}_{F_\Sigma,({\varepsilon}_1
q^{i_1}b_1,\dots,{\varepsilon}_n q^{i_n}b_n)}.L(\lambda)_{F_\Sigma}$$ for any $i_1,\dots,i_n\in{\mathbb{Z}}$ and ${\varepsilon}_1,\dots,{\varepsilon}_n\in
\{\pm 1\}$.
Assume there exists a $0\neq v \in
{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ such that $E_{\alpha_i}v=0$. We have $v=F_{\beta_1}^{a_1}\cdots
F_{\beta_n}^{a_n} {\otimes}v'$ for some $a_1,\dots,a_n\in {\mathbb{Z}}_{\leq 0}$ and some $v'\in L(\lambda)$. So $E_{\alpha_i}v=0$ implies $$0={\varphi}_{F_\Sigma,\mathbf{b}}(E_{\alpha_i}) F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}{\otimes}v' = F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}{\otimes}{\varphi}_{F_\Sigma,\mathbf{c}}(E_{\alpha_i})v'$$ where $\mathbf{c}=(q^{a_1}b_1,\dots,q^{a_n}b_n)$. So there exists a $v'\in L(\lambda)$ such that ${\varphi}_{F_\Sigma,\mathbf{c}}(E_{\alpha_i})v'=0$. That is $$\left(E_{\alpha_i} + q{^{-1}}c_i \frac{c_i-c_i{^{-1}}}{q-q{^{-1}}} F_{\beta_i}{^{-1}}F_{\beta_{i-1}} K_{\alpha_i}{^{-1}}\right) v' = 0$$ or equivalently $$F_{\beta_i}E_{\alpha_i} v' = q{^{-1}}c_i \frac{c_i{^{-1}}- c_i}{q-q{^{-1}}} F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}v'.$$ Since $L(\lambda)$ is a highest weight module we have some $r\in
{\mathbb{N}}$ such that $E_{\alpha_i}^{r}v' \neq 0$ and $E_{\alpha_i}^{r+1}
v' =0$. Fix this $r$. We get $$\begin{aligned}
E_{\alpha_i}^{(r)}F_{\beta_i}E_{\alpha_i} v' =
E_{\alpha_i}^{(r)}q{^{-1}}c_i \frac{c_i{^{-1}}- c_i}{q-q{^{-1}}}
F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}v'
\end{aligned}$$ and calculating the right hand side and left hand side we get $$\begin{aligned}
q^{r-1}[r]F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}E_{\alpha_i}^{(r)}v' =
q^{-1+2r} c_i \frac{c_i{^{-1}}- c_i}{q-q{^{-1}}}
F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}E_{\alpha_i}^{(r)} v'.
\end{aligned}$$ So we must have $$\begin{aligned}
q^{r-1}[r] = q^{-1+2r} c_i \frac{c_i{^{-1}}- c_i}{q-q{^{-1}}}
\end{aligned}$$ or equivalently $c_i = \pm q^{-r}$. Since $c_i\in q^{{\mathbb{Z}}} b_i$ we have proved the first claim.
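For completeness, the last equivalence can be checked directly (a short computation, using the convention $[r]=\frac{q^{r}-q^{-r}}{q-q^{-1}}$, which is consistent with the computations above): $$q^{r-1}[r]=\frac{q^{2r-1}-q^{-1}}{q-q^{-1}}, \qquad q^{-1+2r} c_i \frac{c_i^{-1}-c_i}{q-q^{-1}}=\frac{q^{2r-1}-q^{2r-1}c_i^{2}}{q-q^{-1}},$$ so the two sides agree exactly when $q^{2r-1}c_i^{2}=q^{-1}$, that is $c_i^{2}=q^{-2r}$, i.e. $c_i=\pm q^{-r}$.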
The other claim is shown similarly (see e.g. the calculations done in the proof of Proposition \[prop:7\]; the calculations are the same in this case).
\[prop:5\] Let $M$ be a weight $U_q$-module of finite Jordan-Hölder length with finite dimensional weight spaces. Let $\alpha\in \Pi$. If $E_\alpha$ and $F_\alpha$ both act injectively on $M$ then $E_\alpha$ and $F_\alpha$ act injectively on every composition factor of $M$.
Let $V$ be a simple $U_q$-submodule of $M$. Let $\mu$ be a weight of $V$. Then $V_\mu$ is a simple $(U_q)_0$-module by Theorem \[thm:Lemire\] and $E_{\alpha}F_{\alpha}$ and $F_{\alpha}E_\alpha$ act injectively on $V_\mu$ by assumption. Since $\dim M_\mu<\infty$ this implies that $F_\alpha E_\alpha$ and $E_\alpha F_\alpha$ act injectively on the $(U_q)_0$ module $(M/V)_\mu {\cong}M_\mu /V_\mu$. Since $M/V$ is the sum of its weight spaces this implies that $E_\alpha F_\alpha$ and $F_\alpha E_\alpha$ act injectively on $M/V$. This in turn implies that $E_\alpha$ and $F_\alpha$ act injectively on $M/V$. Doing induction on the Jordan-Hölder length of $M$ finishes the proof.
The above proposition is true for a general simple Lie algebra $\mathfrak{g}$ and we will use it in the next section as well.
\[thm:clas\_of\_b\_such\_that\_twist\_is\_torsion\_free\] Let $\lambda$ be a weight such that $\lambda(K_{\alpha_i})\in \pm
q^{\mathbb{N}}$ for $i=2,\dots,n$ and $\lambda(K_{\alpha_1})\not \in \pm
q^{\mathbb{N}}$. Let $\mathbf{b}=(b_1,\dots,b_n)\in ({\mathbb{C}}^*)^n$. Then ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is simple and torsion free if and only if $b_i\not \in \pm q^{\mathbb{Z}}$, $i=1,\dots, n$ and $\lambda(K_{\alpha_1}){^{-1}}b_1\cdots b_n \not \in \pm q^{\mathbb{Z}}$.
By Proposition \[prop:9\] $L(\lambda)$ is a subquotient of $${^{{\overline}{s_1}}}\left( {\varphi}_{F_\Sigma,(\lambda(K_{\alpha_1}),1,\dots,1)}.L(\lambda)_{F_\Sigma}\right).$$ So by Lemma \[lemma:1\] we get (using that $L(\lambda)={^{s_1}}\left({^{{\overline}{s_1}}}L(\lambda)\right)$) for any $\mathbf{c}=(c_1,\dots,c_n) \in ({\mathbb{C}}^*)^n$ $$\left( {\varphi}_{F_\Sigma,\mathbf{c}}.L(\lambda)_{F_\Sigma} \right)^{ss} {\cong}{^{{\overline}{s_1}}}\left( {\varphi}_{F_\Sigma,(\lambda(K_{\alpha_1})c_1{^{-1}}\cdots c_n{^{-1}},c_2,\dots,c_n)}.L(\lambda)_{F_\Sigma}\right)^{ss}.$$ We have $\lambda(K_{\alpha_2})={\varepsilon}q^{r}$ for some $r\in {\mathbb{N}}$ and some ${\varepsilon}\in \{\pm 1\}$. We see in the proof of Lemma \[lemma:30\] that $L(\lambda)$ is a subquotient of $${^{{\overline}{s_2}}}\left( {\varphi}_{F_\Sigma,({\varepsilon},{\varepsilon},1,\dots,1)}.L(\lambda)_{F_\Sigma}\right).$$ We get by Lemma \[lemma:1\] (using that $L(\lambda)={^{s_2}}\left({^{{\overline}{s_2}}}L(\lambda)\right)$) for any $\mathbf{c}=(c_1,\dots,c_n) \in ({\mathbb{C}}^*)^n$ $$\left( {\varphi}_{F_\Sigma,\mathbf{c}}.L(\lambda)_{F_\Sigma}\right)^{ss} {\cong}{^{{\overline}{s_2}}}\left( {\varphi}_{F_\Sigma,({\varepsilon}c_2,{\varepsilon}c_1,c_3,\dots,c_n)}.L(\lambda)_{F_\Sigma}\right)^{ss}.$$ Combining the above we get $$\begin{aligned}
\left(
{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)^{ss}
{\cong}& {^{{\overline}{s_2}}}\left( {\varphi}_{F_\Sigma,({\varepsilon}b_2,{\varepsilon}b_1,b_3,\dots,b_n)}.L(\lambda)_{F_\Sigma}\right)^{ss}
\\
{\cong}& {^{{\overline}{s_2}}}\left({^{{\overline}{s_1}}}\left(
{\varphi}_{F_\Sigma,(\lambda(K_{\alpha_1})b_1{^{-1}}\cdots b_n{^{-1}},
{\varepsilon}b_1,b_3,\dots,b_n)}.L(\lambda)_{F_\Sigma}\right)\right)^{ss}
\\
{\cong}& {^{{\overline}{s_1s_2}}}\left(
{\varphi}_{F_\Sigma,(\lambda(K_{\alpha_1})b_1{^{-1}}\cdots b_n{^{-1}},
{\varepsilon}b_1,b_3,\dots,b_n)}.L(\lambda)_{F_\Sigma}\right)^{ss}.
\end{aligned}$$ Since $T_{s_1}{^{-1}}T_{s_2}{^{-1}}(E_{\alpha_1})=E_{\alpha_2}$ and $T_{s_1}{^{-1}}T_{s_2}{^{-1}}(F_{\alpha_1})=F_{\alpha_2}$ we get by Proposition \[prop:4\] that $E_{\alpha_1}$ acts injectively on ${^{{\overline}{s_1s_2}}}\left(
{\varphi}_{F_{\Sigma},(\lambda(K_{\alpha_1})b_1{^{-1}}\cdots
b_n{^{-1}},{\varepsilon}b_1,b_3,\dots,b_n)}.L(\lambda)_{F_{\Sigma}}\right)$ if and only if $b_1 \not \in \pm q^{{\mathbb{Z}}}$ and $F_{\alpha_1}$ acts injectively on ${^{{\overline}{s_1s_2}}}\left(
{\varphi}_{F_{\Sigma},(\lambda(K_{\alpha_1})b_1{^{-1}}\cdots
b_n{^{-1}},{\varepsilon}b_1,b_3,\dots,b_n)}.L(\lambda)_{F_{\Sigma}}\right)$ if and only if $\lambda(K_{\alpha_1}){^{-1}}b_1\cdots b_n \not \in \pm q^{{\mathbb{Z}}}$.
Assume ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is torsion free. Then all root vectors act injectively on ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$. We claim ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is simple: Let $V\subset {\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ be a simple module. Then $V$ is admissible of the same degree $d$ as $L(\lambda)$ by Proposition \[prop:15\] and because all root vectors act injectively $\dim V_{q^\mu\lambda}=d$ for all $\mu\in
Q$. So $V= {\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$. Thus $\left({\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)^{ss}={\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$. Then by the above $${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma} {\cong}{^{{\overline}{s_1s_2}}}\left( {\varphi}_{F_{\Sigma},(\lambda(K_{\alpha_1})b_1{^{-1}}\cdots b_n{^{-1}},{\varepsilon}b_1,b_3,\dots,b_n)}.L(\lambda)_{F_{\Sigma}}\right).$$ This shows that when ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_{\Sigma}}$ is torsion free we must have $\lambda(K_{\alpha_1}){^{-1}}b_1\cdots b_n \not \in \pm
q^{{\mathbb{Z}}}$. By Proposition \[prop:4\] $b_i\not \in \pm q^{\mathbb{Z}}$, $i=1,\dots, n$.
Assume on the other hand that $b_i \not \in \pm q^{{\mathbb{Z}}}$ for $i\in
\{1,\dots,n\}$ and $\lambda(K_{\alpha_1}){^{-1}}b_1\cdots b_n\not \in
\pm q^{{\mathbb{Z}}}$. By Proposition \[prop:4\] we get that the simple root vectors $E_{\alpha_2},\dots,E_{\alpha_n}$ and $F_{\alpha_1},\dots,F_{\alpha_n}$ all act injectively on ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$. We need to show that $E_{\alpha_1}$ acts injectively on the module. By the above $$\left({\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)^{ss} {\cong}{^{{\overline}{s_1s_2}}}\left( {\varphi}_{F_{\Sigma},(\lambda(K_{\alpha_1})b_1{^{-1}}\cdots b_n{^{-1}},{\varepsilon}b_1,b_3,\dots,b_n)}.L(\lambda)_{F_{\Sigma}}\right)^{ss}$$ and the root vectors $E_{\alpha_1},F_{\alpha_1}$ act injectively on $${^{{\overline}{s_1s_2}}}\left(
{\varphi}_{F_{\Sigma},(\lambda(K_{\alpha_1})b_1{^{-1}}\cdots
b_n{^{-1}},{\varepsilon}b_1,b_3,\dots,b_n)}.L(\lambda)_{F_{\Sigma}}\right).$$ Then by Proposition \[prop:5\] $E_{\alpha_1}$ act injectively on all composition factors of ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$.
Let $V$ be a simple $U_q$-submodule of ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$. By the above all simple root vectors act injectively on $V$ and then like above this implies $V={\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ i.e. ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is simple and torsion free.
By the comments after Corollary \[cor:2\] the above Theorem completes the classification of simple torsion free modules in type A.
Classification of simple torsion free modules. Type C. {#sec:type-c_n-calc}
======================================================
In this section we assume $\mathfrak{g}$ is of type $C_n$ (i.e. $\mathfrak{g}=\mathfrak{sp}_{2n}$) with $n\geq 2$. Let $\Pi=\{\alpha_1,\dots,\alpha_n\}$ denote the simple roots such that $(\alpha_i|\alpha_{i+1})=-1$, $i=2,\dots,n-1$, $\left<\alpha_2,\alpha_1^\vee\right>=-1$ and $\left<\alpha_1,\alpha_2^\vee\right>=-2$ i.e. $\alpha_1$ is long and $\alpha_2,\dots,\alpha_n$ are short.
Set $\beta_j = s_1\cdots s_{j-1}(\alpha_j)=\alpha_1+\dots+\alpha_j$, then $\Sigma=\{\beta_1,\dots,\beta_n\}$ is a set of commuting roots with corresponding root vectors $F_{\beta_j} = T_{s_1}\cdots
T_{s_{j-1}}(F_{\alpha_j})$. We will show some commutation formulas and use these to calculate ${\varphi}_{F_\Sigma,\mathbf{b}}$ on most of the simple root vectors.
Choose a reduced expression of $w_0$ starting with $s_1\cdots s_n
s_1\cdots s_{n-1}$ and define root vectors $F_{\gamma_1},\dots,F_{\gamma_N}$ from this expression. Note that $F_{\beta_i}=F_{\gamma_i}$ for $i=1,\dots, n$. Note for use in the proposition below that for $j\in \{1,\dots,n-1\}$, $$\begin{aligned}
\gamma_{n+j} = s_1\cdots s_n s_1\cdots
s_{j-1}(\alpha_j)=\alpha_1+2\alpha_2+\alpha_3+\cdots +\alpha_{j+1}\end{aligned}$$ and $$\begin{aligned}
F_{\gamma_{n+j}}=&T_{s_1}\cdots T_{s_n} T_{s_1}\cdots
T_{s_{j-1}}(F_{\alpha_j})
\\
=& T_{s_1}\cdots T_{s_{j+1}} T_{s_1}\cdots
T_{s_{j-1}}(F_{\alpha_j}).\end{aligned}$$ In particular $F_{\alpha_1+2\alpha_2}=T_{s_1}T_{s_2}(F_{\alpha_1})$.
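For orientation, here is the smallest non-trivial instance of these formulas (a direct specialization, recorded for convenience): for $n=3$ one has $$\beta_1=\alpha_1,\quad \beta_2=\alpha_1+\alpha_2,\quad \beta_3=\alpha_1+\alpha_2+\alpha_3,\qquad \gamma_4=\alpha_1+2\alpha_2,\quad \gamma_5=\alpha_1+2\alpha_2+\alpha_3,$$ with $F_{\gamma_4}=T_{s_1}T_{s_2}(F_{\alpha_1})$ and $F_{\gamma_5}=T_{s_1}T_{s_2}T_{s_3}T_{s_1}(F_{\alpha_2})$.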
\[prop:32\] Let $i\in \{2,\dots,n\}$ and $j\in \{1,\dots,n\}$ $$[F_{\alpha_i},F_{\beta_j}]_q =
\begin{cases}
[2] F_{\alpha_1+2\alpha_2}, &\text{ if } j=i=2
\\
F_{\alpha_1+2\alpha_2+\alpha_3+\cdots+\alpha_{j}}, &\text{ if }
i=2 \text{ and } j>2
\\
F_{\beta_i}, &\text{ if } j = i-1
\\
0, &\text{ otherwise}
\end{cases}$$ and $$[E_{\alpha_i},F_{\beta_j}] =
\begin{cases}
[2] F_{\beta_{1}} K_{\alpha_2}{^{-1}}, &\text{ if } j = 2 = i
\\
F_{\beta_{i-1}} K_{\alpha_i}{^{-1}}, &\text{ if } j = i > 2
\\
0, &\text{ otherwise.}
\end{cases}$$
We will show the proposition for the $F$’s first and then for the $E$’s. Assume first that $j<i-1$. Then clearly $[F_{\alpha_i},F_{\beta_j}]_q = [F_{\alpha_i},F_{\beta_j}]=0$ since $\alpha_i$ is not connected to any of the simple roots $\alpha_1,\dots,\alpha_j$ appearing in $\beta_j$.
Then assume $j\geq i>2$. We must have $\alpha_i = \gamma_k$ for some $k>n$ since $\{\gamma_1,\dots,\gamma_N\}=\Phi^+$. By Theorem \[thm:DP\] $[F_{\alpha_i},F_{\beta_j}]_q$ is a linear combination of monomials of the form $F_{\gamma_{j+1}}^{a_{j+1}}\cdots F_{\gamma_{k-1}}^{a_{k-1}}$. For a monomial of this form to appear with nonzero coefficient we must have $$\sum_{h=j+1}^{k-1} a_h \gamma_h = \alpha_i + \beta_j = \alpha_1+\dots + \alpha_{i-1}+2\alpha_i +\alpha_{i+1}+\dots +\alpha_j.$$ For this to be possible one of the positive roots $\gamma_s$, $j<s<k$ must be equal to $\alpha_1+\alpha_2+\dots+\alpha_m$ for some $m\leq j$, but $\alpha_1+\alpha_2+\dots+\alpha_m=\gamma_m$ by construction and $m\leq j<s$ so $m\neq s$. We conclude that this is not possible.
Assume $j=i-1$. We have $$\begin{aligned}
[F_{\alpha_i},F_{\beta_{i-1}}]_q =& [T_{s_1}\cdots
T_{s_{i-2}}(F_{\alpha_i}),T_{s_1}\cdots
T_{s_{i-2}}(F_{\alpha_{i-1}})]_q
\\
=& T_{s_1}\cdots T_{s_{i-2}}
\left([F_{\alpha_i},F_{\alpha_{i-1}}]_q\right)
\\
=& T_{s_1}\cdots T_{s_{i-2}} T_{s_{i-1}}(F_{\alpha_i})
\\
=& F_{\beta_i}.
\end{aligned}$$
Assume $j=2=i$. Then $$\begin{aligned}
[F_{\alpha_2},F_{\beta_2}]_q =&
F_{\alpha_2}F_{\beta_2}-F_{\beta_2}F_{\alpha_2}
\\
=&
F_{\alpha_2}(F_{\alpha_2}F_{\alpha_1}-q^2F_{\alpha_1}F_{\alpha_2})-(F_{\alpha_2}F_{\alpha_1}-q^2
F_{\alpha_1}F_{\alpha_2})F_{\alpha_2}
\\
=& (q^2 F_{\alpha_1}F_{\alpha_2}^2 - q [2]
F_{\alpha_2}F_{\alpha_1}F_{\alpha_2} + F_{\alpha_2}^2
F_{\alpha_1})
\\
=& [2] T_{s_2}{^{-1}}(F_{\alpha_1})
\\
=& [2] T_{s_2}{^{-1}}T_{s_2}T_{s_1}T_{s_2}(F_{\alpha_1})
\\
=& [2] T_{s_1}T_{s_2}(F_{\alpha_1})
\\
=& [2] F_{\alpha_1+2\alpha_2}.
\end{aligned}$$
Assume $i=2$ and $j=3$. Then $$\begin{aligned}
F_{\alpha_1+2\alpha_2+\alpha_3} =& F_{\gamma_{n+2}}
\\
=& T_{s_1} T_{s_{2}}T_{s_{3}}T_{s_{1}}(F_{\alpha_{2}})
\\
=& T_{s_1} T_{s_2} T_{s_1} T_{s_3}(F_{\alpha_2})
\\
=& T_{s_1} T_{s_2} T_{s_1}
(F_{\alpha_2}F_{\alpha_3}-qF_{\alpha_3}F_{\alpha_2})
\\
=& F_{\alpha_2}F_{\beta_3}-qF_{\beta_3}F_{\alpha_2}.
\end{aligned}$$
Finally assume $i=2$ and $j>3$. We have $$\begin{aligned}
F_{\alpha_1+2\alpha_2+\alpha_3+\cdots+\alpha_j} =&
F_{\gamma_{n+j-1}}
\\
=& T_{s_1}\cdots T_{s_{j-2}}T_{s_{j-1}}T_{s_{j}}T_{s_{1}}\cdots
T_{s_{j-3}}T_{s_{j-2}}(F_{\alpha_{j-1}})
\\
=& T_{s_1}\cdots T_{s_{j-2}}T_{s_{1}}\cdots
T_{s_{j-3}}T_{s_{j-1}}T_{s_{j-2}}T_{s_{j}}(F_{\alpha_{j-1}})
\\
=& T_{s_1}\cdots T_{s_{j-2}}T_{s_{1}}\cdots
T_{s_{j-3}}T_{s_{j-1}}T_{s_{j-2}}(F_{\alpha_{j-1}}F_{\alpha_{j}}-qF_{\alpha_j}F_{\alpha_{j-1}})
\\
=& T_{s_1}\cdots T_{s_{j-2}}T_{s_{1}}\cdots
T_{s_{j-3}}(F_{\alpha_{j-2}}T_{s_{j-1}}(F_{\alpha_{j}})-qT_{s_{j-1}}(F_{\alpha_j})F_{\alpha_{j-2}})
\\
=& F_{\alpha_2} F_{\beta_j} - q F_{\beta_j} F_{\alpha_2}
\\
=& [F_{\alpha_2},F_{\beta_j}]_q
\end{aligned}$$ using the facts that $T_{s_{j-1}}T_{s_{j-2}}(F_{\alpha_{j-1}})=F_{\alpha_{j-2}}$ and $T_{s_1}\cdots T_{s_{j-2}}T_{s_1}\cdots
T_{s_{j-3}}(F_{\alpha_{j-2}})=F_{\alpha_2}$ by Proposition 8.20 in [@Jantzen] (the proposition is stated for the $E$ root vectors but holds for the $F$’s as well).
For the $E$’s: Assume first $j<i$: Since $F_{\beta_j}$ is a polynomial in $F_{\alpha_1},\dots,F_{\alpha_j}$, $E_{\alpha_i}$ commutes with $F_{\beta_j}$ when $j<i$.
Assume then $j=i$: We have by the above $$F_{\beta_i} = [F_{\alpha_i},F_{\beta_{i-1}}]_q$$ so $$\begin{aligned}
[E_{\alpha_i},F_{\beta_i}] =&
[E_{\alpha_i},(F_{\alpha_i}F_{\beta_{i-1}}-q^{-(\beta_{i-1}|\alpha_i)}F_{\beta_{i-1}}F_{\alpha_i})]
\\
=& [E_{\alpha_i},F_{\alpha_i}] F_{\beta_{i-1}} - q_{\alpha_{i-1}}
F_{\beta_{i-1}} [E_{\alpha_i},F_{\alpha_i}]
\\
=& \frac{ K_{\alpha_i}-K_{\alpha_i}{^{-1}}}{q-q{^{-1}}} F_{\beta_{i-1}}
- q_{\alpha_{i-1}} F_{\beta_{i-1}}
\frac{K_{\alpha_i}-K_{\alpha_i}{^{-1}}}{q-q{^{-1}}}
\\
=& F_{\beta_{i-1}} \frac{ q_{\alpha_{i-1}} K_{\alpha_i} -
q_{\alpha_{i-1}}{^{-1}}K_{\alpha_i}{^{-1}}- q_{\alpha_{i-1}}
K_{\alpha_i} + q_{\alpha_{i-1}} K_{\alpha_i}{^{-1}}}{q-q{^{-1}}}
\\
=& \frac{q_{\alpha_{i-1}}-q_{\alpha_{i-1}}{^{-1}}}{q-q{^{-1}}}
F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}\\
=&
\begin{cases}
[2] F_{\beta_{i-1}}K_{\alpha_{i}}{^{-1}}, &\text{ if } i=2
\\
F_{\beta_{i-1}}K_{\alpha_{i}}{^{-1}}, &\text{ otherwise. }
\end{cases}
\end{aligned}$$
Finally assume $j>i$: Observe first that we have $$T_{s_{i+1}}\cdots T_{s_{j-1}}F_{\alpha_j} = \sum_{s=1}^m u_s F_{\alpha_{i+1}} u_s'$$ for some $m\in {\mathbb{N}}$ and some $u_s,u_s'$ that are polynomials in $F_{\alpha_{i+2}},\dots F_{\alpha_{j}}$. Note that $T_{s_i}(u_s)=u_s$ and $T_{s_i}(u_s')=u_s'$ for all $s$ since $\alpha_i$ is not connected to any of the simple roots $\alpha_{i+2},\dots \alpha_j$. So $$\begin{aligned}
T_{s_i}T_{s_{i+1}}\cdots T_{s_{j-1}}F_{\alpha_j} =& T_{s_i}\left(
\sum_{s=1}^m u_s F_{\alpha_{i+1}} u_s'\right)
\\
=& \sum_{s=1}^m u_s T_{s_i}(F_{\alpha_{i+1}}) u_s'
\\
=& \sum_{s=1}^m u_s
(F_{\alpha_{i+1}}F_{\alpha_i}-qF_{\alpha_i}F_{\alpha_{i+1}}) u_s'
\\
=& \sum_{s=1}^m u_s F_{\alpha_{i+1}} u_s' F_{\alpha_i} - q
F_{\alpha_i}\sum_{s=1}^m u_s F_{\alpha_{i+1}} u_s'
\\
=& T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}) F_{\alpha_i} - q
F_{\alpha_i} T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}).
\end{aligned}$$ Thus we see that $$\begin{aligned}
F_{\beta_j} =& T_{s_1}\dots T_{s_i}\cdots
T_{s_{j-1}}(F_{\alpha_j})
\\
=&T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}) T_{s_{1}}\cdots
T_{s_{i-1}}(F_{\alpha_i}) - qF_{\alpha_i} T_{s_{i+1}}\cdots
T_{s_{j-1}}(F_{\alpha_j})
\\
=& T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}) F_{\beta_i} - q
F_{\beta_i} T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j})
\end{aligned}$$ and therefore $$\begin{aligned}
[E_{\alpha_i},F_{\beta_j}] =& T_{s_{i+1}}\cdots
T_{s_{j-1}}(F_{\alpha_j}) [E_{\alpha_i},F_{\beta_i}] - q
[E_{\alpha_i},F_{\beta_i}]T_{s_{i+1}}\cdots
T_{s_{j-1}}(F_{\alpha_j})
\\
=& [r_i] (T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j})
F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}- q F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}) )
\\
=& [r_i] (F_{\beta_{i-1}} T_{s_{i+1}}\cdots
T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_i}{^{-1}}- F_{\beta_{i-1}}
T_{s_{i+1}}\cdots T_{s_{j-1}}(F_{\alpha_j}) K_{\alpha_i}{^{-1}})
\\
=& 0
\end{aligned}$$ where $$r_i =
\begin{cases}
2 &\text{ if } i =2
\\
1 &\text{ otherwise. }
\end{cases}$$
\[prop:33\] Let $i\in \{2,\dots,n\}$. Let $a\in {\mathbb{Z}}_{>0}$. Then $$[F_{\alpha_i},F_{\beta_{i-1}}^a]_q = [a]_{\beta_{i-1}} F_{\beta_{i-1}}^{a-1}F_{\beta_i}$$ and for $b\in {\mathbb{C}}^*$ $${\varphi}_{F_{\beta_{i-1}},b}(F_{\alpha_i}) =
\begin{cases}
b^2 F_{\alpha_2}+ \frac{b^2-b^{-2}}{q^2-q^{-2}}
F_{\beta_{1}}{^{-1}}F_{\beta_2}, &\text{ if } i=2
\\
b F_{\alpha_i}+ \frac{b-b{^{-1}}}{q-q{^{-1}}} F_{\beta_{i-1}}{^{-1}}F_{\beta_i}, &\text{ otherwise. }
\end{cases}$$
The first claim is proved by induction over $a$. $a=1$ is shown in Proposition \[prop:26\]. The induction step: $$\begin{aligned}
F_{\alpha_i}F_{\beta_{i-1}}^{a+1} =& \left( q_{\beta_{i-1}}^a
F_{\beta_{i-1}}^a F_{\alpha_i} + [a]_{\beta_{i-1}}
F_{\beta_{i-1}}^{a-1}F_{\beta_i}\right) F_{\beta_{i-1}}
\\
=& q_{\beta_{i-1}}^{a+1}F_{\beta_{i-1}}^{a+1}F_{\alpha_i}
+q_{\beta_{i-1}}^a F_{\beta_{i-1}}^a F_{\beta_i} +
q_{\beta_{i-1}}{^{-1}}[a]_{\beta_{i-1}} F_{\beta_{i-1}}^a F_{\beta_i}
\\
=& q_{\beta_{i-1}}^{a+1}F_{\beta_{i-1}}^{a+1}F_{\alpha_i} +
[a+1]_{\beta_{i-1}} F_{\beta_{i-1}}^a F_{\beta_i}.
\end{aligned}$$ So we have proved the first claim. We then get for $a\in {\mathbb{Z}}_{>0}$: $${\varphi}_{F_{\beta_{i-1}},q^a}(F_{\alpha_i}) = F_{\beta_{i-1}}^{-a} F_{\alpha_i} F_{\beta_{i-1}}^a = q_{\beta_{i-1}}^a F_{\alpha_i} + \frac{q_{\beta_{i-1}}^a-q_{\beta_{i-1}}^{-a}}{q_{\beta_{i-1}}-q_{\beta_{i-1}}{^{-1}}} F_{\beta_{i-1}}{^{-1}}F_{\beta_i}.$$ Using the fact that ${\varphi}_{F_{\beta_{i-1}},b}(F_{\alpha_i})$ is a Laurent polynomial in $b$ we get the second claim of the proposition.
\[prop:34\] Let $i\in \{2,\dots,n\}$. Let $a\in {\mathbb{Z}}_{>0}$. Then $$[E_{\alpha_i},F_{\beta_i}^a] =
\begin{cases}
q^{a-1}[2][a] F_{\beta_2}^{a-1}F_{\beta_{1}}K_{\alpha_2}{^{-1}},
&\text{ if } i=2
\\
q^{a-1}[a] F_{\beta_i}^{a-1}F_{\beta_{i-1}}K_{\alpha_i}{^{-1}},
&\text{ otherwise. }
\end{cases}$$ and for $b\in {\mathbb{C}}^*$ $${\varphi}_{F_{\beta_{i}},b}(E_{\alpha_i}) =
\begin{cases}
E_{\alpha_2}+ q^{-1} [2]b \frac{b-b^{-1}}{q-q{^{-1}}}
F_{\beta_2}{^{-1}}F_{\beta_{1}} K_{\alpha_2}{^{-1}}, &\text{ if } i=2
\\
E_{\alpha_i}+ q{^{-1}}b \frac{b-b{^{-1}}}{q-q{^{-1}}} F_{\beta_i}{^{-1}}F_{\beta_{i-1}} K_{\alpha_i}{^{-1}}, &\text{ otherwise. }
\end{cases}$$
The first claim is proved by induction over $a$. $a=1$ is shown in Proposition \[prop:26\]. The induction step: For $i>2$: $$\begin{aligned}
E_{\alpha_i} F_{\beta_i}^{a+1} =& \left( F_{\beta_i}^a
E_{\alpha_i}+ q^{a-1} [a] F_{\beta_i}^{a-1}F_{\beta_{i-1}}
K_{\alpha_i}{^{-1}}\right) F_{\beta_i}
\\
=& F_{\beta_i}^{a+1} E_{\alpha_i} + F_{\beta_i}^a F_{\beta_{i-1}}
K_{\alpha_i}{^{-1}}+ q^{a+1} [a] F_{\beta_i}^a
F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}\\
=& F_{\beta_i}^{a+1} E_{\alpha_i} + q^{a} (q^{-a} +
q[a])F_{\beta_i}^a F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}\\
=& F_{\beta_i}^{a+1} E_{\alpha_i} +
q^{a}[a+1]_{\alpha_{i-1}}F_{\beta_i}^a
F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}.
\end{aligned}$$
For $i=2$: $$\begin{aligned}
E_{\alpha_2} F_{\beta_2}^{a+1} =& \left( F_{\beta_2}^a
E_{\alpha_2}+ q^{a-1}[2][a] F_{\beta_2}^{a-1}F_{\beta_{1}}
K_{\alpha_2}{^{-1}}\right) F_{\beta_2}
\\
=& F_{\beta_2}^{a+1} E_{\alpha_2} + [2] F_{\beta_2}^a
F_{\beta_{1}} K_{\alpha_2}{^{-1}}+ q^{a+1} [2][a] F_{\beta_2}^a
F_{\beta_{1}}K_{\alpha_2}{^{-1}}\\
=& F_{\beta_2}^{a+1} E_{\alpha_2} + q^{a} [2](q^{-a} + q
[a])F_{\beta_2}^a F_{\beta_{1}}K_{\alpha_2}{^{-1}}\\
=& F_{\beta_2}^{a+1} E_{\alpha_2} + q^{a} [2][a+1] F_{\beta_2}^a
F_{\beta_{1}}K_{\alpha_2}{^{-1}}.
\end{aligned}$$
This proves the first claim. We get then for $a\in {\mathbb{Z}}_{>0}$ $${\varphi}_{F_{\beta_i},q^a}(E_{\alpha_i}) =F_{\beta_i}^{-a} E_{\alpha_i} F_{\beta_i}^a =
\begin{cases}
E_{\alpha_2} + q^{-2} q^{2a} \frac{q^{2a} - q^{-2a}}{q-q{^{-1}}}
F_{\beta_2}{^{-1}}F_{\beta_{1}}K_{\alpha_2}{^{-1}}, &\text{ if } i=2
\\
E_{\alpha_i} + q{^{-1}}q^a \frac{q^a - q^{-a}}{q-q{^{-1}}}
F_{\beta_i}{^{-1}}F_{\beta_{i-1}}K_{\alpha_i}{^{-1}}, &\text{
otherwise. }
\end{cases}$$ Using the fact that ${\varphi}_{F_{\beta_{i}},b}(E_{\alpha_i})$ is a Laurent polynomial in $b$ we get the second claim of the proposition.
We combine the above propositions into the following proposition:
\[prop:37\] Let $i\in \{3,\dots,n\}$. For $\mathbf{b}=(b_1,\dots,b_n)\in
({\mathbb{C}}^*)^n$ $$\begin{aligned}
{\varphi}_{F_\Sigma,\mathbf{b}}(F_{\alpha_{i}}) =&
{\varphi}_{F_{\beta_{i-1},b_{i-1}}}(F_{\alpha_i})
\\
=& b_{i-1}F_{\alpha_i}+ \frac{b_{i-1}-b_{i-1}{^{-1}}}{q-q{^{-1}}}
F_{\beta_{i-1}}{^{-1}}F_{\beta_i}
\\
{\varphi}_{F_\Sigma,\mathbf{b}}(E_{\alpha_i}) =&
{\varphi}_{F_{\beta_{i},b_{i}}}(E_{\alpha_i}) = E_{\alpha_i}+ q{^{-1}}b_i
\frac{b_i-b_i{^{-1}}}{q-q{^{-1}}} F_{\beta_i}{^{-1}}F_{\beta_{i-1}}
K_{\alpha_i}{^{-1}}.
\end{aligned}$$ Furthermore $${\varphi}_{F_\Sigma,\mathbf{b}}(E_{\alpha_2})= E_{\alpha_2} + q^{-1}[2] b_2 \frac{b_2-b_2{^{-1}}}{q-q{^{-1}}} F_{\beta_2}{^{-1}}F_{\beta_1}K_{\alpha_2}{^{-1}}$$
and $${\varphi}_{F_\Sigma,\mathbf{b}}(F_{\alpha_1}) = b_2\cdots b_n
F_{\alpha_1}.$$
By a proof similar to that of Proposition \[prop:4\] we can show:
\[prop:7\] Let $\lambda$ be a weight such that $\lambda(K_{\beta})\in \pm
q^{\mathbb{N}}$ for all short $\beta\in \Phi^+$ and $\lambda(K_{\gamma})
\in \pm q^{1+2{\mathbb{Z}}}$ for all long $\gamma \in \Phi^+$. Let $\mathbf{b}=(b_1,\dots,b_n)\in ({\mathbb{C}}^*)^n$. $E_{\alpha_2}$ acts injectively on the $U_q$-module ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ if and only if $b_2 \not \in \pm q^{{\mathbb{Z}}}$. Let $i\in\{3,\dots,n\}$. Then $E_{\alpha_i}$ acts injectively on the module ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ if and only if $b_i \not \in \pm q^{{\mathbb{Z}}}$ and $F_{\alpha_i}$ acts injectively on ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ if and only if $b_{i-1}\not \in \pm q^{{\mathbb{Z}}}$.
By Proposition \[prop:25\] and Corollary \[cor:6\] a root vector acts injectively on the $U_q$-module $${\varphi}_{F_\Sigma,(b_1,\dots,b_n)}.L(\lambda)_{F_\Sigma}$$ if and only if it acts injectively on $${\varphi}_{F_\Sigma,({\varepsilon}_1
q^{i_1}b_1,\dots,{\varepsilon}_n q^{i_n}b_n)}.L(\lambda)_{F_\Sigma}$$ for any $i_1,\dots,i_n\in{\mathbb{Z}}$ and ${\varepsilon}_1,\dots,{\varepsilon}_n\in
\{\pm 1\}$.
Assume there exists a $0\neq v \in
{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ such that $F_{\alpha_i}v=0$. We have $v=F_{\beta_1}^{a_1}\cdots
F_{\beta_n}^{a_n} {\otimes}v'$ for some $a_1,\dots,a_n\in {\mathbb{Z}}_{\leq 0}$ and some $v'\in L(\lambda)$. $F_{\alpha_i}v=0$ implies $$0={\varphi}_{F_\Sigma,\mathbf{b}}(F_{\alpha_i}) F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}{\otimes}v' = F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}{\otimes}{\varphi}_{F_\Sigma,\mathbf{c}}(F_{\alpha_i})v'$$ where $\mathbf{c}=(q^{a_1}b_1,\dots,q^{a_n}b_n)$. So there exists a $v'\in L(\lambda)$ such that ${\varphi}_{F_\Sigma,\mathbf{c}}(F_{\alpha_i})v'=0$. That is $$\left(c_{i-1}F_{\alpha_i} + \frac{c_{i-1}-c_{i-1}{^{-1}}}{q-q{^{-1}}} F_{\beta_{i-1}}{^{-1}}F_{\beta_{i}}\right) v' = 0$$ or equivalently $$\left(F_{\beta_{i-1}}F_{\alpha_i} + c_{i-1}{^{-1}}\frac{c_{i-1} - c_{i-1}{^{-1}}}{q-q{^{-1}}} F_{\beta_i}\right) v' = 0.$$ Let $r\in {\mathbb{N}}$ be such that $F_{\alpha_i}^{(r)}v'\neq 0$ and $F_{\alpha_i}^{(r+1)}v'=0$ (possible since $\lambda(K_{\alpha_i})\in
\pm q^{{\mathbb{N}}}$ so $-\alpha_i \in F_{L(\lambda)}$). So the above being equal to zero implies $$\begin{aligned}
0=&F_{\alpha_i}^{(r)}\left(F_{\beta_{i-1}}F_{\alpha_i} +
c_{i-1}{^{-1}}\frac{c_{i-1} - c_{i-1}{^{-1}}}{q-q{^{-1}}}
F_{\beta_i}\right) v'
\\
=& \left( [r] F_{\beta_i}F_{\alpha_i}^{(r)} + q^{-r}
\frac{1-c_{i-1}^{-2}}{q-q{^{-1}}} F_{\beta_i}
F_{\alpha_i}^{(r)}\right)v'
\\
=& \left( [r] + q^{-r} \frac{1-c_{i-1}^{-2}}{q-q{^{-1}}}\right)
F_{\beta_i}F_{\alpha_i}^{(r)}v'.
\end{aligned}$$ Since $F_{\beta_i}F_{\alpha_i}^{(r)}v'\neq 0$ this is equivalent to $$\begin{aligned}
0=q^r - q^{-r} +q^{-r} - q^{-r}c_{i-1}^{-2}= q^r -
q^{-r}c_{i-1}^{-2}
\end{aligned}$$ or equivalently $c_{i-1} = \pm q^{-r}$.
The other claims are shown similarly.
\[prop:6\] Let $\lambda$ be a weight such that $\lambda(K_{\beta})\in \pm
q^{\mathbb{N}}$ for all short $\beta\in \Phi^+$ and $\lambda(K_{\gamma})
\in \pm q^{1+2{\mathbb{Z}}}$ for all long $\gamma \in \Phi^+$. Let $\mathbf{b}=(b_1,\dots,b_n)\in ({\mathbb{C}}^*)^n$. Then $F_{\alpha_1+2\alpha_2}$ acts injectively on the $U_q$-module ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$.
We can show similarly to the above calculations in this section that $${\varphi}_{F_\Sigma,\mathbf{b}}(F_{\alpha_1+2\alpha_2}) = b_2^2 F_{\alpha_1+2\alpha_2} + (1-q^2) b_2^2 b_1^{-2} \frac{b_1^2-b_1^{-2}}{q^2-q^{-2}} F_{\beta_1}{^{-1}}F_{\beta_2}^{(2)}.$$
By Proposition \[prop:25\] and Corollary \[cor:6\] a root vector acts injectively on the $U_q$-module $${\varphi}_{F_\Sigma,(b_1,\dots,b_n)}.L(\lambda)_{F_\Sigma}$$ if and only if it acts injectively on $${\varphi}_{F_\Sigma,({\varepsilon}_1
q^{i_1}b_1,\dots,{\varepsilon}_n q^{i_n}b_n)}.L(\lambda)_{F_\Sigma}$$ for any $i_1,\dots,i_n\in{\mathbb{Z}}$ and ${\varepsilon}_1,\dots,{\varepsilon}_n\in
\{\pm 1\}$.
Assume there exists a $0\neq v \in
{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ such that $F_{\alpha_1+2\alpha_2}v=0$. We have $v=F_{\beta_1}^{a_1}\cdots
F_{\beta_n}^{a_n} {\otimes}v'$ for some $a_1,\dots,a_n\in {\mathbb{Z}}$ and some $v'\in L(\lambda)$. So $F_{\alpha_1+2\alpha_2}v=0$ implies $$0={\varphi}_{F_\Sigma,\mathbf{b}}(F_{\alpha_1+2\alpha_2}) F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}{\otimes}v' = F_{\beta_1}^{a_1}\cdots F_{\beta_n}^{a_n}{\otimes}{\varphi}_{F_\Sigma,\mathbf{c}}(F_{\alpha_1+2\alpha_2})v'$$ where $\mathbf{c}=(q^{a_1}b_1,\dots,q^{a_n}b_n)$. So there exists a $v'\in L(\lambda)$ and $a_1,\dots,a_n\in {\mathbb{Z}}$ such that for $\mathbf{c}=(q^{a_1}b_1,\dots,q^{a_n}b_n)$, ${\varphi}_{F_\Sigma,\mathbf{c}}(F_{\alpha_1+2\alpha_2})v'=0$. That is $$\left(c_2^2 F_{\alpha_1+2\alpha_2} + (1-q^2) c_2^2 c_1^{-2} \frac{c_1^2-c_1^{-2}}{q^2-q^{-2}} F_{\beta_1}{^{-1}}F_{\beta_2}^{(2)}\right) v' = 0$$ or equivalently $$F_{\beta_1}F_{\alpha_1+2\alpha_2}v' + (1-q^2) c_1^{-2} \frac{c_1^2-c_1^{-2}}{q^2-q^{-2}} F_{\beta_2}^{(2)}v'=0.$$
So to prove our claim it is enough to prove that $$\left(F_{\beta_1}F_{\alpha_1+2\alpha_2} + (1-q^2) c_1^{-2}
\frac{c_1^2-c_1^{-2}}{q^2-q^{-2}} F_{\beta_2}^{(2)}\right)v'\neq
0$$ for any $v'\in L(\lambda)$ and any $c_1 \in {\mathbb{C}}^*$.
So let $v'\in L(\lambda)$ and let $c_1\in {\mathbb{C}}^*$. Let $r\in {\mathbb{N}}$ be such that $E_{\alpha_2}^{(r)}v'\neq 0$ and $E_{\alpha_2}^{(r+1)}v'=0$ (possible since $L(\lambda)$ is a highest weight module). It is straightforward to show that for $a\in
{\mathbb{N}}$: $$[E_{\alpha_2}^{(a)},F_{\alpha_1+2\alpha_2}] = q^{-a+1}[2]F_{\beta_2}E_{\alpha_2}^{(a-1)}K_{\alpha_2}{^{-1}}+ q^{4-2a} F_{\beta_1} E_{\alpha_2}^{(a-2)}K_{\alpha_2}^{-2}$$ and $$[E_{\alpha_2}^{(a)},F_{\beta_2}^{(2)}] = q^{2-a}[2] F_{\beta_2}F_{\beta_1} E_{\alpha_2}^{(a-1)}K_{\alpha_2}{^{-1}}+ q^{3-2a}[2] F_{\beta_1}^2 E_{\alpha_2}^{(a-2)}K_{\alpha_2}^{-2}.$$ Using this we get $$\begin{aligned}
E_{\alpha_2}^{(r+2)}&\left(F_{\beta_1}F_{\alpha_1+2\alpha_2} +
(1-q^2) c_1^{-2} \frac{c_1^2-c_1^{-2}}{q^2-q^{-2}}
F_{\beta_2}^{(2)}\right)v'
\\
=& \left( q^{-2r} + q^{-1-2r}[2]
(1-q^2)c_1^{-2}\frac{c_1^2-c_1^{-2}}{q^2-q^{-2}}\right)
F_{\beta_1}^2 E_{\alpha_2}^{(r)}K_{\alpha_2}^{-2}v'
\\
=& q^{-2r}c_1^{-4} F_{\beta_1}^2
E_{\alpha_2}^{(r)}K_{\alpha_2}^{-2}v'
\\
\neq& 0
\end{aligned}$$ since $F_{\beta_1}$ acts injectively on $L(\lambda)$. Thus $$\left(F_{\beta_1}F_{\alpha_1+2\alpha_2} +
(1-q^2) c_1^{-2} \frac{c_1^2-c_1^{-2}}{q^2-q^{-2}}
F_{\beta_2}^{(2)}\right)v'\neq 0.$$
\[thm:clas\_C\] Let $\lambda$ be a weight such that $\lambda(K_{\beta})\in \pm
q^{\mathbb{N}}$ for all short $\beta\in \Phi$ and $\lambda(K_{\gamma}) \in
\pm q^{1+2{\mathbb{Z}}}$ for all long $\gamma \in \Phi$. Let $\mathbf{b}=(b_1,\dots,b_n)\in ({\mathbb{C}}^*)^n$. Then the $U_q$-module ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is simple and torsion free if and only if $b_i\not \in \pm q^{\mathbb{Z}}$, $i=2,\dots,
n$ and $b_1^2 b_2\cdots b_n \not \in \pm q^{\mathbb{Z}}$.
Let $i\in \{2,\dots,n\}$. By Proposition \[prop:7\], $E_{\alpha_i}$ acts injectively on ${\varphi}_{F_{\Sigma},\mathbf{b}}.L(\lambda)_{F_\Sigma}$ if and only if $b_i\not \in \pm q^{{\mathbb{Z}}}$. If ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is torsion free then every root vector acts injectively. So ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ being torsion free implies $b_i \not \in \pm q^{{\mathbb{Z}}}$.
Let $\Sigma'=\{\beta_1',\dots,\beta_n'\}$ denote the set of commuting roots with $\beta_1' = \alpha_1+\alpha_2$, $\beta_2'=\alpha_1+2\alpha_2$, $\beta_j'=\alpha_1+2\alpha_2+\alpha_3+\cdots+\alpha_j$, $j=3,\dots,n$. Let $F'_{\beta_1'} :=
T_{s_1}(F_{\alpha_2})=F_{\beta_2}$, $F'_{\beta_2'}:=T_{s_1}T_{s_2}(F_{\alpha_1})=F_{\alpha_1+2\alpha_2}$, $F'_{\beta_j'} := T_{s_1}\cdots T_{s_n}T_{s_1}\cdots
T_{s_{j-2}}(F_{\alpha_{j-1}})=T_{s_2}(F_{\beta_{j}})=F_{\alpha_1+2\alpha_2+\alpha_3+\cdots+\alpha_j}$, $j=3,\dots,n$ (in this case we actually have $F'_{\beta_j'}=F_{\beta_j'}$) and $F_{\Sigma'}$ the Ore subset generated by $F'_{\beta_1'},\dots,F'_{\beta_n'}$. Similarly to the above calculations in this section we can show that for $\mathbf{c}\in ({\mathbb{C}}^*)^n$ $$\begin{aligned}
{\varphi}_{F_{\Sigma'},\mathbf{c}}(F_{\alpha_2}) = c_n{^{-1}}\cdots
c_3{^{-1}}c_2^{-2} \left( F_{\alpha_2} + q [2] c_1{^{-1}}\frac{c_1-c_1{^{-1}}}{q-q{^{-1}}} (F'_{\beta_1'}){^{-1}}F'_{\beta_2'}\right).
\end{aligned}$$ Let $v\in L(\lambda)$ and let $r\in {\mathbb{N}}$ be such that $F_{\alpha_2}^{(r)}v \neq 0$ and $F_{\alpha_2}^{(r+1)}v =0$ (possible since $\lambda(K_{\alpha_2})\in\pm q^{{\mathbb{N}}}$). Then we see like in the proof of Proposition \[prop:7\] that ${\varphi}_{F_{\Sigma'},\mathbf{c}}(F_{\alpha_2})v =0$ if and only if $c_1=\pm q^{-r}$ thus ${\varphi}_{F_{\Sigma'},\mathbf{c}}.L(\lambda)_{F_{\Sigma'}}$ is not torsion free whenever $c_1 \in \pm q^{\mathbb{Z}}$ by Proposition \[prop:25\] and Corollary \[cor:6\].
Set $f(\mathbf{b})=(b_1^2 b_2\cdots b_n,b_1{^{-1}}b_3{^{-1}}\cdots
b_n,b_3,\dots,b_n)$. Then by Lemma \[lemma:1\] $$\begin{aligned}
\left(
{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)^{ss}
{\cong}\left(
{\varphi}_{F_{\Sigma'},f(\mathbf{b})}.L(\lambda)_{F_{\Sigma'}}\right)^{ss}.
\end{aligned}$$ If ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is torsion free then it is simple so $$\begin{aligned}
{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma} {\cong}& \left(
{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)^{ss}
\\
{\cong}& \left(
{\varphi}_{F_{\Sigma'},f(\mathbf{b})}.L(\lambda)_{F_{\Sigma'}}\right)^{ss}
\\
{\cong}& {\varphi}_{F_{\Sigma'},f(\mathbf{b})}.L(\lambda)_{F_{\Sigma'}}.
\end{aligned}$$ We see that ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ being torsion free implies $b_1^2 b_2\cdots b_n \not \in \pm q^{{\mathbb{Z}}}$.
Now assume $b_i\not \in \pm q^{\mathbb{Z}}$, $i=2,\dots, n$ and $b_1^2
b_2\cdots b_n \not \in \pm q^{\mathbb{Z}}$. By Proposition \[prop:7\] and Proposition \[prop:5\] $E_{\alpha_i}$ and $F_{\alpha_i}$, $i=3,\dots,n$ act injectively on all composition factors of ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$.
Let $L_1$ be a simple submodule of ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ and let $L_2$ be a simple submodule of ${\varphi}_{F_{\Sigma'},f(\mathbf{b})}.L(\lambda)_{F_{\Sigma'}}$. By Proposition \[prop:6\], $F_{\alpha_1+2\alpha_2}$ acts injectively on ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$. Now clearly $\{-\alpha_1-\alpha_2,-\alpha_1-2\alpha_2,\alpha_3,\dots,\alpha_n\}\subset
T_{L_1}\cap T_{L_2}$ so $C(L_1)\cap C(L_2)$ generates $Q$. This implies that $C(L_1)-C(L_2)=Q$. Since $\left(
{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)^{ss} {\cong}\left(
{\varphi}_{F_{\Sigma'},f(\mathbf{b})}.L(\lambda)_{F_{\Sigma'}}\right)^{ss}$ we have $\operatorname{wt}L_k \subset q^Q (\mathbf{b}{^{-1}})^{\Sigma}\lambda$, $k=1,2$. Choose $\mu_1,\mu_2\in Q$ such that $q^{\mu_1}(\mathbf{b}{^{-1}})^\Sigma \lambda \in {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L_1)$ and $q^{\mu_2}(\mathbf{b}{^{-1}})^\Sigma \lambda \in {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L_2)$. Then obviously $q^{C(L_1)+\mu_1}(\mathbf{b}{^{-1}})^\Sigma \lambda\subset
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L_1)$ and $q^{C(L_2)+\mu_2}(\mathbf{b}{^{-1}})^\Sigma
\lambda\subset {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L_2)$. By the above $q^{C(L_1)+\mu_1}(\mathbf{b}{^{-1}})^\Sigma \lambda \cap
q^{C(L_2)+\mu_2}(\mathbf{b}{^{-1}})^\Sigma \lambda \neq \emptyset$ so ${\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L_1)\cap {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L_2)\neq \emptyset$. Let $\nu \in
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L_1)\cap {\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L_2)$. By Proposition \[prop:15\], $L_1$ and $L_2$ are admissible of the same degree as $L(\lambda)$. So we have as $(U_q)_0$-modules (using that $(L_1)_\nu$ and $(L_2)_\nu$ are simple $(U_q)_0$-modules by Theorem \[thm:Lemire\]) $$\begin{aligned}
(L_1)_\nu &= \left(
{\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)_\nu {\cong}\left(
\left({\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)_\nu\right)^{ss}
\\
&{\cong}\left(
\left({\varphi}_{F_{\Sigma'},f(\mathbf{b})}.L(\lambda)_{F_{\Sigma'}}\right)_\nu\right)^{ss}
{\cong}\left({\varphi}_{F_{\Sigma'},f(\mathbf{b})}.L(\lambda)_{F_{\Sigma'}}\right)_\nu
= (L_2)_\nu.
\end{aligned}$$ By Theorem \[thm:Lemire\] this implies $L_1{\cong}L_2$.
Let $\Sigma''=\{ \beta_1'',\dots,\beta_n''\}$ denote the set of commuting roots with $\beta_1'' = \alpha_1+2\alpha_2$, $\beta_2''=\alpha_2$, $\beta_j''=\alpha_1+2\alpha_2+\alpha_3+\cdots+\alpha_j$, $j=3,\dots,n$. Let $F''_{\beta_1''} :=
T_{s_1}T_{s_2}(F_{\alpha_1})$, $F''_{\beta_2''}:=F_{\alpha_2}$, $F''_{\beta_j''} := T_{s_2}T_{s_1}T_{s_2}T_{s_3}\cdots
T_{s_{j-1}}(F_{\alpha_{j}})=T_{s_1}T_{s_2}(F_{\beta_j})$, $j=3,\dots,n$ and $F_{\Sigma''}$ the Ore subset generated by $F''_{\beta_1''},\dots,F''_{\beta_n''}$. Note that $F''_{\beta_j''}=T_{s_1}T_{s_2}(F_{\beta_j})$ for all $j\in
\{1,\dots,n\}$. The root vectors $F''_{\beta_1''},\dots,F''_{\beta_n''}$ act injectively on ${^{{\overline}{s_2 s_1}}}L(\lambda)$. By Theorem \[thm:EXT\_contains\_highest\_weight\] and Proposition \[prop:19\] $L(\lambda)$ is a submodule of $\left({\varphi}_{F_{\Sigma''},\mathbf{d}}.({^{{\overline}{s_2
s_1}}}L(\lambda))_{F_{\Sigma''}}\right)^{ss}$ for some $\mathbf{d}\in ({\mathbb{C}}^*)^n$. Then by Lemma \[lemma:1\] $$\left({\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}\right)^{ss} {\cong}\left({\varphi}_{F_{\Sigma''},g(\mathbf{b})\mathbf{d}}.({^{{\overline}{s_2s_1}}}L(\lambda))_{F_{\Sigma''}}\right)^{ss}$$ for some $g(\mathbf{b})\in ({\mathbb{C}}^*)^n$.
Observe that for $a_1,\dots,a_n\in {\mathbb{N}}$: $$\begin{aligned}
{\varphi}_{F_{\Sigma''},(q^{a_1},\dots,q^{a_n})}&(-K_{\alpha_1}E_{\alpha_1})
\\
=&
{\varphi}_{F_{\Sigma''},(q^{a_1},\dots,q^{a_n})}(T_{s_1}T_{s_2}(F_{\alpha_1+2\alpha_2}))
\\
=& \left(F''_{\beta_1''}\right)^{-a_1}\cdots
\left(F''_{\beta_n''}\right)^{-a_n}
T_{s_1}T_{s_2}(F_{\alpha_1+2\alpha_2})
\left(F''_{\beta_n''}\right)^{a_n} \cdots
\left(F''_{\beta_1''}\right)^{a_1}
\\
=& T_{s_1}T_{s_2}\left( F_{\beta_1}^{-a_n}\cdots
F_{\beta_n}^{-a_n} F_{\alpha_1+2\alpha_2}
F_{\beta_n}^{a_n}\cdots F_{\beta_1}^{a_1}\right)
\\
=& T_{s_1}T_{s_2}\left(
{\varphi}_{F_\Sigma,(q^{a_1},\dots,q^{a_n})}(F_{\alpha_1+2\alpha_2})\right).
\end{aligned}$$ Since ${\varphi}_{F_{\Sigma''},\mathbf{c}}(-K_{\alpha_1}E_{\alpha_1})$ and $T_{s_1}T_{s_2}\left({\varphi}_{F_\Sigma,\mathbf{c}}(F_{\alpha_1+2\alpha_2})\right)$ are both Laurent polynomial in $\mathbf{c}$ we get by Lemma \[lemma:37\] that ${\varphi}_{F_{\Sigma''},\mathbf{c}}(-K_{\alpha_1}E_{\alpha_1})
=T_{s_1}T_{s_2}\left({\varphi}_{F_\Sigma,\mathbf{c}}(F_{\alpha_1+2\alpha_2})\right)$ for any $\mathbf{c}\in ({\mathbb{C}}^*)^n$. $T_{s_1}T_{s_2}\left({\varphi}_{F_\Sigma,\mathbf{c}}(F_{\alpha_1+2\alpha_2})\right)$ acts injectively on ${^{{\overline}{s_2s_1}}}L(\lambda)$ for any $\mathbf{c}\in ({\mathbb{C}}^*)^n$ by Proposition \[prop:6\]. This implies that $-K_{\alpha_1}E_{\alpha_1}$ acts injectively on ${\varphi}_{F_{\Sigma''},g(\mathbf{b})\mathbf{d}}.({^{{\overline}{s_2s_1}}}L(\lambda))_{F_{\Sigma''}}$ and this implies that $E_{\alpha_1}$ acts injectively.
Let $L_3$ be a simple submodule of ${\varphi}_{F_{\Sigma''},g(\mathbf{b})\mathbf{d}}.({^{{\overline}{s_2s_1}}}L(\lambda))_{F_{\Sigma''}}$. We see that $\{-\alpha_2,-\alpha_1-2\alpha_2,\alpha_3,\dots,\alpha_n\}\subset
T_{L_3}\cap T_{L_2}$ so $C(L_2)\cap C(L_3)$ generates $Q$ ($\{\alpha_3,\dots,\alpha_n\}\subset T_{L_3}$ because of Proposition \[prop:5\] and the fact that $L_3$ is a composition factor of ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$). Arguing as above this implies that $L_2{\cong}L_3$. We have shown that $L_1{\cong}L_2 {\cong}L_3$. Above we have shown that $E_{\alpha_1}$ acts injectively on $L_3$, $F_{\alpha_2}$ acts injectively on $L_2$ and $F_{\alpha_1},E_{\alpha_2},F_{\alpha_i},E_{\alpha_i}$, $i=3,\dots,n$ act injectively on $L_1$. In conclusion we have shown that all root vectors act injectively on the simple submodule $L_1$ of ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ thus $\operatorname{wt}L_1 =
{\ensuremath{\operatorname{Supp}_{\operatorname{ess}}}}(L_1)=q^Q (\mathbf{b}{^{-1}})^{\Sigma}\lambda$ and therefore $L_1 = {\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$. This shows that ${\varphi}_{F_\Sigma,\mathbf{b}}.L(\lambda)_{F_\Sigma}$ is simple and torsion free with our assumptions on $\mathbf{b}$.
---
abstract: 'The interplay of coherent dynamics and dissipation in many-body systems gives rise to a rich class of non-equilibrium phenomena and to the emergence of non-trivial phases. In this paper, we investigate this interplay in a model of interacting spins with infinite-range Heisenberg interactions. Using Holstein-Primakoff transformations, the spin model is bosonized to a collective mode with a self-interaction at order $\frac{1}{N}$. Employing the Keldysh field-theoretic technique with a saddle-point approximation, we show that the system breaks $\mathcal{Z}_2$ symmetry at the transition point. An effective temperature arises due to dissipation; it depends linearly on the effective system-bath coupling and, for a wide class of bath spectral densities with Drude-Lorentz cutoff, is independent of the dissipation rate and of the cutoff frequency of the bath. Furthermore, fluctuations over the mean field are studied and it is shown that the dissipative spectrum is modified by an ${\rm O}(\frac{1}{N})$ correction term, which leads to changes in various physically measurable quantities.'
author:
- Muzaffar Qadir Lone
- Junaid Majeed Bhat
- Mehboob Rashid Bhat
- Ahmed Farouk
title: Keldysh approach to dissipative dynamics of interacting spins with long range interactions
---
Introduction
============
Equilibrium statistical physics, developed over the past many decades, has led to the successful explanation of various phenomena in many-body systems [@Ref1; @Ref2]. However, recent experiments ranging from polariton condensates in semiconductor quantum wells in optical cavities [@Ref4; @R5; @R6] and arrays of microcavities [@Ref5], to trapped ions [@Ref6; @R7], optomechanical setups [@Ref7; @R8] and strongly interacting Rydberg polaritons [@Ref8; @R9], have opened new avenues to probe far-from-equilibrium many-body systems in the presence of both coherent dynamics and controlled dissipation, the so-called driven-dissipative systems. As these systems are subject to dissipation in addition to the coherent dynamics governed by the Hamiltonian, the competition between the two leads to new non-equilibrium phases of matter [@R10; @Alexy].
Although driven open quantum systems can be described by microscopic master equations, the traditional techniques of quantum optics cannot be used efficiently for such many-body problems [@Ref9]. The driven character also makes it impossible to approach these problems within the framework of equilibrium many-body physics: equilibrium concepts [@R11] such as temperature, free energy, and the partition function either have no obvious counterpart out of equilibrium, or often become intractably complicated. In this work, we employ the Keldysh-Schwinger functional-integral formalism [@Ref9; @R13] to study the dissipative dynamics of an interacting spin model with long-range interactions. This approach has found numerous applications to driven-dissipative systems such as lossy polariton condensates [@Ref10; @R14; @R15] and driven atomic ensembles interacting with a cavity mode [@Ref11].
A general many-qubit system can be represented by interacting spin-$\frac{1}{2}$ particles, where the interaction can depend on the distance between the qubits. The two limiting cases are spin interactions that are independent of distance and spin chains with nearest-neighbor interactions only. In this paper, we consider the extreme case of a distance-independent interaction among the spins, described by the infinite-range anisotropic antiferromagnetic Heisenberg model (IRHM), coupled strongly to a bosonic bath [@R16].
Long-range interactions between spins or qubits can be produced in cavity quantum electrodynamics, as shown by experiments using a quantized cavity mode [@Ref12; @Ref13]. The distance-independent interaction can be realized in a fully connected network (FCN), which is a well-studied model in the context of coherent transport of excitation energy in light-harvesting complexes [@Ref14; @Ref15]. An FCN is characterized by a uniform hopping strength between any pair of sites and is an extreme limit of a long-range interaction model for excitons, spins, or hardcore bosons. The model used for the study of excitation-energy transport in Fenna-Matthews-Olson (FMO) complexes is such an extreme long-range interaction model [@Ref14; @Ref16] for excitons, with uniform hopping strength between any pair of chromophores in the FMO complex. The system-bath coupling in such complexes is thought to be not weak but at least in the intermediate regime [@Ref16]; instead of employing the usual quantum master-equation techniques valid in the weak-coupling limit, modified approaches valid for a broader range of couplings have been studied [@R16; @Amit].
This paper is organized as follows. In section II, we introduce the IRHM coupled with a bosonic bath. Using Holstein-Primakoff (HP) transformations, we bosonize the IRHM and map it to a self-interacting bosonic mode; the complete Hamiltonian becomes a Dicke model with a multimode bosonic bath. In section III, we make use of the Keldysh-Schwinger functional-integral formalism to study the steady-state solutions of the equations of motion. We see that the critical coupling depends on the spectral density of the bath. In section IV, we study the dissipative spectrum beyond the mean-field level and analyze the effect of fluctuations on different observables. Finally, we conclude in section V.
Bosonization of IRHM coupled with bosonic bath
==============================================
We consider a system of spin-$\frac{1}{2}$ particles interacting with each other through an infinite-range anisotropic antiferromagnetic Heisenberg exchange interaction $H_{\rm IRHM}$, and coupled to a bosonic bath as: $$\begin{aligned}
H&=& H_{\rm IRHM}\nonumber \\
&&~~+ \sum_k \omega_k b_k^{\dagger} b_k + \frac{1}{\sqrt{N}} \sum_{i,k}S_i^x(g_k b_k + g_k^* b_k^{\dagger})
\label{th}
\end{aligned}$$ where $$\begin{aligned}
H_{\rm IRHM} = \frac{J }{N}\sum_{i,j>i} \!\! \left [ \vec{S_i}.\vec{S_j} + (\Delta
-1)S^z_i S^z_j \right]
\label{Hs}\end{aligned}$$ where $J > 0$, $\Delta \geq 0$, and $S_i=\frac{1}{2} \sigma_i$, $i=x,~y,~z$. We note that $H_{\rm IRHM}$ commutes with both $S^z_{Total}$ ($\equiv \sum_i S^z_i$) and $\left ( \sum_{i} \vec{S_i} \right
)^2$ ($\equiv S^2_{Total}$). The eigenstates of $H_{\rm IRHM}$ are characterized by $S_T$ (the total spin eigenvalue) and $S^z_T$ (the eigenvalue of the z-component of the total spin $S^z_{Total}$). The ground state corresponds to $S^z_T =0$ and $S_T=0$ and is SU(2) invariant. The $H_{\rm IRHM}$ is relevant to many physical problems. The Lipkin-Meshkov-Glick (LMG) model [@LMG], $H_{\rm LMG} = -2h(\sum_jS_j^z) -2\lambda[(\sum_j S_j^x)^2+\gamma (\sum_jS_j^y)^2]/N$, well studied in the nuclear many-body problem, is (for $h=0$ and $\gamma=1$) a special case of the above long-range model for a certain set of parameters. Long-range interactions can occur quite naturally in cavity quantum electrodynamics; by varying the external model parameters, it has been proposed that positive and negative values of $\lambda$ as well as values $-1 \le \gamma \le 1$ can be achieved [@parkins1; @parkins2]. It has been shown by Ezawa that the long-range ferromagnetic Heisenberg model describes well a zigzag graphene nanodisc [@ezawa]. Spin systems with spins defined on the corners of a regular tetrahedron can be realized (from a Hubbard model) as exact special cases of the above long-range model [@Hubbard]. In solid-state quantum computation using semiconductor quantum dots, spin states are prepared, manipulated, and measured using rapid control of the Heisenberg exchange interaction [@semidqd; @s1].
Next we define $S^{+}= \sum_i S^+_i$ and $S^{z}= \sum_i S^z_i$ and bosonize the $H_{\rm IRHM}$ using Holstein-Primakoff transformations [@HolP]: $$\begin{aligned}
S^{+}&=& \sqrt{N-a^{\dagger} a}~ a\\
S^{-}&=& a^{\dagger}\sqrt{N-a^{\dagger} a}\\
S^z &=& N-a^{\dagger} a\end{aligned}$$ with $N$ the total number of spin-1/2 particles. Therefore, up to order $\frac{1}{N}$ we get:
$$\begin{aligned}
H_a&=& J(1-2\Delta)a^{\dagger}a + \frac{J(\Delta-1)}{N} (a^{\dagger}a)^2 \nonumber \\
&=& \omega_0 a^{\dagger}a + \frac{\lambda}{N} (a^{\dagger}a)^2\end{aligned}$$
where in the second line of the above equation we have defined $\omega_0 = J(1-2\Delta)$ and $\lambda =J(\Delta-1)$. Thus, we have mapped the spin Hamiltonian to a self-interacting bosonic mode up to $O(\frac{1}{N})$. Therefore, we write the total Hamiltonian given by equation \[th\] as $$\begin{aligned}
H &=& \omega_0 a^{\dagger}a + \frac{\lambda}{N} (a^{\dagger}a)^2 \nonumber \\
&&~+ \sum_k \omega_k b_k^{\dagger} b_k + \frac{1}{2}\sum_{k}(a+a^{\dagger})(g_k b_k + g_k^* b_k^{\dagger})
\label{bosonize}\end{aligned}$$ where we have ignored terms of $O(1/N)$ in the interaction term. This is just the Dicke model [@Dick] coupled to a multimode bath, with a $\phi^4$-type interaction at order $\frac{1}{N}$. The conservation of $S^z_{Total}$ is now reflected in the conservation of the particle number. The above model possesses a $\mathcal{Z}_2$ symmetry. In the thermodynamic limit and for strong coupling, the ground state of the above model breaks this $\mathcal{Z}_2$ symmetry and exhibits a phase transition to a phase with $\langle a\rangle\neq0 $.
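Explicitly (spelling out a detail left implicit above), the relevant $\mathcal{Z}_2$ operation can be taken to be the parity transformation $$a\rightarrow -a,\qquad b_k\rightarrow -b_k,$$ under which the Hamiltonian \[bosonize\] is manifestly invariant, since $a^{\dagger}a$, $b_k^{\dagger}b_k$ and the product $(a+a^{\dagger})(g_k b_k + g_k^* b_k^{\dagger})$ are all even; a phase with $\langle a\rangle\neq 0$ therefore breaks this symmetry.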
Next, in addition to the coherent dynamics generated by the Hamiltonian in equation \[bosonize\], we assume a dissipative process due to spin flipping (spontaneous emission) at site $i$ from $|\uparrow \rangle $ to $|\downarrow\rangle$ at a rate $k$, described by the Lindblad master equation: $$\begin{aligned}
\frac{d\rho_s}{dt} &=& -i[H_{\rm IRHM}, \rho_s]\nonumber \\
&&~+ \frac{k}{N} \sum_{i,j}[2 S^+_i \rho_s S_j^- -\{S^+_i S^-_j,\rho_s\}] \\
&=& -i[H_a, \rho_s]
+ k [2 a \rho_s a^{\dagger} -\{a^{\dagger} a,\rho_s\}]\end{aligned}$$ where in the second line we have used the Holstein-Primakoff transformations, and $\rho_s$ is the density matrix corresponding to the $a$-field.
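For orientation, and anticipating the next section, it is this Lindblad dissipator that gives rise to the frequency-independent terms proportional to $k$ in the Keldysh action written below, namely the term $-ik(2\phi_+\bar{\phi}_- - \bar{\phi}_+\phi_+ - \bar{\phi}_-\phi_-)$ and the Keldysh component $2ik$.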
Keldysh Field Theory
====================
In this section we use the Keldysh field-theoretic technique to study the dynamics of the model considered. We write the Keldysh action for the different fields as $$\begin{aligned}
S=S_a + S_b+ S_{ab}\end{aligned}$$ where the action for a-fields is $$\begin{aligned}
S_a&=& \int \!dt \Bigg[ \sum_{\sigma=\pm} \sigma [\bar{\phi}_{\sigma}(i\partial_t-\omega_0)\phi_{\sigma}
+ \frac{\lambda}{N} (\bar{\phi}_{\sigma} \phi_{\sigma})^2] \nonumber \\
&&~~~~~~-ik ( 2 \phi_+ \bar{\phi}_- - \bar{\phi}_+ \phi_+ - \bar{\phi}_- \phi_- ) \Bigg].\end{aligned}$$ Here $\phi$ represents the bosonic coherent-state field of the $a$-type bosons and $\bar{\phi}$ its complex conjugate. The plus (minus) sign refers to fields defined on the forward (backward) branch of the Keldysh contour. Similarly, if $\psi$ represents the bosonic coherent-state field of the $b$-type bosons, we can write $$\begin{aligned}
S_b &=& \int \!dt \sum_k \sum_{\sigma=\pm} \sigma [\bar{\psi}_{k\sigma}(i\partial_t-\omega_k)\psi_{k\sigma}] \nonumber \\
S_{ab} &=& -\frac{1}{2} \int \!\!dt\sum_k g_k \!\! \sum_{\sigma=\pm} \sigma (\bar{\phi}_{\sigma} + \phi_{\sigma}) ( \bar{\psi}_{k\sigma} + \psi_{k\sigma} )\end{aligned}$$
Next we implement Keldysh rotation defined as: $$\begin{aligned}
\phi_{cl} = \frac{\phi_+ + \phi_- }{\sqrt{2}}\\
\phi_q = \frac{\phi_+ - \phi_- }{\sqrt{2}}\end{aligned}$$ The subscripts $cl$ and $q$ stand for the classical and the quantum components of the fields, respectively, because the first one can acquire an expectation value while the second one cannot. In this basis, applying the same transformation to the $\psi_k$-fields as well, we get
$$\begin{aligned}
S_a &=& \int \!\!dt \Bigg[\begin{pmatrix} \bar{\phi}_{cl}(t)& \bar{\phi}_q(t) \end{pmatrix}
\begin{pmatrix}
0& i\partial_t - \omega_0 -ik\\
i\partial_t - \omega_0 +ik & 2ik
\end{pmatrix}
\begin{pmatrix}
{\phi}_{cl}(t)\\
\phi_q(t)
\end{pmatrix}
+ \frac{\lambda}{2N} (|\phi_{cl}|^2 + |\phi_q|^2)(\bar{\phi}_{cl} \phi_q + \phi_{cl} \bar{\phi}_q)\Bigg]\\
S_b &=& \sum_k\int \!\!dt \begin{pmatrix} \bar{\psi}_{k cl}(t)~~ \bar{\psi}_{kq}(t) \end{pmatrix}
\begin{pmatrix}
0& i\partial_t - \omega_k -i\epsilon\\
i\partial_t - \omega_k +i\epsilon & 2i\epsilon
\end{pmatrix}
\begin{pmatrix}
{\psi}_{kcl}(t)\\
\psi_{kq}(t)
\end{pmatrix}\\
S_{ab} &=& -\frac{1}{2} \sum_k g_k\int \!\!dt \Bigg[ (\bar{\phi}_{cl} + \phi_{cl}) ( \bar{\psi}_{k q} + \psi_{k q} )
+ ( \bar{\psi}_{k cl} + \psi_{k cl} )(\bar{\phi}_{q} + \phi_{q}) \Bigg]\end{aligned}$$
where $\epsilon$ is the regularization parameter. The key property of Markovian dissipation is that the Keldysh component is frequency independent [@Ref11]. Next we perform the saddle-point approximation by varying the action $S$ with respect to the quantum components of the fields, i.e. $\frac{\delta S}{\delta \bar{\phi}_q}=0$ and $\frac{\delta S}{\delta\bar{\psi}_{kq}}=0$, evaluated at $\phi_{cl}=\phi_0,~ \phi_q=0$ and $\psi_{kcl}=\psi_{k0},~ \psi_{kq}=0$, and get $$\begin{aligned}
(-\omega_0+ ik) \phi_0 + \frac{\lambda}{2N}|\phi_0|^2 \phi_0 - \frac{1}{2}\sum_k g_k (\bar{\psi}_{k0} + \psi_{k0})&=&0 \nonumber\\
(-\omega_k + i\epsilon) \psi_{k0} -\frac{1}{2}g_k (\bar{\phi}_0 + \phi_0)&=&0 \nonumber\\
\label{saddle}\end{aligned}$$
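It is useful to note that the second of equations \[saddle\] can be solved explicitly for the bath amplitudes: letting $\epsilon\to0^+$, $$\begin{aligned}
\psi_{k0} = \frac{g_k (\bar{\phi}_0 + \phi_0)}{2(-\omega_k + i\epsilon)} \;\longrightarrow\; -\frac{g_k (\bar{\phi}_0 + \phi_0)}{2\omega_k},
\qquad
-\frac{1}{2}\sum_k g_k (\bar{\psi}_{k0} + \psi_{k0}) = \frac{\bar{\phi}_0 + \phi_0}{2}\sum_k \frac{g_k^2}{\omega_k},\end{aligned}$$ so that the bath enters the remaining equation for $\phi_0$ only through the combination $\sum_k g_k^2/\omega_k$.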
In order to solve above equations, we define bath spectral density $J(\omega)=\sum_k g_k^2 \delta(\omega-\omega_k)$. We consider the following general form of $J(\omega)$ with Drude-Lorentz cutoff: $$\begin{aligned}
J(\omega) =2\pi \gamma \omega \Bigg(\frac{\omega}{\Omega}\Bigg)^{s-1} \frac{\Omega}{\omega^2 + \Omega^2}
\label{ohm}\end{aligned}$$ with $\gamma$ the effective coupling between system and bath and $\Omega$ the cutoff frequency. The case $s=1$ corresponds to an Ohmic bath, while $0<s<1$ and $s>1$ are called sub-Ohmic and super-Ohmic baths respectively. We will work with the Ohmic bath $s=1$ for simplicity. Using this form of the spectral density, we see that the saddle point equations \[saddle\] admit the trivial solution $\phi_0 =0$ for $\gamma >\gamma_0$ and a non-trivial solution $\phi_0 \ne 0$ for $\gamma <\gamma_0$, which is given by $$\begin{aligned}
|\phi_0| = \pm \sqrt{\frac{N\pi}{\lambda}} \Bigg(\gamma_0 -\gamma\Bigg)^{\frac{1}{2}}
\end{aligned}$$ where $\gamma_0 = \frac{1}{\pi} \frac{\omega_0^2 + k^2}{\omega_0}$ is the critical coupling.
[{width="2.3in" height="2.5in"}]{}
Now we evaluate the various correlation functions of the $\phi$-field at the mean-field level. In the thermodynamic limit $N\rightarrow \infty$, the contribution from $O(1/N)$ terms can be ignored. We first eliminate the $\psi$-field using Gaussian integration. Defining $ \Phi_{cl/q} = \begin{pmatrix}
\phi_{cl/q}(\omega) \\
\bar{\phi}_{cl/q}(-\omega)
\end{pmatrix} $ and $ \Psi_{cl/q} = \begin{pmatrix}
\psi_{cl/q}(\omega) \\
\bar{\psi}_{cl/q}(-\omega)
\end{pmatrix} $ such that the Keldysh-Nambu spinor is defined as $\eta_8(\omega) = [ \Phi_{cl} ~ \Psi_{kcl}~ \Phi_{q}~ \Psi_{kq} ]^T $. Using the notation $\int_{\omega} = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}$ and $\phi_{cl/q}(t) =\int_{\omega} e^{-i\omega t} \phi_{cl/q}(\omega)$ as the Fourier transform of the $\phi$-field, we integrate out the $\psi$-field to get the following effective action for the $\phi$-field:
$$\begin{aligned}
S_{\rm eff} = \int_{\omega} \eta_4^{\dagger}(\omega) \begin{pmatrix}
0 & [G^A_{2\times 2}]^{-1}(\omega) \\
[G^R_{2\times 2}]^{-1}(\omega)& D_{2\times 2}^K \end{pmatrix} \eta_4(\omega)
\label{Sef}
\end{aligned}$$
where $\eta_4(\omega) = \begin{pmatrix}\Phi_{cl}(\omega) \\ \Phi_{q}(\omega) \end{pmatrix}$ and $ D^K_{2\times 2} = {\rm diag}(2ik,2ik)$. The retarded Green’s function is given by
$$\begin{aligned}
[G^R_{2\times 2}]^{-1}(\omega) =\begin{pmatrix}
\omega-\omega_0 + ik + \Sigma^R(\omega) & \Sigma^R(\omega)\\
[\Sigma^{R}(-\omega)]^*& -\omega-\omega_0 - ik + [\Sigma^R(-\omega)]^*
\end{pmatrix}. \nonumber \\\end{aligned}$$
Here $\Sigma^R(\omega) =[\Sigma^{R}(-\omega)]^* = -\frac{1}{2}\sum_k\frac{|g_k|^2\omega_k}{\omega^2-\omega_k^2} $ is the self-energy function. Thus it is evident that the self-energy depends on the density of bath states. Using the spectral density given by equation \[ohm\], we write the self-energy function $\Sigma(\omega) \equiv\Sigma^R(\omega)$ for the Ohmic case as $$\begin{aligned}
\Sigma(\omega) = \frac{\pi}{2} \gamma \frac{\Omega^2}{\omega^2 + \Omega^2}\end{aligned}$$
The characteristic frequencies of the system are defined by the zeros of the determinant of $[G^R_{2\times 2}]^{-1}(\omega)$, which correspond to the poles of the response function $G^R_{2\times2}(\omega)$. Since the Green’s function possesses the symmetry $\sigma_x G^R_{2\times 2}(\omega) \sigma_x = [G^R_{2\times 2}(-\omega)]^{\star}$, the roots come in pairs with opposite real parts or are purely imaginary. Thus the dispersion of the dissipative modes is given by ${\rm det} [G^R_{2\times 2}]^{-1}(\omega) =0$, which implies $$\begin{aligned}
\omega =- ik \pm \sqrt{\omega_0^2- 2\omega_0 \Sigma(\omega) }
\label{char}\end{aligned}$$
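As a consistency check, the critical coupling obtained from the saddle point analysis can be read off directly from equation \[char\]: a soft mode at $\omega=0$ requires $$\begin{aligned}
0 = -ik \pm \sqrt{\omega_0^2 - 2\omega_0 \Sigma(0)}
\;\Longleftrightarrow\;
2\omega_0 \Sigma(0) = \omega_0^2 + k^2
\;\Longleftrightarrow\;
\gamma = \frac{1}{\pi}\,\frac{\omega_0^2 + k^2}{\omega_0} = \gamma_0,\end{aligned}$$ where we used $\Sigma(0)=\frac{\pi}{2}\gamma$ for the Ohmic bath.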
Figure \[Fig1\] shows the real and imaginary parts of the roots of the above characteristic equation for different values of $k$, with anisotropy parameter $\Delta= 0.7$ and $J=1$, corresponding to $\omega_0 = 0.4$ and $\lambda=0.3$. We see that in the no-spin-flipping case $k=0$ all the roots vanish at the transition point $\frac{\gamma}{\gamma_0}=1$, as expected. As we increase the value of $k$, the different modes hybridize and get shifted in opposite directions. On approach to the transition point two solutions become purely imaginary and correspond to damped modes, as shown by the blue and black curves in figure \[Fig1\](b) $\&$ (c). At the transition point only one mode, shown by the red curve in figure \[Fig1\](b) $\&$ (c), vanishes, making the system dynamically unstable.
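To make the content of equation \[char\] concrete, its roots can be obtained numerically by clearing the Drude-Lorentz denominator of $\Sigma(\omega)$, which turns the condition ${\rm det} [G^R_{2\times 2}]^{-1}(\omega) =0$ into a quartic polynomial in $\omega$. The sketch below is a minimal illustration of this (it is not the code used for the figures; the value $\Omega=1$ of the cutoff is an assumption):

```python
import numpy as np

# Model parameters: omega_0 = 0.4 and k = 0.3 as in the text; Omega = 1 is an assumed cutoff.
omega0, Omega, k = 0.4, 1.0, 0.3
gamma0 = (omega0**2 + k**2) / (np.pi * omega0)   # critical coupling gamma_0

def dissipative_modes(gamma):
    """Roots of (w + i k)^2 - omega0^2 + 2*omega0*Sigma(w) = 0 with the Ohmic
    self-energy Sigma(w) = (pi*gamma/2) * Omega^2 / (w^2 + Omega^2).
    Multiplying through by (w^2 + Omega^2) gives a quartic with complex coefficients."""
    coeffs = [1.0,
              2j * k,
              Omega**2 - k**2 - omega0**2,
              2j * k * Omega**2,
              Omega**2 * (np.pi * gamma * omega0 - k**2 - omega0**2)]
    return np.roots(coeffs)

# Two of the four roots stay close to the Drude pole at w = +/- i*Omega and are not the
# dissipative modes discussed in the text; the other two soften as gamma approaches gamma0.
for frac in (0.0, 0.5, 1.0):
    print(f"gamma/gamma0 = {frac:.1f}:", np.round(dissipative_modes(frac * gamma0), 3))
```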
Correlation Functions
---------------------
The physically measurable quantities are correlation functions. The spectral response function $A(\omega)$ encodes the system’s response to active, external perturbations. It is defined as $$\begin{aligned}
A(\omega) = i[G^R(\omega)-G^A(\omega)].\end{aligned}$$ In the present case, we write $A(\omega) = -2{\rm Im}\, G^R(\omega)$, which is given by $$\begin{aligned}
\!\!\!\!\!\!\!
A(\omega) = \frac{2[ (\omega^2 + k^2+\omega_0^2 + 2 \omega \omega_0 )k - 2k(\omega_0 + \omega)\Sigma]}
{(\omega^2 -k^2 -\omega_0^2 + 2 \omega_0 \Sigma)^2 + 4 \omega^2 k^2 }
\label{spec}\end{aligned}$$ At $\gamma=0$, we see from figure \[Fig2\] that $A(\omega)$ has a Lorentzian shape centered at $\omega_0$. As $\gamma$ increases towards $\gamma_0$, the Lorentzian peak shifts towards the low-frequency mode $\omega=0$ at the transition point.
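Setting $\Sigma=0$ in equation \[spec\] and factorizing the denominator as $[(\omega-\omega_0)^2+k^2][(\omega+\omega_0)^2+k^2]$ makes this limit explicit: $$\begin{aligned}
A(\omega)\Big|_{\gamma=0} = \frac{2k\,[(\omega+\omega_0)^2+k^2]}{[(\omega-\omega_0)^2+k^2]\,[(\omega+\omega_0)^2+k^2]}
= \frac{2k}{(\omega-\omega_0)^2+k^2},\end{aligned}$$ a Lorentzian of half-width $k$ centered at $\omega_0$.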
The correlation function encodes the system’s internal correlations and is defined as $$\begin{aligned}
C(t,t^{\prime}) =\langle \{\hat{a}(t), \hat{a}^{\dagger}(t^{\prime})\} \rangle= i G^K(t,t^{\prime}) \end{aligned}$$ In steady state, we write $$\begin{aligned}
C= 2\langle a^{\dagger} a\rangle + 1 = i\int \frac{d\omega}{2\pi} G^K(\omega)
\label{corr}\end{aligned}$$ with $$\begin{aligned}
iG^K (\omega) = \frac{2k[(\omega+ \omega_0 -\Sigma)^2 + k^2 + \Sigma^2 ]}{(\omega^2 -k^2 -\omega_0^2 + 2 \omega_0 \Sigma)^2 + 4\omega^2 k^2}\end{aligned}$$
For a decaying bosonic mode with no coupling to the bath, i.e. $\gamma=0$, we see from equations \[spec\] and \[corr\] that $C(\omega)=A(\omega)$, and the steady state boson density is $\langle a^{\dagger} a \rangle=0$, which corresponds to the vacuum of the $\phi$-field. We see from figure \[Fig3\] that $C(\tilde{\omega})$ diverges at $\tilde{\omega}=0$ at the transition point $\frac{\gamma}{\gamma_0}=1$, resulting in the divergence of the occupation density of bosons, see figure \[Fig4\]. The average number of bosons diverges at the transition point as $$\begin{aligned}
2\langle a^{\dagger} a \rangle +1 \sim |\gamma_0 - \gamma|^{-\alpha}\end{aligned}$$ with $\alpha = 0, ~ 1 , 1.6$ for $k=0~, 0.3,~1$ respectively.
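The statement made above for the decoupled case $\gamma=0$ can be checked explicitly: in that limit both $A(\omega)$ and $iG^K(\omega)$ reduce to the same Lorentzian $\frac{2k}{(\omega-\omega_0)^2+k^2}$, and equation \[corr\] then gives $$\begin{aligned}
2\langle a^{\dagger} a\rangle + 1 = \int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,\frac{2k}{(\omega-\omega_0)^2+k^2} = 1,\end{aligned}$$ i.e. $\langle a^{\dagger} a\rangle = 0$, the vacuum of the $\phi$-field.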
![Spectral response function $A(\tilde{\omega},\frac{\gamma}{\gamma_0})$ as a function of $\frac{\gamma}{\gamma_0}$ and $\tilde{\omega} = \frac{\omega}{\Omega}$ for $k=0.3$ , $\omega_0=1$. The Lorentzian peak at $\gamma=0$ is shifted towards low frequency mode at transition point.[]{data-label="Fig2"}](3DplotA2.png){width="3.0in" height="2.0in"}
![Correlation function $C(\tilde{\omega},\frac{\gamma}{\gamma_0})$ as a function of $\frac{\gamma}{\gamma_0}$ and $\tilde{\omega} = \frac{\omega}{\Omega}$ for $k=0.3$, $\omega_0=1$.[]{data-label="Fig3"}](3DplotC.png){width="3.0in" height="2.0in"}
![Steady state number density for different values of $k$ and $\omega_0=1$. []{data-label="Fig4"}](numberdensity.png){width="3.0in" height="2.0in"}
Effective Temperature
---------------------
The response and correlation functions allow us to define a fluctuation-dissipation relationship by introducing the distribution function $F(\omega)$: $$\begin{aligned}
G^K(\omega) = G^R(\omega) F(\omega) - F(\omega) G^A(\omega)
\label{FDT}\end{aligned}$$ At thermal equilibrium, the distribution function is $F_{eq}(\omega)= 2n(\omega)+1={\rm coth} (\frac{\omega}{2T})$, where $n(\omega) = \frac{1}{e^{\beta \omega}-1}$ is the Bose distribution function. Since the system considered here is out of equilibrium, the notion of effective temperature is extracted from the low-frequency analysis of the eigenvalues of the distribution function $F(\omega)$. For our problem, we write $$\begin{aligned}
F(\omega)= \sigma^z - \frac{1}{2\omega}\sum_k \frac{g_k^2 \omega_k}{\omega^2-\omega_k^2} \sigma^x
\label{temp1}\end{aligned}$$ where $\sigma^z$ and $\sigma^x$ are Pauli matrices. Since $F(\omega)$ is Hermitian and traceless, its eigenvalues are real and of opposite sign: $$\begin{aligned}
\lambda_{\pm}(\omega) = \pm \sqrt{1+ \Bigg(\frac{\Sigma(\omega)}{\omega} \Bigg)^2}
\label{temp}\end{aligned}$$ In thermal equilibrium, $F(\omega)$ approaches unity exponentially at high energy, while it diverges at low frequencies as $\frac{2T}{\omega}$. We see from equation \[temp\] that, at low frequencies, the eigenvalues $\lambda_{\pm}$ diverge as $\frac{1}{\omega}$. The dimensional coefficient of $\frac{1}{\omega}$ defines the effective low-frequency temperature $T_{\rm eff}$. Therefore, we see that $T_{\rm eff} = \gamma$, independent of the decay rate $k$ and of the cutoff frequency $\Omega$ of the bath. This can be shown to hold for all spectral densities with a Drude-Lorentz cutoff. Moreover, if we choose an exponential cutoff for the bath spectral density, we can show that the effective temperature depends on the cutoff frequency as well, besides the coupling $\gamma$. In contrast to equilibrium, the effective temperature is not an external parameter but an intrinsic quantity that arises from the interplay of unitary and dissipative dynamics.
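For the Ohmic bath with Drude-Lorentz cutoff the low-frequency coefficient can be made explicit: since $\Sigma(0)=\frac{\pi}{2}\gamma$, equation \[temp\] gives $$\begin{aligned}
\lambda_{\pm}(\omega) \simeq \pm\frac{\Sigma(0)}{\omega} = \pm\frac{\pi\gamma}{2\omega}, \qquad \omega\to0,\end{aligned}$$ which indeed involves only the coupling $\gamma$ and neither the decay rate $k$ nor the cutoff $\Omega$.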
[{width="2.3in" height="2.5in"}]{}
Fluctuations over Mean Field
============================
Having found the mean-field solution, we now consider the stability of this solution against small fluctuations around the mean field. We therefore add small fluctuations at tree level by taking $\phi_{cl} \rightarrow \phi_0 + \delta \phi$ and $\phi_q \rightarrow \delta \phi_{q}$. Then, from equation \[Sef\] and taking ${\rm O}(1/N)$ terms into account, we write
$$\begin{aligned}
\tilde{S} =\!\!\! \int_{\omega} \delta \eta^{\dagger}_4(\omega)
\begin{pmatrix}
0 & [\tilde{G}_{2 \times 2}^A]^{-1}(\omega) \\
[\tilde{G}_{2 \times 2}^R]^{-1}(\omega) & \tilde{D}^K
\end{pmatrix}
\delta \eta_4(\omega)
-\frac{\lambda}{2N}\int_t \Bigg[ ( 2\phi_0 |\phi_{cl}|^2 + \phi_0^{*} \phi_{cl}^2)\phi^{*}_q
+ (|\phi_{cl}|^2 + |\phi_q|^2)\phi_{cl} \phi_q^{*} +{\rm c. c.}\Bigg]\end{aligned}$$
with $\delta \eta_4(\omega) =\begin{pmatrix}
\delta \Phi_{cl}(\omega) \\ \delta \Phi_{q}(\omega)
\end{pmatrix}
$ and $$\begin{aligned}
[\tilde{G}_{2 \times 2}^R]^{-1}(\omega)
=\begin{pmatrix}
\omega-\omega_0 + ik + \Sigma(\omega) -\frac{\lambda}{N}|\phi_0|^2 & \Sigma(\omega) -\frac{\lambda}{2N}\phi_0^2\\
\Sigma (\omega) -\frac{\lambda}{2N} \phi_0^{*2}& -\omega-\omega_0 - ik + \Sigma(\omega)-\frac{\lambda}{N}|\phi_0|^2
\end{pmatrix},\end{aligned}$$
while the contributions to the action at ${\rm O}(\frac{1}{N})$ are due to cubic and quartic terms. Thus we observe that these fluctuations vanish in the thermodynamic limit $N\rightarrow \infty$. The poles of the retarded Green’s function give the spectrum of excitations, while the signs of their imaginary parts determine whether the proposed mean-field steady state is stable: a positive imaginary part of the spectrum implies an instability of the mean-field solution. Thus, to find the dissipative spectrum of fluctuations, we solve ${\rm det} [\tilde{G}_{2 \times 2}^R]^{-1}(\omega) =0 $ and get $$\begin{aligned}
\omega = - ik \pm \sqrt{ (\omega_0^2 - 2\omega_0 \Sigma) -\frac{\lambda}{2N}
[(\phi_0- \phi_0^{*} )^2 \Sigma + 2\omega |\phi_0|^2] } \nonumber \\
\label{diss}\end{aligned}$$
Next, we analyze the effect of fluctuations on the distribution matrix $F(\omega)$ that provides the information regarding effective temperature. From fluctuation-dissipation relation \[FDT\], we can write $$\begin{aligned}
F(\omega) = \sigma^z + \frac{1}{\omega} [\Sigma(\omega) - \frac{\lambda}{4N} (\phi_0^2 + \phi_0^{* 2})] \sigma^x,\end{aligned}$$ which has the same form in the thermodynamic limit $N\rightarrow \infty$ as defined in equation \[temp1\]. Thus fluctuations due to the finite number of particles $N$ reduce the effective temperature.
Now, we take into account the contribution of cubic and quartic terms in the effective action. In principle we can sum up to all orders of perturbation theory and get the following equation $$\begin{aligned}
[{\rm G}_0^{-1} - \Sigma] \circ \mathcal{G}=I_{2\times 2}\end{aligned}$$ where ${\rm G}_0^{-1}$ is the bare Green’s function, $\mathcal{G}$ is the dressed Green’s function due to the interactions, and the self-energy matrix is $\Sigma= \begin{pmatrix}
0 & \Sigma^A \\
\Sigma^R & \Sigma^K
\end{pmatrix} $. However, we restrict ourselves here to qualitative ideas, whereas the full details of the effects of interactions are treated separately [@MQLU] within the renormalization group approach in Keldysh space.
We consider the effect of fluctuations at first order in $\frac{\lambda}{N}$. The cubic terms at this order are $\int_{t}[2\phi_0 \phi_{cl}^2 \phi_{q}^* + \phi_0^* \phi_{cl}^2 \phi_q + {\rm c.c.} ]$. These terms break the $\mathcal{Z}_2$-symmetry $\phi_{cl/q} \rightarrow -\phi_{cl/q}$ and can be treated as an external “magnetic” field term. In general, fluctuations can modify the position of the critical point, and these terms provide corrections to the mean-field location of the phase transition. However, we can eliminate these odd-order terms by applying an external drive. A similar situation arises in the liquid-gas transition, where there is no obvious symmetry; nevertheless one can choose parameters, such as the density, to eliminate the odd terms, and this phase transition, despite the absence of symmetry, is of the Ising type [@chaikin]. A similar conclusion holds if we take fluctuations at higher order in $\lambda/N$. Moreover, we can show [@MQLU] that this model undergoes a second order thermodynamic phase transition of $\phi^4$-type with $\mathcal{Z}_2$-symmetry. We thus conclude that the driven-dissipative model considered here undergoes a continuous Ising-type phase transition.
Conclusions
===========
In conclusion, we have analyzed the non-equilibrium dynamics of a long-range interacting Heisenberg model coupled to a bath and driven by dissipation at each site due to spin flipping (spontaneous emission). We have shown that this long-range model can be mapped to a collective bosonic mode with $\phi^4$-type self-interaction, and thus to a multimode Dicke model with $\phi^4$ nonlinearity. Using Keldysh field theory, we have shown in the thermodynamic limit that the boson density has a power-law behavior, with a critical exponent depending on the value of the decay constant $k$ and the type of spectral density used.
Moreover, an effective temperature arises due to dissipation and is shown to depend linearly on the effective coupling $\gamma$, independently of the cutoff frequency of the bath for a wide class of bath spectral densities. It is shown that the fluctuations due to cubic field terms in the perturbation expansion violate the $\mathcal{Z}_2$-symmetry and modify the mean-field critical point. Near the steady state, however, it can be shown that the dynamics is generically described by the thermodynamic universality class [@MQLU; @Alexy] of the $\phi^4$-theory of Landau and Ginzburg. The emergent thermal character of driven-dissipative systems may be expected, as the quantum coherence is lost to dissipation.
G. D. Mahan, *Quantum Many Particle Systems* (Springer 2000). S. Sachdev, *Quantum Phase Transitions* (Cambridge University Press, 2011). J. Kasprzak, et al., Nature [**443**]{}, 409 (2006). I. Carusotto and C. Ciuti, Rev. Mod. Phys. [**85**]{}, 299 (2013). K. G. Lagoudakis, et al., Nature Phys. [**4**]{}, 706 (2008). A. A. Houck, H. E. T$\ddot{u}$reci, and J. Koch, Nature Phys. [**8**]{}, 292 (2012). R. Blatt and C. F. Roos, Nature Phys. [**8**]{}, 277 (2012). J. W. Britton, et al., Nature [**484**]{}, 489 (2012). D. E. Chang, A. H. Safavi-Naeini, M. Hafezi, and O. Painter, New J. Phys. [**13**]{}, 023003 (2011). M. Ludwig and F. Marquardt, Phys. Rev. Lett. [**111**]{},073603 (2013). Y. O. Dudin and A. Kuzmich, Science [**336**]{}, 887 (2012). J. D. Pritchard, et al., Phys. Rev. Lett. [**105**]{}, 193603 (2010). S. Diehl, A. Micheli, A. Kantian, B. Kraus, H. B$\ddot{u}$chler, and P. Zoller, Nature Phys. [**4**]{}, 878 (2008). M. F. Maghrebi and A. V. Gorshkov, Phys.Rev. B [**93**]{}, 014307 (2016).
L. M. Sieberer, M. Buchhold, and S. Diehl, Rep. Prog. Phys. [**79**]{}, 9 (2016). Nigel Goldenfeld, *Lectures on Phase Transitions and the Renormalization Group*, (Perseus Books, Reading, 1992). A. Kamenev, *Field Theory of Non-Equilibrium Systems* (Cambridge University Press, Cambridge, 2011). L. M. Sieberer, S. D. Huber, E. Altman, and S. Diehl,Phys. Rev. Lett. [**110**]{}, 195301 (2013). L. M. Sieberer, S. D. Huber, E. Altman, and S. Diehl, Phys. Rev. B [**89**]{}, 134310 (2014). E. Altman, L. M. Sieberer, L. Chen, S. Diehl, and J. Toner, Phys. Rev. X [**5**]{}, 011017 (2015).
E. G. D. Torre, S. Diehl, M. D. Lukin, S. Sachdev, and P. Strack, Phys. Rev. A 87, 023831 (2013) M. Q. Lone and S. Yarlagadda, Int. J. Mod. Phys. B [**30**]{}, 1650063 (2016). J. M. Fink, et al., Phys. Rev. Lett. [**103**]{}, 083601 ( 2009). J. Majer, et al., Nature [**449**]{} 443 (2007). A. W. Chin, A. Datta, F. Caruso, S. F. Huelga, and M. B. Plenio, New J.Phys. [**12**]{}, 065002 ( 2010) Y.-C. Cheng and G. R. Fleming, Annu. Rev. Phys. Chem. [**60**]{}, 241 ( 2009). G. S. Engel, et al., Nature [**446**]{}, 782 ( 2007). A. Dey, M. Q. Lone and S. Yarlagadda, Phys. Rev. B [**92**]{}, 094302 (2015). H. J. Lipkin, N. Meshkov, and A. J. Glick , Nucl. Phys. [**62**]{}, 188 (1965). S. Morrison and A. S. Parkins, Phys. Rev. A [**77**]{}, 043810 (2008); S. Morrison and A. S. Parkins, Phys. Rev. Lett. [**100**]{}, 040403 (2008). M. Ezawa, New J. of Phys. [**11**]{} 095005 (2009) A. Auerbach, [*Interacting Electrons and Quantum Magnetism*]{} ( Springer-Verlag 1994). M. Fowler, Phy. Rev. B [**17** ]{}, 7 (1978); V. J. Emmery, Phys. Rev. B [**14**]{}, 7 (1976). J. R. Petta, A. C. Johnson, J. M. Taylor, E. A. Laird, A. Yacoby, M. D. Lukin, C. M. Marcus, M. P. Hanson, and A. C. Gossard, Science [**309**]{}, 2180 (2005). H. Bluhm, S. Foletti, I. Neder, M. Rudner, D. Mahalu, V. Umansky, and A. Yacoby, Nature Phys. [**7**]{}, 109 (2011). T. Holstein and H. Primakoff, Phys. Rev. [**58**]{}, 1098 (1940). K. Hepp and E. H. Lieb, Ann. of Phys. [**76**]{}, 360 (1973). M. Q. Lone, unpublished. P. M. Chaikin and T. C. Lubensky, *Principles of condensed matter physics* (Cambridge Univ Press, 1995).
---
abstract: 'A shortcoming in the authors’ interpretation of this beautiful new experiment is pointed out and briefly discussed.'
author:
- Travis Norsen
date: 'Nov. 2, 2006'
title: 'Comment on “Experimental realization of Wheeler’s delayed-choice GedankenExperiment”'
---
The new experimental realization [@er] of Wheeler’s delayed-choice thought experiment by Jacques, Wu, Grosshans, Treussart, Grangier, Aspect, and Roch (hereafter, “the experimenters”) is a fantastic achievement. In the experiment, single photons are split at an initial beamsplitter, with the two “parts” then propagating along separate paths toward a detection area where a second beamsplitter can, at the last possible moment, either be inserted (causing the two “parts” to recombine and interfere) or removed (in which case one may simply observe which of the two paths was taken by the photon).
Of course, it is the aspect of delayed-choice which makes this so puzzling. With the second beam splitter in place, the observed interference can only be understood if something “split in half” and took both paths through the interferometer. But with the second beam splitter removed, the photon is (with high precision) observed in one or the other of the two beams exclusively, but never both.
As the experimenters explain it, “the striking feature is that the phenomenon of interference, interpreted as a wave following simultaneously two paths, is incompatible with our common sense representation of a particle which implies to follow one route or the other but not both.” [@er] But, because the “choice” (made by a Quantum Random Number Generator in this experimental realization) of whether the second beam splitter is to be inserted or removed is made *after* the photon has long since passed the initial beam splitter (at which it presumably would have to decide whether to split in half and take both paths, or select a single path) there appears to be a kind of non-local or backwards-in-time causation.
Actually, perhaps because he rejected as absurd any such non-local or reverse-temporal causation, Wheeler himself interpreted the significance of the thought experiment this way:
> “Then let the general lesson of this apparent time inversion be drawn: ‘No phenomenon is a phenomenon until it is an observed phenomenon.’ In other words, it is not a paradox that we choose what *shall* have happened after ‘it has *already* happened.’ It has not really happened, it is not a phenomenon, until it is an observed phenomenon.” [@wheeler]
The experimenters are apparently less comfortable with this radically subjectivist and anti-realist philosophy, and simply claim that the experiment demonstrates a surprising sort of causality:
> “Our realization of Wheeler’s delayed-choice GedankenExperiment demonstrates beyond any doubt that the behavior of the photon in the interferometer depends on the choice of the observable which is measured, even when that choice is made at a position and a time such that it is separated from the entrance of the photon in the interferometer by a space-like interval.” [@er]
\* \* \*
But does the experimenters’ experiment really establish such non-local causation (or, for that matter, Wheeler’s subjectivism) *“beyond any doubt”*?
The answer is demonstrably negative. For a theory exists which can account for the observed results of Wheeler’s delayed-choice experiment in a completely ordinary, local, common-sensical fashion. To see how this is possible, it is helpful to note an additional premise that Wheeler and the experimenters use in deducing from the observed results their respective conclusions. The premise is this: each “individual photon” is fundamentally, unanalyzably, ontologically *one thing*. It is only in the presence of this tacit premise that the claims
- something took exclusively one of the two available paths through the interferometer, and
- something took simultaneously both paths through the interferometer
form together a logical contradiction which must be avoided by saying (naive appearances to the contrary notwithstanding) that, really, only one of (i) and (ii) is true. And it is precisely saying this which implies non-local causation since *which thing happened* is apparently influenced by our (later) choice to insert (or not) the beamsplitter.
But suppose, as postulated by the pilot-wave theory of de Broglie and Bohm, that each “individual photon” consists of two ontologically distinct aspects: a wave *and* a particle. [@bm] According to this theory, which is empirically equivalent to standard quantum theory, the photon *particle* obeys (i), i.e., it follows a definite trajectory and thus takes exclusively one or the other of the two possible paths through the interferometer. Meanwhile, the *wave*, in accordance with (ii), takes both paths. The trajectory of the particle is influenced by the wave in a way that explains exactly why the particle ends up where it ends up and with precisely the observed empirical frequencies under the various experimental conditions. And, crucially, the theory does this without in any way requiring us to posit a spooky backwards-in-time causation (or worse, dropping altogether the idea that something actually happened between the production and detection of the photon).
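For a single spinless particle of mass $m$, the guidance just described takes the familiar explicit form $$\frac{dQ_t}{dt}=\frac{\hbar}{m}\,{\rm Im}\,\frac{\nabla\psi}{\psi}\Big|_{Q_t},$$ where $\psi$ is the usual wave function evolving according to the Schrödinger equation; the photon of the present experiment requires the appropriate field-theoretic analogue of this guidance law, but the division of labor between wave and particle is exactly the same.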
The theory of de Broglie and Bohm really exists, and really works. And it provides a stark counterexample to the claim that the results of Wheeler’s delayed-choice experiment *require* “beyond any doubt that the \[earlier\] behavior of the photon in the interferometer depends on the \[later\] choice of the observable which is measured”.
It is frustrating that this needs to be pointed out. The de Broglie - Bohm theory has existed for more than 50 years. Moreover, 25 years ago, J.S. Bell wrote an entire paper aimed at making this same point – that the pilot-wave theory provides an elegant alternative to the kinds of inferences made from Wheeler’s delayed-choice experiment by physicists who are unduly in the grip of the orthodox quantum philosophy. [@bell]
One can do no better than simply quote Bell’s penetrating summary of Wheeler’s argument, and his explanation of how the de Broglie - Bohm theory eludes Wheeler’s conclusion. First:
> “The decision, to interpose the \[beam splitter\] or not, is made only *after* the pulse has passed the slits. As a result of this choice the particle *either* falls on one of the two counters, indicating passage through one of the two \[arms of the interferometer\], *or* contributes \[to the building of an\] interference pattern after many repetitions. Sometimes the interference pattern is held to imply ‘passage of the particle through both slits’ – in some sense. Here it seems possible to *choose*, *later*, whether the particle, *earlier*, passed through one \[arm\] or two! Perhaps it is better not to think about it. ‘No phenomenon is a phenomenon until it is an observed phenomenon.’” [@bell]
Second: as Bell explains, in the de Broglie - Bohm theory
> “the wave always goes through both \[arms\] (as is the nature of waves) and the particle goes through only one (as is the nature of particles). But the particle is guided by the wave toward places where $|\psi|^2$ is large, and away from places where $|\psi|^2$ is small. And so if the \[second beam splitter\] is in position the particle contributes a spot to the interference pattern ... or if the plate is absent the particle proceeds to one of the counters. *In neither case is the earlier motion, of either particle or wave, affected by the later insertion or noninsertion of the \[beam splitter\].*” (emphasis added) [@bell]
\* \* \*
There is a certain irony here associated with the fact that most physicists (at least, among those who have even heard of it) reject the de Broglie - Bohm theory because it is explicitly non-local. It’s certainly correct that it is: the theory posits a mechanism whereby goings-on at the location of one particle, can affect the trajectory of another, distant (entangled) particle, sooner than signals propagating at the speed of light would permit. And this non-locality is crucial to the theory’s ability to match the empirically correct predictions of standard quantum theory.
But the rejection of the pilot-wave theory on this basis is fallacious, for, as proved by Bell’s Theorem, *any* theory which is in agreement with the experimental tests of Bell’s Inequality must display a similar non-local causality.
Proponents of orthodox quantum theory, however, are often confused about this and think of *their* theory as perfectly local. But, simply put, it isn’t: either one accepts (with Wheeler) an anti-realism which prevents the theory from saying anything dynamical at all (about, for example, photons), such that it simply doesn’t say anything about the kinds of processes to which the terms “local” and “non-local” apply; or (like the experimenters) one must admit that the dynamics of the theory (in particular processes involving “measurement”) are manifestly non-local. Either one stubbornly insists that the theory doesn’t say *anything*, or one admits that what it says involves non-locality. The point is, in neither case can one claim that the theory provides a local description of the dynamics of photons (etc.).
The primary insight offered by the Wheeler delayed-choice experiment is that, while (as proved by Bell) any theory which agrees with *all* of the quantum mechanical predictions must be non-local, *some* theories display that troubling non-locality more often or more blatantly than others. Here is a situation which can be explained simply and locally by the de Broglie - Bohm theory, but whose explanation in terms of the orthodox quantum theory requires non-locality or worse. And so the irony is that those who reject the de Broglie - Bohm theory because it is non-local, and favor instead the standard version of quantum theory, unwittingly end up favoring something that is (in the sense just elaborated) *more non-local* than the theory they reject because it is non-local. Co-opting (for a purpose he wouldn’t like) an infamous passage of N.D. Mermin: those for whom non-locality is anathema should (in response to Wheeler’s experiment) reject orthodox quantum theory and flock to the pilot-wave picture! [@mermin]
Of course, the real lesson here is just that anyone not conversant with the pilot-wave theory is severely hampered when it comes to interpreting the significance and meaning of fundamental experiments in physics. As Bell noted,
> “Even now the de Broglie - Bohm picture is generally ignored, and not taught to students. I think this is a great loss. For that picture exercises the mind in a very salutary way.” [@bell2]
And so one is naturally led to wonder, again following Bell:
> “Why is the pilot wave picture ignored in textbooks? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism are not forced on us by experimental facts, but by deliberate theoretical choice?” [@bell3]
Tragically, decades later, physicists (who apparently still remain ignorant of the important lessons of de Broglie and Bohm) are still making the latter choice (apparently without even realizing they are making a choice). One can only hope that the more reasonable choice – of acknowledging the real existence of the pilot wave theory and learning the important lessons it has to teach – will be not much longer delayed.
[0]{}
V. Jacques, E. Wu, F. Grosshans, F. Treussart, P. Grangier, A. Aspect, and J.-F. Roch, “Experimental realization of Wheeler’s delayed-choice GedankenExperiment” quant-ph/0610241
J.A. Wheeler, “The ‘Past’ and the ‘Delayed-Choice Double-Slit Experiment’’, pages 9 - 48 in *Mathematical Foundations of Quantum Theory*, (A.R. Marlow, editor), Academic Press, 1978.
For an introduction to this theory, one can do no better than S. Goldstein’s article on “Bohmian Mechanics” in *The Stanford Encyclopedia of Philosophy (Summer 2006 edition)*, Edward N. Zalta (ed.), http://plato.stanford.edu/archives/sum2006/entries/qm-bohm/.
J.S. Bell, “de Broglie - Bohm, delayed-choice double-slit experiment, and density matrix”, *International Journal of Quantum Chemistry*: Quantum Chemistry Symposium, [**[14]{}**]{} (1980) 155-9.
N. David Mermin, “Hidden Variables and the Two Theorems of John Bell”, *Rev. Mod. Phys.*, [**[65]{}**]{} (1992), pp. 803-815. See also T. Norsen, “EPR and Bell Locality”, *AIP Conference Proceedings, Quantum Mechanics: Are There Quantum Jumps?* and *On the Present Status of Quantum Mechanics*, Vol. 844, pp. 281-293, June 2006. (quant-ph/0408105)
J.S. Bell, “Speakable and unspeakable in quantum mechanics”. Introductory remarks at Naples-Amalfi meeting, May 7, 1984. Reprinted in *Speakable and Unspeakable in Quantum Mechanics*, Second Edition, Cambridge University Press, 2004.
J.S. Bell, “On the impossible pilot wave”, *Foundations of Physics*, [**[12]{}**]{} (1982) pp 989-99.
---
abstract: 'We define and study $C^1-$solutions of the Aronsson equation (AE), a second order quasi linear equation. We show that such super/subsolutions make the Hamiltonian monotone on the trajectories of the closed loop Hamiltonian dynamics. We give a short, general proof that $C^1-$solutions are absolutely minimizing functions. We discuss how $C^1-$supersolutions of (AE) become special Lyapunov functions of symmetric control systems, and allow to find continuous feedbacks driving the system to a target in finite time, except on a singular manifold. A consequence is a simple proof that the corresponding minimum time function is locally Lipschitz continuous away from the singular manifold, despite classical results show that it should only be Hölder continuous unless appropriate conditions hold. We provide two examples for Hörmander and Grushin families of vector fields where we construct $C^1-$solutions (even classical) explicitly.'
author:
- |
Pierpaolo Soravia[^1]\
Dipartimento di Matematica\
Università di Padova, via Trieste 63, 35121 Padova, Italy
title: 'The Aronsson equation, Lyapunov functions and local Lipschitz regularity of the minimum time function'
---
[*2010 Mathematics Subject Classification:* 49L20; Secondary 35F21, 35D40, 93B05.]{}
Introduction
============
In this note we want to describe a possible new, non-standard way of using the Aronsson equation, a second order partial differential equation, to obtain controllability properties of deterministic control systems. We investigate a symmetric control system $$\label{eqsystem}
\left\{
\begin{array}{ll}
\dot x_t=f(x_t,a_t),\\
x_0=x_o\in\Omega,
\end{array}\right.$$ where $-f(x,A)\subset f(x,A)$ and $A$ is a nonempty and compact subset of a metric space. We define the Hamiltonian $$H(x,p)=\max_{a\in A}\{-f(x,a)\cdot p\},$$ which is therefore nonnegative and positively one homogeneous in the adjoint variable, and we want to drive the system to a target, which we temporarily take to be the origin. We are interested in the relationship of (\[eqsystem\]) with the Aronsson equation (AE) $$-\nabla\left(H(x,\nabla U(x))\right)\cdot H_p(x,\nabla U(x))=0,$$ which is a quasilinear degenerate elliptic equation. Ideally, if everything is smooth, when we are given a classical solution $U$ of (AE) and we consider a trajectory $x_t$ of the Hamiltonian dynamics $$\dot x_t=-H_p(x_t,\nabla U(x_t)),$$ which is a closed loop dynamics for the original control system, we find out that (AE) can be rewritten as $$\frac d{dt}H(x_t,\nabla U(x_t))=0.$$ Therefore $H(x_t,\nabla U(x_t))$ is constant. This is a very desirable property of the control system since it allows us to use $U$ as a control Lyapunov function, despite the presence of a possibly nonempty singular set $$\label{eqsing}{\mathcal H}=\{x:H(x,\nabla U(x))=0\},$$ which possibly contains the origin. Indeed if $x_o$ is outside the singular set and $U$ has a unique global minimum at the origin, then the trajectory of the Hamiltonian dynamics will reach the origin in finite time.
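For the reader's convenience, the formal computation behind the last rewriting (valid whenever $U$ is $C^2$ and $H$ is differentiable along the trajectory) is just the chain rule: $$\frac d{dt}H(x_t,\nabla U(x_t))=\left[H_x(x_t,\nabla U(x_t))+D^2U(x_t)\,H_p(x_t,\nabla U(x_t))\right]\cdot\dot x_t=-\nabla\left(H(x,\nabla U(x))\right)\cdot H_p(x,\nabla U(x))\Big|_{x=x_t},$$ so that (AE) expresses precisely the vanishing of this derivative along the closed loop flow.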
In general however, several steps of this path break down. From one side, (AE) does not have $C^2$ classical solutions in general. Even in the case where $f=a$, $A=B_1(0)\subset{\mathbb R}^n$ is the closed unit ball, $H(p)=|p|$ and (AE) becomes the well known infinity Laplace equation $$-\Delta U(x)\;\nabla U(x)\cdot \nabla U(x)=0,$$ solutions are not classical, although known regularity results show that they are $C^{1,\alpha}$. Therefore solutions of (AE) have to be meant in some weak sense, as viscosity solutions. For generic viscosity solutions, we can find counterexamples to the fact that the Hamiltonian is constant along trajectories of the Hamiltonian dynamics, as we show later. For an introduction to the theory of viscosity solutions in optimal control, we refer the reader to the book by Bardi, Capuzzo-Dolcetta [@bcd].
In this paper we will first characterize when, for a given super or subsolution of (AE), the Hamiltonian is monotone on the trajectories of the Hamiltonian dynamics (i.e. satisfies the [*monotonicity property*]{}). To this end we introduce the notion of $C^1-$super/subsolution and prove that such functions satisfy the monotonicity property of the Hamiltonian. We emphasize the fact that not all viscosity solutions that are $C^1$ functions are $C^1-$solutions according to our definition. Moreover, as a side result, we also show that our $C^1-$solutions are absolutely minimizing functions, i.e. local minimizers of the functional that computes the $L^\infty$ norm of the Hamiltonian. This is a well known property, equivalent to being a viscosity solution of (AE) at least when $H$ is coercive or possibly in some Carnot-Caratheodory spaces, but this equivalence is not completely understood in general. Therefore the notion of $C^1-$solution appears to be an appropriate one.
We then prove that if (AE) admits a $C^1-$supersolution $U$ having a unique minimum at the origin, then our control system can be driven to the origin in finite time with a continuous feedback, starting at every initial point outside the singular set $\mathcal H$. If moreover $U$ satisfies an appropriate decay in a neighborhood of the origin, only at points where the Hamiltonian $H$ stays away from zero, then we show that the corresponding minimum time function is locally Lipschitz continuous outside the singular set, despite the fact that, even when the origin is small time locally attainable, the minimum time function can in general only be proved to be Hölder continuous in its domain, under appropriate conditions. Thus the loss of regularity of the minimum time function is concentrated at points of the singular set. Finally, for two explicit well known examples, where the system has a Hörmander type or a Grushin family of vector fields, we exhibit two explicit, not previously known, classical solutions of (AE), their gauge functions, providing examples of smooth absolute minimizers for such systems and a proof that their minimum time function is locally Lipschitz continuous outside the singular set. We remark that neither in the general statement nor in the examples is the family of vector fields ever supposed to span the whole space at the origin; therefore the classical sufficient attainability condition ensuring that the minimum time function is locally Lipschitz continuous will not be satisfied in general. Indeed in the explicit examples that we illustrate in Section 5, the minimum time function is known to be locally only $1/2-$Hölder continuous in its domain.
Small time local attainability and regularity of the minimum time function is an important subject in optimal control. Classical results by Petrov [@pe] show sufficient conditions for attainability at a single point by requiring that the convex hull of the vector fields at the point contains the origin in its interior. Such result was later improved by Liverovskii [@li] augmenting the vector fields with the family of their Lie brackets, see also the paper by author [@so7]. More recently such results had several extensions in the work by Krastanov and Quincampoix [@kr3] and Marigonda, Rigo and Le [@ma; @ma3; @ma4]. Our regularity results rather go in the direction of those contained in two recent papers by Albano, Cannarsa and Scarinci [@alcasc; @alcasc1], where they show, by completely different methods, that if a family of smooth vector fields satisfies the Hörmander condition, then the set where the local Lipschitz continuity of the minimum time function fails is the union of singular trajectories, and that it is analytic except on a subset of null measure. Our approach is instead more direct and comes as a consequence of constructing Lyapunov functions as $C^1-$supersolutions of the Aronsson equation. We finally mention the paper by Motta and Rampazzo [@mr] where the authors study higher order hamiltonians obtained by adding iterated Lie brackets as additional vector fields, in order to prove global asymptotic controllability to a target. While we do not study asymptotic controllability in this paper, their idea of constructing a higher order Hamiltonian may be seen complementary to ours, using instead the equation (AE).
Equation (AE) was introduced by Aronsson [@ar0], as the Euler Lagrange equation for absolute minimizers, i.e. local minima of $L^\infty$ functionals, typically the $L^\infty$ norm of the gradient. There has been a lot of work in more recent years to develop that theory using viscosity solutions by authors like Jensen [@jen1], Barron-Jensen-Wang [@bjw], Juutinen [@ju], Crandall [@cr]. For the main results on the infinity Laplace equation, we refer the reader to the paper [@acj] and the references therein. For results for equation (AE) especially in the $x$ dependent case, we also refer to the paper by the author [@soae] and the references therein, see also [@so2; @so6]. In particular we mention that equation (AE) has been studied in Carnot groups by Bieske-Capogna [@bica], by Bieske [@bie] in the Grushin space, and by Wang [@wa] in the case of $C^2$ and homogeneous Hamiltonians with a Carnot Caratheodory structure.
The structure of the paper is as follows. In Section 2 we introduce the problem and give a motivating example. In Section 3 we introduce $C^1-$solutions of (AE) and show for them some important properties: monotonicity of the Hamiltonian on the hamiltonian dynamics, an equivalent definition and the fact that they are absolutely minimizing functions. In Section 4, we use $C^1-$solutions of (AE) as Lyapunov functions for nonlinear control systems and obtain local Lipschitz regularity of the minimum time function away from the singular set. In Section 5 we provide two new examples of explicit classical solutions of (AE) in two important cases of nonlinear control systems where the results of Section 4 apply.
Control theory and the Aronsson equation
========================================
As we mentioned in the introduction, throughout the paper we consider the controlled dynamical system (\[eqsystem\]) where $\Omega\subset{\mathbb R}^n$ is open, $A$ is a nonempty, compact subset of some metric space, $a_\cdot \in L^\infty((0,+\infty);A)$ and $f:\Omega\times A\to{\mathbb R}^n$ is a continuous function, continuously differentiable and uniformly Lipschitz continuous in the first group of variables, i.e. $$|f(x^1,a)-f(x^2,a)|\leq L|x^1-x^2|\quad\mbox{for all }x^1,x^2\in\Omega,\;a\in A.$$ We suppose moreover that $f(x,A)$ is convex for every $x\in\Omega$ and that the system is symmetric, i.e. $-f(x,A)\subset f(x,A)$ for all $x\in {\mathbb R}^n$ and define the Hamiltonian $$\label{eqhamiltonian}
H(x,p)=\max_{a\in A}\{-f(x,a)\cdot p\}\in C(\Omega\times {\mathbb R}^n),$$ so that $H\geq0$ and $H(x,-p)=H(x,p)$ by symmetry. Notice that $H$ is at least locally Lipschitz continuous, and $H(x,\cdot)$ is positively homogeneous of degree one by compactness of $A$. We will also assume that $H$ is continuously differentiable on $\{(x,p)\in\Omega\times {\mathbb R}^{n}:H(x,p)>0\}$.
The case in which we are mostly interested in the following sections is when $$\label{eqsigma}
f(x,a)=\sigma(x)a, \quad\sigma:{\mathbb R}^n\to M_{n\times m}$$ where $M_{n\times m}$ is the set of $n\times m$ matrices and $A=B_1(0)\subset{\mathbb R}^m$ is the closed unit ball. In this case $H(x,p)=|p\sigma(x)|$.
Given a smooth function $U\in C^1(\Omega)$ and $x_o\in \Omega\backslash{\mathcal H}$, where $\mathcal H$ is the singular set as in (\[eqsing\]), we consider the hamiltonian dynamics $$\label{eqhd}
\left\{\begin{array}{ll}
\dot x_t=-H_p(x_t,\nabla U(x_t)),\\
x_0=x_o\in\Omega,
\end{array}\right.$$ where $H_p$ indicates the gradient of the Hamiltonian $H=H(x,p)$ with respect to the group of [*adjoint*]{} variables $p$.
When the Hamiltonian $H(x,\nabla U(x))$ is differentiable, notice that for $a_{x}\in A$ such that $-f(x,a_x)\cdot \nabla U(x)=H(x,\nabla U(x))$ we have that $$-H_p(x,\nabla U(x))=f(x,a_x).$$ Therefore trajectories of (\[eqhd\]) are indeed trajectories of the system (\[eqsystem\]) and moreover (\[eqhd\]) is a closed loop system of (\[eqsystem\]) with feedback $a_x$. If in particular $f(x,a)$ is as in (\[eqsigma\]), then, for $|p\sigma(x)|\neq0$, $$H(x,p)=|p\sigma(x)|, \quad H_p(x,p)=\sigma(x)\frac{^t\sigma(x) \;^tp}{H(x,p)}, \quad a_x=-\frac{^t\sigma(x)\nabla U(x)}{H(x,\nabla U(x))}\in B_1(0).$$ Therefore in this case the feedback control is at least continuous on $\Omega\backslash{\mathcal H}$ and the closed loop system always has a well defined local solution starting out on that set.
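As a schematic illustration of this feedback construction, the sketch below integrates the closed loop system with an explicit Euler scheme; the Grushin-type matrix $\sigma$ and the quadratic candidate $U$ used at the end are hypothetical placeholders, chosen only to exercise the formulas (in particular this $U$ is not claimed to solve (AE)).

```python
import numpy as np

def feedback_step(x, grad_U, sigma, dt):
    """One explicit Euler step of the closed loop dynamics dx/dt = sigma(x) a_x,
    with the continuous feedback a_x = - sigma(x)^T grad U(x) / |sigma(x)^T grad U(x)|."""
    q = sigma(x).T @ grad_U(x)
    n = np.linalg.norm(q)
    if n == 0.0:
        # x lies on the singular set {H(x, grad U(x)) = 0}: the feedback is undefined
        return x, True
    return x + dt * sigma(x) @ (-q / n), False

# Hypothetical illustration: Grushin-type vector fields sigma(x,y) = [[1,0],[0,x]]
# and the placeholder Lyapunov candidate U(x,y) = (x^2 + y^2)/2, so grad U = (x, y).
sigma = lambda z: np.array([[1.0, 0.0], [0.0, z[0]]])
grad_U = lambda z: z.copy()

z, on_singular_set = np.array([1.0, 1.0]), False
for _ in range(3000):
    z, on_singular_set = feedback_step(z, grad_U, sigma, dt=1e-3)
    if on_singular_set:
        break
print(z, on_singular_set)
```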
We want to discuss when $H(x_t,\nabla U(x_t))$ is monotone on a trajectory $x_t$ of (\[eqhd\]). If we can compute derivatives, then we need to discuss the sign of $$\frac d{dt}H(x_t,\nabla U(x_t))=\nabla(H(x_t,\nabla U(x_t)))\cdot \dot x_t=-\nabla(H(x_t,\nabla U(x_t)))\cdot H_p(x_t,\nabla U(x_t)).$$ Therefore a sufficient condition is that $U\in C^2(\Omega\backslash{\mathcal H})$ is a super or subsolution of the following pde $$\label{eqae}
-\nabla(H(x,\nabla U(x)))\cdot H_p(x,\nabla U(x))=0,\quad x\in \Omega\backslash{\mathcal H},$$ which is named Aronsson equation in the literature. Notice that $H(x_t,\nabla U(x_t))$ is actually constant if $U$ is a classical solution of (\[eqae\]). The above computation is correct only under the supposed regularity on $U$ and unfortunately if such regularity is not satisfied and we interpret super/subsolutions of (\[eqae\]) as viscosity solutions this is no longer true in general, as the following example shows. Notice that if $H$ is not differentiable at a point $(x_o,\nabla U(x_o))$ where $H(x_o,\nabla U(x_o))=0$, then $H_p(x_o,\nabla U(x_o))$ is multivalued, precisely the closed convex subgradient of the Lipschitz function $H(x_o,\cdot)$ computed at $
\nabla U(x_o)$ and contains the origin by the symmetry of the system. Therefore the dynamics (\[eqhd\]) has at least the constant solution also in this case. In some statements below it will sometimes be more convenient to look at (AE) for $H^2$ in order to gain regularity at points where $H$ vanishes.
\[exinfinity\] In the plane, suppose that $H^2(x,y,p_x,p_y)=(|p_x|^2+|p_y|^2)/2$ hence it is smooth and independent of the state variables. In this case (AE) becomes the well known infinity Laplace equation $$-\Delta_\infty U(x)=-D^2U(x)\nabla U(x)\cdot\nabla U(x)=0.$$ It is easy to check that a viscosity solution of the equation is $u(x,y)=|x|^{4/3}-|y|^{4/3}$. The function $u\in C^{1,1/3}({\mathbb R}^2)\backslash C^2$. Among solutions of the Hamiltonian dynamics $(\dot x_t,\dot y_t)=-\nabla U(x_t,y_t)$, we can find the following two trajectories $$(x^{(1)}_t,y^{(1)}_t)=\left(\left(1-\frac89t\right)^{3/2},0\right),\quad(x^{(2)}_t,y^{(2)}_t)=\left(0,\left(1+\frac89t\right)^{3/2}\right),$$ defined in a neighborhood of $t=0$. Clearly the Hamiltonian along the two trajectories is $$H(\nabla U(x_t^{(1)},y_t^{(1)}))=\frac{2\sqrt{2}}3\sqrt{1-\frac89t},\quad H(\nabla U(x_t^{(2)},y_t^{(2)}))=\frac{2\sqrt{2}}3\sqrt{1+\frac89t},$$ it is strictly decreasing in the first case, strictly increasing in the second but it is never constant. Therefore the remark that we made at the beginning fails in this example. In the next section we are going to understand the reason.
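Incidentally, the fact that $u$ solves the infinity Laplace equation can be checked by hand away from the coordinate axes, where it is smooth: for $x,y>0$ (the other quadrants follow by symmetry), $$u_x=\tfrac43x^{1/3},\quad u_y=-\tfrac43y^{1/3},\quad u_{xx}=\tfrac49x^{-2/3},\quad u_{yy}=-\tfrac49y^{-2/3},\quad u_{xy}=0,$$ so that $$\Delta_\infty u=u_{xx}u_x^2+2u_{xy}u_xu_y+u_{yy}u_y^2=\tfrac49x^{-2/3}\cdot\tfrac{16}9x^{2/3}-\tfrac49y^{-2/3}\cdot\tfrac{16}9y^{2/3}=0,$$ while on the axes the equation has to be understood in the viscosity sense.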
Monotonicity of the Hamiltonian along the Hamiltonian dynamics
==============================================================
Throughout this section, we consider a Hamiltonian not necessarily with the structure as in (\[eqhamiltonian\]) but satisfying the following: $$\tag{H1}\label{eqh1}
\begin{array}{c}
\;H:\Omega\times{\mathbb R}^n\to{\mathbb R}\hbox{ is continuous and }H(x,-p)=H(x,p),\\
H_p(x,p)\hbox{ exists and is continuous for all }
(x,p)\in \Omega\times{\mathbb R}^n\hbox{ if }H(x,p)>0.
\end{array}$$ We will also refer to the following property: $$\tag{H2}\label{eqh2}
H(x,\cdot) \hbox{ is positively }r>0 \hbox{ homogeneous, for all }x\in\Omega.$$ Given $U\in C^1(\Omega)$, the monotonicity of the Hamiltonian along trajectories of (\[eqhd\]) is the object of this section. It is a consequence of the following known general result.
\[propmonotone\] Let $\Omega\subset{\mathbb R}^n$ be an open set and $F:\Omega\to{\mathbb R}^n$ be a continuous vector field. The following are equivalent:
- [$V:\Omega\to{\mathbb R}$ is a continuous viscosity solution of $-F(x)\cdot\nabla V(x)\leq 0$ in $\Omega$.]{}
- [The system $(V,F)$ is forward weakly increasing, i.e. for every $x_o\in\Omega$, there is a solution of the differential equation $\dot x_t=F(x_t)$, for $t\in [0,{\varepsilon})$, $x_0=x_o$ such that $V(x_s)\leq V(x_t)$ for $0\leq s\leq t$.]{}
Moreover the following are also equivalent
- [$V:\Omega\to{\mathbb R}$ is a continuous viscosity solution of $F(x)\cdot\nabla V(x)\geq 0$ in $\Omega$.]{}
- [The system $(V,F)$ is backward weakly increasing, i.e. for every $x_o\in\Omega$, there is a solution of the differential equation $\dot x_t=F(x_t)$, for $t\in (-{\varepsilon},0]$, $x_0=x_o$ such that $V(x_s)\leq V(x_t)$ for $s\leq t\leq0$.]{}
\[corclarke\] Let $\Omega\subset{\mathbb R}^n$ be an open set and $F:\Omega\to{\mathbb R}^n$ be a continuous vector field. The following are equivalent:
- [$V:\Omega\to{\mathbb R}$ is a continuous viscosity solution of $-F(x)\cdot\nabla V(x)\leq 0$ and of $F(x)\cdot\nabla V(x)\geq 0$ in $\Omega$.]{}
- [The system $(V,F)$ is weakly increasing, i.e. for every $x_o\in\Omega$, there is a solution of the differential equation $\dot x_t=F(x_t)$, for $t\in (-{\varepsilon},{\varepsilon})$, $x_0=x_o$ such that $V(x_s)\leq V(x_t)$ for $s\leq t$.]{}
The proof of the previous statement can be found in [@clsw0], see also [@clsw]. When $F\in C^1$ another proof can be found in Proposition 5.18 of [@bcd] or can be deduced from the optimality principles in optimal control proved in [@soopt], when $F$ is locally Lipschitz continuous. In the case when $F$ is locally Lipschitz, the two differential inequalities in (i) of Corollary \[corclarke\] turn out to be equivalent and of course there is also uniqueness of the trajectory of the dynamical system $\dot x=F(x)$, $x(0)=x_o$. When (ii) in the Corollary is satisfied by all trajectories of the dynamical system then the system is said to be strongly monotone. This occurs in particular if there is at most one trajectory, as when $F$ is locally Lipschitz continuous. More general sufficient conditions for strong monotonicity can be found in [@clsw], see also [@drw].
In view of the above result, we introduce the following definition.
\[defcone\] Let $\Omega\subset{\mathbb R}^n$ be open and let $H:\Omega\times{\mathbb R}^n\to{\mathbb R}$ satisfying (\[eqh1\]). We say that a function $U\in C^1(\Omega)$ is a $C^1-$supersolution (resp. subsolution) of the Aronsson equation (\[eqae\]) in $\Omega$, if setting $V(x)=H(x,\nabla U(x))$ and $F(x)=-H_p(x,\nabla U(x))$ we have that $V$ is a viscosity subsolution (resp. supersolution) of $-F(x)\cdot\nabla V(x)=0$ and a supersolution (resp. a subsolution) of $F(x)\cdot\nabla V(x)=0$.
It is worth pointing out explicitely the consequence we have reached by Proposition \[propmonotone\].
Let $U\in C^1(\Omega)$ be a $C^1-$supersolution (resp, subsolution) of (\[eqae\]). For $x_o\in\Omega\backslash{\mathcal H}$, then there is a trajectory $x_t$ of the Hamiltonian dynamics (\[eqhd\]) such that $H(x_t,\nabla U(x_t))$ is nondecreasing (resp. nonincreasing).
- [Notice that if $U$ is a $C^1-$solution of (\[eqae\]) and the Hamiltonian dynamics (\[eqhd\]) is either strongly decreasing and strongly increasing, as for instance if it has a unique solution for a given initial condition, then for all trajectories $x_t$ of (\[eqhd\]), $H(x_t,\nabla U(x_t))$ is constant. ]{}
- [In order to comment back to Example \[exinfinity\], notice that while $U(x,y)=|x|^{4/3}-|y|^{4/3}$ is a $C^1$ function, nevertheless, as easily checked, $V(x,y)=H^2(\nabla U(x,y))=16(|x|^{2/3}+|y|^{2/3})/9$ is only a viscosity subsolution but not a supersolution of $$-\nabla V(x)\cdot (-H_p^2(\nabla U(x)))=0,$$ while it is a viscosity solution of $\nabla V(x)\cdot (-H_p^2(\nabla U(x)))=0$. Then it turns out that the Hamiltonian is weakly increasing on the trajectories of the Hamiltonian dynamics. Indeed there is another trajectory of the Hamiltonian dynamics such that $(x^{(3)}(0),y^{(3)}(0))=(1,0)=(x^{(1)}(0),y^{(1)}(0))$, namely $$(x^{(3)}(t),y^{(3)}(t))=\left(\left(1-\frac89t\right)^{3/2},\left(\frac89t\right)^{3/2}\right)$$ along which the Hamiltonian is actually constant, until the trajectory is well defined. ]{}
- [It is clear by Example \[exinfinity\] that while classical $C^2$ solutions of (\[eqae\]) are $C^1-$solutions, continuous or even $C^1$ viscosity solutions in general are not. The definition of $C^1-$solution that we introduced is meant to preserve the monotonicity property of the Hamiltonian on the trajectories of the Hamiltonian dynamics. ]{}
- [Observe that if $U$ is a $C^1-$solution, then $-U$ is a $C^1-$solution as well, since the Hamiltonian is unchanged and the vector field in the Hamiltonian dynamics becomes the opposite. ]{}
It may look unpleasant that Definition \[defcone\] of solution of (\[eqae\]) refers to a property that is not formulated directly for the function $U$. Therefore in the next statement we will reformulate the above definition. The property (ED) below will give an equivalent definition of a $C^1-$solution.
Let $U\in C^1(\Omega)$ and $H$ satisfying (\[eqh1\]), (\[eqh2\]). The following two statements are equivalent:
- [ for all $x_o\in\Omega\backslash{\mathcal H}$, there is a trajectory $x_t$ of the Hamiltonian dynamics (\[eqhd\]), such that if $\varphi\in C^2([0,{\varepsilon}))\cup C^2((-{\varepsilon},0])$ is a test function and $U(x_t)-\varphi(t)$ has a minimum (respectively maximum) at $0$ and $\dot\varphi(0)=\frac{d}{dt}U(x_t)|_{t=0}$, then we have that $$-\ddot \varphi(0)\geq0\;(\hbox{resp. }\leq0).$$]{}
- [$U$ is a $C^1-$supersolution (resp. subsolution) of (\[eqae\]).]{}
In particular, if $H$ is $C^1$ at $\{(x,p):H(x,p)\neq0\}$, a $C^1-$supersolution (resp. subsolution) is a viscosity supersolution (resp. subsolution) of (\[eqae\]).
In the statement of (ED), when the hamiltonian vector field $F(x)=-H_p(x,\nabla U(x))$ is locally Lipschitz continuous, we may restrict the test functions to $\varphi\in C^2(-{\varepsilon},{\varepsilon})$.
We only prove the statement for supersolutions, the other case being similar. Let $U\in C^1(\Omega)$.
Suppose first that (ED) holds true. Let $V(x)=H(x,\nabla U(x))$ and $\Phi\in C^1(\Omega)$ such that $V-\Phi$ has a maximum at $x_o$, $V(x_o)=\Phi(x_o)$. Therefore if $x_t$ is a solution of the hamiltonian dynamics (\[eqhd\]) that satisfies (ED), we have that, by homogeneity of $H(x,\cdot)$ and for $F(x)=-H_p(x,\nabla U(x))$, $$r\Phi(x_t)\geq rV(x_t)=rH(x_t,\nabla U(x_t))=-\nabla U(x_t)\cdot F(x_t)=-\frac d{dt}U(x_t).$$ Thus integrating for small $t>0$ we get $$\label{eqmis}
\varphi(t):=U(x_o)-r\int_0^t\Phi(x_s)\;ds\leq U(x_t),$$ and thus $U(x_t)-\varphi(t)$ has a minimum at $t=0$ on $[0,{\varepsilon})$ for ${\varepsilon}$ small and $\dot\varphi(0)=-r\Phi(x_o)=\frac{d}{dt}U(x_t)|_{t=0}$. If instead $V-\Phi$ had a minimum at $x_o$, then integrating on $(t,0]$ for $t<0$ small enough, we would still obtain the same as in (\[eqmis\]). By (ED), from (\[eqmis\]) we get in both cases $$0\geq\ddot\varphi(0)=-r\frac d{dt}\Phi(x_t)|_{t=0}=-r\nabla \Phi(x_o)\cdot F(x_o),$$ where $F(x)=-H_p(x,\nabla U(x))$. Therefore we conclude that $V$ is a viscosity subsolution of $-\nabla V\cdot F\leq 0$ (or a supersolution of $\nabla V\cdot F\geq 0$ when $V-\Phi$ has a minimum at $x_o$). Finally by definition, $U$ is a $C^1-$supersolution of (\[eqae\]).
Suppose now that $U$ is a $C^1-$supersolution of (\[eqae\]). Then by Proposition \[propmonotone\], for all $x_o\in \Omega\backslash{\mathcal H}$, we can find a trajectory $x_t$ of the dynamics (\[eqhd\]) such that $rV(x_t)=-\frac d{dt}U(x_t)$ is nondecreasing. Therefore $U(x_t)$ is a concave function of $t$. Let $\varphi\in C^2((-{\varepsilon},0])\cup C^2([0,{\varepsilon}))$ be such that $U(x_t)-\varphi(t)$ has a minimum at $t=0$, $U(x_o)=\varphi(0)$ and $\frac d{dt}U(x_t)|_{t=0}=\dot\varphi(0)$. If we had $\ddot\varphi(0)>0$ then $\varphi$ would be strictly convex in its domain. Therefore for $t\neq0$ small enough, and in the domain of $\varphi$, $$U(x_t)\geq\varphi(t)>\varphi(0)+\dot\varphi(0)t=U(x_o)+\frac d{dt}U(x_t)|_{t=0}t\geq U(x_t),$$ by concavity of $U(x_t)$. This is a contradiction.
We finally prove the last statement, namely that a $C^1-$solution is a viscosity solution. For a $C^1-$supersolution $U$ of (\[eqae\]), let now $\Phi\in C^2(\Omega)$ be such that $U-\Phi$ has a minimum at $x_o$. By (ED), for a suitable solution $x_t$ of (\[eqhd\]) we have that $U(x_t)-\varphi(t)$ has a minimum at $t=0$ if $\varphi(t)=\Phi(x_t)$, and in particular $\dot\varphi(0)=\frac{d}{dt}U(x_t)|_{t=0}$. By (ED) and homogeneity of $H(x,\cdot)$, $$\begin{aligned}
0\leq -\ddot\varphi(0)=\frac d{dt}\nabla\Phi(x_t)\cdot H_p(x_t,\nabla\Phi(x_t))|_{t=0}=r\frac d{dt}H(x_t,\nabla \Phi(x_t))|_{t=0}
\\=-r\nabla(H(x_o,\nabla \Phi(x_o)))\cdot H_p(x_o,\nabla \Phi(x_o)).
\end{array}$$ Therefore $U$ is a viscosity supersolution of (\[eqae\]). The case of subsolutions is similar and we skip it.
We end this section by proving another important property of $C^1-$solutions of (\[eqae\]), which in the literature was the main motivation for the study of (AE).
\[teoam\] Let $\Omega\subset{\mathbb R}^n$ be open and bounded, $H$ satisfying (\[eqh1\]), and having the structure (\[eqhamiltonian\]). Let $U\in C^1(\Omega)\cap C({\overline\Omega})$ be a $C^1-$solution of (\[eqae\]). If $W\in C({\overline\Omega})$ is any function such that $$\label{eqam}\left\{\begin{array}{ll}
H(x,\nabla W(x))\leq k\in{\mathbb R},\quad& x\in\Omega,\\
W(x)=U(x),&x\in\partial\Omega
\end{array}\right.$$ in the viscosity sense, then $H(x,\nabla U(x))\leq k$ in $\Omega$.
When $D\subset{\mathbb R}^n$ is an open set and the property of a function $U\in C^1(D)$ in Theorem \[teoam\] holds for all open subsets $\Omega\subset D$, then we say that $U$ is an [*absolutely minimizing function*]{} in $D$ for the Hamiltonian $H$. This means that $U$ is a local minimizer of $\|H(\cdot,\nabla U(\cdot))\|_{L^\infty}$. It is well known that for the infinity Laplace equation, where one minimizes the Lipschitz constant of $U$, being a viscosity solution is equivalent to being an absolutely minimizing function. Such an equivalence is also known for coercive Hamiltonians and for the norm of the horizontal gradient in some Carnot-Carathéodory spaces. For more general Hamiltonians this equivalence is not known. Here we prove one implication, at least for $C^1-$solutions of (\[eqae\]).
Let $U,W$ be as in the statement and suppose for convenience that $H(x,\cdot)$ is positively 1-homogeneous. We define $V(x)=H(x,\nabla U(x))\geq0$ and look at solutions $x_t$ of the Hamiltonian dynamics (\[eqhd\]). If $V(x_o)=0$, then clearly $V(x_o)\leq k$ and we have nothing left to show. If otherwise $V(x_o)>0$, then since $U$ is a $C^1-$solution of (\[eqae\]) we already know that we can construct a solution of (\[eqhd\]) starting out at $x_o\in\Omega$ such that $V(x_t)$ is nondecreasing for $t\geq0$ and nonincreasing for $t\leq0$ (by a concatenation of two trajectories of (\[eqhd\]) with monotone Hamiltonian). Since $\Omega$ is bounded, the curve $x_t$ cannot stay indefinitely in $\Omega$ because, as we already observed, $$U(x_t)-U(x_o)\leq -\int_0^tV(x_s)\;ds\leq-t V(x_o),\quad \hbox{for }t\geq 0,$$ and $$U(x_t)-U(x_o)\geq -t V(x_o),\quad \hbox{for }t\leq 0.$$ Hence $x_t$ will hit $\partial\Omega$ forward and backward in finite time. Let $t_1<0<t_2$ be such that $x_{t_1},x_{t_2}\in\partial\Omega$ and $x_t\in \Omega$ for $t\in(t_1,t_2)$. Therefore $$\label{eqab}
U(x_{t_2})+t_2V(x_o)\leq U(x_o)\leq U(x_{t_1})+t_1V(x_o)$$ and then $$\label{eqaa}
W(x_{t_1})-W(x_{t_2})=U(x_{t_1})-U(x_{t_2})\geq (t_2-t_{1})V(x_o).$$ Now we use the differential inequality (\[eqam\]) in the viscosity sense and the lower optimality principle in control theory as in [@soopt] for subsolutions of the Hamilton-Jacobi equation. Therefore since $x_t$ is a trajectory of the control system (\[eqsystem\]) we have that for all ${\varepsilon}>0$ and $t_1+{\varepsilon}<t<t_2$, as $x_s\in\Omega$ for $s\in[t_1+{\varepsilon},t]$, $$W(x_{t_1+{\varepsilon}})\leq k(t-t_1-{\varepsilon})+W(x_{t}).$$ By letting $t\to t_2-$ and ${\varepsilon}\to0+$ we conclude, by continuity of $W$ at the boundary of $\Omega$ and (\[eqaa\]), $$V(x_o)(t_2-t_1)\leq W(x_{t_1})-W(x_{t_2})\leq k(t_2-t_1)$$ which is what we want.
Notice that in (\[eqab\]) equalities hold if $V$ is constant on a given trajectory of (\[eqhd\]) and we obtain that $$\frac{U(x_o)-U(x_{t_1})}{t_1}=\frac{U(x_o)-U(x_{t_2})}{t_2}$$ and then $$U(x_o)=\frac{t_2}{t_2-t_1}U(x_{t_1})-\frac{t_1}{t_2-t_1}U(x_{t_2}),$$ which is an implicit representation formula for $U$ through its boundary values, since the points $x_{t_1},x_{t_2}$ depend on the Hamiltonian dynamics (\[eqhd\]) and $U$ itself.
Lyapunov functions and (AE)
===========================
In this section, we go back to the structure (\[eqhamiltonian\]) for $H$ and want to discuss the classical idea of a control Lyapunov function. Let ${\mathcal T}\subset{\mathbb R}^n$ be a closed target set; we want to find $U:{\mathbb R}^n\to[0,+\infty)$, at least lower semicontinuous, such that $U(x)=0$ if and only if $x\in{\mathcal T}$ and such that for all $x\in{\mathbb R}^n\backslash{\mathcal T}$ there exists a control $a_\cdot\in L^\infty(0,+\infty)$ and $t_x\leq+\infty$ such that the corresponding trajectory of (\[eqsystem\]) satisfies: $$U(x_t)\mbox{ is nonincreasing and }U(x_t)\to0,\quad\hbox{as }t\to t_x.$$ Classical necessary and sufficient conditions lead one to look for strict supersolutions of the Hamilton-Jacobi equation, namely to find $U$ such that $$\label{eqlyap}
H(x,\nabla U(x))\geq l(x),$$ with $l:{\mathbb R}^n\to[0,+\infty)$ continuous and such that $l(x)=0$ if and only if $x\in{\mathcal T}$. The case ${\mathcal T}=\{0\}$ is already quite interesting for the theory.
Here we will apply the results of the previous section and plan to consider Lyapunov functions built as follows. We analyse the existence of $U\in C^1(\Omega\backslash({\mathcal T}\cap{\mathcal H}))\cap {C(\overline{\Omega\backslash{\mathcal T}})}$ such that $U$ is a $C^1-$supersolution of (AE), i.e. satisfies $$\label{eqaei}
-\nabla(H(x,\nabla U(x)))\cdot H_p(x,\nabla U(x))\geq0\quad x\in\Omega\backslash({\mathcal T}\cap{\mathcal H}).$$
To study (\[eqaei\]) in the case when $H$ is as in (\[eqhamiltonian\]) and $f$ as in (\[eqsigma\]), it is sometimes more convenient to write it for the Hamiltonian squared $H^2(x,\nabla U(x))=|\nabla U(x)\sigma(x)|^2$. Thus $$\begin{array}{ll}
-\nabla(H^2(x,\nabla U(x)))\cdot (H^2)_p(x,\nabla U(x))=-4\;^tD(\nabla U\sigma(x))\;^t(\nabla U(x)\sigma(x))\cdot \left(\sigma(x)\;^t(\nabla U(x)\sigma(x))\right)\\
\quad=-4S^*\;^t(\nabla U(x)\sigma(x))\cdot \;^t(\nabla U(x)\sigma(x)),
\end{array}$$ where we have set $$S=\;^t\sigma(x)^tD(\nabla U\sigma(x))=\;^t\sigma(x)D^2U(x)\sigma(x)+\left(D\sigma_j\sigma_i(x)\cdot \nabla U(x)\right)_{i,j=1,\dots,m},$$ $\sigma_j$, $j=1,\dots,m$, are the columns of $\sigma$, and $S^*=(S+\;^tS)/2$. Therefore a special sufficient condition for $U$ to satisfy (\[eqaei\]) is that $S^*$ is negative semidefinite, which means that $U$ is $\sigma-$concave with respect to the family of vector fields $\sigma_j$, in the sense of Bardi-Dragoni [@badr]. We recall that the matrix $S$ also appears in [@so3] to study second order controllability conditions for symmetric control systems.
Define the minimum time function for system (\[eqsystem\]) as $$T(x)=\inf_{a\in L^\infty(0,+\infty)}t_x(a),$$ where $t_x(a)=\inf\{t\geq0:x_t\in{\mathcal T},\;x_t \mbox{ solution of }(\ref{eqsystem})\}\leq+\infty$. We prove the following result; recall that ${\mathcal H}=\{x:H(x,\nabla U(x))=0\}$ is the singular set.
\[propfeed\] Let $\Omega\subset{\mathbb R}^n$ be open and ${\mathcal T}\subset\Omega$ a closed target. Let $H$ have the structure (\[eqhamiltonian\]). Assume that $U\in C(\overline{\Omega\backslash{\mathcal T}})\cap C^1(\Omega\backslash({\mathcal T}\cap{\mathcal H}))$ is nonnegative and a $C^1-$solution of (\[eqaei\]) in $\Omega\backslash({\mathcal T}\cap{\mathcal H})$ and that $U(x)=0$ for $x\in{\mathcal T}$, $U(x)=M$ for $x\in\partial\Omega$ and $U(x)\in (0,M)$ for $x\in\Omega\backslash{\mathcal T}$ and some $M>0$. For any $x_o\in\Omega\backslash({\mathcal T}\cup{\mathcal H})$ there exists a solution of the closed loop system (\[eqhd\]) such that
- [$H(x_t,\nabla U(x_t))$ is a nondecreasing function of $t$; ]{}
- [$U(x_t)$ is a strictly decreasing function of $t$ ]{}
- [The trajectory $(x_t)_{t\geq0}$ reaches the target in finite time and the minimum time function for system (\[eqsystem\]) satisfies the estimate $$\label{eqmte}
T(x_o)\leq \frac {U(x_o)}{H(x_o,\nabla U(x_o))}.$$ ]{}
Statement (i) follows from the results of the previous section since $U$ is a supersolution of (AE). Let $x_o$ be a point where $H(x_o,\nabla U(x_o))>0$. By homogeneity of the Hamiltonian we get, for $t\geq0$, $$0<H(x_o,\nabla U(x_o))\leq H(x_t,\nabla U(x_t))=\nabla U(x_t)\cdot H_p(x_t,\nabla U(x_t))=-\frac d{dt}U(x_t)$$ and (ii) follows. Integrating now the last inequality we obtain $$0\leq U(x_t)\leq U(x_o)-H(x_o,\nabla U(x_o))t$$ and thus the solution of (\[eqhd\]) reaches the target before time $$\label{eqtime}
\bar t=\frac{U(x_o)}{H(x_o,\nabla U(x_o))}.$$ Therefore (\[eqmte\]) follows by definition.
The estimate (\[eqmte\]) can be used to obtain local regularity of the minimum time function. The proof of regularity now follows a more standard path, although under weaker assumptions than in the usual literature, and will allow us to obtain a new regularity result. We emphasize that nothing is assumed in the next statement about the structure of the vectogram $f(x,A)$ when $x\in{\mathcal T}$. In particular, the target need not even be small time locally attainable.
\[thmregularity\] Let $\Omega\subset{\mathbb R}^n$ be open and ${\mathcal T}\subset\Omega$ a closed target. Assume that $U\in C(\overline{\Omega\backslash{\mathcal T}})\cap C^1(\Omega\backslash({\mathcal T}\cap{\mathcal H}))$ is nonnegative and $C^1-$solution of (\[eqaei\]) in $\Omega\backslash({\mathcal T}\cap{\mathcal H})$ and that $U(x)=0$ for $x\in{\mathcal T}$, $U(x)=M$ for $x\in\partial\Omega$ and $U(x)\in (0,M)$ for $x\in\Omega\backslash{\mathcal T}$ and some $M>0$. Let $d(x)=\mbox{dist}(x,{\mathcal T})$ be the distance function from the target. Suppose that $U$ satisfies the following: for all ${\varepsilon}>0$ there are $\delta,c>0$ such that $$\label{eqexcond}
U(x)\leq c\; d(x),\quad \mbox{if }H(x,\nabla U(x))\geq{\varepsilon},\;d(x)<\delta.$$ Then the minimum time function $T$ for system (\[eqsystem\]) to reach the target is finite and locally Lipschitz continuous in $\Omega\backslash({\mathcal T}\cup{\mathcal H})$.
Let $x_o\in\Omega$, $x_o\notin({\mathcal T}\cup{\mathcal H})$ and $r,{\varepsilon}>0$ be such that $H(x,\nabla U(x))\geq{\varepsilon}$, for all $x\in B_{r}(x_o)$. The parameter $r$ will be chosen small enough, as specified below. We apply the assumption (\[eqexcond\]) and find $\delta,c>0$ correspondingly. The fact that $T$ is finite in $B_r(x_o)$, for $r$ sufficiently small, follows from Proposition \[propfeed\].
Take $x^1,x^2\in B_r(x_o)$ and suppose that $x^1_t,x^2_t$ are the trajectories of (\[eqsystem\]) corresponding to the initial conditions $x_0=x^1,x^2$ respectively. To fix the ideas we may suppose that $T(x^2)\leq T(x^1)<+\infty$ and for any $\rho\in(0,1]$ we choose a control $a^\rho$ and time $t_2=t_{x^2}(a^{\rho})\leq T(x^2)+\rho$ such that $d(x^2_{t_2})=0$. Note that by (\[eqmte\]), $t_2\leq \frac{U(x^2)}{{\varepsilon}}+\rho\leq M_{\varepsilon}$, for all $x^2\in B_r(x_o)$. Moreover by the Gronwall inequality for system (\[eqsystem\]) and since $d(x^2_{t_2})=0$, $$d(x^1_{t_2})\leq|x^1_{t_2}-x^2_{t_2}|\leq|x^1-x^2|e^{Lt_2}\leq|x^1-x^2|e^{LM_{\varepsilon}}$$ and the right hand side is smaller than $\delta$ if $r$ is small enough. Now we can estimate, by the dynamic programming principle and by (\[eqmte\]), (\[eqexcond\]), $$0\leq T(x^1)-T(x^2)\leq (t_2+T(x^1_{t_2}))-t_2+\rho\leq \frac{U(x^1_{t_2})}{\varepsilon}+\rho \leq\frac c{\varepsilon}d(x^1_{t_2})+\rho\leq \frac{ce^{LM_{\varepsilon}}}{\varepsilon}|x^1-x^2|+\rho.$$ As $\rho\to0+$, the result follows.
The extra estimate (\[eqexcond\]) is crucial for the sought regularity of the minimum time function but, contrary to the existing literature, it is only required on a possibly proper subset of a neighborhood of the target. We will show in the examples of the next section how it may follow from (AE) as well. In order to achieve small time local attainability of the target, one needs in addition that the system can evade from $\mathcal H$.
In addition to the assumptions of Theorem \[thmregularity\] suppose that ${\mathcal H}$ is a manifold of codimension at least one and that for all $x_o\in{\mathcal H}\cap(\Omega\backslash{\mathcal T})$ we have $f(x_o,A)\not\subset T_{x_o}({\mathcal H})$, the tangent space of $\mathcal H$ at $x_o$. Then for any $x_o\in \Omega\backslash{\mathcal T}$ we can reach the target in finite time.
By following the vector field $f(x_o,a)\notin T_{x_o}({\mathcal H})$, we immediately exit the singular set.
Some smooth explicit solutions of the Aronsson equation
=======================================================
In this section we show two examples of well known nonlinear systems where we can find an explicit smooth solution of (AE) and then apply Theorem \[thmregularity\] to obtain local Lipschitz regularity of the minimum time function. Our system will be in the form (\[eqhamiltonian\]), (\[eqsigma\]) and ${\mathcal T}=\{0\}$.
Hörmander-like vector fields.
-----------------------------
We consider the case where $x=(x_h,x_v)\in{\mathbb R}^{m+1}$ and $$\label{eqhormander}
\sigma(x)=\left(\begin{array}{cc}
I_m\\^t(Bx_h)\end{array}\right),$$ where $I_m$ is the $m\times m$ identity matrix and $B$ is not singular, $^tB=-B=B^{-1}$ is also $m\times m$. In particular $m$ is an even number and $|Bx_h|=|x_h|$. It is known that the corresponding symmetric control system is globally controllable to the origin and that its minimum time function is locally $1/2-$Hölder continuous. We want to prove higher regularity except on its singular set.
We consider the two functions $$\label{eqgauge}
u(x)=|x_h|^4+4x_v^2,\quad U(x)=(u(x))^{1/4},$$ and want to show that $U$ is a solution of (AE) for $H^2$ in ${\mathbb R}^{m+1}\backslash\{0\}$. $U$ is a so-called gauge function for the family of vector fields. We easily check that, after denoting $A(x)=\sigma(x)\;^t\sigma(x)$, $$\begin{array}{c}
\nabla u(x)=(4|x_h|^2x_h,8x_v),\quad
A(x)\;^t\nabla u(x)=\left(\begin{array}{cc}
4|x_h|^2x_h+8x_vBx_h\\8x_v|x_h|^2\end{array}\right),\\
H^2(x,\nabla u(x))=|\nabla u(x)\sigma(x)|^2=A(x)\;^t\nabla u(x)\cdot \;^t\nabla u(x)=16|x_h|^6+64x_v^2|Bx_h|^2=16|x_h|^2u(x),\\
H(x,\nabla U(x))=\frac{|x_h|}{U(x)}.
\end{array}$$ Notice in particular that $H(x,\nabla U(x))=0$ if and only if $x_h=0$ and thus the singular set $\{x:H(x,\nabla U(x))=0\}$ contains the target and is a smooth manifold, being the $x_v$ axis. As a consequence of the last displayed equation we have $$U(x)\leq\frac{|x_h|}{\varepsilon}\leq\frac{|x|}{\varepsilon},\quad\hbox{on the set where }H(x,\nabla U(x))\geq{\varepsilon},$$ which is precisely the information we need to apply Theorem \[thmregularity\]. Finally, if $x\neq0$, $$\begin{array}{l}-\nabla(H^2(x,\nabla U(x)))\cdot (H^2)_p(x,\nabla U(x))
=-2\left(\frac{(x_h,0)}{U^2(x)}-\frac{|x_h|^2}{U^3(x)}\nabla U(x)\right)\cdot A(x)\;^t\nabla U(x)\\
=-\frac{2}{U^3(x)}\left(4U(x)\frac{|x_h|^4}{4U^3(x)}-|x_h|^2\frac{|x_h|^2}{U^2(x)}\right)=0.
\end{array}$$ Therefore $U$ is even a classical $C^2$ solution of (AE) for Hamiltonian $H^2$ in ${\mathbb R}^{m+1}\backslash\{0\}$ and then $H$ is constant along the trajectories of the closed loop system (\[eqhd\]). Hence, by Theorem \[thmregularity\], the system (\[eqsystem\]) is controllable in finite time to the origin from $$\{x:H(x,\nabla U(x))>0\}={\mathbb R}^{m+1}\backslash\{(0,x_v):x_v\in{\mathbb R}\}$$ and the corresponding minimum time function is locally Lipschitz continuous on that set. Notice that, for ${\varepsilon}<1$, $\{x:H(x,\nabla U(x))\geq{\varepsilon}\}=\{x:4x_v^2\leq(1/{\varepsilon}^4-1)|x_h|^4\}$. Also the last Corollary applies.
Consider the symmetric control system $$\label{eqssystem}
\left\{\begin{array}{ll}
\dot x_t=\sigma(x_t)a_t,&\quad t>0,\\
x_o\in{\mathbb R}^n,
\end{array}\right.$$ where $\sigma$ is given in (\[eqhormander\]). Then the gauge function (\[eqgauge\]) is a solution of the Aronsson equation (\[eqae\]) for $H^2$ in ${\mathbb R}^{m+1}\backslash\{0\}$, it is an absolutely minimizing function for the corresponding $L^\infty$ norm of the subelliptic gradient and the minimum time function to reach the origin is locally Lipschitz continuous in $\{x=(x_h,x_v)\in{\mathbb R}^{m+1}:x_h\neq0\}$. The system is small time locally controllable and there is a continuous feedback leading the system to the target outside the singular set.
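As a sanity check on the computations above, the algebra can be verified symbolically in the smallest case $m=2$, taking $B$ to be the rotation by $\pi/2$ (so that $^tB=-B=B^{-1}$), which gives Heisenberg-type vector fields. The sketch below is only an illustration of ours under these assumptions; it is not part of the original derivation.

```python
# Symbolic check: U(x) = (|x_h|^4 + 4 x_v^2)^{1/4} solves (AE) for H^2 when
# m = 2 and B = [[0, -1], [1, 0]]  (so that ^tB = -B = B^{-1}).
import sympy as sp

x1, x2, xv = sp.symbols('x1 x2 xv', real=True)
xh2 = x1**2 + x2**2                       # |x_h|^2
u = xh2**2 + 4*xv**2
U = u**sp.Rational(1, 4)                  # gauge function

# sigma(x) = [ I_2 ; ^t(B x_h) ]  with  B x_h = (-x2, x1);   A = sigma ^t sigma
sigma = sp.Matrix([[1, 0], [0, 1], [-x2, x1]])
A = sigma * sigma.T

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x1, x2, xv)])
gU = grad(U)

H2 = sp.simplify((gU.T * A * gU)[0])      # H^2(x, grad U) = |x_h|^2 / sqrt(u)
print(sp.simplify(H2 - xh2 / sp.sqrt(u))) # 0, i.e. H(x, grad U) = |x_h| / U(x)

# Aronsson operator for H^2:  -grad(H^2(x, grad U)) . (H^2)_p(x, grad U),
# with (H^2)_p(x, p) = 2 A(x) p
print(sp.simplify(-(grad(H2).T * (2 * A * gU))[0]))   # 0 away from the origin

# By (eqmte), the minimum time to reach the origin then satisfies
# T(x) <= U(x) / H(x, grad U(x)) = U(x)^2 / |x_h|  wherever x_h != 0.
```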
Grushin vector fields.
----------------------
We consider the system where $x=(x_h,x_v)\in {\mathbb R}^{m+1}$ and $$\label{eqgrushin}
\sigma(x)=\left(\begin{array}{cc}
I_m\quad &0_m\\0&^tx_h\end{array}\right),$$ where $\sigma(x)$ is an $(m+1)\times 2m$ matrix. Also in this case it is known that the corresponding symmetric control system is globally controllable to the origin and that its minimum time function is locally $1/2-$Hölder continuous. We consider $u,\;U$ as before in (\[eqgauge\]) and want to show that $U$ is a solution of (AE) in ${\mathbb R}^{m+1}\backslash\{0\}$. In this case we can check that $$A(x)\;^t\nabla u(x)=\left(\begin{array}{cc}
4|x_h|^2x_h\\8x_v|x_h|^2\end{array}\right),\quad H^2(x,\nabla u(x))=16|x_h|^2u(x),
\quad H(x,\nabla U(x))=\frac{|x_h|}{U(x)},$$ and again we have, for ${\varepsilon}>0$, $$U(x)\leq\frac{|x_h|}{\varepsilon}\leq\frac{|x|}{\varepsilon},\quad\hbox{on the set where }H(x,\nabla U(x))\geq{\varepsilon}.$$ Finally, if $x\neq0$, $$\begin{array}{l}-\nabla(H^2(x,\nabla U(x)))\cdot (H^2)_p(x,\nabla U(x))
=-\frac{2}{U^3(x)}\left(U(x)(x_h,0)-|x_h|^2\nabla U(x)\right)\cdot A(x)\;^t\nabla U(x)\\
=-\frac{2}{U^3(x)}\left(4U(x)\frac{|x_h|^4}{4U^3(x)}-|x_h|^2\frac{|x_h|^2}{U^2(x)}\right)=0.
\end{array}$$ Therefore $U$ is a solution of (AE) for the Hamiltonian $H^2$, hence the system (\[eqsystem\]) is controllable in finite time to the origin from $\{x:H(x,\nabla U(x))>0\}$, and we obtain the following result.
Consider the symmetric control system (\[eqssystem\]) where $\sigma$ is given in (\[eqgrushin\]). Then the gauge function (\[eqgauge\]) is a solution of (AE) for $H^2$ in ${\mathbb R}^{m+1}\backslash\{0\}$, it is an absolutely minimizing function for the corresponding $L^\infty$ norm of the subelliptic gradient and the minimum time function to reach the origin is locally Lipschitz continuous in $\{x=(x_h,x_v)\in{\mathbb R}^{m+1}:x_h\neq0\}$.
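The symbolic check sketched in the previous subsection applies here as well (again for $m=2$, purely as an illustration of ours): only the vector fields change.

```python
# Grushin case, m = 2: in the previous script replace sigma by
sigma = sp.Matrix([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, x1, x2]])
A = sigma * sigma.T                # = diag(1, 1, x1**2 + x2**2)
# Both prints again return 0:  H(x, grad U) = |x_h|/U(x)  and  U solves (AE) for H^2.
```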
[99]{}
, *Minimization problems for the functional ${\rm sup}\sb{x}\,F(x,\,f(x),\,f\sp{\prime} (x))$*, Ark. Mat. [**6**]{} (1965), 33–53.
, *A tour of the theory of absolutely minimizing functions*, Bull. Amer. Math. Soc. [**41**]{} (2004), no. 4, 439–505.
, *Partial regularity for solutions to subelliptic eikonal equations,* C. R. Math. Acad. Sci. Paris [**356**]{} (2018), no. 2, 172–176.
, *Regularity results for the minimum time function with Hörmander vector fields,* J. Differential Equations [**264**]{} (2018), no. 5, 3312–3335.
*Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations. With appendices by Maurizio Falcone and Pierpaolo Soravia,* Systems & Control: Foundations & Applications. Birkhäuser Boston, Inc., Boston, MA, 1997.
*Convexity and semiconvexity along vector fields,* Calc. Var. Partial Differential Equations 42 (2011), no. 3-4, 405–427.
, *The Euler equation and absolute minimizers of L$^{\infty}$ functionals*, Arch. Ration. Mech. Anal. [**157**]{} (2001), no. 4, 255–283.
, *Properties of infinite harmonic functions of Grushin-type spaces*, Rocky Mountain J. Math. [**39**]{} (2009), 729–756.
, *The Aronsson-Euler equation for absolutely minimizing Lipschitz extensions with respect to Carnot-Caratheodory metrics*, Trans. Am. Math. Soc. [**357**]{} (2005), 795-823.
, *Qualitative properties of trajectories of control systems: a survey*, J. Dynam. Control Systems 1 (1995), no. 1, 1–48.
, *Nonsmooth analysis and control theory*, Graduate Texts in Mathematics, 178. Springer-Verlag, New York, 1998.
, *An efficient derivation of the Aronsson equation*, Arch. Ration. Mech. Anal. [**167**]{} (2003), no. 4, 271–279.
, *Strong invariance and one-sided Lipschitz multifunctions*, Nonlinear Anal. 60 (2005), no. 5, 849–862.
, *Uniqueness of Lipschitz extensions: minimizing the sup norm of the gradient*, Arch. Rational Mech. Anal. [**123**]{} (1993), no. 1, 51–74.
, *Minimization problems for Lipschitz functions via viscosity solutions*, Dissertation, University of Jyvaskula, Jyvaskula, 1998. Ann. Acad. Sci. Fenn. Math. Diss. [**115**]{} (1998), 53 pp.
, *On the small-time controllability of discontinuous piece-wise linear systems*, [*Systems Control Lett.*]{}, 62(2):218–223, 2013.
, *A [H]{}ölder condition for [B]{}ellman’s function*, [*Differencial’nye Uravnenija*]{}, 13(12):2180–2187, 2301, 1977.
, *Second order conditions for the controllability of nonlinear systems with drift*, [*Commun. Pure Appl. Anal.*]{}, 5(4):861–885, 2006.
, *Sufficient conditions for small time local attainability for a class of control systems*, In [*Large-scale scientific computing*]{}, [*Lect. Notes Comput. Sci.*]{} 9374, 117-125. Springer, Cham, 2015.
, *Small-time local attainability for a class of control systems with state constraints*, [*ESAIM Control Optim. Calc. Var.*]{}, 23(3):1003–1021, 2017.
, *Asymptotic controllability and Lyapunov-like functions determined by Lie brackets*, SIAM J. Control Optim. 56 (2018), no. 2, 1508–1534.
, *Controllability of autonomous systems.*, , 4:606–617, 1968.
, *H[ö]{}lder continuity of the minimum-time function for $C^1$-manifold targets*, [*J. Optim. Theory Appl.*]{}, 75(2):401–421, 1992.
*Optimality principles and representation formulas for viscosity solutions of Hamilton-Jacobi equations. II. Equations of control problems with state constraints*, Differential Integral Equations 12 (1999), no. 2, 275–293.
*Existence of absolute minimizers for noncoercive Hamiltonians and viscosity solutions of the Aronsson equation,* Math. Control Relat. Fields 2 (2012), no. 4, 399–427.
, *Absolute minimizers, Aronsson equation and Eikonal equations with Lipschitz continuous vector fields*, In: International conference for the 25th anniversary of viscosity solutions. Tokyo, 4–6 June 2007, Gakuto Int. Series, Gakkotosho Co., Ltd., (2008), 30, 175–19.
*On Aronsson equation and deterministic optimal control,* Appl. Math. Optim. 59 (2009), no. 2, 175–201.
*Some results on second order controllability conditions*, to appear.
, *The Aronsson equation for absolute minimizers of $L\sp \infty$-functionals associated with vector fields satisfying Hörmander’s condition*, Trans. Amer. Math. Soc. [**359**]{} (2007), 91–113.
[^1]: email: [email protected].
---
author:
- |
Ozsel Kilinc\
Electrical Engineering Department\
University of South Florida\
Tampa, FL 33620\
`[email protected]`\
Ismail Uysal\
Electrical Engineering Department\
University of South Florida\
Tampa, FL 33620\
`[email protected]`\
bibliography:
- 'bibliography.bib'
title: 'GAR: An efficient and scalable Graph-based Activity Regularization for semi-supervised learning'
---
Appendices
==========
Datasets used in the experiments
--------------------------------
Table \[tab:datasets\] summarizes the properties of the datasets used in the experiments. The following preprocessing steps have been applied to these datasets.
- **MNIST**: Images were normalized by 255.
- **SVHN**: We applied sample-wise centering and normalization, i.e. we set each sample mean to 0 and divided each input by its standard deviation.
- **NORB**: Following [@MaaloeSSW16], images were downsampled to $32 \times 32$ and uniform noise between 0 and 1 was added to each pixel value. We then normalized the NORB images by 256 and applied both sample-wise and feature-wise centering and normalization. (A minimal code sketch of these normalization steps is given after this list.)
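The sketch below illustrates the normalization steps above in NumPy; it is our own illustration, and the array shapes (NHWC image tensors) and variable names are assumptions rather than the exact code used for the experiments.

```python
import numpy as np

def samplewise_normalize(x):
    """Set each sample's mean to 0 and divide by its standard deviation."""
    x = x.astype(np.float32)
    mean = x.mean(axis=(1, 2, 3), keepdims=True)        # assumes NHWC tensors
    std = x.std(axis=(1, 2, 3), keepdims=True) + 1e-7
    return (x - mean) / std

def featurewise_normalize(x):
    """Set each pixel's mean to 0 and standard deviation to 1 across samples."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-7)

# MNIST:  x = x / 255.0
# SVHN:   x = samplewise_normalize(x)
# NORB:   x = samplewise_normalize((x + np.random.uniform(0, 1, x.shape)) / 256.0)
#         x = featurewise_normalize(x)    # after downsampling to 32 x 32
```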
Models used in the experiments
------------------------------
Table \[tab:models\] summarizes all models used in the experiments. Reported MNIST results have been obtained using the model named *6-layer CNN*, whereas SVHN results were obtained with the *9-layer CNN-2* model and NORB results with the *9-layer CNN* model. The results obtained with different models are also presented in the following sections to show the effect of the chosen model on the test accuracy. The ReLU activation function has been used for all models. For both supervised pretraining and unsupervised regularization, models were trained using stochastic gradient descent with the following settings: $lr=0.01$, $decay=1e^{-6}$, $momentum=0.95$ with Nesterov updates.
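For reference, these settings correspond to an optimizer configured as in the sketch below. It is written against the Keras `SGD` interface purely as an assumption of ours (the appendix does not name the framework), and `model` is a placeholder for any of the networks in Table \[tab:models\].

```python
from keras.optimizers import SGD

# Settings quoted above: lr = 0.01, decay = 1e-6, momentum = 0.95, Nesterov updates.
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.95, nesterov=True)
# model.compile(optimizer=sgd, loss='categorical_crossentropy')
```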
Effects of hyperparameters
--------------------------
### Effect of the chosen model
Figure \[fig:img\_appdx\_model\] presents the test accuracy curves with respect to the unsupervised training epochs obtained using different models. The proposed unsupervised regularization improves the test accuracy in all models. However, the best case depends on the chosen model specifications.
### Effect of $\nicefrac{b_L}{b_U}$ ratio and applying dropout during unsupervised regularization
The labeled/unlabeled data ratio of the unsupervised training batches is the most critical hyperparameter of the proposed regularization. Figure \[fig:img\_appdx\_batch\_ratio\] visualizes the effect of this ratio for the MNIST and SVHN datasets. These two datasets have different characteristics. The MNIST dataset has lower variance among its samples than SVHN. As a result, even when the labeled examples introduced to the network during the supervised pretraining are not blended into the unsupervised training batches, i.e. $b_L = 0$, the performance is not affected dramatically. However, for the SVHN dataset, reducing the $b_L$ proportion of the unsupervised training batches significantly affects the accuracy, and further decreasing $b_L$ reduces the stability of the regularization.
One can also observe another phenomenon through the MNIST results in Figure \[fig:img\_appdx\_batch\_ratio\]. That is, as $b_L$ approaches $m_L$, the generalization of the model degrades. This effect can be better observed in Figure \[fig:img\_appdx\_bad\_hypers\], which includes a further step, i.e. $b_L=96$ when $m_L=100$. Since the same examples start to dominate the batches of unsupervised regularization, overfitting occurs and ultimately the test accuracy significantly drops. Figure \[fig:img\_appdx\_bad\_hypers\] also presents the effect of applying dropout during the unsupervised regularization. Dropping out the weights during the unsupervised training dramatically affects the accuracy. This effect is more obvious when ${b_L}$ is smaller. Hence, for the experiments, we removed the dropouts from the models specified in Table \[tab:models\] during the unsupervised training and applied the following strategy to decide on the $\nicefrac{b_L}{b_U}$ ratio: $b_L$ is approximately assigned as one tenth of the number of all labeled examples $m_L$, i.e. $b_L \approx \nicefrac{m_L}{10}$, and then $b_U$ is chosen to complement the batch size up to 128, i.e. $b_U=128-b_L$.
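This batch-composition strategy can be summarized by the following sketch (our illustration; the function and variable names are placeholders).

```python
import numpy as np

def make_unsupervised_batch(x_labeled, x_unlabeled, m_L, batch_size=128):
    """Blend labeled and unlabeled examples into one unsupervised batch,
    using b_L ~ m_L / 10 labeled examples and b_U = batch_size - b_L."""
    b_L = max(1, int(round(m_L / 10)))
    b_U = batch_size - b_L
    idx_L = np.random.choice(len(x_labeled), b_L, replace=False)
    idx_U = np.random.choice(len(x_unlabeled), b_U, replace=False)
    return np.concatenate([x_labeled[idx_L], x_unlabeled[idx_U]], axis=0)
```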
### Effect of regularization coefficients $c_\alpha$, $c_\beta$ and $c_F$
The effects of the regularization coefficients are presented in Figure \[fig:img\_reg\_coeffs\] for the MNIST dataset. Part (a) of the figure visualizes the case when $c_F$ is held constant but the ratio $\nicefrac{c_\alpha}{c_\beta}$ changes, and part (b) illustrates the case when $\nicefrac{c_\alpha}{c_\beta}$ is held constant but $c_F$ changes. We can say that as long as $c_\alpha \ge c_\beta$, the ratio $\nicefrac{c_\alpha}{c_\beta}$ does not affect the accuracy significantly. Furthermore, the value of $c_F$ is not critical (performance is close both with $c_F=1e^{-6}$ and $c_F=1e^{-15}$) unless it is so large that it distorts the regularization. Therefore, we can say that the proposed unsupervised regularization term is considerably robust with respect to the coefficients $c_\alpha$, $c_\beta$ and $c_F$. This can also be seen from the fact that we have applied the same coefficients in the experiments on all three datasets.
More on activity regularization
-------------------------------
Recall that $$\boldsymbol{N}:=\boldsymbol{B}^T\boldsymbol{B}=
\begin{bmatrix}
\sum\limits^{m}{B_{i1}B_{i1}} & \sum\limits^{m}{B_{i1}B_{i2}} & \dots & \sum\limits^{m}{B_{i1}B_{in}} \\
\sum\limits^{m}{B_{i2}B_{i1}} & \sum\limits^{m}{B_{i2}B_{i2}} & \dots & \sum\limits^{m}{B_{i2}B_{in}} \\
\vdots & \vdots & \ddots & \vdots \\
\sum\limits^{m}{B_{in}B_{i1}} & \sum\limits^{m}{B_{in}B_{i2}} & \dots & \sum\limits^{m}{B_{in}B_{in}}
\end{bmatrix}$$
then we can rewrite $\alpha\big(\boldsymbol{B}\big)$ in terms of $\boldsymbol{B}$ only
$$\alpha\big(\boldsymbol{B}\big) =\frac{\sum\limits_{i \ne j}^n{N_{ij}}}{(n-1)\sum\limits_{i = j}^n{N_{ij}}}=
\frac{2\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n N_{ij}}{(n-1)\sum\limits_{i=1}^n{N_{ii}}}=
\frac{2\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n \sum\limits_{k=1}^{m}B_{ki}B_{kj}}{(n-1)\sum\limits_{i=1}^n{\sum\limits_{k=1}^{m}B_{ki}B_{ki}}}$$
Also recall that $\boldsymbol{v}$ represents the diagonal entries of $\boldsymbol{N}$ such that $\boldsymbol{v}:=\begin{bmatrix}N_{11} & N_{22} & \dots & N_{nn}\end{bmatrix}$ and
$$\boldsymbol{V}:=\boldsymbol{v}^T\boldsymbol{v}=
\begin{bmatrix}
N_{11}N_{11} & N_{11}N_{22} & \dots & N_{11}N_{nn} \\
N_{22}N_{11} & N_{22}N_{22} & \dots & N_{22}N_{nn} \\
\vdots & \vdots & \ddots & \vdots \\
N_{nn}N_{11} & N_{nn}N_{22} & \dots & N_{nn}N_{nn}
\end{bmatrix}
=\begin{tiny}
\begin{bmatrix}
\sum\limits^{m}{B_{i1}^2}\sum\limits^{m}{B_{i1}^2} & \sum\limits^{m}{B_{i1}^2}\sum\limits^{m}{B_{i2}^2}& \dots & \sum\limits^{m}{B_{i1}^2}\sum\limits^{m}{B_{in}^2} \\
\sum\limits^{m}{B_{i2}^2}\sum\limits^{m}{B_{i1}^2} & \sum\limits^{m}{B_{i2}^2}\sum\limits^{m}{B_{i2}^2} & \dots & \sum\limits^{m}{B_{i2}^2}\sum\limits^{m}{B_{in}^2} \\
\vdots & \vdots & \ddots & \vdots \\
\sum\limits^{m}{B_{in}^2}\sum\limits^{m}{B_{i1}^2} & \sum\limits^{m}{B_{in}^2}\sum\limits^{m}{B_{i2}^2} & \dots & \sum\limits^{m}{B_{in}^2}\sum\limits^{m}{B_{in}^2}
\end{bmatrix}
\end{tiny}$$
$1-\beta\big(\boldsymbol{B}\big)$ can be written in terms of $\boldsymbol{N}$ as follows:
$$1- \beta\big(\boldsymbol{B}\big) = 1 - \frac{\sum\limits_{i \ne j}{V_{ij}}}{(n-1)\sum\limits_{i = j}{V_{ij}}} =
\frac{\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n (N_{ii}-N_{jj})^2}{(n-1)\sum\limits_{i=1}^n{N_{ii}^2}}$$
If we further replace $\boldsymbol{N}$ with $\boldsymbol{B}$ then $$1- \beta\big(\boldsymbol{B}\big) = \frac{\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n \bigg(\sum\limits_{k=1}^{m}B_{ki}B_{ki}-\sum\limits_{k=1}^{m}B_{kj}B_{kj}\bigg)^2}{(n-1)\sum\limits_{i=1}^n{\sum\limits_{k=1}^{m}(B_{ki}B_{ki})^2}}$$
Recall that the proposed unsupervised loss is
$$\label{unsupervised_objective}
\mathcal{U}\big(\boldsymbol{B}\big)= c_{\alpha}\alpha\big(\boldsymbol{B}\big) + c_{\beta}\big(1-\beta\big(\boldsymbol{B}\big)\big) + c_F||\boldsymbol{B}||^2_F$$
then the overall unsupervised loss can be written in terms of $\boldsymbol{B}$ as follows:
$$\mathcal{U}\big(\boldsymbol{B}\big) = c_\alpha\begin{pmatrix}
\frac{2\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n \sum\limits_{k=1}^{m}B_{ki}B_{kj}}{(n-1)\sum\limits_{i=1}^n{\sum\limits_{k=1}^{m}B_{ki}^2}}\end{pmatrix}+c_\beta\begin{pmatrix}\frac{\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n \bigg(\sum\limits_{k=1}^{m}B_{ki}^2-\sum\limits_{k=1}^{m}B_{kj}^2\bigg)^2}{(n-1)\sum\limits_{i=1}^n{\sum\limits_{k=1}^{m}B_{ki}^4}}\end{pmatrix}+c_F\begin{pmatrix}\sum\limits_{i=1}^n \sum\limits_{k=1}^mB_{ki}^2\end{pmatrix}$$
where $m$ is the number of examples, $n$ is the number of output nodes and $$\boldsymbol{B} = g\big(\boldsymbol{X}\big) = \max{\bigg(\boldsymbol{0}, \big(\boldsymbol{Y}^{(L-1)}\boldsymbol{W}^{(L)} + \boldsymbol{b}^{(L)}\big)\bigg)}$$
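For concreteness, the overall unsupervised loss can be evaluated directly from $\boldsymbol{N}=\boldsymbol{B}^T\boldsymbol{B}$. The NumPy sketch below mirrors the expressions above; it is our illustration only, and the default coefficient values are placeholders rather than necessarily those used in the experiments.

```python
import numpy as np

def unsupervised_loss(B, c_alpha=1.0, c_beta=1.0, c_F=1e-6):
    """U(B) = c_alpha * alpha(B) + c_beta * (1 - beta(B)) + c_F * ||B||_F^2,
    with alpha(B) and beta(B) computed from N = B^T B as in the equations above."""
    n = B.shape[1]
    N = B.T @ B                       # n x n
    v = np.diag(N)                    # diagonal of N
    V = np.outer(v, v)                # V = v^T v

    alpha = (N.sum() - np.trace(N)) / ((n - 1) * np.trace(N))
    beta = (V.sum() - np.trace(V)) / ((n - 1) * np.trace(V))
    frob = np.sum(B ** 2)             # ||B||_F^2
    return c_alpha * alpha + c_beta * (1.0 - beta) + c_F * frob
```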
---
abstract: 'We report the relationship between epitaxial strain and the crystallographic orientation of the in-phase rotation axis and $A$-site displacements in $Pbnm$-type perovskite films. Synchrotron diffraction measurements of EuFeO$_3$ films under strain states ranging from 2 % compressive to 0.9 % tensile on cubic or rhombohedral substrates exhibit a combination of $a^-a^+c^-$ and $a^+a^-c^-$ rotational patterns. We compare the EuFeO$_3$ behavior with previously reported experimental and theoretical work on strained $Pbnm$-type films on non-orthorhombic substrates, as well as additional measurements from LaGaO$_3$, LaFeO$_3$, and Eu$_{0.7}$Sr$_{0.3}$MnO$_3$ films on SrTiO$_3$. Compiling the results from various material systems reveals a general strain dependence in which compressive strain strongly favors $a^-a^+c^-$ and $a^+a^-c^-$ rotation patterns and tensile strain weakly favors $a^-a^-c^+$ structures. In contrast, EuFeO$_3$ films grown on $Pbnm$-type GdScO$_3$ under 2.3 % tensile strain take on a uniform $a^-a^+c^-$ rotation pattern imprinted from the substrate, despite strain energy considerations that favor the $a^-a^-c^+$ pattern. These results point to the use of substrate imprinting as a more robust route than strain for tuning the crystallographic orientations of the octahedral rotations and $A$-site displacements needed to realize rotation-induced hybrid improper ferroelectricity in oxide heterostructures.'
author:
- 'A. K. Choquette'
- 'C. R. Smith'
- 'R. J. Sichel-Tissot'
- 'E. J. Moon'
- 'M. D. Scafetta'
- 'E. Di Gennaro'
- 'F. Miletto Granozio'
- 'E. Karapetrova'
- 'S. J. May'
title: 'Octahedral rotation patterns in strained EuFeO$_3$ and other $\pmb{Pbnm}$ perovskite films: Implications for hybrid improper ferroelectricity'
---
Introduction
============
Epitaxial heterostructures of $AB$O$_3$ perovskite oxides have attracted considerable interest as a route toward altering or enhancing properties through epitaxial strain, superlattice formation, and interfacial phenomena.[@Schlom2008; @Hwang2012; @Zubko2011; @Bhattacharya14] Recently, the control of local atomic structure, in particular $B$O$_6$ octahedral distortions and rotations and $A$-site displacements, has emerged as a promising strategy for designing functional properties in perovskite films.[@Bousquet08; @Rondinelli2012a; @BenedekJSSC; @MoonNC] One example of structure-driven design in oxide heterostructures is the prediction of hybrid improper ferroelectricity in ($A^{'}B$O$_3$)/($AB$O$_3$) superlattices where both $A^{'}B$O$_3$ and $AB$O$_3$ are perovskites that exhibit the orthorhombic $Pbnm$ structure in bulk.[@Rondinelli2012; @Mulder2013; @GhoshPRB15; @Benedek15] In such superlattices, the inequivalent displacements of the $A$ and $A^{'}$ cations produce a ferrielectric state. This design principle is predicated on the $A$-site displacements occurring within the plane of the superlattice, perpendicular to the superlattice growth direction. A similar design approach has been used to predict that (SrRuO$_3$)$_1$/(CaRuO$_3$)$_1$ superlattices are polar metals.[@Puggioni14]
A key challenge for experimentally verifying such predictions lies in the quantitative measurement of octahedral behavior and $A$-site positions in thin films, as the primary technique used in bulk perovskites - powder diffraction - is not accessible in studies of epitaxial films. Recent work has shown the promise of synchrotron diffraction,[@May2010; @ChangPRB11; @Rotella2012; @Johnson13; @LuPRB13; @Fister14; @ZhangPRB14; @Zhai14; @Biegalski14] coherent Bragg rod analysis,[@Fister14; @Kumah14] electron microscopy,[@JiaPRB09; @BorisevichPRL10; @AsoCGD14] and electron diffraction[@HwangAPL12] to probe octahedral rotations in perovskite films. In particular, the synchrotron diffraction approach is based on the measurement of half-order Bragg peaks that arise from the unit cell doubling nature of the octahedral rotations.[@Glazer1975; @Woodward05] The presence or absence of specific half-order peaks is a direct signature of the pattern of octahedral rotations within the material. The rotation pattern is denoted using Glazer notation, in which in-phase, out-of-phase, or absence of rotations are signified by +, -, or 0 superscripts, respectively, along a given pseudocubic direction.[@Glazer1972; @Woodward1997] Axes with equal rotational magnitude are denoted by the same letter. For example, the $a^-a^-a^-$ pattern consists of equal out-of-phase rotation angles along all three pseudocubic axes, while the $a^-a^-c^+$ pattern has equivalent out-of-phase rotations along two axes and an in-phase rotation of differing magnitude along the third axis. This latter pattern corresponds to the orthorhombic $Pbnm$ perovskite structural variation, which is one of the most common crystal structures for oxide perovskites.[@Thomas1989; @Woodward1997a] Materials in this structure also exhibit $A$-site displacements in the plane normal to the in-phase rotation axis.
There is limited understanding of what determines the direction of the in-phase rotation axis, and therefore the $A$-site displacements, in epitaxial $Pbnm$-type perovskite films and superlattices despite the clear importance of this knowledge for the realization of new ferroic materials. While there have been numerous reports of the rotation pattern within a single film,[@Copie2013; @Proffit2008; @Choi2010a; @Han2009; @Kan2013] systematic experimental studies probing the effect of a single variable, such as strain or composition, on the rotation pattern in $Pbnm$-type films are lacking.
In this work, we report on the rotation patterns of strained EuFeO$_3$ films, a perovskite that exhibits the $Pbnm$ structure in bulk form.[@Marezio1970a] These results are compared with previously reported experimental and theoretical work on strained $Pbnm$-type films, as well as new measurements from LaGaO$_3$, LaFeO$_3$, and Eu$_{0.7}$Sr$_{0.3}$MnO$_3$ films, revealing a general strain dependence in which compressive strain strongly favors $a^-a^+c^-$ and $a^+a^-c^-$ rotation patterns and tensile strain weakly favors $a^-a^-c^+$ structures. However, EuFeO$_3$ grown on orthorhombic GdScO$_3$ (110) exhibits a uniform $a^-a^+c^-$ orientation matching that of the substrate, despite the 2.3 % tensile strain imposed by the substrate. This result indicates that the use of substrate templating is a more deterministic route than strain for controlling the in-phase rotation axis and $A$-site displacement orientation in perovskite films and superlattices.
Experimental Techniques
=======================
EuFeO$_3$ films were grown on SrTiO$_3$ (STO) (001), (LaAlO$_3$)$_{0.3}$(Sr$_2$AlTaO$_6$)$_{0.7}$ (LSAT) (001), LaAlO$_3$ (LAO) (001), and GdScO$_3$ (GSO) (110) substrates using oxide molecular beam epitaxy (MBE). The growth conditions are described in Ref. . The thickness of the EuFeO$_3$ films are between 35-40 unit cells (13-15 nm) thick as determined from x-ray diffraction. LaFeO$_3$ and Eu$_{0.7}$Sr$_{0.3}$MnO$_3$ films were also deposited by MBE on STO (001) using conditions reported in Ref. and Ref. , respectively. The LaGaO$_3$ film was grown on STO (001) using pulsed laser deposition as described in Ref. . The LaGaO$_3$, Eu$_{0.7}$Sr$_{0.3}$MnO$_3$, and LaFeO$_3$ films are 25, 40, and 129 unit cells thick, respectively. A previously published reciprocal space map from this LaFeO$_3$ film confirms that it is coherently strained to the STO substrate.[@Scafetta13] Synchrotron diffraction measurements were performed at Sector 33-BM-C of the Advanced Photon Source. All measurements were carried out at room temperature. Photon energies of 15.5 keV and 16 keV were used for the EuFeO$_3$ and Eu$_{0.7}$Sr$_{0.3}$MnO$_3$ measurements, respectively. The LaGaO$_3$ and LaFeO$_3$ films were measured with 10 keV photons. The GenX software package[@Bjorck2007] was used to simulate the measured ($00L$) data, from which $c$-axis parameters and film thicknesses were obtained. Volume fractions of different structural orientations were obtained by analyzing peak areas after applying Lorentz polarization and beam footprint corrections.
EuFeO$_3$ Films
===============
![(Color online) The crystal structure of bulk EuFeO$_3$, representative of $Pbnm$-type perovskites, viewed along the pseudocubic \[001\] (a) and \[100\] (b) directions; structural data from Ref. .[]{data-label="fig:Fig1"}](Fig1){width="3.5"}
![(color online) (a)-(d) Measured (00$L$) scans for EuFeO$_3$ films on LAO, LSAT, STO, and GSO; (e) the EuFeO$_3$ $c$-axis parameters obtained on each substrate; (f),(g) reciprocal space maps about the (113) Bragg peak for the films on LAO and GSO.[]{data-label="fig:Fig2"}](Fig2){width="7"}
![(color online) Omega scans through $H$=$K$=$L$ half-order diffraction peaks from EuFeO$_3$ films on STO, LAO and GSO. The inset shows an $L$-scan through the ($\frac{1} {2}$ $\frac{1} {2}$ $\frac{1} {2}$) condition for a LaNiO$_3$ film on STO.[]{data-label="fig:Fig3"}](Fig3){width="3.0"}
![(color online) Half-order peaks with one integer index (e.g. of the ($\frac{1} {2}$ 1 $\frac{3} {2}$) type) measured from EuFeO$_3$ films on (a) LAO, (b) LSAT, (c) STO, and (d) GSO.[]{data-label="fig:Fig4"}](Fig4){width="7"}
![(color online) Regions of reciprocal space near the {$\frac{1} {2}$ $\frac{3} {2}$ 1} Bragg conditions for EuFeO$_3$ on GSO. The ($\frac{1} {2}$ 1 $\frac{3} {2}$) peak is shown in (a); the (1 $\frac{1} {2}$ $\frac{3} {2}$) region of reciprocal space is shown in (b); the ($\frac{1} {2}$ $\frac{3} {2}$ 1) region of reciprocal space is shown in (c). Only the ($\frac{1} {2}$ 1 $\frac{3} {2}$) peak is present (a), mirroring the substrate. In (a), the white arrow highlights the peak from GSO; the black arrow highlights the peak from the EuFeO$_3$. The scale bar indicates the natural log of the measured intensity.[]{data-label="fig:Fig5"}](Fig5)
![(color online) The ($\pm\frac{1} {2}$ $\pm\frac{1} {2}$ $\frac{3} {2}$) peaks measured from EuFeO$_3$ films on (a) LAO, (b) STO, and (c) GSO. The films on LAO and STO show evidence of equal populations of the rotational domains. The film on GSO shows evidence of unequal rotational domains.[]{data-label="fig:Fig6"}](Fig6_2){width="2.5"}
The orthorhombic $Pbnm$ structure is one of the most common perovskite variants among oxides and is the structure exhibited by bulk EuFeO$_3$ at room temperature. Within the thin film community, the orthorhombic lattice is commonly converted to a pseudocubic structure in which the orthorhombic \[100\] is equivalent to the pseudocubic \[110\] and $a_o = \sqrt{2} a_p$. The \[001\] direction is unchanged but the pseudocubic $c$-axis parameter is half that of the orthorhombic $c$-axis parameter. This pseudocubic lattice will be used throughout this work. Two key features of this structure, shown in Fig. \[fig:Fig1\], are the presence of the $a^-a^-c^+$ rotation pattern and $A$-displacements. The $a^-a^-c^+$ rotation pattern indicates out-of-phase $B$O$_6$ rotations about two in-plane directions (pseudocubic \[100\] and \[010\]) and in-phase rotations along the out-of-plane \[001\] direction. The $A$-site cations are displaced within the plane perpendicular to the $c^+$ rotation axis along directions close to the <110>. Both the octahedral rotations and $A$-site displacements act to double the pseudocubic unit cell, leading to half-order diffraction peaks. Throughout this work, we will denote the growth direction as the $c$-axis of the film. The $a^-a^+c^-$ pattern indicates the in-phase axis lies along an in-plane film direction. The $a^-a^-c^+$ pattern indicates that the in-phase axis lies along the out-of-the-plane film direction (the growth direction). In this study, we do not quantify the rotation angles and therefore cannot distinguish between $a^-a^+c^-$ and $a^-b^+c^-$. We use $a^-a^+c^-$ throughout this work with the understanding that this entails both $a^-a^+c^-$ and $a^-b^+c^-$. However, for films grown on STO, LAO, and LSAT substrates, we anticipate that the in-plane rotations, $\alpha$ and $\beta$, are equal due to the same in-plane lattice constants along the $a$ and $b$-axes. For films on orthorhombic GSO, inequivalent $\alpha$ and $\beta$ angles would be expected leading to $a^-b^+c^-$ or $a^-b^-c^+$ patterns because the in-plane lattice constants of the substrate are not identical.
The bulk structure of EuFeO$_3$ has previously been determined from powder diffraction measurements.[@Marezio1970a] It has pseudocubic lattice parameters, taken from the Fe-Fe distances of 3.882 Å along the $a^-$ axes and 3.842 Å along the $c^+$ axis (the orthorhombic long axis). The reduced $B$-$B$ distance along the in-phase axis compared to the out-of-phase axes is a common feature of the $Pbnm$ structure, having also been reported in bulk LaFeO$_3$,[@Falcon97] LaGaO$_3$,[@Vasylechko99], LaTiO$_3$,[@Komarek07] LaVO$_3$,[@Bordet93] LaCrO$_3$,[@Tezuka98] CaTiO$_3$,[@Liu93] CaFeO$_3$,[@Takeda00] and CaMnO$_3$.[@Chmaissem01] Based solely on these distances, one would expect the in-phase axis to lie out-of-the-plane ($a^-a^-c^+$) for films under tensile strain and in-the-plane ($a^+a^-c^-$ or $a^-a^+c^-$) for films under compressive strain in order to minimize the lattice mismatch with the substrate.
EuFeO$_3$ films were deposited on a variety of commercially available substrates. The lattice mismatch between EuFeO$_3$ and the substrates leads to an average 2% compressive strain on LAO (lattice parameter 3.791 Å), <0.1% strain on LSAT (3.868 Å), 0.9% tensile strain on STO (3.905 Å), and 2.3% tensile strain on GSO (3.968 Å). The measured $00L$ scans are shown in Fig. \[fig:Fig2\](a-d). The obtained EuFeO$_3$ $c$-axis parameters are 3.918 Å on LAO, 3.840 Å on STO, 3.869 Å on LSAT, and 3.806 Å on GSO, consistent with the strain states of the films, shown in Fig. \[fig:Fig2\](e). Further verification of the strain state comes from reciprocal space maps measured about the (113) peak, as shown in Fig. \[fig:Fig2\](f) for EuFeO$_3$/LAO and Fig. \[fig:Fig2\](g) for EuFeO$_3$/GSO. The Bragg peak from the films occurs at the same $H$ and $K$ values as that of the substrate, indicating that the films are coherently strained. Similar reciprocal space maps for the films on LSAT and STO were previously reported in Ref. , which confirm that the EuFeO$_3$ films are strained to LSAT and STO.
A broad survey of half-order diffraction peaks was measured to determine the pattern of octahedral rotations in the films. We first present measurements at $H=K=L$ conditions. These peaks arise from $A$-site displacements with minimal intensity contribution from octahedral rotations.[@Glazer1975] Figure \[fig:Fig3\] displays $\omega$ scans, commonly referred to as rocking curves, through $H=K=L$ regions of reciprocal space for EuFeO$_3$ films on STO, LAO and GSO, which all exhibit peaks. The presence of these peaks is consistent with the presence of $A$-site displacements in the $Pbnm$-type structure. In contrast, a 45 u.c. thick LaNiO$_3$ film, shown in the inset of Fig. \[fig:Fig3\], does not exhibit a ($\frac{1} {2}$ $\frac{1} {2}$ $\frac{1} {2}$) peak, as expected for an $R\bar3c$-type perovskite lacking $A$-site displacements. The broad and intense ($\frac{1} {2}$ $\frac{1} {2}$ $\frac{1} {2}$) peak from the LSAT substrate[@Sang2015] prevented measurement of the film peak at this condition for the EuFeO$_3$/LSAT sample.
We next move to Bragg conditions in which one of the reciprocal lattice positions is an integer and the other two are unequal half-order positions, for example ($\frac{1} {2}$ 1 $\frac{3} {2}$) or ($\frac{1} {2}$ 2 $\frac{5} {2}$) where $K$ is an integer and $H$ $\neq$ $L$. Figure \[fig:Fig4\] shows a series of three peaks in which either *H*, *K*, or *L* is an integer, and the total momentum transfer, *q*, is kept approximately constant. These peaks are present only when the integer reciprocal lattice variable is parallel to the real space direction of the in-phase rotation axis.[@Glazer1975] For example, an $a^-a^-c^+$ pattern produces a ($\frac{1} {2}$ $\frac{3} {2}$ 1) peak. The *A*-site displacements perpendicular to the direction of the in-phase rotation also contribute intensity to these peaks. Therefore, the presence of a ($\frac{1} {2}$ $\frac{3} {2}$ 1)-type peak allows the orientation of the in-phase rotation axis to be determined. For the films on LAO \[Fig. \[fig:Fig4\](a)\] and on LSAT \[Fig. \[fig:Fig4\](b)\], peaks with an integer in either $H$ or $K$ are observed, while peaks with an integer $L$ value are absent. Figure \[fig:Fig4\](a) shows a series of (1 $\frac{1} {2}$ $\frac{3} {2}$)-type peaks for the EuFeO$_3$/LAO sample. While the ($\frac{1} {2}$ 1 $\frac{3} {2}$) and (1 $\frac{1} {2}$ $\frac{3} {2}$) peaks have approximately equal intensity, no intensity is measured at the ($\frac{1} {2}$ $\frac{3} {2}$ 1). Similar data is obtained from EuFeO$_3$/LSAT, shown in Fig. \[fig:Fig4\](b), where a larger value of $L$ is used to better separate the film and substrate peaks. For the EuFeO$_3$/STO film, shown in Fig. \[fig:Fig4\](c), the majority of the film takes a structure of $a^+a^-c^-$ or $a^-a^+c^-$, with only a small fraction (4%) of the film exhibiting $a^-a^-c^+$, as has previously been reported.[@Choquette2015]
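The peak assignments above follow directly from this selection rule: the integer index must lie along the in-phase rotation axis. The short helper below encodes this bookkeeping; it is an illustration of ours, not part of the analysis software used for the measurements.

```python
def allowed_mixed_half_order_peaks(glazer):
    """Return the members of the {1/2 1 3/2}-type family expected for a given
    Glazer pattern, e.g. 'a-a+c-'.  A family member appears only when its
    integer index (H, K or L) is parallel to the in-phase (+) rotation axis."""
    signs = glazer[1::2]                           # rotation sense about H, K, L
    families = ['(1, 1/2, 3/2)', '(1/2, 1, 3/2)', '(1/2, 3/2, 1)']
    return [fam for fam, s in zip(families, signs) if s == '+']

print(allowed_mixed_half_order_peaks('a-a+c-'))    # ['(1/2, 1, 3/2)']: in-phase axis in plane
print(allowed_mixed_half_order_peaks('a-a-c+'))    # ['(1/2, 3/2, 1)']: in-phase axis along growth direction
```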
In contrast, this multi-domain trend is not observed in the EuFeO$_3$/GSO film. Instead, the film exhibits a uniform $a^-a^+c^-$ pattern, which matches that of the GSO substrate. As shown in Fig. \[fig:Fig4\](d), only the ($\frac{1} {2}$ 1 $\frac{3} {2}$) peak is present and both the (1 $\frac{1} {2}$ $\frac{3} {2}$) and ($\frac{1} {2}$ $\frac{3} {2}$ 1) peaks are absent. The ($\frac{1} {2}$ 1 $\frac{3} {2}$) peak is asymmetric due to some contribution from the substrate in the $\omega$ scan. $L$ scans through these three regions of reciprocal space are presented as supplemental materials (Fig. S1).[@note]
Reciprocal space maps measured near these same set of peaks further demonstrate that the film rotation behavior is dependent on that of the substrate. For the film on GSO, peaks at ($\frac{1} {2}$ 1 $\frac{3} {2}$) from both the substrate and film can be seen in Fig. \[fig:Fig5\](a), with the white and black arrows highlighting the substrate and film peak, respectively. There is no intensity from either the substrate or film at the (1 $\frac{1} {2}$ $\frac{3} {2}$) and ($\frac{1} {2}$ $\frac{3} {2}$ 1) conditions \[Fig. \[fig:Fig5\](b,c)\], consistent with a uniform $a^-a^+c^-$ pattern in both the substrate and film.
The {$\pm\frac{1} {2}$ $\pm\frac{1} {2}$ $\frac{3} {2}$} series of peaks, shown in Fig. \[fig:Fig6\], provides additional evidence for the presence of mixed $a^+a^-c^-$ and $a^-a^+c^-$ patterns on LAO and STO, and uniform $a^-a^+c^-$ orientation on GSO. These peaks arise from out-of-phase rotations within the plane of the film ($a^-$)[@Glazer1975] and from $A$-site displacements within the plane perpendicular to the rotation axis. Therefore, the presence of these peaks indicates that the EuFeO$_3$/LAO and EuFeO$_3$/STO films are not $a^+a^+c^-$ but instead contain regions of both $a^+a^-c^-$ and $a^-a^+c^-$ patterns. This is consistent with previous scanning transmission electron microscopy results obtained from an EuFeO$_3$/STO film in which $Pbnm$-type rotations were observed with the in-phase axis lying along different pseudocubic directions.[@Choquette2015]
Additionally, within a given rotation pattern, different rotational domains can arise. Each domain is defined by how the closest octahedron to the origin rotates (clockwise or counterclockwise) about each axis, which in turn dictates the displacement direction of the oxygen atoms within that rotation pattern. To probe these rotational domains, symmetrically equivalent half-order peaks with a fixed $L$ are measured.[@May2010] For the film on STO, we find that the intensity of the four {$\pm\frac{1} {2}$ $\pm\frac{1} {2}$ $\frac{3} {2}$} peaks are equal, indicating an equal population of the rotational domains as would be expected for growth on a cubic substrate. Similar data is obtained from the EuFeO$_3$/LAO sample, indicating that the rotational domains from LAO, which has an $a^-a^-a^-$ pattern, are not transferred into the film due to the symmetry mismatch at the interface. $L$ scans through these {$\pm\frac{1} {2}$ $\pm\frac{1} {2}$ $\frac{3} {2}$} peaks are shown in the supplemental materials (Fig. S2). This data clearly demonstrates that the rotational domain populations are not equal in the LAO substrate in contrast to the EuFeO$_3$ film, providing further evidence that the LAO is not imprinting rotational information into the film beyond the effect of strain. In contrast, the ($\frac{1} {2}$ $\frac{1} {2}$ $\frac{3} {2}$) and ($\frac{1} {2}$ -$\frac{1} {2}$ $\frac{3} {2}$) peaks are significantly more intense than the (-$\frac{1} {2}$ $\frac{1} {2}$ $\frac{3} {2}$) and (-$\frac{1} {2}$ -$\frac{1} {2}$ $\frac{3} {2}$) peaks in the EuFeO$_3$ film on GSO. As shown in supplemental Fig. S3, the same trend in peak intensities is found in the GSO substrate. This result indicates that not only is the rotation pattern imprinted from the GSO substrate, but the rotational domains within that pattern are also transferred from the substrate to film.
Other $\pmb{Pbnm}$-type Perovskites
===================================
Films on non-$\pmb{Pbnm}$ substrates
------------------------------------
Based on the purely geometric considerations described earlier in the text, one would expect that the in-phase rotation axis in $Pbnm$-type perovskite films on cubic substrates would depend on the epitaxial strain state. A mixed $a^-a^+c^-$ and $a^+a^-c^-$ rotational pattern would be expected for compressive strain, putting the shorter pseudocubic in-phase axis in the plane of the film thereby minimizing strain along one of the in-plane directions. Under tensile strain, the lattice mismatch can be minimized by orienting the $c^+$ axis along the growth direction leading to an $a^-a^-c^+$ pattern. Indeed, this strain dependence of the in-phase axis has been predicted with density functional theory. For example, calculations of LaMnO$_3$ and CaTiO$_3$ reveal the $a^-a^+c^-$ pattern to be favorable under compressive strain and under tensile strain of less than 1 % and 1.5 %, respectively.[@Eklund09; @LeePRB13] Similarly, the $a^-a^+c^-$ pattern was predicted to minimize energy in LaVO$_3$ in compressive strain.[@Sclauzero15] Under tensile strain above these values, the $a^-a^-c^+$ pattern becomes the lower energy structure. However, the energy differences between the two structural variants can be small; for example, first-principles calculations of strained LaVO$_3$ and many rare earth ferrites revealed minimal energetic preference between $a^-a^-c^+$ and $a^-a^+c^-$ structures under tensile strain.[@Sclauzero15; @ZhaoJPCM14]
Our observation of an $a^-a^+c^-$ rotation pattern is consistent with previous experimental studies of epitaxial perovskites compressively strained to a non-$Pbnm$ substrate, including SrRuO$_3$/STO,[@ChangPRB11; @LuPRB13; @Vailionis08; @ZiesePRB10] Pr$_{0.7}$Sr$_{0.3}$MnO$_3$/LAO,[@Mercey2000] LaVO$_3$/STO,[@Rotella2012] LaFeO$_3$/STO,[@Seo2008] GdTiO$_3$/STO/LSAT,[@Zhang2013] GdTiO$_3$/SrLaGaO$_4$,[@Grisolia14] Pr$_{0.5}$Ca$_{0.5}$MnO$_3$/LAO,[@Haghiri-Gosnet00] and La$_{0.9}$Sr$_{0.1}$MnO$_3$/STO.[@Vigliante01] The same structural variant has been reported in some films under small magnitudes of tensile strain, such as CaRuO$_3$/LSAT (0.55 % tensile)[@Proffit2008] and PrVO$_3$/STO (0.5 % tensile).[@Copie2013] In many of these studies, a mixture of $a^-a^+c^-$ and $a^+a^-c^-$ patterns was observed.[@Rotella2012; @Seo2008; @Mercey2000; @Proffit2008; @Copie2013]
There have also been reports of the $a^-a^-c^+$ pattern in films under tensile strain, especially in heterojunctions with larger than a 1 % lattice mismatch. These studies include Pr$_{0.5}$Ca$_{0.5}$MnO$_3$/STO (2.3 % tensile),[@Prellier2000] NdNiO$_3$/STO (2.6 % tensile),[@Tung13] and La$_{0.7}$Ca$_{0.3}$MnO$_3$/STO (1 % tensile).[@Andres03] A mixture of all three orientations was reported in CaMnO$_3$/LAO (2.3 % tensile).[@Gunter12]
![(color online) Measured ($\frac{1} {2}$ $\frac{3} {2}$ $n$) and ($\frac{1} {2}$ $n$ $\frac{3} {2}$) peaks for (a) LaGaO$_3$/STO, (b) Eu$_{0.7}$Sr$_{0.3}$MnO$_3$/STO, and (c) LaFeO$_3$/STO films, where $n = 2$ for (a) and $n = 1$ for (b) and (c). []{data-label="fig:Fig7"}](Fig7){width="2.7"}
To gain further insight into the in-phase axis orientation in films under moderate tensile strain (between 0 - 2 %), we have measured the half-order peaks from LaGaO$_3$/STO (0.5 % tensile) and Eu$_{0.7}$Sr$_{0.3}$MnO$_3$/STO (1.5 % tensile). A survey of the half-order peaks indicate that both films retain the $Pbnm$-type rotation pattern that is found in bulk compounds. As shown in Fig. 7(a), the LaGaO$_3$ film is predominately $a^-a^-c^+$ oriented, which accounts for 94 % of the sample volume compared to 6 % for $a^+a^-c^-$ and $a^-a^+c^-$ domains as determined from intensity analysis of the half-order peaks. In contrast, the Eu$_{0.7}$Sr$_{0.3}$MnO$_3$ film is comprised of 65 % $a^+a^-c^-$ and $a^-a^+c^-$ domains and 35 % $a^-a^-c^+$ domains, based on the half-order peaks shown in Fig. 7(b). Finally, Fig. 7(c) shows half-order peaks measured from LaFeO$_3$/STO (-0.8 % compressive strain) revealing over 99 % of the film volume consists of $a^+a^-c^-$ and $a^-a^+c^-$ domains as expected for the film under compressive strain.
![(color online) Compilation of experimental results reported in this study (red symbols) and previously published experimental work referenced in the main text (black symbols) displaying the orientation of the in-phase axis for $Pbnm$-type films. Films grown on non-$Pbnm$ substrates are indicated by squares, while films grown on (110)-oriented $Pbnm$ substrates are indicated by stars. The y-axis indicates the approximate volume fraction of the film that is $a^-a^-c^+$; a value of 0 indicates $a^+a^-c^-$ and/or $a^-a^+c^-$. The blue dotted and green dashed vertical lines signify the strain state at which a transition from $a^+a^-c^-$ to $a^-a^-c^+$ was predicted in density functional calculations for LaMnO$_3$ and CaTiO$_3$.[@Eklund09; @LeePRB13][]{data-label="fig:Fig8"}](Fig8){width="3.4"}
We compile our experimental results with those previously reported from both experiment and density functional theory in Fig. 8 to provide a comprehensive view of how the in-phase rotation axis responds to strain in films. From these results, three main conclusions can be drawn regarding the structural orientation of films on non-$Pbnm$ substrates. First, compressive strain leads to $a^-a^+c^-$ and/or $a^+a^-c^-$ structures. Second, large magnitudes of tensile strain ($> 2$ %) promote $a^-a^-c^+$ structures. Finally, under moderate values of tensile strain (0 - 2 %), films can exhibit any of the three in-phase orientations, and in many cases considerable volume fractions of all three orientations coexist. Within this strain range, it remains an open question which factors, such as material chemistry or the $B-B$ distances found in the bulk structure, determine the rotation pattern orientation of films. A table containing sample compositions and references for the data presented in Fig. 8 is given in the supplemental materials.
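The three empirical trends above can be condensed into a simple classification based on the film-substrate lattice mismatch; a sketch is given below. The pseudocubic lattice parameters are approximate, assumed values used only for illustration (the SrTiO$_3$ parameter 3.905 Å is standard, the film values are rough estimates), so the computed strains differ slightly from those quoted in the text.

```python
# Sketch of the empirical trends summarized above: classify the expected in-phase
# rotation axis from the biaxial strain of a film on a cubic (001) substrate.
# Pseudocubic lattice parameters (angstroms) are approximate, illustrative values.

def biaxial_strain(a_film_pc, a_substrate):
    """Strain (%) imposed on the film; positive = tensile."""
    return 100.0 * (a_substrate - a_film_pc) / a_film_pc

def expected_in_phase_axis(strain_percent):
    if strain_percent < 0.0:
        return "a-a+c- / a+a-c- (in-plane in-phase axis)"
    if strain_percent > 2.0:
        return "a-a-c+ (out-of-plane in-phase axis)"
    return "any orientation possible; mixed domains common"

a_STO = 3.905  # SrTiO3
films = {"LaGaO3": 3.89, "Eu0.7Sr0.3MnO3": 3.85, "LaFeO3": 3.93}  # assumed values

for name, a_pc in films.items():
    s = biaxial_strain(a_pc, a_STO)
    print(f"{name}/STO: {s:+.1f} % -> {expected_in_phase_axis(s)}")
```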
Films on $\pmb{Pbnm}$ substrates
--------------------------------
As discussed in the introduction, design strategies for realizing hybrid improper ferroelectrics and polar metals in short-period superlattices rely on the $A$-site ordering along the same direction as the in-phase rotation axis. This requires that the superlattices exhibit the $a^-a^-c^+$ structure. Based on Fig. 8, it is clear that such superlattices, when grown on cubic or rhombohedral substrates, must be under significant tensile strain to realize the correct orientation. However, the substrates most commonly used to induce large values of tensile strain are the rare earth scandates,[@SchlomMRS14] such as DyScO$_3$ and GSO, compounds that exhibit the $Pbnm$ structure.[@Liferovich04] In films grown on these substrates, the structural coupling between the film and substrate leads to an imprinting of the substrate in-phase axis orientation into the film. This imprinting effect is observed in the EuFeO$_3$/GSO films described here, and has also been reported in other papers detailing heteroepitaxial growth of $Pbnm$-type films under tensile strain on $Pbnm$-type substrates.[@Proffit2008; @Kan2013; @Aso14b; @Biegalski2015] These results from films on (110)-oriented $Pbnm$ substrates, in which the in-phase axis within the substrate is perpendicular to the growth direction, are also plotted in Fig. 8 illustrating the substrate-induced structural coupling effect. The primacy of substrate imprinting over strain in determining the in-phase rotation axis points to growth on (001)-oriented $Pbnm$-type substrates as the most promising means to ensure $a^-a^-c^+$ behavior in perovskite films and superlattices.
Finally, it should be noted that we do not find an indirect imprinting effect on the in-phase axis from LAO, which exhibits an $a^-a^-a^-$ pattern, into $Pbnm$-type films. Here one may expect that the octahedral connectivity can be better maintained if the film takes on the $a^-a^-c^+$ pattern that would retain coherence of the out-of-phase axes within the epitaxial plane at the interface. However, in both our experimental results and those previously reported, such behavior is not found. This suggests that direct imprinting of the in-phase axis from a $Pbnm$ substrate provides deterministic control of the structural orientation while indirect imprinting from a rhombohedral substrate does not.
Conclusions
===========
In summary, we report on the octahedral rotation patterns of strained EuFeO$_3$ and other $Pbnm$-type perovskite films. In EFO films grown on cubic or rhombohedral substrates, under strain states ranging from 2 % compressive to 0.9 % tensile, we observe a mixed $a^-a^+c^-$ and $a^+a^-c^-$ rotation pattern. In contrast, EuFeO$_3$ grown on orthorhombic GSO (110) exhibits a uniform $a^-a^+c^-$ orientation matching that of the substrate. To better understand the universality of this behavior, we have also measured LaGaO$_3$/STO, LaFeO$_3$/STO, and Eu$_{0.7}$Sr$_{0.3}$MnO$_3$/STO and compiled previously reported structural data from $Pbnm$-type films. The totality of the results indicates that compressive strain results in $a^-a^+c^-$ and $a^+a^-c^-$ patterns; moderate tensile strain can result in $a^-a^+c^-$, $a^+a^-c^-$, and/or $a^-a^-c^+$ structures; and large values of tensile strain ($>$ 2 %) tend to favor $a^-a^-c^+$. However, films under large tensile strain on $Pbnm$-type substrates exhibit the same rotation pattern as that of the substrate, indicating that substrate imprinting of the in-phase axis offers a more robust means for deterministically controlling the rotation pattern compared to epitaxial strain. We anticipate that this work will enable more efficient experimental pursuits of recently predicted rotation-induced phenomena, such as hybrid improper ferroelectricity and non-centrosymmetric metals.
Acknowledgements
================
We thank Christian Schlepütz for assistance with the diffraction measurements. We are grateful to James Rondinelli and Craig Fennie for useful discussions. Work at Drexel was supported by the National Science Foundation (DMR-1151649). Use of the Advanced Photon Source was supported by the U. S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.
---
address: 'Instituto de Física [*“Manuel Sandoval Vallarta"*]{}, Universidad Autónoma de San Luis Potosí, Álvaro Obregón 64, 78000 San Luis Potosí, SLP, México'
author:
- 'José Manuel Olais-Govea, Leticia López-Flores, Martín Chávez-Páez, and Magdaleno Medina-Noyola'
title: 'Non-equilibrium Kinetics of the Structural and Morphological Transformation of Liquids into Physical Gels'
---
**
Supplemental Material (SM)
==========================
The main approximations of the NE-SCGLE theory
----------------------------------------------
The essence of the NE-SCGLE theory [@nescgle1] are the time-evolution equations of: (I) the mean value $\overline{n}(\textbf{r},t)$, $$\frac{\partial \overline{n}(\textbf{r},t)}{\partial
t} = D_0{\nabla} \cdot b(\textbf{r},t)\overline{n}(\textbf{r},t)
\nabla \beta\mu[{\bf r};\overline{n}(t)], \label{difeqdlp}$$ and of: (II) the Fourier transform (FT) $\sigma(k;\textbf{r},t)$ of the covariance $\sigma(\textbf{r},\textbf{r} + \textbf{x};t)$, $$\begin{aligned}
\begin{split}
\frac{\partial \sigma(k;\textbf{r},t)}{\partial t} = & -2k^2 D_0
\overline{n}(\textbf{r},t) b(\textbf{r},t)
\mathcal{E}(k;\overline{n}(\textbf{r},t)) \sigma(k;\textbf{r},t)
\\ & +2k^2 D_0 \overline{n}(\textbf{r},t)\ b(\textbf{r},t), \label{relsigmadif2p}
\end{split}\end{aligned}$$ of the fluctuations of the local density $n(\textbf{r},t)$ of particles. In these equations $D_0$ is the particles’ short-time self-diffusion coefficient and $b(\textbf{r},t)$ is their local reduced mobility. The main external input of these equations is the Helmholtz free energy density-functional $\mathcal{F}[n]$, or, more precisely, its first and second functional derivatives: the chemical potential $\mu [{\bf r};n] \equiv \left[ {\delta \mathcal{F}[n]}/{\delta n({\bf r}')}\right]$ and the thermodynamic function $\mathcal{E}[{\bf r},{\bf r}';n]\equiv \left[ {\delta \beta\mu [{\bf r};n]}/{\delta n({\bf r}')}\right] $. In Eq. (\[relsigmadif2p\]), $\mathcal{E}(k;\overline{n}(\textbf{r},t))$ is the Fourier transform (FT) of $\mathcal{E}[{\bf r},\textbf{r} + \textbf{x};n]\equiv \left[ {\delta \beta\mu
[{\bf r};n]}/{\delta n(\textbf{r} + \textbf{x})}\right]_{n=\overline{n}(\textbf{r},t)}$.
In principle, these two equations describe the *isochoric* non-equilibrium morphological and structural evolution of a simple liquid of $N$ particles in a volume $V$ after being *instantaneously* quenched at time $t=0$ to a final temperature $T_f$, in the absence of applied external fields. This description, cast in terms of the one- and two-particles distribution functions $\overline{n}(\textbf{r},t)$ and $\sigma(k;\textbf{r},t)$, involves the local mobility $b(\textbf{r},t)$, which is in reality a *functional* of $\overline{n}(\textbf{r},t)$ and $\sigma(k;\textbf{r},t)$, and this introduces strong non-linearities. In fact, even before solving these equations, they reveal a relevant feature of general and universal character: besides the equilibrium stationary solutions $\overline{n}^{eq}(\textbf{r})$ and $\sigma^{eq} (k;\textbf{r})$, defined by the equilibrium conditions $\nabla \beta\mu[{\bf r};\overline{n}^{eq}]=0$ and $\mathcal{E}(k;\overline{n}(\textbf{r},t)) \sigma(k;\textbf{r},t)=1$, Eqs. (\[difeqdlp\]) and (\[relsigmadif2p\]) also predict the existence of another set of stationary solutions that satisfy the dynamic arrest condition, $\lim_{t_w\to \infty} b(\textbf{r},t)=0$. This far less-studied second set of solutions describes, however, important non-equilibrium stationary states of matter, corresponding to common and ubiquitous non-equilibrium amorphous solids, such as glasses and gels.
To appreciate the essential physics, it is best to provide explicit examples. To do this at the lowest mathematical and numerical cost, let us write $\overline{n}(\textbf{r},t)$ as the sum of its bulk value $n\equiv N/V$ plus the deviations $\Delta \overline{n}(\textbf{r},t)$ from homogeneity, and in a zeroth-order approximation let us neglect $\Delta \overline{n}(\textbf{r},t)$. As explained in more detail in Ref. [@nescgle6], this reduces the previous two equations to only one equation for the covariance, now written in terms of the non-equilibrium structure factor $S(k;t)$ as $\sigma(k;\textbf{r},t)=nS(k;t)$. For waiting times $t>0$ after the quench, such an equation reads $$\frac{\partial S(k;t)}{\partial t} = -2k^2 D_0
b(t)n\mathcal{E}_f(k) \left[S(k;t)
-1/n\mathcal{E}_f(k)\right], \label{relsigmadif2pp}$$ where $\mathcal{E}_f(k)\equiv \mathcal{E}(k;n,T_f)$ is the value of $\mathcal{E}(k;\overline{n}(\textbf{r},t))$ at the uniform profile $\overline{n}(\textbf{r},t)= n$ and at the final temperature $T_f$. Here, too, $b(t)$ is in reality a functional of $S(k;t)$.
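As a minimal illustration of Eq. (\[relsigmadif2pp\]), the sketch below performs an explicit-Euler relaxation of $S(k;t)$ toward the stationary value $1/n\mathcal{E}_f(k)$, with the mobility $b$ frozen to a constant and a smooth placeholder kernel in place of the actual thermodynamic input; in the full theory $b(t)$ must be updated self-consistently from the equations of the following subsection.

```python
import numpy as np

# Minimal sketch of the evolution equation for S(k;t) above: explicit-Euler
# relaxation of S(k;t) toward the stationary value 1/(n*E_f(k)) at each
# wavevector, with the mobility b frozen to a constant. In the full NE-SCGLE
# scheme b(t) is itself a functional of S(k;t) and must be updated
# self-consistently (next subsection). The kernel n*E_f(k) used here is a
# smooth positive placeholder, not the thermodynamic input of the Letter.

def evolve_S(k, S, n_Ef, b=1.0, D0=1.0, dt=1e-4, steps=2000):
    S = S.copy()
    target = 1.0 / n_Ef                    # stationary (equilibrium) solution
    for _ in range(steps):
        S += dt * (-2.0 * k**2 * D0 * b * n_Ef * (S - target))
    return S

k = np.linspace(0.1, 10.0, 100)                      # wavevectors, units of 1/sigma
n_Ef = 1.0 + 0.5 * np.cos(2.0 * np.pi * k / 7.0)     # placeholder for n*E_f(k)
S_t = evolve_S(k, np.ones_like(k), n_Ef)             # quench from a flat S(k) = 1
```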
The local mobility $b(t)$ as a functional of $S(k;t)$.
------------------------------------------------------
The detailed functional dependence of the local mobility $b(t)$ on the non-stationary structure factor $S(k;t)$ is determined by the following NE-SCGLE equations, which must be self-consistently solved together with Eq. (\[relsigmadif2pp\]). As explained in Ref. [@nescgle2], this set of equations start by writing $b(t)$ as $$b(t)= [1+\int_0^{\infty} d\tau\Delta{\zeta}^*(\tau; t)]^{-1},
\label{bdt}$$ with the $t$-evolving, $\tau$-dependent friction function $\Delta{\zeta}^*(\tau; t)$ given approximately by $$\begin{split}
\Delta \zeta^* (\tau; t)= \frac{D_0}{24 \pi^{3}n} \int d^{3} k\ k^2 \left[\frac{ S(k;t)-1}{S(k; t)}\right]^2 \\ \times F(k,\tau; t)F_S(k,\tau; t),
\end{split}
\label{dzdtquench}$$ in terms of $S(k; t)$ and of the collective and self non-equilibrium intermediate scattering functions $F(k,\tau; t)$ and $F_S(k,\tau; t)$, whose memory-function equations are written approximately, in terms of the Laplace transforms $F(k,z; t)$ and $F_S(k,z; t)$, as
$$\begin{gathered}
\label{fluctquench}
F(k,z; t) = \frac{S(k; t)}{z+\frac{k^2D_0 S^{-1}(k;
t)}{1+\lambda (k)\ \Delta \zeta^*(z; t)}},\end{gathered}$$
and $$\begin{gathered}
\label{fluctsquench}
F_S(k,z; t) = \frac{1}{z+\frac{k^2D_0 }{1+\lambda (k)\ \Delta
\zeta^*(z; t)}},\end{gathered}$$ with $\lambda (k)$ being a phenomenological interpolating function [@todos2], $$\lambda (k)=1/[1+( k/k_{c}) ^{2}], \label{lambdadk}$$ in which $k_c$ is an empirically determined parameter (here we use $k_c= 1.305 \times 2\pi/\sigma$, as in previous works).
Eqs. (\[relsigmadif2pp\])-(\[lambdadk\]) summarize the NE-SCGLE theory employed so far to describe the irreversible processes occurring in a solidifying glass- or gel-forming liquid. A systematic presentation of the predictions of this theory and of their correspondence with the widely observed experimental signatures of the glass transition started in Refs. [@nescgle3] and [@nescgle6] with the description of the transformation of equilibrium hard-sphere (and soft-sphere) liquids into “repulsive” glasses. That investigation was extended in Ref. [@nescgle5] to Lennard-Jones–like simple liquids (pairwise interactions composed of a strong repulsion plus an attractive tail), which revealed a much richer scenario, summarized by the *non-equilibrium* phase diagram in Fig. 1 of our Letter. The scenario laid down in Ref. [@nescgle5] was inferred solely on the basis of the predicted long-time asymptotic stationary solutions of the NE-SCGLE equations.
Model system and approximate thermodynamic input.
-------------------------------------------------
For a monocomponent simple liquid with pairwise interaction $u(r)=u_{HS}(r) + u_{A}(r)$ (hard spheres of diameter $\sigma$, plus a weaker attractive interaction $u_{A}(r)$), we shall adopt the van der Waals (vdW) approximation for the Helmholtz free energy functional, $\mathcal{F}[n]=\mathcal{F}_{HS}[n]+ \frac{1}{2}\int d\mathbf{r} d\mathbf{r}' n(\textbf{r})n(\textbf{r}') u_{A}(|\textbf{r}-\textbf{r}'|)\theta (|\textbf{r}-\textbf{r}'|-\sigma)$, where $\mathcal{F}_{HS}[n]$ is the exact free energy functional of the reference HS system and $\theta(x)$ is Heaviside’s step function. This leads to the approximate chemical potential, $\mu [{\bf r};n] = \mu_{HS} [{\bf r};n] + \int d\mathbf{r}' u_{A}(|\textbf{r}-\textbf{r}'|)\theta (|\textbf{r}-\textbf{r}'|-\sigma)n(\textbf{r}')$, and thermodynamic functional $\mathcal{E}[{\bf r},{\bf r}';n]=\mathcal{E}_{HS}[{\bf r},{\bf r}';n]+ \beta u_{A}(|\textbf{r}-\textbf{r}'|)\theta (|\textbf{r}-\textbf{r}'|-\sigma)$, which in Fourier space reads $\mathcal{E}(k;n)=\mathcal{E}_{HS}(k;n)+ \beta u_{A}(k)$. This last equation was referred to in Ref. [@nescgle5] as Sharma-Sharma approximation [@sharmasharma].
The hard-sphere function $\mathcal{E}_{HS}(k;n)$ can be determined using the Ornstein-Zernike (OZ) equilibrium condition for the static structure factor, $S^{eq}(k;n)=1/n\mathcal{E}_{HS}(k;n)$. This OZ equation, complemented with the Percus-Yevick approximation [@percusyevick] with Verlet-Weis correction [@verletweis], provides an analytic expression for $\mathcal{E}_{HS}(k;n)$. The van der Waals approximate free energy, complemented by these (virtually exact) hard-sphere properties, was employed to solve Eqs. (\[relsigmadif2pp\])-(\[fluctsquench\]) for the HSAY model. The spinodal curve in Fig. 1 of the Letter was obtained from the condition $\mathcal{E}(k=0;\phi,T_s)=0$.
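To make the spinodal condition concrete, the sketch below evaluates $\mathcal{E}(k=0;\phi,T_s)=0$ for the HSAY model in closed form, with two simplifications relative to the text: the Carnahan–Starling inverse compressibility stands in for the PY/Verlet-Weis value of $n\mathcal{E}_{HS}(k=0)$, and the $k=0$ limit of the attractive Yukawa tail is evaluated analytically, $n\beta u_{A}(0)=-24\phi(1+z)/(z^{2}T^{*})$ with $T^{*}=k_BT/\epsilon$. The result is therefore only an estimate of the spinodal plotted in Fig. 1 of the Letter.

```python
import numpy as np

# Sketch of the spinodal condition E(k=0; phi, T_s) = 0 for the HSAY model in the
# approximation described above. Two simplifications are made for illustration:
# (i) the Carnahan-Starling inverse compressibility replaces the PY/Verlet-Weis
# input for n*E_HS(k=0), and (ii) the k = 0 limit of the attractive tail is
# evaluated analytically, n*beta*u_A(0) = -24*phi*(1+z)/(z**2 * T*),
# with T* = k_B T / epsilon the reduced temperature.

def inv_compressibility_CS(phi):
    """Carnahan-Starling estimate of n*E_HS(k=0) = 1/S_HS(k=0)."""
    return (1 + 4*phi + 4*phi**2 - 4*phi**3 + phi**4) / (1 - phi)**4

def spinodal_temperature(phi, z=2.0):
    """Reduced spinodal temperature T_s*(phi) from E(k=0) = 0."""
    return 24.0 * phi * (1 + z) / z**2 / inv_compressibility_CS(phi)

phi = np.linspace(0.01, 0.5, 50)
Ts = spinodal_temperature(phi)                     # estimated spinodal line
print(f"T_s*(phi=0.2, z=2) ~ {spinodal_temperature(0.2):.2f}")
```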
Simulations.
------------
Brownian dynamics simulations were performed using the algorithm developed by Ermak and McCammon [@tildesley] without hydrodynamic interactions, using a cubic simulation box, periodic boundary conditions, and $N=4300$ particles. To mimic the hard-core part of the attractive Yukawa potential, a short-range repulsive part was included. Specifically, the simulations employed the soft-sphere plus attractive Yukawa form $$u(r)=
\epsilon\Big[\Big(\frac{\sigma}{r} \Big)^{2\nu}-2\Big(\frac{\sigma}{r} \Big)^{\nu}+1\Big]
\theta(\sigma-r)
-\epsilon \frac{\exp[-z(r/\sigma -1)]}{(r/\sigma )}
\label{yukawa}$$
where $\theta(x)$ is Heaviside's step function and $\nu=20$. A cut-off radius $r_c=3.5\sigma$ was employed for the attractive part. All the simulations were started from a random configuration, equilibrated for $5\times 10^5$ time steps of length $\Delta t=5.0\times 10^{-5}t_B$ $(t_B \equiv \sigma^{2}/D_{0})$ at an initial temperature $T_i=2.0$. A quench was then applied to the desired temperature $T_f$, and the simulations were continued with a reduced time step $\Delta t=10^{-5}t_B$, collecting data every 200 time steps. Following Ref. [@heyeslodge], structural functions were calculated as averages over non-overlapping time-windows of length $0.2t_B$. The reported results correspond to the average over ten independent simulations for every system.
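For reference, the simulated pair potential defined above (soft-sphere core with $\nu=20$ plus the attractive Yukawa tail, truncated at $r_c=3.5\sigma$) can be tabulated as in the short sketch below, with energies in units of $\epsilon$ and distances in units of $\sigma$.

```python
import numpy as np

# The simulated pair potential: a steep soft-sphere core (nu = 20) acting only
# for r < sigma, plus the attractive Yukawa tail, truncated at r_c = 3.5 sigma
# as stated in the text. Reduced units: energy in epsilon, distance in sigma.

def pair_potential(r, z=2.0, nu=20, r_cut=3.5):
    r = np.asarray(r, dtype=float)
    core = np.where(r < 1.0, (1.0 / r)**(2 * nu) - 2.0 * (1.0 / r)**nu + 1.0, 0.0)
    tail = -np.exp(-z * (r - 1.0)) / r
    return np.where(r <= r_cut, core + tail, 0.0)

r = np.linspace(0.9, 3.5, 6)
print(np.round(pair_potential(r), 3))   # u(1) = -1, i.e. a contact well of depth epsilon
```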
![Theoretical snapshots (solid lines) of the small-$k$ peak of $S(k;t)$ (for the same conditions indicated in Fig. 2 of our Letter), compared with the snapshots of a Brownian-dynamics simulated quench (symbols) and with waiting times adjusted to match the height $S(k_{max};t)$ of this peak.[]{data-label="Fig2"}](Figure1c.eps)
In Fig. 2 of our Letter, the sequence of simulated snapshots of $S(k;t)$ was compared with the corresponding theoretical sequence, with identical evolution times $t$, thus exhibiting the quantitative inaccuracies of the approximate theory, most notably a quantitative mismatch between the simulation and the theoretical clocks. In spite of this difference, however, the structural pathways predicted by theory and registered by simulations are remarkably similar. This is illustrated here in Fig. SM1, which compares the same sequence of simulated snapshots of $S(k;t)$ as in Fig. 2 of our Letter, but now paired with the theoretical snapshots having the same height $S_{max}(t)$ (but, obviously, a different evolution time).
Arrested spinodal decomposition in protein solutions.
-----------------------------------------------------
Here we provide the quantitative details of the comparisons in Figs. 3, 4(a)-(b) and 5(a)-(b) of the Letter, between our theoretical predictions and the reported experimental measurements of the growth and arrest of the spinodal heterogeneities of gelling protein solutions.
### Gelling lysozyme solutions.
In Fig. 2(d) of Ref. [@gibaud], Gibaud and Schurtenberger report the experimental measurements of the growth and arrest of the representative size $\xi(t)$ (in micrometers) of the heterogeneities, as a function of time (in seconds), of gelling lysozyme proteins of diameter $\sigma=3.4$ nm in aqueous solution at a fixed bulk concentration corresponding to a volume fraction of approximately $\phi=0.15$. To compare our predicted theoretical scenario with these data, let us normalize their experimental $\xi(t)$ and $t$, obtained by microscopy, by the experimental values of our theoretical units of length and time, $\sigma$ and $\tau_{B}\equiv\sigma^{2}/D_{0}$. Using the Stokes-Einstein relation and the viscosity of water at room temperature we obtain $D_{0}=122\mu m^{2}/s$ and $\tau_{B}= \sigma^{2}/D_{0}=1\times 10^{-7}s$.
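The unit conversion described here amounts to a Stokes–Einstein estimate; a short sketch is given below. The temperature and viscosity entered there are representative assumed values for water, and slightly different choices reproduce the quoted $D_{0}\approx 122\ \mu m^{2}/s$ and $\tau_B\approx 10^{-7}$ s.

```python
import math

# Stokes-Einstein estimate behind the normalization described above. The values
# of T and eta are assumed (water near room temperature); slightly different
# choices reproduce the quoted D0 ~ 122 um^2/s and tau_B ~ 1e-7 s.

kB = 1.380649e-23      # J/K
T = 293.0              # K (assumed)
eta = 1.0e-3           # Pa s, water (assumed)
sigma = 3.4e-9         # m, lysozyme diameter quoted in the text

D0 = kB * T / (3.0 * math.pi * eta * sigma)   # m^2/s, sphere of diameter sigma
tau_B = sigma**2 / D0                         # s
print(f"D0 ~ {D0 * 1e12:.0f} um^2/s, tau_B ~ {tau_B:.1e} s")
```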
Plotted in this manner, the experimental data of Fig. 2(d) of Ref. [@gibaud] appear as the empty symbols in Fig. SM \[Suplementary1\]. In this figure the solid lines are the theoretical predictions for the evolution of $\xi(t)$ in the HSAY model with $z=2$, initially at the (theoretical) temperature $T_0=1.5$ and quenched at fixed volume fraction ($\phi=0.15$) to various values of the final temperature $T$. The region limited by the dotted lines is the time-window employed in the inset of Fig. 2(b). As we can see, in reality the experimental data fall well outside this time-window. However, as our present comparison demonstrates, one can find a theoretical curve (red line) corresponding to a value of $T$ closer to the spinodal curve, whose predicted arrest also occurs far outside this window and approximately superimposes on the experimental data. The same figure also re-plots, now as solid symbols, the same experimental data with $\xi(t)$ and $t$ arbitrarily reduced by empirical factors (7 $\times 10^{-3}$ and 1.5 $\times 10^{-8}$, respectively). We do this only to illustrate, as we do in the inset of Fig. 2(b) of the Letter, their qualitative resemblance to the predicted scenario of the demonstrative quench of the HSAY model discussed in that figure (notwithstanding the fact that the theoretical curves there actually refer to the isochore $\phi=0.08$).
![Theoretical evolution (solid lines) of the size $\xi (t)$ of the heterogeneities of the HSAY model for a sequence of quenches for several final temperatures. The empty square symbols are the experimental results measured in a lysozyme protein system reported in Fig. 2(d) of Ref.[@gibaud]. The solid symbols represent the scaled data.[]{data-label="Suplementary1"}](k_min_phi015_shutemberger.eps)
Of course, comparing theoretical results for the HSAY model with experimental data for the protein solution referred to above can only serve to compare the essential qualitative features of the striking arrest of the process of spinodal decomposition. To establish a proper quantitative comparison, the theoretical calculations should be fine-tuned to actually correspond to the detailed experimental conditions (much shorter range of the attractive potential ($z\gg 2$), finite cooling rates, etc.), which is for the moment out of the scope of this short communication.
### Gelling bovine serum albumin solutions.
To complement the previous comparison, let us now refer to Fig. 7 of reference [@davela], where Da Vela et al. report quite similar experimental measurements of $\xi(t)$ for *a sequence* of quenches involving another protein, namely, bovine serum albumin, quenched along its critical isochore (protein concentration $175$ mg/mL BSA with 44 mM YCl$_3$). This protein has diameter $\sigma=6$ nm, and in the reported solution presents a lower consolute temperature. Thus, in contrast with the previous example, a deeper quench involves a higher experimental temperature $T_e$. Nevertheless, to compare these measurements with our predicted scenario we followed essentially the same procedure described above, and the result is summarized in Fig. 3(a) of the Letter.
![ Theoretical evolution (solid lines) of the size $\xi (t)$ of the heterogeneities of the HSAY model for a sequence of quenches for several final temperatures. The symbols are the experimental results measured in the BSA solutions reported in Fig. 7 of Ref. [@davela], for two of the systems (squares for $T=30^{o}C$ and circles for $T=57.5^{o}C$). The solid symbols represent the scaled data.[]{data-label="Suplementary2"}](DaVela_teoria_extrapolacion.eps)
To explain this procedure in more detail, let us mention that in this case we estimated $D_{0}= 60 \mu m^{2}/s$ and $\tau_{B}=1\times 10^{-6}s$. In Fig. SM \[Suplementary2\], we thus plot the normalized experimental data (empty symbols) corresponding to the shallowest and to the deepest quenches reported in Fig. 7 of reference [@davela] (labeled there by the experimental temperatures $T_e=30^{o}C$ and $57.5^oC$, respectively). The solid lines are theoretical predictions for the evolution of $\xi(t)$ in the HSAY model with $z=2$ initially at the (theoretical) temperature $T_0=1.5$ for various values of the final temperature $T$. Since the experiments were conducted at the volume fraction $\phi = 0.20$, the theoretical quenches were performed at the same volume fraction.
Once again, the experimental data fall well outside the time-window employed in the inset of Fig. 2b, but again, one can find theoretical curves corresponding to values of $T$ closer to the spinodal curve, whose predicted arrest approximately superimposes on these experimental data. This figure also re-plots (solid symbols) the same experimental data with $\xi(t)$ and $t$ arbitrarily reduced by empirical factors (3.1 $\times 10^{-2}$ and 1 $\times 10^{-7}$, respectively). This is exactly what we have done to generate Fig. 3(a), which also includes the experimental results for the other quenches reported in Fig. 7 of reference [@davela].
Arrested spinodal decomposition in colloid-polymer mixtures.
------------------------------------------------------------
Let us now discuss the details of the comparisons in Figs. 3(b), 4(a)-(b) of the Letter, which refer to the experimental measurements of the evolution of $\xi (t)$ in two colloid-polymer mixtures that differ in the range of the depletion attraction between colloids.
### Moderate polymer/colloid size ratio: longer-ranged depletions.
In Fig. 3(b) of the Letter we quote the experimental measurements by Zhang et al., reported in Fig. 5(a) of Ref. [@royall], of the growth and arrest of $\xi(t)$ in a colloid-polymer mixture where the mean diameter of the colloids is 544 nm and the polymer radius of gyration is 126 nm, so that the range $\delta$ of the depletion forces between colloids induced by the polymer (in units of the colloid’s diameter) is $\delta\approx 0.45$. This means that, regarding the range of the attractive interactions, this system is represented more closely (than the previous protein solutions) by the HSAY model with $z = 2$, discussed and simulated in Fig. 2 of our Letter.
Ref. [@royall] determines that for these experimental systems $\tau_{B} \approx 0.7s$ and reports the data already in units of $\sigma$ and $\tau_{B}$ for a system with fixed colloid volume fraction $\phi=0.20$ and for several values of the polymer concentration $C_p$, whose inverse plays the role of the theoretical temperature $T$. In Fig. 3(b) of the Letter we plot these experimental data along with our theoretical predictions for a sequence of quenches along the isochore $\phi=0.20$. Here again, to put both theory and experiments in the same time window, so as to appreciate the qualitative agreement, the experimental data of $\xi(t)$ and $t$ were arbitrarily multiplied by empirical factors (6 and 9, respectively), much more moderate than in the previous case involving protein solutions.
### Small polymer/colloid size ratio: short-ranged depletions.
Figs. 4(a) and (b) of our Letter reproduce, respectively, the experimental data reported by Lu et al. in Figs. 4(b) and (c) of Ref. [@lu], corresponding to a colloid-polymer mixture with colloid radius 560 nm and colloid-to-polymer size ratio $z= \sigma/2R_g \approx17$. In these experiments, the transition to arrested or gelled states is investigated as a function of the polymer concentration, which plays the role of an effective temperature. These experiments monitor the non-equilibrium evolution of the colloid-colloid static structure factor $S(k;t)$ and of its first moment $k_1(t) \equiv \int_{0}^{k_c} k S(k;t) dk/\int_{0}^{k_c} S(k;t) dk$ (where $k_c$ locates the minimum of $S(k;t)$ following its low-$k$ peak), after a quench from a polymer concentration $c_p= 0$ to $c_p= 53.31$ mg/ml, keeping the colloid volume fraction fixed at $\phi=0.045$. The viscosity of the solvent is reported to be $\eta=1.96\times 10^{-3}$Pa $\cdot$ s at the temperature $T=25^{o}$C. Using the Stokes-Einstein relation we obtain the short time self diffusion coefficient as $D_{0}=0.19\mu m^{2}/s$, yielding a characteristic time $\tau_{B}= \sigma^{2}/D_{0} = 6.3s$, which is the time unit employed in Figs. 4(a) and (b) of our Letter. The measured wave-vector $k_1(t)$ decreases with time until reaching the stationary arrested value $k_{1}^{a}\approx 0.45 $.
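For completeness, the sketch below evaluates the first moment $k_1(t)$ exactly as defined above, with the cut-off $k_c$ located at the minimum of $S(k;t)$ that follows its low-$k$ peak. The structure factor used in the example is synthetic; in practice $S(k;t)$ comes from the experiment or from the NE-SCGLE solution.

```python
import numpy as np

# Evaluation of the first moment defined above,
#   k1(t) = int_0^{kc} k S(k;t) dk / int_0^{kc} S(k;t) dk,
# with kc located at the minimum of S(k;t) that follows its low-k peak.
# The structure factor below is a synthetic example only.

def first_moment(k, S):
    peak = np.argmax(S)                           # low-k peak
    i_cut = peak + np.argmin(S[peak:])            # first minimum after the peak
    kk, SS = k[:i_cut + 1], S[:i_cut + 1]
    return np.trapz(kk * SS, kk) / np.trapz(SS, kk)

k = np.linspace(0.05, 5.0, 500)
S = 1.0 + 4.0 * np.exp(-((k - 0.6) / 0.3)**2) + 1.5 * np.exp(-((k - 4.0) / 0.5)**2)
print(f"k1 = {first_moment(k, S):.2f}")
```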
To establish the connection between theory and experiments, we first solved the NE-SCGLE Eqs. (\[relsigmadif2pp\])-(\[lambdadk\]) to theoretically calculate the first moment $k_{1}(t)$ for a sequence of quenches along the isochore $\phi=0.045$ of the HSAY model ($z=2$), starting at the same (high) initial temperature and with varying (lower) final temperature $T$. We then determined the $T$-dependent limiting value $k_{1}^{a} =\lim_{t \to\infty} k_{1}(t)$, and chose as the effective temperature of the experimental quench, the value of $T$ for which the condition $k_{1}^{a}(T) \approx 0.45 $ was satisfied, yielding $T \approx 0.5$. In Fig. 4(a) of the Letter we compare the solution of Eqs. (\[relsigmadif2pp\])-(\[lambdadk\]) for $S(k;t)$ after this particular quench, with the corresponding experimental data. Fig. 4(b) of the Letter presents a similar comparison between the theoretical (thick solid line) and experimental (full circles) time-dependent first moment $k_{1}(t)$.
The direct comparison between theory and experiment presented in Fig. 4 of the Letter illustrates that, in spite of appreciable quantitative discrepancies, most notably the mismatch between the theoretical and experimental clocks, there is a remarkable qualitative similarity.
P. E. Ramírez-González and M. Medina-Noyola, Phys. Rev. E **82**, 061503 (2010).
P. Mendoza-Méndez, E. Lázaro-Lázaro, L. E. Sánchez-Díaz, P. E. Ramírez-González, G. Pérez-Ángel, and M. Medina-Noyola, Phys. Rev. E **96**, 022608 (2017).
P. E. Ramírez-González and M. Medina-Noyola, Phys. Rev. E **82**, 061504 (2010).
R. Juárez-Maldonado, M. A. Chávez-Rojo, P. E. Ramírez-González, L. Yeomans-Reyna and M. Medina-Noyola , Phys. Rev. E [**76**]{}, 062502 (2007).
L. E. Sánchez-Díaz, P. E. Ramírez-González, and M. Medina-Noyola, Phys. Rev. E **87**, 052306 (2013).
J. M. Olais-Govea, L. López-Flores, and M. Medina-Noyola, J. Chem Phys. **143**, 174505 (2015).
R. V. Sharma and K. C. Sharma, Physica A **89**, 213 (1977).
J. K. Percus and G. J. Yevick, Phys. Rev. [**110**]{}, 1 (1957).
L. Verlet and J.-J. Weis, Phys. Rev. A [**5**]{} 939 (1972).
M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids (Oxford University Press, Oxford, 1987).
J. F. M. Lodge and D. M. Heyes, J. Chem. Soc., Faraday Trans., **93**, 437 (1997).
T. Gibaud and P. Schurtenberger, J. Phys.: Condens. Matter **21**, 322201 (2009).
S. Da Vela et al., Soft Matter, **12**, 9334 (2016).
I. Zhang, C. P. Royall, M. A. Faers, and P. Bartlett, Soft Matter **9**, 2076 (2013).
P. J. Lu, E. Zaccarelli, F. Ciulla, A. B. Schofield, F. Sciortino and D. Weitz, Nature **453**, 499 (2008).
---
abstract: 'The spherical alga [*Volvox*]{} swims by means of flagella on thousands of surface somatic cells. This geometry and its large size make it a model organism for studying the fluid dynamics of multicellularity. Remarkably, when two nearby [*Volvox*]{} swim close to a solid surface, they attract one another and can form stable bound states in which they “waltz" or “minuet" around each other. A surface-mediated hydrodynamic attraction combined with lubrication forces between spinning, bottom-heavy [*Volvox*]{} explains the formation, stability and dynamics of the bound states. These phenomena are suggested to underlie observed clustering of [*Volvox*]{} at surfaces.'
author:
- 'Knut Drescher$^{1}$, Kyriacos C. Leptos$^{1}$, Idan Tuval$^{1}$, Takuji Ishikawa$^{2}$, Timothy J. Pedley$^{1}$, and Raymond E. Goldstein$^{1}$'
title: 'Dancing [*Volvox*]{} : Hydrodynamic Bound States of Swimming Algae'
---
Long after he made his great contributions to microscopy and started a revolution in biology, Antony van Leeuwenhoek peered into a drop of pond water and discovered one of nature’s geometrical marvels [@Leeuwenhoek]. This was the freshwater alga which, years later, in the very last entry of his great work on biological taxonomy, Linneaus named [*Volvox*]{} [@Linneaus] for its characteristic spinning motion about a fixed body axis. [*Volvox*]{} is a spherical colonial green alga (Fig. \[fig1\]), with thousands of biflagellated cells anchored in a transparent extracellular matrix (ECM) and daughter colonies inside the ECM. Since the work of Weismann [@Weismann], [*Volvox*]{} has been seen as a model organism in the study of the evolution of multicellularity [@Kirkbook; @twelvestep; @multicellular].
Because it is spherical, [*Volvox*]{} is an ideal organism for studies of biological fluid dynamics, being an approximate realization of Lighthill’s “squirmer" model [@Lighthill] of self-propelled bodies having a specified surface velocity. Such models have elucidated nutrient uptake at high P[é]{}clet numbers [@Magar; @multicellular] by single organisms, and pairwise hydrodynamic interactions between them [@Ishikawa_pairwise]. Volvocine algae may also be used to study [*collective*]{} dynamics of self-propelled objects [@IshikawaPedley08], complementary to bacterial suspensions ([*E. coli, B. subtilis*]{}) exhibiting large-scale coherence in thin films [@Wu] and bulk [@Dombrowski].
While investigating [*Volvox*]{} suspensions in glass-topped chambers we observed stable bound states, in which pairs of colonies orbit each other near the chamber walls. [*Volvox*]{} is “bottom-heavy" due to clustering of daughter colonies in the posterior, so an isolated colony swims upward with its axis vertical, rotating clockwise (viewed from above) at an angular frequency $\omega \sim 1$ rad/s for a radius $R\sim 150$ $\mu$m. When approaching the chamber ceiling, two [*Volvox*]{} are drawn together, nearly touching while spinning, and they “waltz" about each other clockwise (Fig. \[fig1\]a) at an angular frequency $\Omega\sim 0.1$ rad/s. When [*Volvox*]{} have become too heavy to maintain upswimming, two colonies hover above one another near the chamber bottom, oscillating laterally out of phase in a “minuet” dance. Although the orbiting component of the waltzing is reminiscent of vortex pairs in inviscid fluids, the attraction and the minuet are not, and as the Reynolds number is $\sim 0.03$, inertia is negligible.
![\[fig1\] (Color online) Waltzing of [*V. carteri*]{}. (a) Top view. Superimposed images taken $4$ s apart, graded in intensity. (b) Side, and (c) top views of a colony swimming against a coverslip, with fluid streamlines. Scales are $200$ $\mu$m. (d) A linear [*Volvox*]{} cluster viewed from above (scale is $1$ mm).](drescher_fig1){width="0.95\columnwidth"}
While one might imagine that signalling and chemotaxis could result in these bound states, a combination of experiment, theory, and numerical computations is used here to show that they arise instead from the interplay of short-range lubrication forces between spinning colonies and surface-mediated hydrodynamic interactions [@Blake], known to be important for colloidal particles [@KehAnderson; @dufresne] and bacteria [@lauga_prl]. We conjecture that flows driving [*Volvox*]{} clustering at surfaces enhance the probability of fertilization during the sexual phase of their life cycle.
![\[fig2\] (Color online) Dual-view apparatus.](drescher_fig2){width="0.95\columnwidth"}
[*Volvox carteri*]{} f. [*nagariensis*]{} EVE strain (a subclone of HK10) were grown axenically in SVM [@kirk83; @multicellular] in diurnal growth chambers with sterile air bubbling, in a daily cycle of $16$ h in cool white light ($\sim 4000$ lux) at $28^{\circ}$ C and $8$ h in the dark at $26^{\circ}$ C. Swimming was studied in a dual-view system (Fig. \[fig2\]) [@RevSciInst], consisting of two identical assemblies, each a CCD camera (Pike 145B, Allied Vision Technologies, Germany) and a long-working distance microscope (InfiniVar CMS-2/S, Infinity Photo-Optical, Colorado). Dark-field illumination used $102$ mm diameter circular LED arrays (LFR-100-R, CCS Inc., Kyoto) with narrow bandwidth emission at $655$ nm, to which [*Volvox*]{} is insensitive [@photospectrum]. Thermal convection induced by the illumination was minimized by placing the $2\times 2\times 2$ cm sample chamber, made from microscope slides held together with UV-curing glue (Norland), within a stirred, temperature-controlled water bath. A glass cover slip glued into the chamber provided a clean surface (Fig. \[fig1\]b) to induce bound states. Particle imaging velocimetry (PIV) studies (Dantec Dynamics, Skovelund, Denmark) showed that the r.m.s convective velocity within the sample chamber was $\lesssim 5$ $\mu$m/s.
Four aspects of [*Volvox*]{} swimming are important in the formation of bound states, each arising, in the far field, from a distinct singularity of Stokes flow: (i) negative buoyancy (Stokeslet), (ii) self-propulsion (stresslet), (iii) bottom-heaviness (rotlet), and spinning (rotlet doublet). During the $48$ hour life cycle, the number of somatic cells is constant; only their spacing increases as new ECM is added to increase the colony radius. This slowly changes the speeds of sinking, swimming, self-righting, and spinning, allowing exploration of a range of behaviors. The upswimming velocity $U$ was measured with side views in the dual-view apparatus. [*Volvox*]{} density was determined by arresting self-propulsion through transient deflagellation with a pH shock [@multicellular; @Witman], and measuring sedimentation. The settling velocity $V=2\Delta\rho g R^2/9\eta$, with $g$ the acceleration of gravity and $\eta$ the fluid viscosity, yields the density offset $\Delta\rho=\rho_c-\rho$ between the colony and water. Bottom-heaviness implies a distance $\ell$ between the centers of gravity and geometry, measured by allowing [*Volvox*]{} to roll off a guide in the chamber and monitoring the axis inclination angle $\theta$ with the vertical. This angle obeys $\zeta_r\dot \theta=-(4\pi R^3\rho_c g\ell/3)\sin\theta$, where $\zeta_r=8\pi \eta
R^3$ is the rotational drag coefficient, leading to a relaxation time $\tau=6\eta/\rho_c g\ell$ [@gyrotaxis]. The rotational frequencies $\omega_o$ of free-swimming colonies were obtained from movies, using germ cells/daughter colonies as markers.
![\[fig3\] (Color online) Swimming properties of [*V. carteri*]{} as a function of radius. (a) upswimming speed, (b) rotational frequency, (c) sedimentation speed, (d) reorientation time, (e) density offset, and (f) components of average flagellar force density.](drescher_fig3){width="1.00\columnwidth"}
Figure \[fig3\] shows the four measured quantities ($U,V,\omega_o,\tau$) and the deduced density offset $\Delta \rho$. In the simplest model [@multicellular], locomotion derives from a uniform force per unit area ${\bf f}=
(f_{\theta},f_{\phi})$ exerted by flagella tangential to the colony surface. Balancing the net force $\int dS\,{\bf f}\cdot \hat{\bf z}
=\pi^2 f_{\theta}R^2$ against the Stokes drag and negative buoyancy yields $f_{\theta}=6\eta (U+V)/\pi R$. Balancing the flagellar torque $\int dS\, R ( \hat{\bf r} \times {\bf f} ) \cdot \hat{\bf z}=\pi^2 f_{\phi}R^3$ against viscous rotational torque $8\pi\eta R^3\omega_o$ yields $f_{\phi}=8\eta\omega_o/\pi$. These components are shown in Fig. \[fig3\]f, where we used a linear parameterization of the upswimming data (Fig. \[fig3\]a) to obtain an estimate of $U$ over the entire radius range. The typical force density $f_{\theta}$ corresponds to several pN per flagellar pair [@multicellular], while the relative smallness of $f_{\phi}$ is a consequence of the $\sim15^\circ$ tilt of the flagellar beating plane with respect to the colonial axis [@Hoops; @jfm].
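The two balances above are easily evaluated numerically; a rough sketch follows. The value of $U+V$ used below is only a representative number of the order read off Fig. \[fig3\] (not a fitted constant), while $\eta$, $R$ and $\omega_o$ follow the text, so the resulting force densities are order-of-magnitude estimates.

```python
import math

# Order-of-magnitude evaluation of the force-density balances above,
#   f_theta = 6 eta (U + V) / (pi R),    f_phi = 8 eta omega_0 / pi.
# eta, R and omega_0 follow the text; U + V is an assumed representative value
# of the order shown in Fig. 3, so the output is illustrative only.

eta = 1.0e-3          # Pa s, water
R = 150e-6            # m, colony radius
omega0 = 1.0          # rad/s, spinning frequency at this radius
U_plus_V = 450e-6     # m/s (assumed upswimming + settling speed)

f_theta = 6.0 * eta * U_plus_V / (math.pi * R)   # N/m^2
f_phi = 8.0 * eta * omega0 / math.pi             # N/m^2
print(f"f_theta ~ {f_theta * 1e3:.1f} mN/m^2, f_phi ~ {f_phi * 1e3:.1f} mN/m^2")
```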
Using the measured parameters it is possible to characterize both bound states. Fig. \[fig4\]c shows data from measured tracks of $60$ pairs of [*Volvox*]{}, as they fall together to form the waltzing bound state. The data collapse when the inter-colony separation $r$, normalized by $\bar R$, the mean of the two participating colonies’ radii, is plotted as a function of rescaled time from contact. The waltzing frequency $\Omega$ is linear in the mean spinning frequency of the pair $\bar \omega$. These two ingredients of the waltzing bound state, “infalling” and orbiting, can be understood, respectively, by far-field features of mutually-advected singularities and near-field effects given by lubrication theory, which will now be considered in turn.
[*Infalling:*]{} When swimming against an upper surface, the net thrust induced by the flagellar beating is not balanced by the viscous drag on the colony, as the colony is at rest, resulting in a net downwards force on the fluid. The fluid response to such a force may be modeled as a Stokeslet normal to and at a distance $h$ from a no-slip surface [@Blake], forcing fluid in along the surface (Fig. \[fig1\]c) and out below the colony, with a toroidal recirculation. Seen in cross section with PIV, the velocity field of a single colony has precisely this appearance (Fig. \[fig1\]b). This flow produces the attractive interaction between colonies; Squires has proposed a similar scenario in the context of electrophoretic levitation [@Squires].
The motion of a pointlike object at ${\bf x}_i$, with axis orientation ${\bf p}_i$ and net velocity ${\bf v}_i$ from self-propulsion and buoyancy, due to the fluid velocity ${\bf u}$ and vorticity ${\bf \nabla}\times {\bf u}$ generated by the other self-propelled objects, obeys $$\begin{aligned}
\dot {\bf x}_i &=& {\bf u}({\bf x}_i)+{\bf v}_i ~ , \label{eom1} \\
\dot {\bf p}_i &=& {1\over \tau}
{\bf p}_i\times\left(\hat{\bf z} \times{\bf p}_i\right)
+{1\over 2}\left({\bf \nabla}\times{\bf u}\right)\times {\bf p}_i~. \nonumber\end{aligned}$$ Assuming that for the infalling, ${\bf v}_i = \dot{{\bf p}}_i= 0$, and that ${\bf u}({\bf x}_i)$ are due to Stokeslets of strength $F= 6 \pi \eta R (U + V)$, Eq. \[eom1\] may be reduced, in rescaled coordinates $\tilde r = r / h$ and $\tilde t = t F / \eta {h}^2$ with $h= \bar R$, to [@Squires] $$\frac{\mbox{d} \tilde r}{\mbox{d} \tilde t} = -\frac{3}{\pi} \frac{ \tilde r}{({\tilde r}^2 + 4)^{5/2}}~.
\label{eom2}$$ Integration of (\[eom2\]) shows good parameter-free agreement with the experimental trajectories of nearby pairs (Fig. \[fig4\]c). Large perturbations to a waltzing pair by a third nearby colony can disrupt it by strongly tilting the colony axes, suggesting that bottom-heaviness confers stability. This is confirmed by a linear stability analysis [@jfm].
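A minimal numerical sketch of this comparison is given below: Eq. (\[eom2\]) is integrated with an explicit Euler step for a few rescaled initial separations, which is all that is needed to reproduce the parameter-free infalling curves (the initial conditions are illustrative).

```python
import numpy as np

# Explicit-Euler integration of the rescaled infalling law above,
#   dr/dt = -(3/pi) r / (r^2 + 4)^(5/2)   (r and t in rescaled units),
# for a few illustrative initial separations.

def infall(r0, dt=0.01, t_max=400.0):
    ts = np.arange(0.0, t_max, dt)
    rs = np.empty_like(ts)
    r = r0
    for i in range(ts.size):
        rs[i] = r
        r += dt * (-(3.0 / np.pi) * r / (r**2 + 4.0)**2.5)
    return ts, rs

for r0 in (1.0, 2.0, 3.0):
    ts, rs = infall(r0)
    print(f"r(0) = {r0}: r(t_max) = {rs[-1]:.2f}")
```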
![\[fig4\] (Color online) Waltzing dynamics. Geometry of (a) two interacting Stokeslets (side view) and (b) nearby spinning colonies. (c) Radial separation $r$, normalized by mean colony radius, as a function of rescaled time for $60$ events (black). Running average (green) compares well with predictions of the singularity model (red). Inset shows orbiting frequency $\Omega$ as a function of mean spinning frequency $\bar\omega$, and linear fit.](drescher_fig4){width="0.9\columnwidth"}
{width="1.90\columnwidth"}
[*Orbiting:*]{} As [*Volvox*]{} colonies move together under the influence of the wall-induced attractive flows (Fig. \[fig1\]b), orbiting becomes noticeable only when their separation $d$ is $\lesssim 30$ $\mu$m; their spinning frequencies also decrease very strongly with decreasing separation. This arises from viscous torques associated with the thin fluid layer between two colonies (Fig. \[fig4\]b). We assume that in the thin fluid layer, the spinning [*Volvox*]{} colonies can be modeled as rigid spheres, ignoring the details of the overlapping flagella layers. For two identical colonies, ignoring the anterior-posterior “downwash," and considering only the region where the fluid layer is thin, the plane perpendicular to the line connecting their centers is a locus of zero velocity, as with a no-slip wall. Appealing to known asymptotic results [@lube] we obtain the torque ${\cal T}=-(2/5)\ln(d/2R)\zeta_r\omega$ and a lateral force ${\cal F}=(1/10)\ln(d/2R)\zeta_r\omega/R$ on the sphere, where $\omega<\omega_o$ is the spinning frequency of a colony in the bound state. The rotational slowing of the self-propelled colony has an effect on the fluid that may be approximated by a rotlet of strength ${\cal T}$ at its center. From the flow field of a rotlet perpendicular to a horizontal no-slip wall [@Blake] and the lateral force ${\cal F}$, we then deduce the orbiting frequency $$\Omega\simeq 0.069\,\ln\left({d\over 2R}\right)\, \bar\omega~.
\label{torque}$$ Typical values of $d$ and $R$ give a slope of $\simeq 0.14-0.19$ for the $\Omega-\omega$ line, consistent with the experimental fit of $0.19 \pm 0.05$ (Fig. \[fig4\]c). The nonzero intercept is likely due to lubrication friction against the ceiling [@jfm].
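A quick numerical check of Eq. (\[torque\]) is given below for representative values of $d$ and $R$ (chosen for illustration); the magnitude of the predicted slope indeed falls in the quoted range $\simeq 0.14$–$0.19$.

```python
import math

# Quick check of the orbiting-frequency relation above: the magnitude of the
# predicted slope 0.069*ln(d/2R) for representative separations d and radii R
# (values chosen for illustration) lies in the quoted range 0.14-0.19.

for d_um, R_um in ((20.0, 150.0), (35.0, 150.0), (25.0, 200.0)):
    slope = 0.069 * math.log(d_um / (2.0 * R_um))
    print(f"d = {d_um:.0f} um, R = {R_um:.0f} um: |Omega/omega_bar| = {abs(slope):.2f}")
```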
A second and more complex type of bound state, the “minuet," is found when the upswimming just balances the settling (at $R \simeq 300$ $\mu$m, see Fig \[fig3\]a), and [*Volvox*]{} colonies hover at a fixed distance above the chamber bottom. In this mode (Fig. \[fig5\]) colonies stacked one above the other oscillate back and forth about a vertical axis. The mechanism of oscillation is the instability of the perfectly aligned state due to the vorticity from one colony rotating the other, whose swimming brings it back, with the restoring torques from bottom-heaviness conferring stability. Studies of the coupled dynamics of $\mathbf{x}_i$ and $\mathbf{p}_i$ show that when the orientational relaxation time $\tau$ is below a threshold the stacked arrangement is stable, while for $\tau$ larger there is a Hopf bifurcation to limit-cycle dynamics (Fig. \[fig5\]b). In these studies, lubrication effects were ignored, $\dot{\mathbf{x}}_i$ was restricted to be in one horizontal dimension only, and $\mathbf{x}_i$ were at fixed heights $h_i$ above the wall. The flow $\mathbf{u}$ was taken to be due to vertically oriented Stokeslets at $\mathbf{x}_i$, of magnitude $F$, equal to the gravitational force on the [*Volvox*]{}.
Hydrodynamic bound states, such as those described here, may have biological significance. When environmental conditions deteriorate, [*Volvox*]{} colonies enter a sexual phase of spore production to overwinter. Field studies show that bulk [*Volvox*]{} concentrations $n$ are $<1$ cm$^{-3}$ [@cornell_thesis], with male/female ratio of $\sim 1/10$, and $\sim 100$ sperm packets/male. Under these conditions, the mean encounter time for females and sperm packets is a substantial fraction of the life cycle. The kinetic theory mean free path $\lambda=1/ \sqrt{2} n \pi (R+ R_{sp})^2 \times 10/100$, with $R = 150$ $\mu$m for females, and $R_{sp} = 15$ $\mu$m for sperm packets, is $\lambda \sim 1$ m, implying a mean encounter time $>2$ h [@tobias]. This suggests that another mechanism for fertilization must be at work, with previous studies having excluded chemoattraction in this system [@coggin]. At naturally occurring concentrations, more than one [*Volvox*]{} may partake in the waltzing bound state, leading to long linear arrays (Fig. \[fig1\]d). In such clusters, formed at the air-water interface, the recirculating flows would decrease the encounter times to seconds or minutes, clearly increasing the chance of sperm packets finding their target. Studies are underway to examine this possibility.
We thank D. Vella, S. Alben and C.A. Solari for key observations, A.M. Nedelcu for algae, and support from the BBSRC, DOE, and the Schlumberger Chair Fund.
A. van Leeuwenhoek, Phil. Trans. Roy. Soc. [**22**]{}, 509 (1700).
C. Linneaus, [*Systema Naturae*]{}, 10th ed. (Holmiae, Impensis Laurentii Salvii, 1758), p. 820.
A. Weismann, [*Essays Upon Heredity and Kindred Biological Problems*]{} (Clarendon Press, Oxford, 1891).
D.L. Kirk, [*Volvox: Molecular-genetic origins of multicellularity and cellular differentiation*]{} (Cambridge University Press, Cambridge, 1998).
D.L. Kirk, Bioessays [**27**]{}, 299 (2005).
C.A. Solari, [*et al.*]{}, Proc. Natl. Acad. Sci. (USA) [**103**]{}, 1353 (2006); M.B. Short, [*et al.*]{}, Proc. Natl. Acad. Sci. (USA) [**103**]{}, 8315 (2006); C.A. Solari, J.O. Kessler, and R.E. Michod, Am. Nat. [**167**]{}, 537 (2006).
M.J. Lighthill, Commun. Pure Appl. Math. [**5**]{}, 109 (1952).
V. Magar, T. Goto, and T.J. Pedley, Q. J. Mech. Appl. Math. [**56**]{}, 65 (2003).
T. Ishikawa and M. Hota, J. Exp. Biol. [**209**]{}, 4452 (2006); T. Ishikawa, M.P. Simmonds, and T.J. Pedley, J. Fluid Mech. [**568**]{}, 119 (2006).
T. Ishikawa and T.J. Pedley, Phys. Rev. Lett. [**100**]{}, 088103 (2008); T. Ishikawa, J.T. Locsei, and T.J. Pedley, J. Fluid Mech. [**615**]{}, 401 (2008).
X.-L. Wu and A. Libchaber, Phys. Rev. Lett. [**84**]{}, 3017 (2000); A. Sokolov, [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 158102 (2007).
C. Dombrowski, [*et al.*]{}, Phys. Rev. Lett. [**93**]{}, 098103 (2004).
J.R. Blake, Proc. Camb. Phil. Soc. [**70**]{}, 303 (1971). J.R. Blake and A.T. Chwang, J. Eng. Math. [**8**]{}, 23 (1974)
H.J. Keh and J.L. Anderson, J. Fluid Mech. [**153**]{}, 417 (1985).
E.R. Dufresne, [*et al.*]{}, Phys. Rev. Lett. [**85**]{}, 3317 (2000).
A.P. Berke, [*et al.*]{}, Phys. Rev. Lett. [**101**]{}, 038102 (2008).
D.L. Kirk and M.M. Kirk, Dev. Biol. [**96**]{}, 493 (1983).
K. Drescher, K. Leptos, and R.E. Goldstein, Rev. Sci. Instrum. [**80**]{}, 014301 (2009).
H. Sakaguchi and K. Iwasa, Plant Cell Physiol. [**20**]{}, 909 (1979).
G.B. Witman, [*et al.*]{}, J. Cell Biol. [**54**]{}, 507 (1972).
T.J. Pedley and J.O. Kessler, Ann. Rev. Fluid Mech. [**24**]{}, 313 (1992).
H.J. Hoops, Protoplasma [**199**]{}, 99 (1997).
K. Drescher, et al., preprint (2009).
T. Squires, J. Fluid Mech. [**443**]{}, 403 (2001).
S. Kim and S.J. Karrila, [*Microhydrodynamics: Principles and Selected Applications*]{} (Dover, New York, 2005).
F. DeNoyelles, Jr., Ph.D. thesis, Cornell Univ. (1971).
T. Ishikawa and T.J. Pedley, J. Fluid Mech. [**588**]{}, 437 (2007).
S.J. Coggin, [*et al.*]{}, J. Phycol. [**15**]{}, 247 (1979).
---
abstract: 'The existence of right-handed neutrinos follows from the theoretical consistency of the recently suggested electroweak symmetry breaking model based on dynamical flavor gauge symmetry breaking. Only a finite number of versions of the model exists. They differ by the number and the flavor structure of the right-handed neutrinos. We choose one of them for inspection, the non-minimal version with right-handed neutrinos in the sextet flavor representation, and at some points we compare it with the minimal version. We show that a Majorana pairing of the sextet right-handed neutrinos is responsible for the flavor symmetry breaking, and that the seesaw pattern of the neutrino mass matrix naturally arises. The dynamically generated neutrino mass matrix spontaneously breaks the lepton number and the chiral sterility symmetry of the right-handed neutrino sector. As a result, a spectrum of majorons, neutrino composites, appears. We study the main characteristics of both the massive sterile neutrinos and the majorons, which show their relevance as dark matter candidates.'
author:
- Adam Smetana
bibliography:
- 'references.bib'
title: Sterile Particles from the Flavor Gauge Model of Masses
---
Introduction
============
With the recent significant improvement in the quality of cosmological, astrophysical and neutrino observations, the neutrino sector of the particle spectrum is becoming an increasingly powerful tool to discriminate among various models of electroweak symmetry breaking. The recently suggested model [@Hosek:2009ys; @Hosek:NagoyaProceeding; @Benes:2011gi] of dynamically generated masses turns out to be extremely difficult to confirm or disprove through direct computation of its mass spectrum. Nevertheless, the model provides clear and clean predictions about the structure of the right-handed neutrino sector, about its global symmetries, and about majorons, the composite Nambu–Goldstone scalars that are the consequences of the spontaneous breaking of these global symmetries.
Right-handed neutrinos are often proposed to exist for their power to explain straightforwardly the observed neutrino masses and, especially, to explain why the neutrinos are so light via the see-saw mechanism [@GellMann:1980vs; @Mohapatra:1979ia; @Yanagida:1979as]. Since they are Standard Model singlets, they do not produce color or electroweak gauge anomalies. Therefore, if they are not charged with respect to some new gauge force, their number is not constrained. In models where the family or flavor ${\ensuremath{\mathrm{SU}(3)}}_{\ensuremath{\mathrm{F}}}$ index is gauged [@Wilczek:1978xi; @Ong:1978tq; @Davidson:1979wt; @Chakrabarti:1979vy; @Yanagida:1979gs; @Yanagida:1980xy; @Berezhiani:1990wn; @Nagoshi:1990wk], the number of all fields that feel the new flavor gauge force has to be balanced so that the flavor gauge anomaly cancels. The overall contribution of the observed electroweakly charged fermions to the flavor gauge anomaly does not vanish. Therefore additional fields, the chromodynamically and electroweakly neutral right-handed neutrinos, are needed in a specific number [@Kribs:2003jn].
The flavor gauge model studied in this paper does not postulate any further new dynamics and leaves the whole responsibility for the electroweak symmetry breaking on the gauge flavor ${\ensuremath{\mathrm{SU}(3)}}_\mathrm{F}$ dynamics. In order to make sense, the gauge flavor dynamics must be strong, asymptotically free, self-breaking, and non-confining, i.e., non-vector-like[^1].
The flavor gauge symmetry breaking, the cause of the mass generation, is so far only assumed in the model. It is supposed to be achieved neither by the vacuum expectation value of a scalar field, nor by confining strong gauge dynamics; rather, it is the strong flavor gauge dynamics itself that self-breaks. As there is no similar effect observed in nature, as efforts to put chiral gauge theories on a lattice fail, and as the solutions of the equations of the model are painfully unattainable, it is not clear whether the self-breaking mechanism is possible at all. In general, it is not clear whether a chiral gauge dynamics alone could dynamically generate self-energies by which it breaks its own gauge symmetry. On the other hand, this scenario, pioneered in [@Eichten:1974et], has not been disqualified yet.
Fortunately, the gauge symmetries constrain the model so much that there is no room for fine-tuning and firm predictions arise. Theoretical consistency of the flavor gauge model predicts the number of right-handed neutrinos with only little ambiguity. The model admits right-handed neutrinos only in selected flavor representation settings, whose number is finite and not large. Throughout the paper we bring more or less heuristic arguments that there is only one right-handed neutrino setting that defines a phenomenologically viable and preferred version of the model.
The preferred version of the model is non-minimal in the sense that it contains the right-handed neutrinos in a flavor sextet representation. It provides appealing features: (i) It is chiral, i.e., the only mass scale comes from the dimensional transmutation of the running flavor coupling constant. (ii) It is essentially non-vector-like. (iii) The sextet right-handed neutrino Majorana pairing leads naturally to the see-saw pattern of the neutrino mass matrix. (iv) It provides light sterile neutrinos, in addition to the three electroweak neutrinos, which can be of particular interest with respect to dark matter [@Nieuwenhuizen:2008pf; @Kusenko:2009up; @Bezrukov:2009th]. (v) The dynamically generated neutrino mass matrix spontaneously breaks both the lepton number and the sterility symmetry $G_\mathrm{S}$, the accidental global symmetry of the right-handed neutrino sector. As a result, numerous Nambu–Goldstone neutrino composites, called majorons [@Chikashige:1980ui; @Schechter:1981cv], appear in the spectrum. The existence of majorons, especially the standard majoron, is a rigid prediction present in all versions of the model, while they do *not* present a phenomenological danger in the form of a long-range force [@Gelmini:1982zz].
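To illustrate the see-saw pattern invoked in items (iii) and (iv), the sketch below diagonalizes the textbook $2\times2$ mass matrix with a Dirac entry $m_D$ much smaller than a Majorana entry $M_R$ [@GellMann:1980vs; @Mohapatra:1979ia; @Yanagida:1979as], yielding one light eigenvalue $\sim m_D^2/M_R$ and one heavy eigenvalue $\sim M_R$. The numerical values are purely illustrative and are not outputs of the model; a moderate hierarchy is chosen so that double-precision diagonalization still resolves the light eigenvalue.

```python
import numpy as np

# Textbook 2x2 see-saw illustration of items (iii)-(iv): with a Dirac entry m_D
# much smaller than a Majorana entry M_R, the matrix [[0, m_D], [m_D, M_R]] has
# one light eigenvalue ~ m_D^2/M_R and one heavy eigenvalue ~ M_R. The numbers
# are purely illustrative (not model outputs); a moderate hierarchy is used so
# that double-precision diagonalization still resolves the light eigenvalue.

m_D, M_R = 1.0e2, 1.0e8                       # illustrative mass scales
M = np.array([[0.0, m_D], [m_D, M_R]])
light, heavy = np.sort(np.abs(np.linalg.eigvalsh(M)))
print(f"light ~ {light:.2e}  (m_D^2/M_R = {m_D**2 / M_R:.2e}),  heavy ~ {heavy:.2e}")
```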
The aim of the paper is to point out aspects of the sterile sector of the non-minimal version of the flavor gauge model, and to document, both theoretically and phenomenologically, why other versions are less preferable.
The paper is organized as follows. In section \[secII\] we investigate the right-handed neutrino structure of the flavor gauge model: After a brief recapitulation of the model we summarize all viable versions of the model. We argue why we choose the non-minimal version for the analysis in the rest of the paper. In section \[secIII\] we write the part of the model Lagrangian relevant for the neutrino sector. We analyze its global sterility symmetry $G_\mathrm{S}$. In section \[secIV\] we apply the model idea to generate neutrino masses, and discuss the flavor symmetry breaking. In the non-minimal version, in contrast to the minimal one, the privileged role of the right-handed neutrinos is recognized: it is their Majorana pairing which triggers the flavor symmetry breaking. In section \[secV\] we analyze the majoron spectrum arising from spontaneous lepton number and sterility symmetry breaking. In section \[secVI\] we conclude.
Right-handed neutrino fields {#secII}
============================
The quantum flavor gauge dynamics of the model requires the existence of the right-handed neutrinos and severely restricts their number. As the main result of this section we list the finite number of acceptable flavor settings that are anomaly free, asymptotically free, and do not produce a sub-critical perturbative infrared fixed point. These three properties are necessary for the viability of the model. Later, we argue, rather heuristically, that some settings are preferable to others.
First we briefly recapitulate the model.
Flavor gauge model
------------------
The basis of the model is that the chiral electroweak symmetry is broken dynamically by chirality changing fermion self-energies $\Sigma(p^2)$ generated by the strong flavor dynamics. The flavor structure of the self-energies $\Sigma(p^2)$ is crucial, for it should reflect the hierarchical pattern of fermion masses.
The model is defined by the flavor setting of the electroweakly charged Weyl fermions. There are two distinct cases, case I and case II, see Tab. \[FermionSetting\]. The ultimate discrimination between them can be made only after a successful solution of the mass equations is found, or after the full structure of the neutrino sector is revealed.
$q_\mathrm{L}$ $u_\mathrm{R}$ $d_\mathrm{R}$ $\ell_\mathrm{L}$ $e_\mathrm{R}$ $N$
--------- ---------------- ---------------- ------------------------- ------------------------- ------------------------- -----
case I $\mathbf{3}$ $\mathbf{3}$ $\overline{\mathbf{3}}$ $\overline{\mathbf{3}}$ $\mathbf{3}$ $3$
case II $\mathbf{3}$ $\mathbf{3}$ $\overline{\mathbf{3}}$ $\overline{\mathbf{3}}$ $\overline{\mathbf{3}}$ $5$
: Two possible flavor settings of electroweakly charged fermions. The number $N$ tells how many flavor triplets are necessary to cancel the flavor gauge anomaly. The notation is obvious: $q_\mathrm{L}=(u_\mathrm{L},d_\mathrm{L})^{\ensuremath{\mathrm{T}}}$, $\ell_\mathrm{L}=(\nu_\mathrm{L},e_\mathrm{L})^{\ensuremath{\mathrm{T}}}$, $u=(u,c,t)$, $d=(d,s,b)$, $\nu=(\nu_e,\nu_\mu,\nu_\tau)$, and $e=(e,\mu,\tau)$. []{data-label="FermionSetting"}
The purpose of this setting is to distinguish the self-energy matrices for fermions of various charges, as generally[^2] $\Sigma^{\mathbf{3}\times\mathbf{3}}\neq\Sigma^{\overline{\mathbf{3}}\times\mathbf{3}}\neq
\Sigma^{\mathbf{3}\times\overline{\mathbf{3}}}\neq\Sigma^{\overline{\mathbf{3}}\times\overline{\mathbf{3}}}$. (This idea was also pursued in the class of extended technicolor models [@Appelquist:2003hn].) In order to achieve the exclusivity of the $u$-type quarks, whose observed mass spectrum is significantly heavier, *we prefer case I*. Their self-energy is of the type $\Sigma^{\overline{\mathbf{3}}\times\mathbf{3}}$, as distinct from the $d$-type and $e$-type fermion self-energies, which are of the types $\Sigma^{\overline{\mathbf{3}}\times\overline{\mathbf{3}}}$ and $\Sigma^{\mathbf{3}\times\mathbf{3}}$. The neutrino self-energies are distinguished from the others by their Majorana components and by the possible higher flavor representation settings of the right-handed neutrinos. Due to this flavor setting the mass hierarchy among different charges can be achieved. The mass hierarchy among generations then has a completely different origin: it follows from the fact that the flavor symmetry is completely broken, providing distinct eigenvalues of the self-energies.
The characteristic fermion flavor setting also plays another important role. It makes the flavor gauge dynamics non-vector-like, which distinguishes it from QCD and makes it non-confining.
Like QCD, the flavor gauge dynamics is asymptotically free, i.e., in the perturbative regime the effective flavor gauge coupling constant $\bar h(q^2)$ runs according to $$\frac{\bar h^2(q^2)}{4\pi}=\frac{4\pi}{\big(11-\frac{1}{3}N^\mathrm{EW}-\frac{2}{3}\eta_{\nu_\mathrm{R}}\big)\ln\big(q^2/\Lambda^{2}_\mathrm{F}\big)}\,,$$ where $N^{\mathrm{EW}}=15$ is the number of electroweakly charged flavor triplets and $\eta_{\nu_\mathrm{R}}$ is the right-handed neutrino contribution to the coefficient of the flavor $\beta$-function. Well above the scale of the flavor gauge dynamics, $\Lambda_\mathrm{F}$, everything is weakly coupled and symmetric. As the energy scale decreases, the effective flavor gauge coupling increases until it surpasses its critical value at an energy scale around $\Lambda_\mathrm{F}$. Because of the non-vector-like nature of the dynamics, the flavor symmetry itself does not survive and is spontaneously broken [@Eichten:1974et; @Pagels:1979ai]. The flavor gauge bosons acquire masses of the order of the flavor symmetry breaking scale $\Lambda_\mathrm{F}$. (For details see [@Benes:2011gi].)
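For orientation, the following minimal sketch (ours, not part of the model analysis) evaluates the one-loop running quoted above; the values of $\Lambda_\mathrm{F}$ and $\eta_{\nu_\mathrm{R}}$ used below are illustrative assumptions only.

```python
# Illustrative sketch: one-loop running of alpha_F = hbar^2/(4*pi) as quoted above.
# Lambda_F and eta_nuR below are assumed, illustrative values, not model predictions.
import math

def alpha_F(q, Lambda_F, eta_nuR, N_EW=15):
    """alpha_F(q) = 4*pi / (b0 * ln(q^2/Lambda_F^2)), valid only for q well above Lambda_F."""
    b0 = 11.0 - N_EW / 3.0 - 2.0 * eta_nuR / 3.0
    return 4.0 * math.pi / (b0 * math.log(q ** 2 / Lambda_F ** 2))

Lambda_F = 1.0e12   # GeV, an assumed scale inside the axion window
eta_nuR = 4.5       # e.g. the (63333) setting: 4*(1/2) + 1*(5/2)
for q in (1.0e16, 1.0e14, 2.0e12):
    print(f"alpha_F({q:.0e} GeV) = {alpha_F(q, Lambda_F, eta_nuR):.2f}")
```

The growth of $\alpha_\mathrm{F}$ towards $\Lambda_\mathrm{F}$ is the qualitative behavior relevant for the self-breaking scenario.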
The flavor gauge bosons have to be enormously heavy in order to suppress processes with flavor changing neutral currents, giving a lower bound on their mass of more than $10^6\,\mathrm{GeV}$ [@Eichten:1979ah]. But in order to make the axion, which is naturally present in the model, invisible, it is better to assume that the quark self-energies are formed at a scale in the so-called axion window $10^{10}\,\mathrm{GeV}<\Lambda_\mathrm{F}<10^{12}\,\mathrm{GeV}$ [@Raffelt:2006rj]. We will see that in the non-minimal versions of the model the right-handed neutrino Majorana self-energy should be generated at an even much higher scale.
The ‘would-be’ Nambu–Goldstone bosons of the broken electroweak symmetry, which are composites of fermions, manifest themselves as the longitudinal components of the electroweak gauge bosons, producing their masses. The electroweak gauge boson masses are therefore directly, though non-trivially, linked to the masses of the electroweakly charged fermions. We therefore expect $M_{W,Z}$ to be proportional to $m_t$ rather than to some electroweak scale $\Lambda_\mathrm{EW}$, which in fact does not exist in this model.
Constraints on the number of right-handed neutrino fields
---------------------------------------------------------
### Anomaly freedom
The model would suffer from the flavor gauge anomaly unless the proper number of right-handed neutrino fields is added into the model. They are needed to compensate the non-zero flavor anomaly contribution of electroweakly charged fermions. In Tab. \[FermionSetting\] the number $N$ indicates that 3 (5) additional triplets of right-handed neutrinos make the flavor gauge dynamics anomaly free.
Adding triplets is not the only possibility. Specially balanced settings including higher representations (sextet, octet, decuplet, etc.) lead to anomaly-free models too. When constructing the non-minimal versions of the model, notice that a pair of a complex multiplet and its conjugate, as well as a real-representation multiplet, does not contribute to the anomaly.
### Asymptotic freedom
On the other hand, we should not add too many right-handed neutrinos in order not to destroy the asymptotic freedom of the flavor dynamics. Within the one-loop approximation of the $\beta$-function, the $\eta_{\nu_\mathrm{R}}$ coefficient is constrained as $$\begin{aligned}
\eta_{\nu_\mathrm{R}} & \equiv & \tfrac{1}{2}N^{\nu_\mathrm{R}}_3+\tfrac{5}{2}N^{\nu_\mathrm{R}}_6+ \nonumber\\
& & 3N^{\nu_\mathrm{R}}_8+\tfrac{15}{2}N^{\nu_\mathrm{R}}_{10}+\ldots<9 \,,\label{AF_inequality}\end{aligned}$$ where $N^{\nu_\mathrm{R}}_r$ is the number of right-handed neutrino multiplets of a given representation $\mathbf{r}$ or $\overline{\mathbf{r}}$. The inequality allows us to combine only the lower-dimensional multiplets $\mathbf{3}$, $\overline{\mathbf{3}}$, $\mathbf{6}$, $\overline{\mathbf{6}}$, and $\mathbf{8}$.
### Absence of the perturbative infrared fixed point
An even more stringent limit comes from the demand not to produce a too small, i.e., sub-critical, perturbative infrared fixed point, say $\alpha^{*}_{\mathrm{F,\,IR}}<0.5$, where $\alpha_\mathrm{F}\equiv\frac{h^2}{4\pi}$. Such a fixed point would leave the system in the chirally symmetric phase and prevent the whole symmetry breaking mechanism.
We choose the discriminating value of $\alpha^{*}_{\mathrm{F,\,IR}}$ to be $0.5$ quite arbitrarily, but motivated by the QCD running coupling constant, which is measured (still in the perturbative regime) at the scale $1.7\,\mathrm{GeV}\gtrsim\Lambda_{\mathrm{QCD}}$ to have the value $\alpha_\mathrm{s}(1.7\,\mathrm{GeV})\thickapprox0.35$ [@Nakamura:2010zzi].
A zero of the two-loop $\beta$-function gives an estimate of the perturbative infrared fixed point $$\alpha^{*}_{\mathrm{F,\,IR}}=-4\pi\frac{-18+N^{\nu_\mathrm{R}}_3+5N^{\nu_\mathrm{R}}_6+6N^{\nu_\mathrm{R}}_8}
{-21+19N^{\nu_\mathrm{R}}_3+125N^{\nu_\mathrm{R}}_6+144N^{\nu_\mathrm{R}}_8} \,.$$
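As a cross-check (ours, not part of the original analysis), the one-loop coefficient $\eta_{\nu_\mathrm{R}}$ and the two-loop fixed-point estimate above can be evaluated for candidate settings; anomaly freedom of each setting is taken over from Tab. \[NUsettings\] and is not re-derived here.

```python
# Our own numerical cross-check of the viability criteria quoted above: for a
# candidate right-handed neutrino setting given by (N3, N6, N8), where N3 counts
# triplet-like multiplets (each (3, 3bar) pair counts as 2), test asymptotic
# freedom (eta_nuR < 9) and the infrared fixed-point bound (alpha*_IR > 0.5).
# Anomaly freedom is assumed from the table and is not checked here.
import math

def eta_nuR(N3, N6, N8):
    return 0.5 * N3 + 2.5 * N6 + 3.0 * N8

def alpha_IR(N3, N6, N8):
    num = -18 + N3 + 5 * N6 + 6 * N8
    den = -21 + 19 * N3 + 125 * N6 + 144 * N8
    return -4.0 * math.pi * num / den

settings = {                      # (N3, N6, N8), labels as in the table
    "case I:  3x3":              (3, 0, 0),
    "case I:  3x3 + 3x(3,3bar)": (9, 0, 0),
    "case I:  1x6 + 4x3bar":     (4, 1, 0),
    "case I:  1x8 + 3x3":        (3, 0, 1),
    "case II: 5x3":              (5, 0, 0),
    "case II: 1x6 + 2x3bar":     (2, 1, 0),
}
for name, n in settings.items():
    viable = eta_nuR(*n) < 9 and alpha_IR(*n) > 0.5
    print(f"{name:27s} eta = {eta_nuR(*n):3.1f}  alpha*_IR = {alpha_IR(*n):4.2f}  viable: {viable}")
```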
### Chirality and non-vector-like nature
Putting it all together, we get only a few possible right-handed neutrino flavor settings defining still viable models. We list them in Tab. \[NUsettings\]. The models fall into various classes according to two criteria: their chirality and their approximate vector-like nature.
--------- ----------------------------------------------------------------------------------- --------- -------------------------------------------------
          $\nu_\mathrm{R}$ representation setting                                              chiral    approx. vector-like around $\Lambda_\mathrm{F}$
case I    $3\times\mathbf{3}$                                                                  **yes**   yes
          $3\times\mathbf{3},\ 1\times(\mathbf{3},\overline{\mathbf{3}})$                      no        yes
          $3\times\mathbf{3},\ 2\times(\mathbf{3},\overline{\mathbf{3}})$                      no        yes
          $3\times\mathbf{3},\ 3\times(\mathbf{3},\overline{\mathbf{3}})$                      no        yes
          $1\times\mathbf{6},\ 4\times\overline{\mathbf{3}}$                                   **yes**   **no**
          $1\times\mathbf{8},\ 3\times\mathbf{3}$                                              no        **no**
case II   $5\times\mathbf{3}$                                                                  **yes**   yes
          $5\times\mathbf{3},\ 1\times(\mathbf{3},\overline{\mathbf{3}})$                      no        yes
          $5\times\mathbf{3},\ 2\times(\mathbf{3},\overline{\mathbf{3}})$                      no        yes
          $1\times\mathbf{6},\ 2\times\overline{\mathbf{3}}$                                   **yes**   **no**
          $1\times\mathbf{6},\ 2\times\mathbf{3},\ 1\times(\mathbf{3},\overline{\mathbf{3}})$  no        **no**
--------- ----------------------------------------------------------------------------------- --------- -------------------------------------------------
: All viable versions of the flavor gauge model. []{data-label="NUsettings"}
The models containing right-handed neutrinos in both $\mathbf{3}$ and $\overline{\mathbf{3}}$, or in $\mathbf{8}$, allow a gauge invariant hard Majorana mass term. Therefore they are non-chiral, possessing a hard Majorana mass parameter. The origin of such a mass parameter is not explained by the model and would have to be assumed to follow from yet another dynamics operating at a higher energy scale. In this sense the chiral models appear more complete and more fundamental.
From the high-energy (around $\Lambda_\mathrm{F}$) perspective, the versions of the model that contain only $\mathbf{3}$ or $\overline{\mathbf{3}}$ are approximately vector-like, with a small non-vector-like perturbation given by the Standard Model gauge dynamics. In that case the dynamics resembles that of QCD and presumably prefers pairing in the $\mathbf{3}\times\overline{\mathbf{3}}$ channel, which does not ensure the flavor symmetry breaking. The flavor breaking fermion self-energies are then only believed to be energetically more favorable than the flavor preserving ones. On the other hand, the versions of the model that contain right-handed neutrinos in the higher representation $\mathbf{6}$ are essentially non-vector-like and prefer right-handed neutrino pairing in the Majorana channels $\mathbf{6}\times\overline{\mathbf{3}}$ or $\mathbf{6}\times\mathbf{6}$, which certainly break the flavor symmetry.
The *minimal* version with three right-handed neutrino triplets denoted by (333) was analyzed in the paper [@Benes:2011gi]. In this paper we will pursue the *non-minimal* version, the only case I version which is both non-vector-like and chiral. Its right-handed neutrino setting is $(\mathbf{6},\overline{\mathbf{3}},\overline{\mathbf{3}},\overline{\mathbf{3}},\overline{\mathbf{3}})$, and we will denote it by (63333).
Neutrino Lagrangian and its symmetries {#secIII}
======================================
In this section we define the non-minimal and preferred version of the flavor gauge model, with the triplet right-handed electron, four right-handed neutrino anti-triplets and one right-handed neutrino sextet, by writing the Lagrangian of its neutrino sector. Next we identify its sterility symmetry $G_\mathrm{S}$.
Lagrangian of neutrino sector
-----------------------------
The Lagrangian describing the neutrino flavor gauge dynamics is given by $$\begin{aligned}
{\cal L} & = & -\frac{1}{4}F_{\mu\nu}^aF^{\mu\nu a}+{\cal L}_\nu \,,\\
\label{L}
{\cal L}_\nu & = &
\overline{\nu_\mathrm{L}}\gamma^\mu(\im\partial_\mu-hC^{a}_\mu T^{a*})\nu_\mathrm{L} \\
& &
+\sum_\mathbf{r}\overline{\nu_{\mathrm{R}\mathbf{r}}}\gamma^\mu(\im\partial_\mu+hC^{a}_\mu
T^{a}_\mathbf{r})\nu_{\mathrm{R}\mathbf{r}} \,,\nonumber\end{aligned}$$ where the field strength tensor of flavor gauge bosons $C^{a}_\mu$ is given by $F_{\mu\nu}^a=\partial_\mu C^{a}_\nu-\partial_\nu
C^{a}_\mu+hf^{abc}C^{b}_\mu C^{{\ensuremath{c}}}_\nu$. $T_{\mathbf{r}}^{a}$ are ${\ensuremath{\mathrm{SU}(3)}}_\mathrm{F}$ generators for a representation $\mathbf{r}$ of the right-handed neutrino multiplet.[^3] The sum runs over one sextet with $T_{\mathbf{6}}^{a}$ and four anti-triplets with $T_{\overline{\mathbf{3}}}^{a}=-[T_{\mathbf{3}}^{a}]^*=-\frac{1}{2}\lambda^{*a}$.
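As a side remark, the anti-triplet assignment $T_{\overline{\mathbf{3}}}^{a}=-[T_{\mathbf{3}}^{a}]^*$ indeed furnishes a representation of the same algebra; a minimal numerical check of ours (not part of the model construction) is:

```python
# Our own minimal check that the anti-triplet generators T_3bar^a = -(T_3^a)^*
# satisfy the same SU(3) algebra, [T^a, T^b] = i f^{abc} T^c, as the triplet ones.
import numpy as np

lam = np.zeros((8, 3, 3), dtype=complex)          # Gell-Mann matrices
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

T3 = lam / 2
T3bar = -np.conj(T3)

# structure constants extracted from the triplet: f^{abc} = -2i Tr([T^a, T^b] T^c)
comm = np.einsum('aij,bjk->abik', T3, T3) - np.einsum('bij,ajk->abik', T3, T3)
f = (-2j * np.einsum('abik,cki->abc', comm, T3)).real

for T in (T3, T3bar):
    lhs = np.einsum('aij,bjk->abik', T, T) - np.einsum('bij,ajk->abik', T, T)
    rhs = 1j * np.einsum('abc,cik->abik', f, T)
    assert np.allclose(lhs, rhs)
print("both the 3 and the 3bar generators close into the same SU(3) algebra")
```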
This non-minimal version is chiral, i.e., it does not allow the Majorana mass term, $-\frac{1}{2}\overline{\nu^{{\ensuremath{c}}}_{\mathrm{R}}}M\nu_{\mathrm{R}}$, relevant for the non-chiral models.
Global symmetries of neutrino sector {#nuSymmetries}
------------------------------------
In addition to the global symmetries of the electroweakly charged fermion sector of the model $$\label{GlobalSymmetriesEW}
{\ensuremath{\mathrm{U}(1)}}_\mathrm{B}\times{\ensuremath{\mathrm{U}(1)}}_\mathrm{L_{EW}}\times{\ensuremath{\mathrm{U}(1)}}_{\mathrm{B}_5}\times{\ensuremath{\mathrm{U}(1)}}_{\mathrm{L}_5} \,,$$ (for a detailed analysis of the global symmetries see [@Benes:2011gi]), the sterile sector provides another global symmetry of the classical Lagrangian : a large sterility symmetry $G_\mathrm{S}$ with both Abelian and non-Abelian components. It is not imposed by hand and comes out accidentally.
The electroweak lepton number, $L_\mathrm{EW}$,[^4] is defined by its current $$\label{LSM}
\mathcal{J}^{\mu}_{L_\mathrm{EW}} = \overline{e_\mathrm{L}}\gamma^\mu e_\mathrm{L}+\overline{e_\mathrm{R}}\gamma^\mu e_\mathrm{R}+\overline{\nu_\mathrm{L}}\gamma^\mu \nu_\mathrm{L} \,.$$ Like an Abelian part of the sterility symmetry (see below), it is heavily broken by flavor instanton effects due to the flavor anomaly $$\partial_\mu \mathcal{J}^{\mu}_{\mathrm{L}_\mathrm{EW}}=-\frac{h^2}{32\pi^2}F_{\alpha\beta a}\tilde{F}^{\alpha\beta}_a \,.$$ (We neglect the electroweak anomaly.) Nevertheless, one can always find linear combinations of the electroweak lepton number and the sterility symmetry which are flavor anomaly free. One of them plays the role of the conserved lepton number $L$.
The setting of the right-handed neutrinos defines a manifestly chiral model. The chirality provides a quite large accidental sterility symmetry $G_\mathrm{S}$ of the right-handed neutrino sector. The sterility symmetries are $$G_\mathrm{S} =
{\ensuremath{\mathrm{U}(1)}}_{\mathrm{S}_6}\times{\ensuremath{\mathrm{U}(1)}}_{\mathrm{S}_3}\times{\ensuremath{\mathrm{SU}(4)}}_\mathrm{S}
\,.$$ The corresponding Noether currents are
$$\begin{aligned}
\mathcal{J}^{\mu}_{\mathrm{S}_6} & = & \overline{\nu_{\mathrm{R}\mathbf{6}}}\gamma^\mu\nu_{\mathrm{R}\mathbf{6}}={\mathop{\rm Tr}\nolimits}\overline{\xi_{\mathrm{R}}}\gamma^\mu\xi_{\mathrm{R}} \,; \\
\mathcal{J}^{\mu}_{\mathrm{S}_3} & = & \frac{1}{4}\overline{\zeta_{\mathrm{R}}^n}\gamma^\mu\zeta_{\mathrm{R}}^n \,;\\
\mathcal{J}^{\mu}_{\mathrm{S},i} & = &
\overline{\zeta_{\mathrm{R}}^n}\left[S_i\right]^{nm}\gamma^\mu\zeta_{\mathrm{R}}^m
\,,\end{aligned}$$
where the summation over the flavor index is suppressed. The indices $n,\,m=1,..,4$ run over four right-handed neutrino anti-triplets. Matrices $S_i$, $i=1,..,15$, are ${\ensuremath{\mathrm{SU}(4)}}_\mathrm{S}$ generators. We denote sextet right-handed neutrinos as $\xi_\mathrm{R}$ and anti-triplet right-handed neutrinos as $\zeta_\mathrm{R}$:
\[RHnu\] $$\begin{aligned}
\xi_{\mathrm{R}} & \equiv & T^{\iota}_{\mathrm{(sym.)}}\nu^{\iota}_{\mathrm{R}\mathbf{6}} \,,\\
\zeta^{n}_{\mathrm{R}} & \equiv &
\nu_{\mathrm{R}\overline{\mathbf{3}}}^n \,,\end{aligned}$$
where $\iota=1,..,6$ is the flavor index. The six symmetric $3\times3$ matrices $T^{\iota}_{\mathrm{(sym.)}}$ are $\openone,\,\frac{1}{2}\lambda_1,\,\frac{1}{2}\lambda_3,\,\frac{1}{2}\lambda_4,\,\frac{1}{2}\lambda_6,\,\frac{1}{2}\lambda_8$.
The trace-full Abelian symmetries do not survive the incorporation of quantum effects. They are broken by anomalies, which is expressed by the non-vanishing four-divergences of their currents.[^5]
\[anomaly\] $$\begin{aligned}
\partial_\mu \mathcal{J}^{\mu}_{\mathrm{S}_3-\mathrm{S}_6} & = & 0 \,;\\
\partial_\mu \mathcal{J}^{\mu}_{\mathrm{S},i} & = & 0 \,;\\
\partial_\mu
\mathcal{J}^{\mu}_{\mathrm{S}_3+\mathrm{S}_6} & = &
\frac{h^2}{16\pi^2}F_{\alpha\beta a}\tilde{F}^{\alpha\beta}_a \,.\end{aligned}$$
Both $\mathcal{J}^{\mu}_{\mathrm{S}_3}$ and $\mathcal{J}^{\mu}_{\mathrm{S}_6}$ are broken by the anomaly individually, but their combination $\mathcal{J}^{\mu}_{\mathrm{S}_3-\mathrm{S}_6}$ is exactly conserved, while the orthogonal combination $\mathcal{J}^{\mu}_{\mathrm{S}_3+\mathrm{S}_6}$ is not.
The conserved anomaly free lepton number $L$ and its current $\mathcal{J}^{\mu}_{L}$ are given as a linear combination
\[Lepton\_number\] $$\begin{aligned}
L & = & L_\mathrm{EW}+\left(a S_3+(1-a)S_6\right) \,, \\
\mathcal{J}^{\mu}_{L} & = & \mathcal{J}^{\mu}_{L_\mathrm{EW}}+\left(a \mathcal{J}^{\mu}_{S_3}+(1-a)\mathcal{J}^{\mu}_{S_6}\right) \,,\end{aligned}$$
where the real coefficient $a$ is arbitrary.
Massive neutrinos {#secIV}
=================
Within the model, the neutrinos, as well as all other fermions, acquire masses due to the strong flavor dynamics. In this section we describe the neutrino mass generation. We continue with a discussion of the flavor symmetry breaking and of the neutrino phenomenology.
Neutrino mass generation
------------------------
To treat the most general neutrino masses of both Majorana and Dirac types in compact form, we introduce the neutrino multispinor $n$ in the Nambu–Gorkov formalism $$\label{NGmultiplet}
n=\beginm{c} \nu_\mathrm{L}+(\nu_{\mathrm{L}})^{\ensuremath{c}}\\ \nu_{\mathrm{R}\overline{\mathbf{3}}}^1+(\nu_{\mathrm{R}\overline{\mathbf{3}}}^1)^{\ensuremath{c}}\\ \nu_{\mathrm{R}\overline{\mathbf{3}}}^2+(\nu_{\mathrm{R}\overline{\mathbf{3}}}^2)^{\ensuremath{c}}\\
\nu_{\mathrm{R}\overline{\mathbf{3}}}^3+(\nu_{\mathrm{R}\overline{\mathbf{3}}}^3)^{\ensuremath{c}}\\ \nu_{\mathrm{R}\overline{\mathbf{3}}}^4+(\nu_{\mathrm{R}\overline{\mathbf{3}}}^4)^{\ensuremath{c}}\\ \nu_{\mathrm{R}\mathbf{6}}+(\nu_{\mathrm{R}\mathbf{6}})^{\ensuremath{c}}\endm
\,,$$ where the flavor indices are suppressed.
The Lagrangian is then rewritten as $${\cal L}_\nu = \frac{1}{2}\bar{n}\gamma^\mu(\im\partial_\mu+hC^{a}_\mu t^{a})n \,,$$ where the flavor generators $t^a$ in multi-component space are given in .
The chiral invariance underlying the gauge dynamics forbids writing the neutrino mass term directly into the Lagrangian. The neutrino masses arise as poles of the full propagator[^6] $S(p)\equiv[\slashed{p}-\mathbf{\Sigma}(p^2)]^{-1}$, thus as solutions of the equation $$\label{detSigma}
\det\left(p^2-\mathbf{\Sigma}(p^2)\mathbf{\Sigma}^\dag(p^2)\right)=0\,.$$ The neutrino self-energy $\mathbf{\Sigma}(p^2)$ is given as $$\mathbf{\Sigma}(p^2) = \Sigma(p^2)P_\mathrm{L}+\Sigma^\dag(p^2)P_\mathrm{R} \,,$$ where the *symmetric* $21\times 21$ matrix $\Sigma(p^2)$ can be written block-wise as $$\Sigma=\beginm{cc}
\Sigma_\mathrm{L} & \Sigma_\mathrm{D} \\ \Sigma_{\mathrm{D}}^{\ensuremath{\mathrm{T}}}& \Sigma_\mathrm{R} \endm$$ or in more detail $$\label{NGselfenergy}
\Sigma=\beginm{c|c|c}
L^{\mathbf{3}\times\mathbf{3}} & \ \ \ \ D^{\overline{\mathbf{3}}\times\mathbf{3}}_n \ \ \ \ &
D^{\mathbf{6}\times\mathbf{3}} \\ \hline
D^{\mathbf{3}\times\overline{\mathbf{3}}}_m & R^{\overline{\mathbf{3}}\times\overline{\mathbf{3}}}_{mn} &
R^{\mathbf{6}\times\overline{\mathbf{3}}}_n \vphantom{\begin{array}{c}\ \\ \ \end{array}}\\ \hline
D^{\mathbf{3}\times\mathbf{6}} & R^{\overline{\mathbf{3}}\times\mathbf{6}}_m & R^{\mathbf{6}\times\mathbf{6}} \\ \endm \,.$$ By definition the self-energy matrix is symmetrical: the diagonal blocks, $L^{\mathbf{3}\times\mathbf{3}}$, $R^{\overline{\mathbf{3}}\times\overline{\mathbf{3}}}$ and $R^{\mathbf{6}\times\mathbf{6}}$ are symmetrical matrices, and $D^{\mathbf{3}\times\overline{\mathbf{3}}}_n=[D^{\overline{\mathbf{3}}\times\mathbf{3}}_n]^{\ensuremath{\mathrm{T}}}$, $D^{\mathbf{3}\times\mathbf{6}}=[D^{\mathbf{6}\times\mathbf{3}}]^{\ensuremath{\mathrm{T}}}$ and $R^{\overline{\mathbf{3}}\times\mathbf{6}}_n=[R^{\mathbf{6}\times\overline{\mathbf{3}}}_n]^{\ensuremath{\mathrm{T}}}$.
In the approximation of the truncated Schwinger–Dyson equation, with the wave function renormalization omitted, the self-energy is subject to the equation[^7] $$\label{SDE}
\Sigma(p^2) = \im \int_k \frac{\bar{h}^{2}_{ab}(k+p)}{(k+p)^2}t^a\Sigma(k^2)\big[k^2-\Sigma^\dag(k^2)\Sigma(k^2)\big]^{-1}t^b \,,$$ where for the flavor effective coupling we accept the heuristic Ansatz $$\begin{aligned}
\label{effective_charge}
\frac{\bar{h}^{2}_{ab}(q)}{q^2}
& \stackrel{\mathrm{IR}}{\simeq} & \frac{h^{2}_*}{q^2}\Pi_{ac}(q)\big[1+\Pi(q)\big]_{cb}^{-1} \nonumber\\
& \simeq & -h^{2}_*\frac{M_{ac}^{2}}{q^2}\big[q^2-M^{2}\big]_{cb}^{-1} \,,\end{aligned}$$ where $h_*$ is a non-perturbative infrared fixed point of the flavor gauge dynamics, $\Pi_{ab}(q)$ is the flavor gauge boson self-energy, and $M_{ab}^{2}$ is the flavor gauge boson mass matrix. (The rationale of the Ansatz is given in [@Benes:2011gi].)
Flavor symmetry breaking
------------------------
The flavor symmetry breaking and the fermion mass generation via the formation of the chirality changing self-energies are induced by the strong flavor dynamics. Therefore it is an essentially non-perturbative phenomenon, hard to control. This fact is condensed in the Schwinger–Dyson equation and in our inability to solve it.
At least some qualitative understanding can be gained if we treat the self-energies $\Sigma$, the flavor symmetry breaking order parameters, as condensates formed by the pairing of the flavored fermion chiral components.
In the regime of very high energies ($>\Lambda_{\ensuremath{\mathrm{F}}}$) the system is fully symmetric, the flavor gauge bosons are massless, and the strength of the attraction mediated by the massless flavor gauge bosons can be estimated by the Most Attractive Channel (MAC) method [@Raby:1979my].
The attractiveness of different pairing channels $$\mathbf{r}_1\times \mathbf{r}_2\rightarrow \mathbf{r}_\mathrm{pair}$$ is roughly measured by the quantity $$\label{AC}
\Delta C_2=C_2(\mathbf{r}_1)+C_2(\mathbf{r}_2)-C_2(\mathbf{r}_\mathrm{pair}) \,,$$ where $C_2(\mathbf{r})$ is the quadratic Casimir invariant for the representation $\mathbf{r}$, see Tab. \[table\] in the appendix \[appB\].
As the energy scale decreases, the attractiveness of the different pairing channels increases at different rates. Once the most attractive channel produces the flavor symmetry breaking at the energy scale $\Lambda_{\ensuremath{\mathrm{F}}}$, the MAC method loses its plausibility for the remaining pairing channels, since the flavor gauge bosons become massive.
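For reference, the quadratic Casimir invariants entering $\Delta C_2$ follow from the Dynkin labels $(p,q)$ of an $\mathrm{SU}(3)$ representation via $C_2(p,q)=\tfrac{1}{3}(p^2+q^2+pq+3p+3q)$; a short sketch of ours that evaluates $\Delta C_2$ for the channels discussed below is:

```python
# Our own helper for the MAC measure: SU(3) quadratic Casimirs from Dynkin labels,
# C2(p, q) = (p^2 + q^2 + p*q + 3p + 3q) / 3, and Delta C2 of a pairing channel.
from fractions import Fraction

DYNKIN = {"1": (0, 0), "3": (1, 0), "3b": (0, 1), "6": (2, 0), "6b": (0, 2), "8": (1, 1)}

def C2(rep):
    p, q = DYNKIN[rep]
    return Fraction(p * p + q * q + p * q + 3 * p + 3 * q, 3)

def delta_C2(r1, r2, r_pair):
    return C2(r1) + C2(r2) - C2(r_pair)

for r1, r2, rp in [("3", "3b", "1"), ("3", "3", "3b"), ("6", "3", "8"),
                   ("6", "3b", "3"), ("6", "6", "6b")]:
    print(f"{r1} x {r2} -> {rp}:  Delta C2 = {delta_C2(r1, r2, rp)}")
```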
### Drawbacks of the minimal version
The *minimal* version analyzed in [@Benes:2011gi], where all fields are in triplets or anti-triplets, is approximately vector-like above the huge scale $\Lambda_{\ensuremath{\mathrm{F}}}$, because there we can neglect QCD and electroweak effects. The most attractive channel is $\mathbf{3}\times\overline{\mathbf{3}}\rightarrow\mathbf{1}$ with $\Delta C_2=8/3$. This causes several shortcomings of the minimal version:
1\) The most attractive channel is a flavor singlet, i.e., it does not break the flavor symmetry. This suggests that the flavor gauge dynamics should rather confine below $\Lambda_{\ensuremath{\mathrm{F}}}$.
2\) Even if we assume that the QCD and electroweak dynamics are sufficiently relevant at $\Lambda_{\ensuremath{\mathrm{F}}}$ to cure the previous shortcoming by inducing the necessary non-vector-like nature, it still remains difficult to justify the tininess of the neutrino masses, simply because there is no natural reason for the see-saw pattern of the neutrino mass matrix.
3\) If it happens at all, the breaking of the electroweak and of the flavor symmetry happens at once. The separation of the flavor scale $\Lambda_\mathrm{F}$ and the electroweak symmetry breaking scale $\Lambda_{\mathrm{EW}}\sim|\Sigma_u|$ is not obvious. The necessary relation $\Lambda_\mathrm{F}\gg|\Sigma_u|$ has to be achieved by critical scaling [@Miransky:1996pd; @Braun:2010qs].
### Advantages of the non-minimal version
The *non-minimal* version (63333) naturally and straightforwardly leads to the complete flavor symmetry breaking and immediately cures the first two weak points. On top of that, it provides the separation of the flavor and the electroweak symmetry breaking. The requirement of critical scaling, however, remains unavoidable.
The attractive channels $(\mathrm{A.C.})$, governing different parts of the neutrino self-energy written in the Nambu–Gorkov formalism, are (compare with ) $$(\mathrm{A.C.})=\beginm{c|c|c}
\mathbf{3}\times\mathbf{3}\rightarrow\overline{\mathbf{3}} & \ \ \ \ \overline{\mathbf{3}}\times\mathbf{3}\rightarrow\mathbf{1} \ \ \ \ &
\mathbf{6}\times\mathbf{3}\rightarrow\mathbf{8} \\ \hline
\mathbf{3}\times\overline{\mathbf{3}}\rightarrow\mathbf{1} & \overline{\mathbf{3}}\times\overline{\mathbf{3}}\rightarrow\mathbf{3} &
\mathbf{6}\times\overline{\mathbf{3}}\rightarrow\mathbf{3} \vphantom{\begin{array}{c}\ \\ \ \end{array}}\\ \hline
\mathbf{3}\times\mathbf{6}\rightarrow\mathbf{8} & \overline{\mathbf{3}}\times\mathbf{6}\rightarrow\mathbf{3} & \mathbf{6}\times\mathbf{6}\rightarrow\overline{\mathbf{6}} \endm \,.$$ The measure of the attractiveness of the channels is $$(\Delta C_2)=\beginm{c|c|c}
4/3 & \ \ \ \ 8/3 \ \ \ \ & 5/3 \\ \hline
8/3 & 4/3 & 10/3 \vphantom{\begin{array}{c}\ \\ \ \end{array}}\\ \hline
5/3 & 10/3 & 10/3 \endm \,.$$
It naturally follows that, as the energy scale decreases, the right-handed neutrino pairing of Majorana type with $\Delta C_2=10/3$ happens first. This fact brings several nice features:
1\) It breaks the flavor symmetry, leaving no room for confinement.
2\) It suggests the see-saw pattern of the neutrino mass matrix.
3\) It does not break the electroweak symmetry, whose breaking is postponed to lower energies.
### Effective description of the flavor symmetry breaking
We can quantify the anti-sextet and the four triplet pairings by so-called sterility condensates
$$\begin{aligned}
\label{condensates}
{\langle0\vert}\frac{1}{4}\epsilon^{ACE}\epsilon^{BDF}\overline{(\xi_{\mathrm{R}}^{CD})^{\ensuremath{c}}}\xi_{\mathrm{R}}^{EF}{\vert0\rangle} & \propto
& \Lambda_{\mathrm{F}}^2{\langle0\vert}\Phi_{6}^{AB}{\vert0\rangle} \,,\hspace{0.5cm} \\
{\langle0\vert}\overline{(\xi_{\mathrm{R}}^{AB})^{\ensuremath{c}}}\zeta_{\mathrm{R}n}^{B}{\vert0\rangle} & \propto
& \Lambda_{\mathrm{F}}^2{\langle0\vert}\Phi_{3}^{n,A}{\vert0\rangle} \,.\hspace{0.5cm}\end{aligned}$$
where we have introduced auxiliary scalar fields $\Phi_6$ and $\Phi^{n}_3$ of mass dimension one. The index $n=1,..,4$ is the ${\ensuremath{\mathrm{SU}(4)}}_\mathrm{S}$ sterility index. The indices, $A,B,C,..=1,..,3$, are the indices of the fundamental flavor representation, and $\epsilon^{ABC}$ is the totally anti-symmetric tensor. The auxiliary fields transform as an anti-sextet and a triplet, respectively, under the flavor rotations ${\cal
U}={\ensuremath{\mathrm{e}}}^{\im\alpha^a T^a}$
$$\begin{aligned}
\Phi_{6}' & = & {\cal U}^{\dag\mathrm{T}}\Phi_{6}{\cal U}^{\dag} \,,\\
\Phi_{3}^{n}\vphantom{\Phi}' & = & {\cal U}\Phi_{3}^{n}\,.\end{aligned}$$
These flavor transformation properties follow from the flavor transformation properties of the elementary right-handed neutrino fields (for their definitions see )
$$\begin{aligned}
\xi_\mathrm{R}' & = & {\cal U}\xi_\mathrm{R}{\cal U}^{\mathrm{T}} \,,\\
{\zeta_\mathrm{R}^{n}}' & = & {\cal U}^*\zeta_\mathrm{R}^{n}\,,\end{aligned}$$
and the fact that the totally anti-symmetric tensor $\epsilon^{ABC}$ is flavor invariant $${\cal U}^{AD}{\cal U}^{BE}{\cal U}^{CF}\epsilon^{DEF}=\epsilon^{ABC}\,.$$ The quantum numbers $(\,L,\,S_3-S_6,\,S_3+S_6,\,{\ensuremath{\mathrm{SU}(4)}}_\mathrm{S}\,)$ of the scalar fields are
$$\begin{aligned}
\Phi_{6}\ :\hspace{0.2cm} & & \left(\,2-2a,\,-2,\,+2,\,\mathbf{1}\,\right) \,, \\
\Phi_{3}^{n}\ :\hspace{0.2cm} & & \left(\,1-\frac{3}{2}a,\,-\frac{3}{4},\,+\frac{5}{4},\,\mathbf{4}\,\right) \,.\end{aligned}$$
$\Phi_6$ and $\Phi_{3}^{n}$ comprise 18 complex scalar fields. They can be expressed in terms of twice as many real scalar fields, of which several are the Nambu–Goldstone fields of the broken flavor and sterility symmetries.
$$\begin{aligned}
\label{Phi6}
\Phi_6(x) & = & {\ensuremath{\mathrm{e}}}^{-2\im\alpha(x)}{\ensuremath{\mathrm{e}}}^{+2\im\beta(x)} \times \\
& & \hspace{-1cm} \times\ {\ensuremath{\mathrm{e}}}^{-\im\theta^a(x) T^{a\mathrm{T}}}\beginm{ccc}\Delta_1(x) & 0 & 0 \\
0 & \Delta_2(x) & 0 \\
0 & 0 & \Delta_3(x) \endm{\ensuremath{\mathrm{e}}}^{-\im\theta^a(x) T^{a}} \,, \nonumber\end{aligned}$$
$$\begin{aligned}
\label{Phi3}
\beginm{c}\Phi^{1}_3(x)^{\ensuremath{\mathrm{T}}}\\ \Phi^{2}_3(x)^{\ensuremath{\mathrm{T}}}\\ \Phi^{3}_3(x)^{\ensuremath{\mathrm{T}}}\\ \Phi^{4}_3(x)^{\ensuremath{\mathrm{T}}}\endm & = & {\ensuremath{\mathrm{e}}}^{-\frac{3}{4}\im\alpha(x)}{\ensuremath{\mathrm{e}}}^{+\frac{5}{4}\im\beta(x)}{\ensuremath{\mathrm{e}}}^{\im\gamma^i(x)s^{i}}\times \\
& & \hspace{-2cm}\times\ {\ensuremath{\mathrm{e}}}^{\im\theta^a(x) T^{a}}\beginm{ccccc}
\big(\hspace{-5pt} & 0 & 0 & 0 & \hspace{-5pt}\big) \\
\big(\hspace{-5pt} & 0 & 0 & \delta_2(x) & \hspace{-5pt}\big) \\
\big(\hspace{-5pt} & 0 & \delta_3(x) & \delta_1(x) & \hspace{-5pt}\big) \\
\big(\hspace{-5pt} & \delta_4(x) & \varepsilon_5(x) & \varepsilon_6(x) & \hspace{-5pt}\big)
\endm \,. \nonumber\end{aligned}$$
The 25 Nambu–Goldstone bosons are (for majorons see section \[secV\]):
- 8 of $\theta^a(x)$ corresponding to broken ${\ensuremath{\mathrm{SU}(3)}}_\mathrm{F}$:\
longitudinal components of flavor gauge bosons $C^{\mu}_L$
- 15 of $\gamma^i(x)$ corresponding to broken ${\ensuremath{\mathrm{SU}(4)}}_\mathrm{S}$:\
non-Abelian light majorons $\eta^i$
- 1 of $\alpha(x)$ corresponding to broken $S_3-S_6$:\
Abelian light majoron $\eta^0$
- 1 of $\beta(x)$ corresponding to broken $S_3+S_6$:\
super-heavy majoron $H$
Furthermore, there are 7 real and 2 complex scalar fields, and in general they can all develop 9 CP-preserving vacuum expectation values $\phi_A$ and $\varphi_k$ and 2 CP-violating phases $\varsigma_k$ (a simple bookkeeping check of these counts is sketched after the list below).
- 3 real components of sextet field\
$\Delta_A(x)\rightarrow\Delta_A(x)+\phi_A$, $A=1,2,3$
- 4 real anti-triplet fields\
$\delta_k(x)\rightarrow\delta_k(x)+\varphi_k$, $k=1,2,3,4$
- 2 complex anti-triplet fields\
$\varepsilon_k(x)\rightarrow\varepsilon_k(x)+\varphi_k{\ensuremath{\mathrm{e}}}^{\im\varsigma_k}$, $k=5,6$
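The degree-of-freedom bookkeeping can be summarized in a trivial check of ours:

```python
# Bookkeeping check (ours): the 18 complex components of Phi_6 and Phi_3^n
# decompose into 25 Nambu-Goldstone modes plus 11 remaining real modes.
complex_scalars = 6 + 4 * 3              # symmetric sextet + four triplets
real_dof = 2 * complex_scalars           # 36 real fields
nambu_goldstone = 8 + 15 + 1 + 1         # theta^a, gamma^i, alpha, beta
remaining = 3 + 4 + 2 * 2                # Delta_A, delta_k, complex epsilon_k
assert real_dof == nambu_goldstone + remaining
print(f"{real_dof} = {nambu_goldstone} (NG) + {remaining} (radial/massive)")
```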
A general form of the condensate ${\langle0\vert}\Phi_{6}{\vert0\rangle}$ is $$\label{cond6}
{\langle0\vert}\Phi_{6}{\vert0\rangle}=\beginm{ccc} \phi_1 & 0 & 0 \\
0 & \phi_2 & 0 \\
0 & 0 & \phi_3 \endm$$ as follows from . A general form of the condensates ${\langle0\vert}\Phi_{3}^{n}{\vert0\rangle}$ follows from . In general they are complex, have a nontrivial mutual angle, and also a non-trivial angle with respect to ${\langle0\vert}\Phi_{6}{\vert0\rangle}$.
Not only for the sake of concreteness we choose here a special form of the triplet condensates
\[cond3\] $$\begin{aligned}
{\langle0\vert}\Phi_{3}^{n=1,2,3}{\vert0\rangle} & = & \ \ 0 \,, \\
{\langle0\vert}\Phi_{3}^{n=4}{\vert0\rangle}\ \ & = & \beginm{ccc}\varphi_4 & \varphi_5 & \varphi_6 \endm \,.\end{aligned}$$
The main reason for this choice is that it leaves the ${\ensuremath{\mathrm{SU}(3)}}_\mathrm{S}$ sterility subgroup unbroken, which is necessary to protect the see-saw mechanism (see Sect. \[neutrino\_pheno\]). Without this special form, the general condensates would break the sterility symmetry completely.
The condensates break the flavor symmetry completely while the electroweak symmetry breaking is postponed to the lower energies where the pairing of the electroweakly charged fermions occurs. The sextet sterility condensation is very similar to the sextet color superconductivity [@Brauner:2003pj].
### Masses from the sterility condensation
The sterility condensation produces masses of all flavor gauge bosons. The masses can be estimated from the lowest order gauge invariant kinetic terms of the effective Lagrangian for the effective scalar fields $$\label{L_M_gauge}
{\cal L}_{\mathrm{M_{gauge}}}=(D^\mu\Phi_{3}^{n})^\dag D_\mu\Phi_{3}^{n}+{\mathop{\rm Tr}\nolimits}(D^\mu\Phi_{6})^\dag D_\mu\Phi_{6} \,,$$ where
$$\begin{aligned}
D_\mu\Phi_{6} & = & \partial_\mu\Phi_{6}+\im h C^{a}_\mu(T^{a\mathrm{T}}\Phi_{6}+\Phi_{6}T^{a}) \,,\\
D_\mu\Phi_{3}^{n} & = & (\partial_\mu-\im h C^{a}_\mu T^a)\Phi_{3}^{n} \,.\end{aligned}$$
In the effective Lagrangian ${\cal L}_{\mathrm{M_{gauge}}}$ we replace the effective scalar fields by their vacuum expectation values, $\Phi\rightarrow{\langle0\vert}\Phi{\vert0\rangle}$, and we get the mass matrix for the gauge bosons $$M_{\mathrm{gauge}}^2=M_{6}^2+M_{3}^2 \,,$$ where the mass matrices $M_{6}^2$ and $M_{3}^2$, with the specific form of the condensates and , are given in the Appendix .
The sterility condensation also produces Majorana masses for the right-handed neutrinos. These masses can be estimated from the Yukawa terms of the effective Lagrangian for the effective scalar fields $$\begin{aligned}
\label{L_M_sterile}
{\cal L}_{\mathrm{M_{R}}} & = &
\hphantom{+\ }y_{36}\,\overline{(\zeta_{\mathrm{R}}^{n})^{\ensuremath{c}}}\xi_{\mathrm{R}}\Phi_{3}^{n*} \\
& & +\ y_{6}\,\epsilon^{ACE}\epsilon^{BDF}\overline{(\xi_{\mathrm{R}}^{AB})^{\ensuremath{c}}}\xi_{\mathrm{R}}^{CD}(\Phi_{6}^{EF})^\dag \nonumber \\
& & +\ {\ensuremath{\mathrm{h.c.}}}\,,\nonumber\end{aligned}$$ where the effective Yukawa coupling constants $$y_{36}=y_{6}=\frac{4}{9}h^2$$ are obtained from the effective four-neutrino interaction $\sim(\bar{n}\gamma_\mu t^an)(\bar{n}\gamma^\mu t^an)$.
In the effective Lagrangian ${\cal L}_{\mathrm{M_{R}}}$ we can replace the scalars by the condensates and get the Majorana mass matrix for the sterile neutrinos[^8] $$\label{Msterile}
M_\mathrm{R}=\frac{4}{9} h^2\beginm{c|c}
0 & \langle\Phi^{n}_3\rangle \vphantom{\begin{array}{c}\ \\ \ \end{array}}\\ \hline
\ \langle\Phi^{m}_3\rangle^\mathrm{T} & \langle\Phi_6\rangle \\ \endm \,.$$ The mass matrix $M_\mathrm{R}$ has generically at least six zero eigenvalues. With the special choice of sterility condensates and , there are nine zero eigenvalues.
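The zero-eigenvalue counting follows purely from the block structure of $M_\mathrm{R}$ and can be illustrated numerically; the following sketch of ours uses random real entries of the appropriate block shapes (the detailed flavor contractions and complex phases are irrelevant for the rank argument).

```python
# Numerical illustration (ours) of the zero-eigenvalue counting in M_R. Only the
# block shape matters: a vanishing 12x12 block for the four anti-triplets, a 3x6
# block for each non-vanishing <Phi_3^n>, and a symmetric 6x6 block for <Phi_6>.
import numpy as np
rng = np.random.default_rng(1)

def sterile_mass_matrix(nonzero_triplet_vevs):
    B = np.zeros((12, 6))
    for n in nonzero_triplet_vevs:                 # 3x6 block per <Phi_3^n> != 0
        B[3 * n:3 * (n + 1), :] = rng.normal(size=(3, 6))
    C = rng.normal(size=(6, 6))
    C = C + C.T                                    # symmetric <Phi_6> block
    return np.block([[np.zeros((12, 12)), B], [B.T, C]])

for label, vevs in [("general condensates          ", [0, 1, 2, 3]),
                    ("special choice, <Phi_3^4> only", [3])]:
    M = sterile_mass_matrix(vevs)
    zero_modes = int(np.sum(np.abs(np.linalg.eigvalsh(M)) < 1e-10))
    print(f"{label}: {zero_modes} zero eigenvalues")   # -> 6 and 9, respectively
```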
Neutrino phenomenology {#neutrino_pheno}
----------------------
The neutrino masses are given as roots of the equation where the momentum dependence of $\Sigma(p^2)$ makes the calculation difficult. For qualitative purposes it is sufficient to replace the self-energy by a *constant* $N\times N$ symmetric mass matrix $M$, where $N=21$ in our case.
The mass spectrum can be found as eigenvalues $m_1,..,m_N$ of $M$,[^9]
$$\begin{aligned}
\hspace{-1.5cm} & & \overline{\nu^{\ensuremath{c}}} M \nu = \overline{\nu^{\ensuremath{c}}} U^{\ensuremath{\mathrm{T}}}\beginm{ccc} \vspace{-0.2cm} m_1 & & \\ \hspace{-0.3cm}\vspace{-0.2cm} & \ddots & \\ \hspace{-0.3cm} & & m_N\endm U\nu \\
\hspace{-0.1cm} & & \ \ \stackrel{\mathrm{e.g.}}{=}
\left(\overline{(\nu_{1}')^{\ensuremath{c}}} \ldots\ \overline{(\nu_{N}')^{\ensuremath{c}}}\right)
\hspace{-0.1cm}\beginm{ccccc} \vspace{-0.2cm} 0 & & & & \\ \hspace{-0.4cm}\vspace{-0.2cm} & m & & & \\ \hspace{-0.5cm}\vspace{-0.2cm} & & m & & \\ \hspace{-0.7cm}\vspace{-0.2cm} & & & \ddots & \\ \hspace{-0.7cm} & & & & m' \endm\hspace{-0.2cm} \beginm{c}\nu_{1}'\\ \vdots\\\nu_{N}'\endm \,, \nonumber\\
& & \label{egMassMatrix}\end{aligned}$$
where $U$ is the diagonalizing unitary transformation matrix.
Three types of mass eigenstates can arise: (i) In the most general scenario, when no selection rule is at work, all eigenvalues come out nonzero and different. In our case, they correspond to 21 massive Majorana neutrinos. (ii) The zero eigenvalues correspond to massless Weyl neutrinos. (iii) It can happen that pairs of degenerate eigenvalues appear (see e.g. in ). Each pair then corresponds to a massive Dirac neutrino with its chiral components given as, e.g.,
$$\begin{aligned}
\nu_\mathrm{L} & = & \nu_{2}'+\im\nu_{3}' \,,\\
\nu_\mathrm{R} & = & (\nu_{2}')^{\ensuremath{c}}-\im(\nu_{3}')^{\ensuremath{c}}\,.\end{aligned}$$
The presence of the pair degeneracy signals an ${\ensuremath{\mathrm{O}(2)}}\sim{\ensuremath{\mathrm{U}(1)}}$ symmetry of the mass matrix, a subgroup of either the flavor or the sterility symmetry, or of both. The symmetry corresponds to a quantum number carried by the Dirac neutrino.
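A toy numerical illustration of ours of the three types of eigenstates (with arbitrary, non-physical numbers):

```python
# Toy illustration (ours, with arbitrary numbers) of the three types of mass
# eigenstates of a constant symmetric Majorana mass matrix: the physical masses
# are the absolute values of its eigenvalues (its singular values).
import numpy as np

m, m_prime = 1.0, 7.0
M = np.zeros((4, 4))
M[1, 2] = M[2, 1] = m     # purely off-diagonal (Dirac-type) entry
M[3, 3] = m_prime         # diagonal (Majorana-type) entry
# row/column 0 left empty -> a massless Weyl neutrino

print(np.sort(np.abs(np.linalg.eigvalsh(M))))
# [0. 1. 1. 7.]: one massless Weyl state, one degenerate pair combining into a
# Dirac neutrino, and one unpaired massive Majorana state.
```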
In the minimal version (333), the left-handed Majorana, the right-handed Majorana, and the Dirac entries of the neutrino mass matrix arise from pairing channels of the same flavor structure, $\mathbf{3}\times\mathbf{3}$ or $\overline{\mathbf{3}}\times\overline{\mathbf{3}}$, thus with the same strength of attraction. This does not indicate the see-saw pattern of the neutrino mass matrix at all. The explanation of the tiny masses of the electroweak neutrinos has to rely fully on huge amplification effects [@Benes:2011gi]. It is then natural to expect that the remaining nine sterile neutrinos turn out to have small masses as well. At the same time, the dynamics should also be responsible for a sufficient suppression of the right-handed admixtures within the electroweak neutrinos, for which it is difficult to find a natural reason.
On the contrary, the non-minimal version (63333) naturally leads to a dynamically generated see-saw pattern of the neutrino mass matrix. The see-saw pattern is useful not only for explaining the tiny masses of the electroweak neutrinos, but also for suppressing their oscillations into the sterile neutrinos.
The key point is the presence of the sextet right-handed neutrinos. Their privileged role makes the situation clearer, separates the study of the right-handed neutrinos from that of the other fermions, and allows us to switch to the approximate description in terms of condensates.
Within the (63333) version we have so far demonstrated only the massiveness of the right-handed neutrinos. But of course we expect that ultimately, at lower energy scales, all elements of the full neutrino mass matrix given by become non-vanishing and all mass eigenstates become massive. The sole fact that there is an odd number of neutrino degrees of freedom indicates that at least one neutrino must be of Majorana type.
By the construction above we want to show that the right-handed Majorana elements dominate the whole neutrino mass matrix. This is exactly what is needed for the see-saw mechanism to work. Due to the strength of the sextet neutrino condensation, the see-saw pattern occurs dynamically and naturally.
Nevertheless, the system with the general sterility condensation scheme is not directly able to accommodate all three light electroweak neutrinos. The see-saw mechanism is triggered by switching on the Dirac elements of the neutrino mass matrix. It seems natural to switch on the next-to-most attractive channel, $D^{\overline{\mathbf{3}}\times\mathbf{3}}_n$ . These elements arise dynamically from the effective four-fermion interactions $$\left[h^2/M^{2}_\mathrm{gauge}\right]_{ab}(\bar\nu_\mathrm{L}\gamma_\mu T^{a*}\nu_\mathrm{L})(\bar\zeta_{\mathrm{R}}^{n}\gamma^\mu T^{b*}\zeta_{\mathrm{R}}^{n})$$ after an appropriate Fierz rearrangement. Because it has a flavor structure analogous to that of the $u$-quark mass matrix, $\overline{\mathbf{3}}\times\mathbf{3}$, we assume it to be of the same order of magnitude, $D^{\overline{\mathbf{3}}\times\mathbf{3}}_n\sim m_t$. This should provide the see-saw masses of the electroweak neutrinos, $m_{\nu_\mathrm{EW}}\sim m_{t}^2/\Lambda_{\ensuremath{\mathrm{F}}}$, and of the sterile neutrinos, $m_{\nu_\mathrm{sterile}}\sim \Lambda_{\ensuremath{\mathrm{F}}}$. To reproduce the light neutrino mass $m_{\nu_\mathrm{EW}}\sim10^{-1}\,\mathrm{eV}$ while $m_{t}\sim10^{2}\,\mathrm{GeV}$, the flavor scale should be at least $\Lambda_{\ensuremath{\mathrm{F}}}\sim10^{14}\,\mathrm{GeV}$.
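The scales quoted above follow from the standard see-saw diagonalization; a short numerical sketch of ours, with the assumed entries $D\sim m_t$ and a right-handed Majorana entry $\sim\Lambda_\mathrm{F}$:

```python
# Order-of-magnitude see-saw sketch (ours): a 2x2 block with an assumed Dirac
# entry ~ m_t and an assumed right-handed Majorana entry ~ Lambda_F.
import numpy as np

m_t, Lambda_F = 1.0e2, 1.0e14                     # GeV (assumed inputs)
M = np.array([[0.0, m_t],
              [m_t, Lambda_F]])
light, heavy = np.sort(np.abs(np.linalg.eigvalsh(M)))
print(f"light ~ {light * 1e9:.2f} eV, heavy ~ {heavy:.1e} GeV")
# light ~ m_t^2 / Lambda_F ~ 0.1 eV and heavy ~ Lambda_F, as quoted above.
```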
In our system, with general $D^{\overline{\mathbf{3}}\times\mathbf{3}}_n\sim m_t$ and $R^{\overline{\mathbf{3}}\times\mathbf{6}}_1\sim\Lambda_{\ensuremath{\mathrm{F}}}$, the see-saw mechanism does not work. This is caused by the presence of six zero eigenvalues of the general $M_\mathrm{R}$ . Instead of combining with the super-heavy modes and producing the see-saw spectrum, the three left-handed neutrino modes combine with three of the right-handed neutrino zero-modes to produce three pairs of quasi-degenerate modes of mass $\sim m_t$. These six modes in fact appear as three far too heavy pseudo-Dirac electroweak neutrinos, in flagrant contradiction with observations.
There is a way out of this trouble if a subgroup of the sterility symmetry, which is able to prohibit the mixing of the left-handed neutrinos with the right-handed zero-modes, remains unbroken. The see-saw mechanism then acts only on the left-handed and the super-heavy right-handed neutrino modes. All the way down to lower energy scales, the residual symmetry remains unbroken and keeps the right-handed zero modes massless and decoupled from the massive neutrinos. We can say that we need the residual symmetry to protect the see-saw mechanism.
The necessary residual sterility symmetry can be achieved by imposing the special form of the triplet condensation, which is equivalent to the dynamically natural relations ${\langle0\vert}\Phi_{3}^{n}{\vert0\rangle}={\langle0\vert}\Phi_{3}^{n'}{\vert0\rangle}$ (or generally $R^{\overline{\mathbf{3}}\times\mathbf{6}}_n=R^{\overline{\mathbf{3}}\times\mathbf{6}}_{n'}$) and $D^{\overline{\mathbf{3}}\times\mathbf{3}}_n=D^{\overline{\mathbf{3}}\times\mathbf{3}}_{n'}$, see . The see-saw-mechanism-protecting residual sterility symmetry is then ${\ensuremath{\mathrm{SU}(3)}}_\mathrm{S}\subset{\ensuremath{\mathrm{SU}(4)}}_\mathrm{S}$, generated by $S_i$ with $i=1,..,8$.
Majorons {#secV}
========
From our experience with QCD we expect that the strong flavor dynamics leads to a rich bound-state spectrum. Its complete description and classification is nevertheless an infeasible task. The only bound states we can be sure exist are the Nambu–Goldstone bosons of the spontaneously broken global symmetries. In this section we concentrate on the classification of the bound states that arise from the formation of a neutrino self-energy of general form, spontaneously breaking the lepton number and the sterility symmetry $G_\mathrm{S}$, see section \[nuSymmetries\].
We leave the line of the previous section where, guided by phenomenological preferences, we were led to a particular pattern of the neutrino self-energy. We now assume a general pattern providing maximal chiral symmetry breaking occurring at a single energy scale $\Lambda_{\ensuremath{\mathrm{F}}}$. The sterility symmetry together with the lepton number is broken completely along with the flavor symmetry, and a rich spectrum of Nambu–Goldstone bosons, so-called majorons, appears. For free, the model provides excellent scalar candidates for dark matter [@Berezinsky:1993fm]. Later in this section we describe and classify them.
The majoron corresponding to the anomalous symmetry is not a true Nambu–Goldstone boson. Rather, it acquires a huge mass ($\sim\Lambda_\mathrm{F}$), analogously to the $\eta'$ in QCD. This majoron is called the *heavy sterile* majoron $H$.
The majorons corresponding to the anomaly free part of the sterility symmetry are called the *light sterile* majorons $\eta$.
The spontaneously broken anomaly free lepton number $L$ gives rise to the *standard* majoron $J$ [@Chikashige:1980ui; @Schechter:1981cv]. It is always present in all versions of the model.
The majorons $\eta$ and $J$ are true Nambu–Goldstone bosons. They nevertheless do not present a phenomenological problem in the form of a new long-range force. The argument is simple: the Nambu–Goldstone bosons mediate a spin-dependent tensor force among fermions which falls off with the cube of the distance [@Gelmini:1982zz].
What is more, the majorons can eventually acquire a mass of the order of, say, a few keV from gravitational effects [@Coleman:1988tj; @Giddings:1988cx; @Akhmedov:1992hi]. That would of course drastically shorten the range of the force. In the formulation of the issue we omit these effects and treat the majorons $\eta$ and $J$ as massless. In the phenomenological analysis, nevertheless, we keep this possibility open and collectively call them *light* majorons.
Light majorons
--------------
All versions of the model predict the existence of the standard majoron $J$ from the spontaneously broken lepton number $L$ . The standard majoron couples to the lepton number current $${\langle0\vert}\mathcal{J}^{\mu}_{\mathrm{L}}(0){\vert J(q)\rangle}=\im q^\mu F_J$$ with the strength of the standard majoron decay constant $F_J$. The anomaly free lepton number is spontaneously broken by the formation of all Dirac, $\Sigma_\mathrm{D}$, left-handed Majorana, $\Sigma_\mathrm{L}$, and right-handed Majorana, $\Sigma_\mathrm{R}$, components of the neutrino self-energy. Therefore the standard majoron is created from the vacuum by a linear combination of interpolating operators $$J\sim\left(\overline{\nu_\mathrm{R}}\nu_\mathrm{L}+\overline{\nu_\mathrm{L}}\nu_\mathrm{R}\right),\ \ \overline{\nu^{{\ensuremath{c}}}_\mathrm{L}}\nu_\mathrm{L},\ \ \overline{\nu^{{\ensuremath{c}}}_\mathrm{R}}\nu_\mathrm{R} \,.$$
In the version (63333) sixteen light sterile majorons $\eta_0$ and $\eta_i$, $i=1,..,15$, as the Nambu–Goldstone bosons couple to the sterility currents
$$\begin{aligned}
{\langle0\vert}\mathcal{J}^{\mu}_{\mathrm{S}_3-\mathrm{S}_6}(0){\vert\eta_0(q)\rangle} & = & \im q^\mu F_\mathrm{\eta} \,;\\
{\langle0\vert}\mathcal{J}^{\mu}_{\mathrm{S},i}(0){\vert\eta_j(q)\rangle} & = & \im q^\mu F_\mathrm{\eta}\delta_{ij}\end{aligned}$$
with the strength of the sterile majoron decay constant $F_\mathrm{\eta}$. The sterility symmetry is spontaneously broken by the formation of both the Dirac, $\Sigma_\mathrm{D}$, and the right-handed Majorana, $\Sigma_\mathrm{R}$, components of the neutrino self-energy. Therefore the sterile majorons are created from the vacuum by a linear combination of interpolating operators
\[Inter\_fields\] $$\begin{aligned}
\eta_i & \sim & \left(\overline{\nu_\mathrm{R}}S_i\nu_\mathrm{L}+\overline{\nu_\mathrm{L}}S_i\nu_\mathrm{R}\right),\ \ \overline{\nu^{{\ensuremath{c}}}_\mathrm{R}}S_i\nu_\mathrm{R} \,;\\
\eta_0 & \sim & \left(\overline{\nu_\mathrm{R}}\nu_\mathrm{L}+\overline{\nu_\mathrm{L}}\nu_\mathrm{R}\right),\ \ \overline{\nu^{{\ensuremath{c}}}_\mathrm{R}}\nu_\mathrm{R} \,.\label{Inter_fields_C}\end{aligned}$$
In the following we will use the common notation for the generators relevant for the light majorons, $X_\alpha$, $\alpha=0,1,..,16$. It denotes the vector of the lepton number and sterility generators
\[Z\_generators\] $$\begin{aligned}
S_3-S_6: & & X_0=s_0 \,; \\
{\ensuremath{\mathrm{SU}(4)}}_\mathrm{S}: & & X_i=s_i \,,\ \mathrm{where}\ i=1,..,15\,; \\
L: & & X_{16}=l\,.\end{aligned}$$
where the lepton number generator in the Nambu–Gorkov formalism $l$ is introduced in , and the sterility symmetry generators in the Nambu–Gorkov formalism $s_\alpha$ are introduced in .
Not counting their mutual interactions, the majorons interact mainly with neutrinos. Such interactions can be described generally by effective Yukawa majoron-neutrino term
\[EffYukawa\] $$\begin{aligned}
{\cal L}_{\mathrm{eff,}\eta\nu\nu} & = & y_{\eta\nu\nu}\,(\bar n X_0 n)\,\eta_0 + y_{\eta\nu\nu}'\,(\bar n X_i n)\,\eta_i \,,\\
{\cal L}_{\mathrm{eff,}J\nu\nu} & = & y_{J\nu\nu}\,(\bar n X_{16} n)\,J \,.\end{aligned}$$
At that level the effective Yukawa coupling constants $y_{\eta\nu\nu}$, $y_{\eta\nu\nu}'$ and $y_{J\nu\nu}$ are mere parameters. Nevertheless the majoron-neutrino coupling strength can be related to more fundamental quantities of the model, like to the flavor symmetry breaking neutrino self-energy $\mathbf{\Sigma}(p)$.
For that purpose we follow the standard procedure [@Jackiw:1973tr; @Cornwall:1973ts] and insist on the fulfilment of the Ward–Takahashi identity for the proper vertex $\Gamma^{\mu}_\alpha(p+q,p)$ corresponding to the Green functions $G^{\mu}_0(x,y,z)\equiv{\langle0\vert}T\mathcal{J}^{\mu}_{\mathrm{S}_3-\mathrm{S}_6}(x)n(y)\bar{n}(z){\vert0\rangle}$, $G^{\mu}_i(x,y,z)\equiv{\langle0\vert}T\mathcal{J}^{\mu}_{\mathrm{S},i}(x)n(y)\bar{n}(z){\vert0\rangle}$ and $G^{\mu}_{16}(x,y,z)\equiv{\langle0\vert}T\mathcal{J}^{\mu}_{L}(x)n(y)\bar{n}(z){\vert0\rangle}$ of the lepton number and sterility currents coupled to the neutrino fields. The Ward–Takahashi identity reads $$\label{WTI}
q_\mu\Gamma^{\mu}_\alpha(p+q,p)=S^{-1}(p+q)X_\alpha-\gamma_0 X_\alpha\gamma_0
S^{-1}(p) \,.$$ When the dynamics develops a symmetry breaking neutrino self-energy, the right-hand side of the Ward–Takahashi identity does not vanish for $q\rightarrow0$. The identity determines uniquely only the leading ${\cal O}(q^{-1})$ part $\Gamma^{\mu}_{\alpha,\mathrm{lead.}}$ of the proper vertex $\Gamma^{\mu}_\alpha$ $$\label{Gamma_pole}
\Gamma^{\mu}_\alpha(p+q,p)=\Gamma^{\mu}_{\alpha,\mathrm{lead.}}(p+q,p)+{\cal O}(q^{0}) \,,$$ where $$\Gamma^{\mu}_{\alpha,\mathrm{lead.}}(p+q,p)=-\frac{q^\mu}{q^2}\left(\mathbf{\Sigma}(p)X_\alpha-\gamma_0 X_\alpha\gamma_0\mathbf{\Sigma}(p)\right) \,.$$ Physically, we interpret the pole in terms of the exchange of the massless Nambu–Goldstone boson, i.e., majoron. Following this interpretation we pick up the Nambu–Goldstone part of the proper vertex $$\label{Gamma_NG}
\Gamma^{\mu}_{\alpha}(p+q,p)=\Gamma^{\mu}_{\alpha,{\mathrm{NG}}}(p+q,p)+\ldots$$ and approximate it by a ‘one-loop’ expression $$\begin{aligned}
\Gamma^{\mu}_{\alpha,{\mathrm{NG}}}(p+q,p) & \approx & \begin{array}{c}\includegraphics[width=0.3\textwidth]{Pole_vertex.eps}\end{array} \nonumber\\
& = & -\frac{q^\mu}{q^2}I_{\alpha\beta}(q^2)P_\beta(p+q,p) \,,\label{Gamma_NG:loop}\end{aligned}$$ where the massless majoron propagator $\tfrac{\delta_{\beta\gamma}}{q^2}$ connects the neutrino loop $q^\mu I_{\alpha\beta}(q^2)$ and the majoron-neutrino vertex $P_\gamma(p+q,p)$ both regular for $q=0$.
Comparing the two expressions and for $q\rightarrow0$ we get the expression for the majoron-neutrino vertex for $q=0$ $$\label{P_J_nu}
P_\alpha(p,p)=I^{-1}_{\alpha\beta}(0)\big[\mathbf{\Sigma}(p)X_\beta-\gamma_0 X_\beta\gamma_0\mathbf{\Sigma}(p)\big] \,.$$ The loop function $I_{\alpha\beta}(0)$ from the diagram is $$\label{I_def}
I_{\alpha\beta}(0) =
\im\lim_{q\rightarrow0}\int_k{\mathop{\rm Tr}\nolimits}\left(\frac{\slashed{q}}{q^2}X_\alpha S(k-q)P_\beta(k-q,k)S(k)\right) \,.$$ For the sake of simplicity we write an explicit formula for the loop function $I_{\alpha\beta}(0)$ only within the approximation of constant self-energies, $\mathbf{\Sigma}(p)\rightarrow \mathbf{M}\equiv\mathbf{\Sigma}(0)$. The approximate $I_{\alpha\beta}(0)$, when plugged into , gives us an upper estimate of the magnitude of the majoron-neutrino vertex $P_\alpha(p,p)$.
Due to the limit in , we need to expand the $q$-dependent quantities up to ${\cal O}(q^1)$ order. The expansion of the neutrino propagator is $$\tilde{S}(k-q)=\tilde{S}(k)+\tilde{S}(k)\slashed q \tilde{S}(k)+{\cal O}(q^2)$$ and we *assume* that the expansion for the majoron-neutrino vertex is $$\tilde{P}_\beta(k-q,k)=\tilde{P}_\beta(k,k)+{\cal O}(q^2) \,,$$ where the tilde means the constant self-energy approximation of the quantity.
The loop function $I_{\alpha\beta}(0)$ necessary for the majoron-neutrino vertex is given by relation $$\begin{aligned}
\left[I(0)I^{{\ensuremath{\mathrm{T}}}}(0)\right]_{\alpha\beta} & = &
\im\lim_{q\rightarrow0}\int_k{\mathop{\rm Tr}\nolimits}\left(\frac{\slashed{q}}{q^2}X_\alpha \tilde{S}(k)\slashed q \tilde{S}(k)\times\right. \\
& & \hspace{1cm} \left.\vphantom{\frac{\slashed{q}}{q^2}}
\times\big[\mathbf{M} X_\beta-\gamma_0
X_\beta\gamma_0 \mathbf{M}\big]\tilde{S}(k)\right) \,. \nonumber\end{aligned}$$
Heavy sterile majoron
---------------------
The sterile majoron $H$ couples to the anomalous current of the Abelian sterility symmetry ${\ensuremath{\mathrm{U}(1)}}_{\mathrm{S}_3+\mathrm{S}_6}$ $${\langle0\vert}\mathcal{J}_{\mathrm{S}_3+\mathrm{S}_6}^\mu(0){\vert H(q)\rangle}=\im q^\mu F_\mathrm{S} \,.$$ The heavy sterile majoron is created from the vacuum by the neutrino interpolating fields , and additionally by a flavor gauge boson component which is a topologically nontrivial field configuration $$H\sim F_{\mu\nu a}\tilde{F}^{\mu\nu}_a \,.$$
The majoron $H$ acquires a huge mass due to the strong flavor axial anomaly . The value of its mass can be estimated according to the $\eta'$ mass analysis in QCD [@Witten:1979vv; @Veneziano:1979ec; @Shore:1998dm] as $$m_{H}^2\sim\frac{\chi(0)}{F^{2}_\mathrm{S}}\sim\Lambda_{\mathrm{F}}^2 \,,$$ where the flavor topological susceptibility is estimated as $\chi(0)\sim\Lambda^{4}_\mathrm{F}$, and the decay constant as $F_{\mathrm{S}}\sim\Lambda_\mathrm{F}$.
The anomalous coupling of $H$ to the flavor gauge bosons is given as $$\label{axionEWinteraction}
{\cal L}_{HCC}= \frac{h^2}{32\pi^2}\frac{H}{F_{\mathrm{S}}}F_{\mu\nu
a}\tilde{F}^{\mu\nu}_a \,.$$
The effective coupling of the heavy majoron with the neutrinos is $$\label{L_Hnn}
{\cal L}_{Hnn}\sim \frac{m_n}{F_\mathrm{S}}H\bar{n}\gamma_5n \,.$$ Because the interaction is proportional to the neutrino mass, the only significant interaction is with the heavy sterile neutrinos; the heavy sterile majoron is therefore fairly invisible.
Majoron phenomenology
---------------------
### Light majorons:
Suppose that the light majorons have masses of the order of a few keV due to the gravitational effects. Then they are suitable candidates for warm dark matter [@Lattanzi:2008zz]. An important characteristic is their decay width. They can decay only to the $N_\mathrm{light}$ sufficiently light neutrinos with mass $m_\mathrm{light}<M_J/2$, i.e., at least to the three electroweak neutrinos.
In the following we omit the differences among the light majorons and estimate the decay width only for the standard majoron $J$ of mass $M_J$. The matrix element ${\cal M}$ for the decay is simply governed by the effective majoron-neutrino interaction : $$\im{\cal M}=\begin{array}{c}\includegraphics[width=0.2\textwidth]{LM_nlnl.eps}\end{array} \,.$$ The decay width $\Gamma$ is given by $$\label{DW}
\Gamma(J\rightarrow nn)=\frac{N_\mathrm{light}}{8\pi}y_{J\nu\nu}^2 M_J\left(1-\frac{4m_\mathrm{light}^2}{M_{J}^2}\right)^{3/2} \,,$$ where the effective Yukawa coupling is given by the majoron-neutrino vertex $P(p,p)$ $$y_{J\nu\nu}\sim P(p,p)\approx\frac{m_\mathrm{light}}{\sum_\nu m_\nu} \,.$$ We have estimated the neutrino loop $I(0)$ by a sum of all neutrino mass eigenvalues $m_\nu$, $I(0)\approx \sum_\nu m_\nu$.
Now, in the version (333), we could expect that the masses of all neutrino eigenstates turn out to be of the same order, thus of the order of the electroweak neutrino mass. That is why we can estimate the effective Yukawa coupling as $y_{J\nu\nu}^{(333)}\approx 10^{-1}$ and neglect the ratio $\tfrac{m_\mathrm{light}}{M_{J}}$. For the decay width we get an estimate $$\Gamma^{(333)}(J\rightarrow nn)\approx10^{-3}M_J \,.$$
On the other hand, in the version (63333), where the see-saw mechanism is at work, we could expect that only the $N_\mathrm{light}=3$ electroweak neutrinos are very light, $m_\mathrm{light}\ll M_J$. Then the decay width becomes $$\Gamma^{(63333)}(J\rightarrow nn)\sim\frac{N_\mathrm{light}}{N_\mathrm{heavy}^2}\frac{m_\mathrm{light}^2}{\Lambda_{{\ensuremath{\mathrm{F}}}}^2}\frac{M_J}{8\pi}\approx10^{-50}M_J \,,$$ where the sum of the neutrino mass eigenvalues is dominated by the $N_\mathrm{heavy}$ super-heavy neutrinos of mass $\sim\Lambda_{\ensuremath{\mathrm{F}}}\approx10^{14}\,\mathrm{GeV}$.
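A rough numerical comparison of ours, plugging the order-of-magnitude inputs quoted above into the width formula and neglecting the phase-space factor:

```python
# Rough comparison (ours) of Gamma(J -> nn)/M_J in the two versions, using the
# decay-width formula above with order-of-magnitude inputs and neglecting the
# phase-space factor (m_light << M_J).
import math

def width_over_MJ(N_light, y):
    return N_light / (8.0 * math.pi) * y ** 2

# (333): all neutrino masses of the same order  ->  y_Jnn ~ 1e-1
print("(333):   Gamma/M_J ~ %.0e" % width_over_MJ(3, 1.0e-1))

# (63333): y_Jnn ~ m_light / (N_heavy * Lambda_F); the inputs below are assumptions
m_light, Lambda_F, N_heavy = 1.0e-10, 1.0e14, 9      # GeV, GeV, count of heavy states
print("(63333): Gamma/M_J ~ %.0e" % width_over_MJ(3, m_light / (N_heavy * Lambda_F)))
# of order 1e-51 .. 1e-50 depending on the assumed inputs; in either case the
# (63333) light majorons are practically stable, in line with the estimate above.
```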
This makes a qualitative difference between the two versions of the model. While in the version (333) the light majorons are short-lived, in the version (63333) the light majorons are practically stable. From this point of view the version (333) more closely resembles the triplet majoron models [@Gelmini:1980re], while the version (63333) resembles the singlet majoron models [@Chikashige:1980ui].
### Heavy sterile majoron:
The coupling of the heavy sterile majoron to the flavor anomaly has important consequences for the $CP$ properties of the flavor model. There is no reason why there should not be a $\theta$-term of the flavor gauge dynamics in the effective Lagrangian. The $\theta$-parameter, shifted by the phase that makes the neutrino masses real, is eliminated by the Peccei–Quinn mechanism [@Peccei:1977hh; @Peccei:1977ur], where the heavy sterile majoron plays the role of a composite axion.
The heavy sterile majoron could decay to the heavy flavor gluons through the direct interaction induced by the flavor anomaly. The decay is kinematically allowed if the heavy sterile majoron is heavier than twice the mass of the $N_C\leq8$ lighter flavor gauge bosons, $M_H>2M_C$. For the sake of a rough estimate of the decay width, we ignore the non-Abelian character of the flavor gauge bosons as well as the differences in their masses, using a common mass $M_C$. The matrix element ${\cal M}$ is given by the vertex $$\begin{aligned}
\im{\cal M} & = & \begin{array}{c}\includegraphics[width=0.2\textwidth]{HM_CC.eps}\end{array} \nonumber\\
& = & \frac{h^2}{32\pi^2F_\mathrm{S}}\varepsilon_{\mu}^*(p)\varepsilon_{\nu}^*(k)\epsilon^{\mu\nu\alpha\beta}p_\alpha k_\beta \,,\end{aligned}$$ where $\varepsilon_{\mu}$ is the polarization vector of the flavor gauge boson $C_\mu$. The decay width then follows $$\label{DW_HM_CC}
\Gamma(H\rightarrow CC)=\frac{N_C}{64\pi}\frac{h^4}{(32\pi^2)^2}\frac{M_{H}^3}{F_\mathrm{S}^2}\left(1-\frac{4M_{C}^2}{M_{H}^2}\right)^{3/2} \,.$$ With the order-of-magnitude assumptions $N_C h^4\approx100$ and $M_H\sim M_C\sim F_\mathrm{S}\sim \Lambda_{\ensuremath{\mathrm{F}}}$ we get the estimate $$\Gamma(H\rightarrow CC)\approx 10^{-4}\Lambda_{\ensuremath{\mathrm{F}}}$$ corresponding to an extremely fast decay.
In the version (63333), if it is kinematically allowed, the decay to the $N_\mathrm{heavy}$ super-heavy neutrinos of mass $m_{\mathrm{heavy}}\sim\Lambda_{\ensuremath{\mathrm{F}}}$ (which are absent in the version (333)) gives a contribution to the heavy sterile majoron decay width of comparable size. From the effective vertex the decay width follows as $$\label{DW_H_nn}
\Gamma(H\rightarrow nn)=\frac{N_\mathrm{heavy}}{8\pi}\frac{m_{\mathrm{heavy}}^2}{F_{\mathrm{S}}^2} M_H\left(1-\frac{4m_\mathrm{heavy}^2}{M_{H}^2}\right)^{3/2}$$ and a rough estimate is $$\Gamma(H\rightarrow nn)\approx 10^{-1}\Lambda_{\ensuremath{\mathrm{F}}}\,.$$
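The two widths above can be evaluated in the same way. The sketch below implements Eqs. (\[DW\_HM\_CC\]) and (\[DW\_H\_nn\]); the input masses and couplings are illustrative placeholders, and the order-of-magnitude estimates quoted in the text additionally treat the phase-space factors as of order one, since all scales are taken comparable to $\Lambda_\mathrm{F}$.

```python
import math

def gamma_H_CC(N_C, h, M_H, M_C, F_S):
    # Eq. (DW_HM_CC): width of H -> CC into N_C flavor gauge bosons.
    if M_H <= 2.0 * M_C:
        return 0.0                     # kinematically closed
    ps = (1.0 - 4.0 * M_C**2 / M_H**2) ** 1.5
    return N_C / (64.0 * math.pi) * h**4 / (32.0 * math.pi**2)**2 \
        * M_H**3 / F_S**2 * ps

def gamma_H_nn(N_heavy, m_heavy, M_H, F_S):
    # Eq. (DW_H_nn): width of H -> nn into N_heavy super-heavy neutrinos.
    if M_H <= 2.0 * m_heavy:
        return 0.0
    ps = (1.0 - 4.0 * m_heavy**2 / M_H**2) ** 1.5
    return N_heavy / (8.0 * math.pi) * m_heavy**2 / F_S**2 * M_H * ps

Lambda_F = 1.0                          # express widths in units of Lambda_F
print(gamma_H_CC(8, 1.8, 3.0 * Lambda_F, 1.0 * Lambda_F, Lambda_F))   # hypothetical inputs
print(gamma_H_nn(6, 0.3 * Lambda_F, 1.0 * Lambda_F, Lambda_F))        # hypothetical inputs
```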
Conclusions {#secVI}
===========
The intention of this paper was to investigate the sterile particle sector of the flavor gauge model [@Hosek:2009ys; @Hosek:NagoyaProceeding; @Benes:2011gi] of the electroweak symmetry breaking.
The model possesses the nice feature that its consistency requires the existence of a definite number of right-handed neutrino fields. Together with the left-handed neutrinos they form Majorana mass eigenstates, which is triggered by the dynamical formation of their self-energies.
The neutrino self-energies break the global symmetries, giving rise to majorons. We cannot compute any fermion mass spectrum, but if neutrinos acquire Majorana masses dynamically, majorons must exist. The existence of the standard majoron as a consequence of the spontaneous lepton number breaking is an inevitable outcome of all versions of the model. The existence of the set of light sterile majorons and of one super-heavy sterile majoron depends on whether the sterility symmetry is broken; their particular spectrum depends on how it is broken and differs from version to version. Majorons are composites of both left- and right-handed neutrinos. If the standard and the light sterile majorons acquire mass from gravitational effects, they are excellent candidates for warm dark matter [@Berezinsky:1993fm; @Lattanzi:2008zz]. The heavy sterile majoron is too unstable to account for any amount of the matter of the Universe.
In any case, the heavy sterile majoron provides the Peccei–Quinn mechanism that eliminates the flavor $\theta$-term from the effective Lagrangian. The heavy sterile majoron is the composite invisible flavor axion. In this paper we have ignored the anomalies of the charged-fermion Abelian currents in order to concentrate on the neutrino sector as much as possible. The complete model is analyzed in [@Benes:2011gi]. In the simplified case, thanks to the presence of the flavor axion, i.e., the heavy sterile majoron, the model does not suffer from $CP$ violation originating from the non-trivial topology of the flavor gauge dynamics. The only remaining sources of $CP$ violation are the non-removable phases of the neutrino mixing matrices in the flavor gauge interactions. These $CP$-violating phases originate from the non-trivial neutrino self-energies $\Sigma$.
The sterile particle spectrum is the first main result of the paper. It is qualitatively common to all chiral versions of the model. The analysis is, nevertheless, based on the crucial assumption that the flavor symmetry scenario actually happens.
As the second main result of the paper, we have presented several heuristic but meaningful arguments for favoring the non-minimal chiral version of the model with sextet right-handed neutrinos. To our surprise, this version has also turned out to be the most suitable phenomenologically.
First of all, a better understanding of the flavor symmetry self-breaking has been reached within the (63333) version. In analogy with color superconductivity, at the extremely high energy scale $\Lambda_\mathrm{F}$ the right-handed neutrino fields form Majorana condensates that break the flavor but not the electroweak symmetry. The right-handed neutrinos and flavor gauge bosons acquire extremely high masses. The presence of the sextet right-handed neutrino fields is crucial: their pairing channels are the most attractive, so their condensation happens at the highest energy scale, which is *naturally* separated from the energy scale where the rest of the fermion self-energies are formed and the electroweak symmetry is broken. This lower scale is, nevertheless, connected to the scale where the QCD axion is formed, i.e., $10^9-10^{12}\,\mathrm{GeV}$ [@Benes:2011gi]. So there is no advantage over the version (333) in explaining the smallness of the charged fermion masses; we still need a huge amplification of scales. It turns out that the right-handed neutrino Majorana self-energies must be generated at a scale $\Lambda_\mathrm{F}$ much higher than $10^{12}\,\mathrm{GeV}$.
Second, the strongly coupled right-handed neutrino condensate formed at this very high energy scale is phenomenologically welcome: (i) it can generate baryogenesis and drive the inflation of the Universe [@Barenboim:2008ds; @Barenboim:2010nm]; (ii) it naturally provides the see-saw pattern of the neutrino mass matrix and suggests $\Lambda_{\ensuremath{\mathrm{F}}}\gtrsim10^{14}\,\mathrm{GeV}$.
Third, in order to obtain the three light electroweak neutrinos in the particle spectrum, we were forced to assume a special (but not unnatural) form of the neutrino mass matrix. This form preserves the residual sterility symmetry that protects the see-saw mechanism. It also protects the smallness of the masses of a number of decoupled sterile neutrinos that could account for fermionic warm dark matter [@Nieuwenhuizen:2008pf; @Kusenko:2009up; @Bezrukov:2009th].
The author gratefully acknowledges discussions with J. Hošek, G. Barenboim, J. Novotný, and P. Beneš. The work was supported by the Grant LA08015 of the Ministry of Education of the Czech Republic.
Nambu–Gorkov formalism {#appA}
======================
The neutrino fields are accommodated within the Nambu–Gorkov multispinor $n$ defined in . Its canonical anti-commutation relations then follow
$$\begin{aligned}
\{n_{\alpha i}(x),n_{\beta j}^\dag(y)\}_{\mathrm{E.T.}} & = & \delta_{ij}\delta_{\alpha\beta}\delta^{(3)}(\mathbf{x}-\mathbf{y})\,, \\
\{n_{\alpha i}(x),n_{\beta j}(y)\}_{\mathrm{E.T.}} & = & \delta_{ij}[C\gamma_0]_{\alpha\beta}\delta^{(3)}(\mathbf{x}-\mathbf{y})\,,\hspace{0.5cm}\end{aligned}$$
where $C$ is the charge conjugation matrix.
The flavor transformations $$\label{n_sterile_transformation}
n'=\mathrm{e}^{\im\theta^a t^a}n \,,$$ are generated by the flavor generators $$\label{NGflavorGenerator}
t^a=\beginm{ccc} T_{\mathbf{3}}^{a}P_\mathrm{R}-[T_{\mathbf{3}}^{a}]^\mathrm{T}P_\mathrm{L} & & \\
& \hspace{-2cm}\openone_{4\times4}\left(T_{\mathbf{3}}^{a}P_\mathrm{L}-[T_{\mathbf{3}}^{a}]^\mathrm{T}P_\mathrm{R}\right) & \\
& & \hspace{-2cm}T_{\mathbf{6}}^{a}P_\mathrm{R}-[T_{\mathbf{6}}^{a}]^\mathrm{T}P_\mathrm{L} \endm\,.\hspace{0.5cm} \nonumber$$ The lepton number transformation of the neutrino fields is $$\label{n_sterile_transformation}
n'=\mathrm{e}^{\im\theta l}n \,,$$ where $l$ denotes the corresponding generator $$\label{NGleptonNumberGenerator}
l=\beginm{ccc} -L_\mathrm{EW}\gamma_5 & & \\
& \hspace{-0.3cm}\frac{1}{4}a\openone_{4\times4}\gamma_5 & \\
& & \hspace{-0.3cm}(1-a)\gamma_5 \endm\,.\hspace{0.5cm}$$ The sterility transformations of the neutrino fields are $$\label{n_sterile_transformation}
n'=\mathrm{e}^{\im\theta_\alpha s_\alpha}n$$ and the corresponding currents of the sterility symmetry are compactly rewritten as $$j_{\mathrm{S},\alpha}^\mu = \frac{1}{2}\bar{n}\gamma^\mu s_\alpha n
\,,$$ where $s_\alpha$ schematically denotes the generators of all the sterility symmetries
\[NGsterileGenerator\] $$\begin{aligned}
S_3-S_6: & & s_0=\beginm{ccc} 0 & & \\
& \frac{1}{4}\openone_{4\times4}\gamma_5 & \\
& & -\gamma_5 \endm\,; \nonumber\\
{\ensuremath{\mathrm{SU}(4)}}_\mathrm{S}: & & s_i=\beginm{ccc} 0 & & \\
& S_{i}P_\mathrm{R}-S_{i}^\mathrm{T}P_\mathrm{L} & \\
& & 0 \endm\,, \ i=1,..,15\,; \nonumber\\
S_3+S_6: & & s_{16}=\beginm{ccc} 0 & & \\
& \frac{1}{4}\openone_{4\times4}\gamma_5 & \\
& & \gamma_5 \endm\,.\end{aligned}$$
Two-loop $\beta$-function {#appB}
=========================
The two-loop $\beta$-function is given by [@Machacek:1983tz] $$\begin{aligned}
\label{beta2}
& & \beta(h) = \nonumber\\
& & \hspace{0.2cm}-\frac{h^3}{(4\pi)^2}\left[\frac{11}{3}C(8)-\frac{2}{3}N^{\mathrm{EW}}C(3)-\frac{2}{3}\sum_\mathbf{r} N^{\nu_\mathrm{R}}_\mathbf{r} C(\mathbf{r})\right] \nonumber\\
& & \hspace{0.2cm}-\frac{h^5}{(4\pi)^4}\left[\frac{34}{3}C(8)^2-N^{\mathrm{EW}}\left(2C_2(3)+\frac{10}{3}C(8)\right)C(3)
\right. \nonumber\\
& & \hspace{0.2cm}\left.
-\sum_\mathbf{r} N^{\nu_\mathrm{R}}_\mathbf{r}\left(2C_2(\mathbf{r})+\frac{10}{3}C(8)\right)C(\mathbf{r})\right] \,,\end{aligned}$$ where the coefficient $C(\mathbf{r})$ reflects the flavor symmetry representation of the right-handed neutrino field, and is related to the quadratic Casimir invariant $C_2(\mathbf{r})$. Their definitions and their relation are
$$\begin{aligned}
\delta^{ab}C(\mathbf{r}) & = & {\mathop{\rm Tr}\nolimits}{T^{a}_\mathbf{r} T^{b}_\mathbf{r}} \,,\\
d(\mathbf{r})C_2(\mathbf{r}) & = & {\mathop{\rm Tr}\nolimits}{T^{a}_\mathbf{r} T^{a}_\mathbf{r}} \,,\\
d(\mathbf{r})C_2(\mathbf{r}) & = & d(G)C(\mathbf{r}) \,.\end{aligned}$$
$\mathbf{r}$ $d(\mathbf{r})$ $C(\mathbf{r})$ $C_2(\mathbf{r})$ $A(\mathbf{r})$ $C_3(\mathbf{r})$
--------------------------------------- ----------------- ----------------- ------------------- ----------------- -------------------
$\mathbf{3}(\overline{\mathbf{3}})$ $3$ $1/2$ $4/3$ $(-)1$ $(-)10/9$
$\mathbf{6}(\overline{\mathbf{6}})$ $6$ $5/2$ $10/3$ $(-)7$ $(-)35/9$
$\mathbf{8}$ $8$ $3$ $3$ $0$ $0$
$\mathbf{10}(\overline{\mathbf{10}})$ $10$ $15/2$ $6$ $(-)27$ $(-)9$
: List of important coefficients for the lowest representations of the group ${\ensuremath{\mathrm{SU}(3)}}$. []{data-label="table"}
For completeness we also mention the anomaly coefficient $A(\mathbf{r})$, which is important for the anomaly analysis. It is related to the cubic Casimir invariant $C_3(\mathbf{r})$. The relevant formulas are
$$\begin{aligned}
\frac{1}{2}d^{abc}A(\mathbf{r}) & = & {\mathop{\rm Tr}\nolimits}{T^{a}_\mathbf{r}\{T^{b}_\mathbf{r},T^{c}_\mathbf{r}\}} \,, \label{anomalyC}\\
d(\mathbf{r}) \, C_3(\mathbf{r}) & = & d^{abc}{\mathop{\rm Tr}\nolimits}{T^{a}_\mathbf{r}\,T^{b}_\mathbf{r}\,T^{c}_\mathbf{r}} \,, \label{CasimirI}\\
2 d(\mathbf{r}) \, C_3(\mathbf{r}) & = & \frac{5}{6}\,d(G)\,A(\mathbf{r}) \,.\end{aligned}$$
The values for some of the lowest representations are listed in Tab. \[table\].
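As an independent cross-check of the group-theoretical inputs entering Eq. (\[beta2\]) and Tab. \[table\], the short sketch below recomputes $C(\mathbf{3})$, $C_2(\mathbf{3})$ and $C(\mathbf{8})$ numerically from the Gell-Mann matrices; it is purely illustrative and plays no role in the analysis.

```python
import numpy as np

# Gell-Mann matrices; fundamental SU(3) generators T^a = lambda^a / 2.
lam = [np.array(m, dtype=complex) for m in (
    [[0,1,0],[1,0,0],[0,0,0]], [[0,-1j,0],[1j,0,0],[0,0,0]],
    [[1,0,0],[0,-1,0],[0,0,0]], [[0,0,1],[0,0,0],[1,0,0]],
    [[0,0,-1j],[0,0,0],[1j,0,0]], [[0,0,0],[0,0,1],[0,1,0]],
    [[0,0,0],[0,0,-1j],[0,1j,0]], np.diag([1,1,-2]) / np.sqrt(3))]
T = [m / 2 for m in lam]

# C(3): delta^{ab} C(r) = Tr(T^a T^b)            -> 1/2
C_fund = np.trace(T[0] @ T[0]).real
# C_2(3): d(r) C_2(r) = sum_a Tr(T^a T^a)        -> 4/3
C2_fund = sum(np.trace(t @ t).real for t in T) / 3

# Adjoint generators (T^a_adj)_{bc} = -i f^{abc}, with the structure
# constants obtained from f^{abc} = -2i Tr([T^a, T^b] T^c).
f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        for c in range(8):
            f[a, b, c] = (-2j * np.trace((T[a] @ T[b] - T[b] @ T[a]) @ T[c])).real
C_adj = np.trace((-1j * f[0]) @ (-1j * f[0])).real   # C(8) -> 3

print(C_fund, C2_fund, C_adj)   # 0.5  1.333...  3.0, as in Tab. [table]
```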
Flavor gauge boson mass matrices {#appC}
================================
\[Mgauge\] $$\begin{aligned}
M_{6}^2 & = & h^2
\beginm{cccccccc}
(\phi_1+\phi_2)^2 \hspace{-7pt}& 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \hspace{-7pt} (\phi_1-\phi_2)^2 \hspace{-7pt}& 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \hspace{-7pt} 2(\phi_{1}^2+\phi_{2}^2) \hspace{-7pt}& 0 & 0 & 0 & 0 & \frac{2}{\sqrt{3}}(\phi_{1}^2-\phi_{2}^2) \\
0 & 0 & 0 & \hspace{-7pt} (\phi_1+\phi_3)^2 \hspace{-7pt}& 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \hspace{-7pt} (\phi_1-\phi_3)^2 \hspace{-7pt}& 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \hspace{-7pt} (\phi_2+\phi_3)^2 \hspace{-7pt}& 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \hspace{-7pt} (\phi_2-\phi_3)^2 \hspace{-7pt}& 0 \\
0 & 0 & \hspace{-7pt} \frac{2}{\sqrt{3}}(\phi_{1}^2-\phi_{2}^2) \hspace{-7pt} & 0 & 0 & 0 & 0 & \frac{2}{3}(\phi_{1}^2+\phi_{2}^2+4\phi_{3}^2)
\endm\,,\\
\nonumber\\
M_{3}^2 & = & \frac{h^2}{4}
\beginm{cccccccc}
(\varphi_{4}^2+\varphi_{5}^2) \hspace{-10pt} & 0 & 0 & \varphi_{5}\varphi_{6} & 0 & \varphi_{4}\varphi_{6} & 0 & \frac{2}{\sqrt{3}}\varphi_{4}\varphi_{5} \\
0 & \hspace{-10pt} (\varphi_{4}^2+\varphi_{5}^2) \hspace{-10pt} & 0 & 0 & \varphi_{5}\varphi_{6} & 0 & -\varphi_{4}\varphi_{6} & 0 \\
0 & 0 & \hspace{-10pt} (\varphi_{4}^2+\varphi_{5}^2) \hspace{-10pt} & \varphi_{4}\varphi_{6} & 0 & -\varphi_{5}\varphi_{6} & 0 & \frac{1}{\sqrt{3}}(\varphi_{4}^2-\varphi_{5}^2) \\
\varphi_{5}\varphi_{6} & 0 & \varphi_{4}\varphi_{6} & \hspace{-10pt} (\varphi_{4}^2+\varphi_{6}^2) \hspace{-10pt} & 0 & \varphi_{4}\varphi_{5} & 0 & -\frac{1}{\sqrt{3}}\varphi_{4}\varphi_{6} \\
0 & \varphi_{5}\varphi_{6} & 0 & 0 & \hspace{-10pt} (\varphi_{4}^2+\varphi_{6}^2) \hspace{-10pt} & 0 & \varphi_{4}\varphi_{5} & 0 \\
\varphi_{4}\varphi_{6} & 0 & -\varphi_{5}\varphi_{6} & \varphi_{4}\varphi_{5} & 0 & \hspace{-10pt} (\varphi_{5}^2+\varphi_{6}^2) \hspace{-10pt} & 0 & -\frac{1}{\sqrt{3}}\varphi_{5}\varphi_{6} \\
0 & -\varphi_{4}\varphi_{6} & 0 & 0 & \varphi_{4}\varphi_{5} & 0 & \hspace{-10pt} (\varphi_{5}^2+\varphi_{6}^2) \hspace{-10pt} & 0 \\
\frac{2}{\sqrt{3}}\varphi_{4}\varphi_{5} & 0 & \hspace{-10pt}\frac{1}{\sqrt{3}}(\varphi_{4}^2-\varphi_{5}^2) & -\frac{1}{\sqrt{3}}\varphi_{4}\varphi_{6} & 0 & -\frac{1}{\sqrt{3}}\varphi_{5}\varphi_{6} & 0 & \hspace{-10pt} \frac{1}{3}(\varphi_{4}^2+\varphi_{5}^2+4\varphi_{6}^2)
\endm\,.\end{aligned}$$
[^1]: Non-vector-like gauge theory arises from gauging not only vector currents, like in QCD, but also axial-vector currents.
[^2]: The flavor structure of self-energies should be understood via the corresponding mass terms $\overline{f_\mathrm{R}(\mathbf{r}^\prime)}\Sigma^{\mathbf{r}^\prime\hspace{-3pt}\times\bar{\mathbf{r}}}f_\mathrm{L}(\mathbf{r})$ where $\mathbf{r}$ and $\mathbf{r}^\prime$ are flavor representations of $f_\mathrm{L}$ and $f_\mathrm{R}$, respectively.
[^3]: If the index $\mathbf{r}$ is not used we mean the generators for the fundamental triplet representation, given by the Gell-Mann matrices $T^{a}=\frac{1}{2}\lambda^a$.
[^4]: $L_\mathrm{EW}$ denotes the lepton number counting the electroweakly charged leptons, $e$, $\nu_\mathrm{L}$, and *not* the right-handed neutrinos $\nu_\mathrm{R}$.
[^5]: We ignore here the flavor anomalies of the charged fermion currents corresponding to . Their flavor anomalies would otherwise provide some charged fermion component of the heavy sterile majoron, see later. Ignoring their flavor anomalies allows us to treat the heavy sterile majoron as a neutrino and flavor gauge boson composite only.
[^6]: Here, we neglect the wave function renormalization.
[^7]: We use the short-hand notation for integration $\int_k\equiv\int\frac{{\ensuremath{\mathrm{d}}}^4k}{(2\pi)^4}$.
[^8]: Here the condensates should be rewritten in the $\nu_{\mathrm{R}}$-formalism, not in the matrix $\xi_\mathrm{R}$-formalism.
[^9]: If the mass matrix $M$ is complex then we have to find eigenvalues of $M^\dag M$ to determine the mass spectrum.
### COrE sensitivity to polarized synchrotron
Accurate measurements of the polarized synchrotron radiation provide a unique probe to understand the structure of the Galactic magnetic field and to study the energy distribution of cosmic rays. The WMAP mission has provided the first sensitive full-sky measurements of the polarization of the synchrotron-dominated sky emission in the 20-90 GHz frequency range [@2007ApJS..170..335P], also providing an estimate of the level of contamination by polarized Galactic emission outside of the Galactic plane, in regions useful for CMB studies.
The observing power of the low frequency channels of COrE will be ideal to extract the rich information encoded in the Galactic diffuse synchrotron component. Synchrotron radiation is intrinsically highly polarized, up to 70-75% in a completely regular field. The observed synchrotron polarization depends on the uniformity of the field orientation within the resolution element. The typical synchrotron sky temperature at 45 GHz—the lowest-frequency COrE channel—is $\sim 35
\mu$K at intermediate Galactic latitudes. Assuming a polarization fraction of $\sim
60\%$, we predict the typical polarized signals to be of order $\sim 20\mu$K, with values depending on the features observed. Significantly higher values may be found in the loops and spurs, remnants of old supernova events close to the Galactic plane. At high latitudes, away from such local features, the expected degree of polarization should be only a few percent, resulting in polarized signals of a few $\mu$K. At intermediate latitudes, the polarization will lie somewhere between these two extremes.
The design sensitivity of the COrE 45, 75, and 105 GHz channels is approximately 0.4$\mu$K per resolution element (with beam-widths of 23$'$, 14$'$ and 10$'$), corresponding to signal-to-noise ratios (SNRs) for synchrotron polarized emission of $\sim 50,$ $\sim 15$, and $\sim 4$, respectively. This will represent an enormous step forward compared to the 30 months of Planck data, which are expected to achieve SNRs for synchrotron polarimetry of $\sim 2$, $\sim 0.3,$ and $\sim 0.2$ at 44, 70, and 100 GHz, respectively, at similar angular resolution. The polarization maps from the Planck LFI 30 GHz channel and from the WMAP 23 GHz channel, both with high SNR ($\sim 10$) for polarized synchrotron, will constitute useful low-frequency ancillary data for the COrE analysis, in particular for disentangling synchrotron radiation from the contribution of the polarized component of interstellar dust. [**The exquisite polarization sensitivity of COrE will therefore deliver a high definition extraction of the synchrotron component in a Faraday-rotation free frequency domain.**]{}
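The arithmetic behind these numbers is simple; the snippet below reproduces the 45 GHz figures from the values quoted above (the 60% polarization fraction is the assumption stated earlier in this section). The lower SNRs quoted at 75 and 105 GHz follow in the same way once the steeply falling synchrotron spectrum is folded in.

```python
# Minimal arithmetic behind the quoted COrE numbers (values taken from the text).
T_sync_45 = 35.0   # muK: typical synchrotron sky temperature at 45 GHz, intermediate latitudes
pol_frac = 0.60    # assumed polarization fraction
noise = 0.4        # muK per resolution element: COrE design sensitivity

P_signal = T_sync_45 * pol_frac      # ~21 muK polarized synchrotron signal
print(P_signal, P_signal / noise)    # signal ~20 muK, SNR ~50 at 45 GHz
```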
### Statistical analysis of Galactic magnetic fields with COrE
The accuracy of COrE polarization information on Galactic magnetic fields will provide a unique opportunity to study magneto-hydrodynamical turbulence and dynamo action in great detail within our Galaxy. It would greatly improve the reliability of any polarization data on small angular scales of the Milky Way in synchrotron light and thereby increase the spectral range of accurately probed magneto-hydrodynamical modes. Therefore the detection potential for relevant plasma physical processes and for the characteristic scales of turbulent energy injection and dissipation would be increased considerably. Furthermore, accurate Galactic polarization data will be of eminent value for upcoming Faraday tomography measurements with telescopes like LOFAR, eVLA, ASKAP, and especially the SKA, which has a timeline comparable to that of COrE.
### Faraday rotation free polarization
The COrE channels at 45 or 60 GHz will provide a very accurate, highly resolved and detailed view of the synchrotron emission of our own Galaxy. The polarization data permit the study of the morphology of Galactic magnetic fields on global and local scales. CMB experiments measure at high frequencies, where the original polarization structure is revealed without suffering from Faraday rotation. Planck is currently increasing the angular resolution of such maps to a level comparable to that of ground-based radio telescopes, and COrE will boost the sensitivity and spectral information.
All ground-based radio-band synchrotron measurements of Galactic magnetic fields to date were carried out at much lower frequencies. Therefore they exhibit a high level of Faraday rotation and depolarization within the Galactic plane, hampering a direct analysis. COrE, by contrast, will unveil the polarization structure with essentially no Faraday rotation (see Fig. \[fig:galPol1400MHz\]). [**These data will make it possible to disentangle depolarization phenomena arising from Faraday rotation by the Galactic magnetic field from the superposition of contributions with different polarization orientations along the line of sight within the beam.**]{} Having this unrotated polarization information with high precision will be of importance for at least two scientific research directions:
- Accurately measured fluctuations in the synchrotron flux in intensity and polarization provide insight into magnetic turbulence. Several characteristic statistical properties are encoded in this data, for example the energy and helicity spectra as well as the spectrum of the magnetic tension force.
- Such data will allow Faraday rotation tomography measurements of our galaxy, which will be pursued with upcoming instruments like LOFAR, eVLA, ASKAP, and especially the SKA. Faraday tomography data can be expected to reveal further statistical information on Galactic fields, with very promising potential for detecting signatures of magneto-hydrodynamical processes.
![The WMAP 22.8 GHz all-sky polarized intensity map (upper panel) and the 1.4 GHz all-sky polarized intensity map (lower panel). The polarized intensities are shown greyscale coded from 0 to 100 $\mu$K for 22.8 GHz and from 0 to 570 mK for 1.4 GHz. Galactic Faraday-depolarization structures are visible in the lower frequency map. Data from and figures from . []{data-label="fig:galPol1400MHz"}](galPol22GHz.png "fig:"){width="\columnwidth"} ![The WMAP 22.8 GHz all-sky polarized intensity map (upper panel) and the 1.4 GHz all-sky polarized intensity map (lower panel). The polarized intensities are shown greyscale coded from 0 to 100 $\mu$K for 22.8 GHz and from 0 to 570 mK for 1.4 GHz. Galactic Faraday-depolarization structures are visible in the lower frequency map. Data from and figures from . []{data-label="fig:galPol1400MHz"}](galPol1400MHz.png "fig:"){width="\columnwidth"}
### Magnetic spectra
The statistical properties of Galactic magnetic fields imprint themselves on observables such as the synchrotron intensity, polarization, and Faraday rotation measure. Methods to extract this information from observational data already exist and are being improved. Quantities highly relevant for an understanding of Galactic turbulence and dynamo processes are encoded in polarimetric data. Examples include:
- The [**magnetic energy spectrum**]{}, which is imprinted on intensity, polarization spectra and cross spectra [@1982ApJ...261..310S; @1983ApJ...271L..49S; @1989AJ.....98..244E; @1989AJ.....98..256E].
- The [**magnetic helicity spectrum**]{}, which can be measured from polarimetric data in combination with extragalactic Faraday data [@JunklewitzInThesis; @JunklewitzInPrep]. Magnetic helicity is a key quantity to understand the inner workings of large-scale Galactic dynamos .
- The [**magnetic tension force spectrum**]{}, which is encoded in polarimetry data alone, and is powerful in discriminating between different magneto-hydrodynamical scenarios [@2009MNRAS.398.1970W].
More physically relevant information might be encoded in the data; the above examples are only the quantities known today for which a reconstruction from COrE data should be possible.
Due to our location within the galaxy, the angular fluctuations in observables correspond to physical magnetic field structures of different sizes. Disentangling these fluctuations in order to separate different physical scales such as the turbulent injection scale (see e.g. [@2008ApJ...680..362H]) or dissipative scales will be challenging. Highly accurate polarimetric data, with full-sky coverage and [**precise**]{} calibration will be invaluable for this endeavor. The probing of a large range of physical scales simultaneously with high angular resolution makes it possible to monitor diagnostics of Galactic turbulence and dynamo theory.
[**Statistical analyses of the polarized diffuse synchrotron emission from WMAP and from radio surveys disagree. WMAP finds similar E-mode and B-mode angular power spectra, while those obtained from radio surveys differ significantly [@2006cmb..confE..16B]. Only more accurate measurements free from potential systematic effects can resolve this discrepancy. COrE will provide the necessary data and test the proposed theoretical models at different angular scales, to determine whether this observational disagreement is explained by residual systematics or data-analysis errors, or results from subtle astrophysical mechanisms.**]{}
![Tension force spectrum reconstructed from mock polarimetry data using the method of Stokes correlators (from [@2009MNRAS.398.1970W]). The Gaussian random field was constructed to exhibit the same magnetic energy spectrum as the magneto-hydrodynamical simulation, but it has a different fourth-order statistic as measured by the Stokes correlators.[]{data-label="fig:tension"}](tensionForce_spec.png){width="\columnwidth"}
### Faraday tomography
Faraday tomography provides three-dimensional information on magnetic fields and thereby restores some of the information lost in the line-of-sight projection of most astronomical observations. The depth information of a synchrotron emitter is encoded in the rate of rotation of its polarization angle as a function of wavelength $\lambda$. For sources at different physical depths, and therefore different Faraday depths, this rate differs. Mathematically, the observed polarization as a function of wavelength squared is the Fourier transform of the polarized emission per unit Faraday depth [@1966MNRAS.133...67B].
An inverse Fourier transformation can therefore reveal the polarization per unit Faraday depth. This technique has already been successfully applied to radio data [@2006AN....327..545B] and is extremely promising for upcoming radio telescopes. The problem for many of these measurements, especially for long-wavelength telescopes like LOFAR, is that the full $\lambda^2$-space cannot be examined; in particular, the negative part of this space cannot be probed by any instrument. However, information in the full $\lambda^2$-space would be required for a direct inversion of the Fourier relation. Therefore inverse methods have to be applied which benefit from any available information, in particular close to the inaccessible negative $\lambda^2$ range. Thus the information close to $\lambda=0$ that will be provided by the COrE low-frequency channels will be of the greatest importance for Faraday tomography.
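To make the Fourier relation concrete, the toy sketch below evaluates $P(\lambda^2)=\int \mathrm{d}\phi\, F(\phi)\, e^{2i\phi\lambda^2}$ for a hypothetical Faraday dispersion function $F(\phi)$ made of two emitting components; the specific component shapes and depths are invented for illustration. The point is simply that only the $\lambda^2\geq 0$ half of the conjugate space is observable, so recovering $F(\phi)$ requires inverse methods that profit from data near $\lambda=0$.

```python
import numpy as np

# Hypothetical Faraday dispersion function: two emitting screens.
phi = np.linspace(-200.0, 200.0, 4001)       # Faraday depth grid [rad m^-2]
dphi = phi[1] - phi[0]
F = np.exp(-(phi - 20.0)**2 / 10.0) + 0.5j * np.exp(-(phi + 60.0)**2 / 30.0)

# Burn relation: observed complex polarization as a function of lambda^2.
lam2 = np.linspace(0.0, 0.5, 200)            # only lambda^2 >= 0 is measurable [m^2]
P = np.array([np.sum(F * np.exp(2j * phi * l2)) * dphi for l2 in lam2])

print(abs(P[0]), abs(P[-1]))                 # |P| near lambda = 0 and at the longest wavelength
```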
Since the Faraday tomographic data will provide much deeper insight into the details of Galactic magnetism than the two-dimensional information discussed above, it is obvious that the scientific return will be even larger. Accurate all-sky COrE data will be crucial for the success of this technique in exploring the richness of magnetic phenomena in the Galaxy.
---
abstract: 'The entanglement between the position and coin state of an $N$-dimensional quantum walker is shown to lead to a thermodynamic theory. The entropy, in this thermodynamics, is associated with the reduced density operator for the evolution of chirality, obtained by taking a partial trace over positions. From the asymptotic reduced density matrix it is possible to define thermodynamic quantities, such as the asymptotic entanglement entropy, temperature, Helmholtz free energy, etc. We study in detail the case of a $2$-dimensional quantum walk for two different initial conditions: a non-separable coin-position initial state and a separable one. The resulting entanglement temperature is presented as a function of the parameters of the system and those of the initial conditions.'
author:
- 'Alejandro Romanelli$^{(1)}$, Raúl Donangelo$^{(1)}$, Renato Portugal$^{(2)}$, and Franklin de Lima Marquezino$^{(3)}$'
title: 'Thermodynamics of $N$-dimensional quantum walks'
---
Introduction {#intro}
============
The coined quantum walk (QW) model on the line was introduced by Aharonov *et al.* [@Aharonov] and its properties on graphs were studied in Ref. [@AAKV01]. In this model, the particle jumps from site to site in a direction which depends on the value of an internal degree of freedom called chirality. Quantum walks on multi-dimensional lattices were studied by many authors [@MBSS02; @Tregenna1; @OPD06; @Watabe08] and display the key feature of spreading quadratically faster in terms of probability distribution, compared to the classical random walk model on the same underlying structure [@AF02]. Those models were successfully applied to develop quantum algorithms, especially for searching for a marked node in graphs [@SKW03; @AKR05; @PortugalBook]. There are other models of quantum walks; some of them do not use an auxiliary Hilbert space and have no coin. The continuous-time quantum walk model introduced by Farhi and Gutmann [@FG98] and the coinless quantum walk model introduced by Patel *et al.* [@PRR05a] are examples of such models. The latter model can be used to search a marked node on two-dimensional finite lattices with the same number of steps (asymptotically in terms of the system size) as the coined model, with the advantage of using a smaller Hilbert space [@APN13].
The thermodynamics of quantum walks on the line was introduced in Refs. [@alejo2010; @alejo2012] using the coined QW model, which has two subspaces, namely, the coin and spatial parts. Taking the model’s whole Hilbert space, the dynamics is unitary with no change in the entropy. On the other hand, the coin subspace evolves entangled with its environment. In the asymptotic limit ($t\rightarrow\infty$), after tracing out the spatial part, the coin reaches a final equilibrium state which, if we consider the quantum canonical ensemble, can be seen to have an associated temperature. This procedure allows the introduction of thermodynamical quantities and helps to understand the physics behind the dynamics. In most cases, the thermodynamical quantities depend on the initial condition in stark contrast with the classical Markovian behavior.
In general the Hilbert space of a quantum mechanical model factors as a tensor product $\mathcal{H}_{sys}\otimes
\mathcal{H}_{env}$ of the spaces describing the degrees of freedom of the system and of the environment. The evolution of the system is determined by the reduced density operator that results from taking the trace over $\mathcal{H}_{env}$ to obtain $\varrho_{sys}=\mathrm{tr}_{env}(\rho )$. Simple toy models similar to ours, studied in Refs. [@Zurek; @Meyer], show how the correlations of a quantum system with other systems may cause one of its observables to behave in a classical manner. In this sense, the fact that the partial trace over the QW positions leads to a system effectively in thermal equilibrium agrees with those previous results.
In this work, we focus our attention on the thermodynamics of coined quantum walks on multi-dimensional lattices. The analysis of the dynamics is greatly simplified by using the Fourier basis (momentum space). In the computational basis, the evolution operator acts in a Hilbert space of infinite dimension, while in the Fourier basis we use a new operator acting in the finite coin subspace. The temperature of the quantum walk is obtained by taking the asymptotic limit ($t\rightarrow\infty$) of the reduced density matrix of the coin subspace and by making a correspondence to a quantum canonical ensemble. Using the saddle point expansion theorem [@BO78], we obtain the expression of the entanglement temperature in terms of the coin entries and the initial state. That analysis generalizes the results of Ref. [@alejo2012] and allows us to obtain many new examples, due to the increased number of degrees of freedom.
The paper is organized as follows. In Sec.\[theory\] we review the dynamics of multi-dimensional coined quantum walks in terms of the Fourier basis. In Sec.\[thermo\] we describe the thermodynamics of quantum walks in lattices and show how to obtain the temperature and other thermodynamical quantities. In Sec.\[initial\] we obtain an explicit expression for the temperature in terms of the initial condition. In Sec.\[examples2D\] we give some examples in two dimensions. In the last section we draw the conclusions.
$N$-dimensional discrete quantum walks. {#theory}
=======================================
In this section, following Ref. [@german2013], we present a brief theoretical development to obtain the wave function of the system.
The system moves at discrete time steps $t\in\mathbb{N}$ across an $N$-dimensional lattice of sites $\mathbf{x}\equiv\left(x_{1},\ldots,x_{N}\right)\in\mathbb{Z}^{N}$. Its evolution is governed by a unitary time operator. This operator can be written as the composition of two simpler operators: one is the unitary operator associated with the $2N$-dimensional coin, which determines the direction of displacement, and the other is the unitary displacement operator itself. The Hilbert space of the whole system has then the form $$\mathcal{H}=\mathcal{H}_{\mathrm{P}}\otimes\mathcal{H}_{\mathrm{C}},
\label{espacio}$$ where the position space, $\mathcal{H}_{\mathrm{P}}$, is spanned by the unit vectors $\left\{ \left\vert \mathbf{u_\alpha}\right\rangle
\equiv\left\vert
\delta_{1\alpha},\ldots,\delta_{N\alpha}\right\rangle;\alpha=1,\ldots,N\right\} $, and the coin space, $\mathcal{H}_{\mathrm{C}}$, is spanned by $2N $ orthonormal quantum states $\left\{ \left\vert
\alpha_{\eta}\right\rangle :\alpha=1,\ldots,N;\eta=\pm\right\} $. Therefore $\alpha$ is associated with the axis and $\eta$ with the direction. In the usual QW on the line ($N=1$), $\left\vert 1_{-}\right\rangle $ and $\left\vert 1_{+}\right\rangle $ are the right and left states $\left\vert
\mathrm{R}\right\rangle $ and $\left\vert \mathrm{L}\right\rangle $. The state of the system at any time $t$ is represented by the ket $\left\vert
\psi_{t}\right\rangle $ which can be expressed as $$\left\vert \psi_{t}\right\rangle =\sum_{\mathbf{x\in\mathbb{Z}^{N}}}\sum_{\alpha=1}^{N}\sum_{\eta=\pm}\psi_{\mathbf{x},t}^{\alpha,\eta}\
\left\vert \mathbf{x}\right\rangle \otimes\left\vert
\alpha_{\eta}\right\rangle , \label{psi}$$ where $$\psi_{\mathbf{x},t}^{\alpha,\eta}=\left(\left\langle \alpha_{\eta}\right\vert \otimes\left\langle \mathbf{x}\right\vert
\right)\left\vert \psi_{t}\right\rangle . \label{proj}$$ We define, at each point $\mathbf{x}$, the following ket, $$\left\vert \psi_{\mathbf{x},t}\right\rangle =\left\langle \mathbf{x}\right.\left\vert \psi_{t}\right\rangle
=\sum_{\alpha=1}^{N}\sum_{\eta=\pm}\psi_{\mathbf{x},t}^{\alpha,\eta}\left\vert \alpha_{\eta}\right\rangle , \label{psi_x}$$ which is a coin state, so that $$\psi_{\mathbf{x},t}^{\alpha,\eta}=\left\langle
\alpha_{\eta}\right.\left\vert \psi_{\mathbf{x},t}\right\rangle .
\label{psi2_x}$$ As $\left\vert \psi_{\mathbf{x},t}^{\alpha,\eta}\right\vert ^{2}=\left\vert
\left(\left\langle \alpha_{\eta}\right\vert \otimes\left\langle \mathbf{x}\right\vert \right)\left\vert \psi_{t}\right\rangle \right\vert ^{2}$ is the probability of finding the walker at $\left(\mathbf{x},t\right)$ and the coin in state $\left\vert \alpha_{\eta}\right\rangle $, the probability of finding the walker at $\left(\mathbf{x},t\right)$ irrespectively of the coin state is then $$P_{\mathbf{x},t}=\sum_{\alpha=1}^{N}\sum_{\eta=\pm}\left\vert \psi_{\mathbf{x},t}^{\alpha,\eta}\right\vert ^{2}=\left\langle \psi_{\mathbf{x},t}\right.\left\vert \psi_{\mathbf{x},t}\right\rangle , \label{prob}$$ where we used the fact that $\sum_{\alpha=1}^{N}\sum_{\eta=\pm}\left\vert
\alpha_{\eta}\right\rangle \left\langle \alpha_{\eta}\right\vert $ is the identity in $\mathcal{H}_{\mathrm{C}}$. Clearly $\sum_{\mathbf{x}}P_{\mathbf{x},t}=1$ because $\sum_{\mathbf{x}}\left\vert \mathbf{x}\right\rangle
\left\langle \mathbf{x}\right\vert $ is the identity in $\mathcal{H}_{\mathrm{P}}$.
The dynamical evolution of the system is ruled by $$\left\vert \psi_{t+1}\right\rangle ={\hat{U}}\left\vert
\psi_{t}\right\rangle , \label{map}$$ where the unitary operator $$\hat{U}=\hat{D}\circ\left(\hat{I}\otimes\hat{C}\right), \label{U}$$ is given in terms of the identity operator in $\mathcal{H}_{\mathrm{P}}$, $\hat{I}$, and two more unitary operators. First, the so-called coin operator $\hat{C}$, which acts in $\mathcal{H}_{\mathrm{C}}$, can be written in its more general form as $$\hat{C}=\sum_{\alpha=1}^{N}\sum_{\eta=\pm}\sum_{\alpha^{\prime}=1}^{N}\sum_{\eta^{\prime}=\pm}C_{\alpha^{\prime},\eta^{\prime}}^{\alpha,\eta}\left\vert
\alpha_{\eta}\right\rangle \left\langle
\alpha_{\eta^{\prime}}^{\prime}\right\vert , \label{C}$$ where the matrix elements $C_{\alpha^{\prime},\eta^{\prime}}^{\alpha,\eta}\equiv\left\langle \alpha_{\eta}\right\vert \hat{C}\left\vert
\alpha_{\eta^{\prime}}^{\prime}\right\rangle $ can be arranged as a $2N\times2N$ unitary square matrix $C$. Then, $\hat{D}$ is the conditional displacement operator in $\mathcal{H}$ $$\hat{D}=\sum_{\mathbf{x}}\sum_{\alpha=1}^{N}\sum_{\eta=\pm}\left\vert
\mathbf{x}+\eta\mathbf{u}_{\alpha}\right\rangle \left\langle \mathbf{x}\right\vert \otimes\left\vert \alpha_{\eta}\right\rangle \left\langle
\alpha_{\eta}\right\vert . \label{D}$$ Note that, depending on the coin state $\left\vert
\alpha_{\eta}\right\rangle $, the walker moves one site in the positive or negative direction of $x_{\alpha}$ if $\eta=+$ or $\eta=-$, respectively.
Projecting Eq.(\[map\]) onto $\left\langle \mathbf{x}\right\vert $ and using Eqs.(\[proj\]),(\[U\])–(\[D\]) we obtain $$\left\vert \psi_{\mathbf{x},t+1}\right\rangle
=\sum_{\alpha=1}^{N}\sum_{\eta=\pm}\left\vert \alpha_{\eta}\right\rangle
\left\langle \alpha_{\eta}\right\vert \hat{C}\left\vert \psi_{\mathbf{x}-\eta\mathbf{u}_{\alpha},t}\right\rangle , \label{mapket_x}$$ which further projected onto $\left\langle \alpha_{\eta}\right\vert $ leads to $$\psi_{\mathbf{x},t+1}^{\alpha,\eta}=\sum_{\alpha^{\prime}=1}^{N}\sum_{\eta^{\prime}=\pm}C_{\alpha^{\prime},\eta^{\prime}}^{\alpha,\eta}\psi_{\mathbf{x}-\eta\mathbf{u}_{\alpha},t}^{\alpha^{\prime},\eta^{\prime}}. \label{map_x}$$ Equation (\[map\_x\]) is the $N$-dimensional QW map in position representation. It shows that for any given time step the wave-function at each point is the coherent linear superposition of the wave-functions at the neighboring points calculated in the previous time step, the weights of the superposition being given by the coin operator matrix elements $C_{\alpha^{\prime},\eta^{\prime}}^{\alpha,\eta}$.
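As an illustration of the map (\[map\_x\]), the following minimal sketch iterates it for the simplest case $N=1$ (the walk on the line) with the Hadamard matrix as the $2\times 2$ coin; the coin choice, initial state and lattice size are illustrative assumptions only.

```python
import numpy as np

C = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard coin (N = 1, so 2N = 2 coin states)

L = 201                                          # lattice sites, walker starts at the center
psi = np.zeros((L, 2), dtype=complex)            # psi[x, eta], eta = 0 (+) or 1 (-)
psi[L // 2] = np.array([1.0, 1j]) / np.sqrt(2)   # localized, symmetric initial coin state

def step(psi, C):
    """One application of U = D (I x C): coin operation, then conditional shift."""
    tossed = psi @ C.T                           # apply the coin at every site
    out = np.zeros_like(psi)
    out[1:, 0] = tossed[:-1, 0]                  # eta = + moves one site in the +x direction
    out[:-1, 1] = tossed[1:, 1]                  # eta = - moves one site in the -x direction
    return out

for _ in range(80):
    psi = step(psi, C)

P_x = np.sum(np.abs(psi)**2, axis=1)             # Eq. (prob): P_{x,t}
print(P_x.sum())                                 # total probability stays 1 (unitarity)
```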
Given the linearity of the map and the fact that it is space-invariant, i.e. the matrix elements $C_{\alpha^{\prime},\eta^{\prime}}^{\alpha,\eta}$ do not depend on the space coordinates, the spatial Discrete Fourier Transform (DFT), which has been used many times in QW studies [@Grimmett; @nayak], is a very useful technique.
The DFT is defined as $$\left\vert \tilde{\psi}_{\mathbf{k,}t}\right\rangle \equiv\sum_{\mathbf{x}}e^{-i\mathbf{k}\cdot\mathbf{x}}\left\vert \psi_{\mathbf{x},t}\right\rangle ,
\label{DFT_k} \\$$ where $\mathbf{k}=\left(k_{1},\ldots,k_{N}\right)$; $k_{\alpha}\in\left[-\pi,\pi\right]$, is the quasi-momentum vector. The DFT satisfies $$\left\vert \psi_{\mathbf{x},t}\right\rangle \equiv\int\frac{\mathrm{d}^{N}\mathbf{k}}{\left(2\pi\right)^{N}}e^{i\mathbf{k}\cdot\mathbf{x}}\left\vert
\tilde{\psi}_{\mathbf{k},t}\right\rangle . \label{DFT_x}$$ Following Eq.(\[psi\_x\]) we define the components of the wavefunction in momentum space as $$\begin{aligned}
\left\vert \tilde{\psi}_{\mathbf{k},t}\right\rangle &
=\sum_{\alpha=1}^{N}\sum_{\eta=\pm}\tilde{\psi}_{\mathbf{k},t}^{\alpha,\eta}\left\vert \alpha_{\eta}\right\rangle , \\
\tilde{\psi}_{\mathbf{k},t}^{\alpha,\eta} & =\sum_{\mathbf{x}}e^{-i\mathbf{k}\cdot\mathbf{x}}\psi_{\mathbf{x},t}^{\alpha,\eta}.\end{aligned}$$ Applying the previous definitions to the map (\[map\_x\]), and using $$\sum_{\mathbf{x}}e^{-i\mathbf{k}\cdot\mathbf{x}}\left\vert \psi_{\mathbf{x}-\eta\mathbf{u}_{\alpha},t}\right\rangle =\exp\left({-i\eta\mathbf{k}\cdot\mathbf{u}_{\alpha}}\right)\left\vert \tilde{\psi}_{\mathbf{k},t}\right\rangle , \label{trasla}$$ we obtain $$\left\vert \tilde{\psi}_{\mathbf{k},t+1}\right\rangle =\hat{C}_{\mathbf{k}}\left\vert \tilde{\psi}_{\mathbf{k},t}\right\rangle , \label{mapket_k}$$ where we have defined a coin operator in momentum space $$\hat{C}_{\mathbf{k}}\equiv\sum_{\alpha=1}^{N}\sum_{\eta=\pm}\left\vert
\alpha_{\eta}\right\rangle \left\langle \alpha_{\eta}\right\vert \hat{C}\exp\left({-i\eta k_{\alpha}}\right). \label{Ck}$$ Above, $k_{\alpha}=\mathbf{k}\cdot\mathbf{u}_{\alpha}$.
The matrix elements of the coin operator in this space are $$\left\langle \alpha_{\eta}\right\vert \hat{C}_{\mathbf{k}}\left\vert
\alpha_{\eta^{\prime}}^{\prime}\right\rangle \equiv\left(C_{\mathbf{k}}\right)_{\alpha^{\prime},\eta^{\prime}}^{\alpha,\eta}=\exp\left({-i\eta
k_{\alpha}}\right)C_{\alpha^{\prime},\eta^{\prime}}^{\alpha,\eta}.
\label{Ck_elements}$$ Projecting Eq.(\[mapket\_k\]) onto $\left\langle \alpha_{\eta}\right\vert $ and using (\[Ck\],\[Ck\_elements\]) leads to $$\tilde{\psi}_{\mathbf{k},t+1}^{\alpha,\eta}=\sum_{\alpha^{\prime}=1}^{N}\sum_{\eta^{\prime}=\pm} \exp\left({-i\eta k_{\alpha}}\right)C_{\alpha^{\prime},\eta^{\prime}}^{\alpha,\eta} \tilde{\psi}_{\mathbf{k},t}^{\alpha^{\prime},\eta^{\prime}}. \label{map_k}$$ As we see, the nonlocal maps (\[mapket\_x\],\[map\_x\]) become local in the momentum representation given by Eqs.(\[mapket\_k\]),(\[map\_k\]). This allows us to easily obtain a formal solution to the QW dynamics, since map (\[mapket\_k\]) implies $$\left\vert \tilde{\psi}_{\mathbf{k},t}\right\rangle =\left(\hat{C}_{\mathbf{k}}\right)^{t}\left\vert \tilde{\psi}_{\mathbf{k},0}\right\rangle .
\label{evol_k}$$ Therefore the set of eigenvalues and eigenvectors of $\hat{C}_{\mathbf{k}}$ is most useful to solve the QW evolution dynamics.
Since, according to Eq.(\[evol\_k\]), the operator $\hat{C}_{\mathbf{k}}$ must be unitary, all its eigenvalues $\left\{ \lambda_{\mathbf{k}}^{\left(s\right)}: s=1,2,\ldots,2N\right\}$ can be written in the form $\lambda_{\mathbf{k}}^{\left(s\right)}=\exp\left(-i\omega_{\mathbf{k}}^{\left(s\right)}\right)$, with $\omega_{\mathbf{k}}^{\left(s\right)}$ real. In addition to these eigenvalues we also need to know the corresponding eigenvectors $\left\{\left\vert \phi_{\mathbf{k}}^{\left(s\right)}\right\rangle \right\}$. These eigenvectors satisfy the orthogonality condition $$\left\langle \phi_{\mathbf{k}}^{\left(s\right)}\right.\left\vert\phi_{\mathbf{k}}^{\left(s^{\prime }\right)}\right\rangle=\delta_{ss^{\prime }},
\label{or}$$ where $\delta_{ss^{\prime }}$ is the Kronecker delta. Once the eigenvalues and eigenvectors of $\hat{C}_{\mathbf{k}}$ are known, implementing Eq.(\[evol\_k\]) is straightforward. Given the initial distribution of the walker in position representation $\left\vert \psi_{\mathbf{x},0}\right\rangle $, we compute its DFT $\left\vert \tilde{\psi}_{\mathbf{k},0}\right\rangle $ via Eq.(\[DFT\_k\]), as well as the projections $$\tilde{f}_{\mathbf{k}}^{\left(s\right)}=\left\langle \phi_{\mathbf{k}}^{\left(s\right)}\right.\left\vert \tilde{\psi}_{\mathbf{k},0}\right\rangle
, \label{fsk}$$ so that $\left\vert \tilde{\psi}_{\mathbf{k},0}\right\rangle =$ $\sum_{s}\tilde{f}_{\mathbf{k}}^{\left(s\right)}\left\vert \phi_{\mathbf{k}}^{\left(s\right)}\right\rangle $. Using Eq.(\[evol\_k\]), we obtain $$\left\vert \tilde{\psi}_{\mathbf{k},t}\right\rangle =\sum_{s=1}^{2N}\exp{\left(-i\omega_{\mathbf{k}}^{\left(s\right)}t\right)}\tilde{f}_{\mathbf{k}}^{\left(s\right)}\left\vert \phi_{\mathbf{k}}^{\left(s\right)}\right\rangle
.$$ In position representation we get, using Eq.(\[DFT\_x\]), $$\begin{aligned}
\left\vert \psi_{\mathbf{x},t}\right\rangle & = \sum_{s=1}^{2N}\left\vert
\psi_{\mathbf{x},t}^{\left(s\right)}\right\rangle , \label{evol_x} \\
\left\vert \psi_{\mathbf{x},t}^{\left(s\right)}\right\rangle & =\int\frac{\mathrm{d}^{N}\mathbf{k}}{\left(2\pi\right)^{N}}\exp\left[{i\left(\mathbf{k}\cdot\mathbf{x-} \omega_{\mathbf{k}}^{\left(s\right)}t\right)}\right]\tilde{f}_{\mathbf{k}}^{\left(s\right)}\left\vert \phi_{\mathbf{k}}^{\left(s\right)}\right\rangle . \label{evol_x_s}\end{aligned}$$ In this way the time evolution of the QW is formally solved: all we need is to compute the set of eigenvalues and eigenstates of $\hat{C}_{\mathbf{k}}$ and the initial state in reciprocal space $\left\vert \tilde{\psi}_{\mathbf{k},0}\right\rangle $, which determines the weight functions $\tilde{f}_{\mathbf{k}}^{\left(s\right)}$ through Eq.(\[fsk\]).
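This momentum-space solution can also be realized numerically in a few lines. The sketch below does so for the same illustrative $N=1$ Hadamard walk, with the integral over $\mathbf{k}$ replaced by a sum over the discrete quasi-momenta of a large ring; it follows Eqs.(\[Ck\]), (\[fsk\]), (\[evol\_k\]) and (\[DFT\_x\]).

```python
import numpy as np

L, t = 201, 80
C = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # illustrative 2x2 coin (N = 1)
chi = np.array([1.0, 1j]) / np.sqrt(2)                # initial coin state, walker at x = 0
k_grid = 2 * np.pi * np.arange(L) / L                 # quasi-momenta on a ring of L sites

psi_x = np.zeros((L, 2), dtype=complex)
for k in k_grid:
    Ck = np.diag([np.exp(-1j * k), np.exp(1j * k)]) @ C   # Eq. (Ck)
    w, V = np.linalg.eig(Ck)                              # lambda_k^(s) and |phi_k^(s)>
    f = V.conj().T @ chi                                  # f_k^(s), Eq. (fsk)
    psi_k_t = V @ (w**t * f)                              # Eq. (evol_k)
    # inverse DFT, Eq. (DFT_x), as a sum over the discrete k-grid
    psi_x += np.exp(1j * k * np.arange(L))[:, None] * psi_k_t / L

print(np.sum(np.abs(psi_x)**2))                           # norm ~ 1, as it must be
```

For a localized initial state and a number of steps small enough that the walker does not wrap around the ring, this reproduces the distribution obtained by iterating the position-space map directly.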
Entanglement and thermodynamics. {#thermo}
================================
Entanglement in quantum mechanics is associated with the non-separability of the degrees of freedom of two or more particles. The degrees of freedom involved in entangled states are usually discrete, such as the spins of electrons or nuclei. However, there is also interest in continuous degrees of freedom, such as the position or the momentum of a particle, due to their potential to increase storage capacity and information processing in quantum computation [@Malena]. The unitary evolution of the QW generates entanglement between the coin and position degrees of freedom. The asymptotic coin-position entanglement and its dependence on the initial conditions of the QW have been investigated by several authors [@Carneiro; @abal; @salimi; @Annabestani; @Omar; @Pathak; @Petulante; @Venegas; @Endrejat; @Ellinas1; @Ellinas2; @Maloyer; @alejo2010; @alejo2012]. In particular, in Ref. [@alejo2012] it has been shown that the coin-position entanglement can be seen as a system-environment entanglement, which allows one to define an entanglement temperature. In the present work we also study this subject, using the $N$-dimensional QW as the system.
Let us briefly review the usual definition of entropy with the aim of clarifying the emergence of the concept of entanglement entropy. The density matrix of the quantum system is $$\widehat{\rho}(t)=\left\vert \psi_{t}\right\rangle \left\langle \psi_{t}
\right\vert . \label{rho}$$ The quantum analog of the Gibbs entropy is the von Neumann entropy $$S _{N}(t)=-\mathrm{tr}(\widehat{\rho}(t) \log{\widehat{\rho}(t)}).
\label{sn}$$ Owing to the unitary dynamics of the QW, the system remains in a pure state, and this entropy vanishes. However, for these pure states, the entanglement between the chirality and the position can be quantified by the associated von Neumann entropy for the reduced density operator, namely $$S(t)=-\mathrm{tr}(\widehat{\rho} _{c}(t) \log{\widehat{\rho} _{c}(t)}),
\label{s2}$$ where $$\widehat{\rho}_c(t)=\mathrm{tr_p}(\widehat{\rho} )=\sum_{\mathbf{x}}\left\langle \mathbf{x}\left\vert \psi_{t}\right\rangle \left\langle
\psi_{t} \right\vert \mathbf{x}\right\rangle , \label{rhoc}$$ is the reduced density operator for the chirality evolution and the partial trace, $\mathrm{tr_p}$, is taken over the positions. Note that, in general $\mathrm{tr}(\widehat{\rho}_{c}^{2})<1$, i.e., the reduced operator $\widehat{\rho}_{c}(t)$ corresponds to a statistical mixture. The expression for the entropy given by Eq.(\[s2\]), will be used as a measure of entanglement between the position and the chirality of the system. Using the properties of the wave-function $\left\vert \psi_{\mathbf{x},t}\right\rangle
=\left\langle \mathbf{x}\right.\left\vert \psi_{t}\right\rangle$ and the identity $$\sum_{\mathbf{x}}e^{i(\mathbf{k}-\mathbf{k}_{0})\cdot\mathbf{x}}=(2\pi)^{N}\delta^{N}\left(\mathbf{k}-\mathbf{k}_{0}\right),$$ for the $N$-dimensional delta, it is straightforward to obtain the following expression for Eq.(\[rhoc\]), the reduced density operator
$$\begin{aligned}
\widehat{\rho}_c(t)& = \sum_{s=1}^{2N}\sum_{s^{\prime}=1}^{2N}\int\exp\left[{i
\left(\omega_{\mathbf{k}}^{\left(s^{\prime}\right)}-\omega_{\mathbf{k}}^{\left(s\right)}\right)t}\right] \notag \\
& \times \tilde{f}_{\mathbf{k}}^{\left(s\right)} \left(\tilde{f}_{\mathbf{k}}^{\left(s^{\prime}\right)}\right)^{*} \left\vert \phi_{\mathbf{k}}^{\left(s\right)}\right\rangle \left\langle\phi_{\mathbf{k}}^{\left(s^{\prime}\right)}\right\vert
\frac{\mathrm{d}^{N}\mathbf{k}}{\left(2\pi\right)^{N}}. \label{evolrho}\end{aligned}$$
This expression can be evaluated in the asymptotic limit $t\rightarrow\infty$ using the stationary phase theorem, see Ref. [@nayak], where only terms with $\omega_{\mathbf{k}}^{\left(s^{\prime}\right)}=\omega_{\mathbf{k}}^{\left(s\right)}$ contribute in Eq.(\[evolrho\]). Therefore, in the asymptotic limit the reduced density operator is $$\widehat{\varrho}\equiv\widehat{\rho}_c(t\rightarrow\infty)=\sum_{s=1}^{2N}\int\frac{\mathrm{d}^{N}\mathbf{k}}{\left(2\pi\right)^{N}} |\tilde{f}_{\mathbf{k}}^{\left(s\right)}|^{2} \left\vert \phi_{\mathbf{k}}^{\left(s\right)}\right\rangle \left\langle\phi_{\mathbf{k}}^{\left(s\right)}\right\vert. \label{evolrhoinfo}$$ As the density operator is positive definite, its associated matrix, Eq.(\[evolrhoinfo\]), has real and positive eigenvalues. We let $\{\left\vert\Phi_{s}\right\rangle\}$ be the basis that diagonalizes this matrix. Therefore, in this basis, the asymptotic density matrix takes the following simple form: $${\varrho}_{ss^{\prime}}=\Lambda_s~\delta_{ss^{\prime}}, \label{dia}$$ where $\Lambda_s\geq 0$ are the eigenvalues of the asymptotic density matrix, which satisfy $$\sum_{s=1}^{2N}\Lambda_s=1. \label{evolrhoinf}$$ In order to make a more complete description of this equilibrium in the asymptotic limit, it is necessary to connect the eigenvalues of $\rho_c$ with an unknown associated Hamiltonian operator $H_c$. To obtain this connection we shall use the quantum Brownian motion model of Ref. [@Kubo]. In this theory one considers that the entanglement between the system associated with the chirality degrees of freedom, characterized by the density matrix $\rho_c$, and the system associated with the position degrees of freedom (the lattice) is equivalent to the thermal contact between the system and a thermal bath. In equilibrium $$[H_c,\rho_{c}]=0, \label{scho2}$$ should be satisfied. As a consequence, in the asymptotic regime the density operator $\rho_{c}$ is an explicit function of a time-independent Hamiltonian operator. If we denote by $\{\left\vert\Phi_{s}\right\rangle\}$ the set of eigenfunctions of the density matrix, the operators $H_{c}$ and $\rho_{c}$ are both diagonal in this basis. Therefore the eigenvalues $\Lambda _{s}$ depend on the corresponding eigenvalues of $H_c$. We denote this set of eigenvalues by $\{\epsilon_s\}$; they can be interpreted as the possible values of the entanglement energy. This interpretation agrees with the fact that $\Lambda_{s}$ is the probability that the system is in the eigenstate $\left\vert\Phi_{s}\right\rangle$.
To construct this connection, we note that Eq.(\[evolrhoinf\]), together with $0\le \Lambda_s$, implies that $0\le \Lambda_s\le 1$, thereby making it possible to associate a Boltzmann-type probability to each $\Lambda_s$. In other words, it is possible to associate, to each $\Lambda_s$, a virtual energy level $\epsilon_s$. The precise dependence between $\Lambda _{s}$ and $\epsilon_s$ is determined by the type of ensemble we construct. We propose in the present work that this equilibrium can be made to correspond to a quantum canonical ensemble. To do this, we define the following relation $$\Lambda _{s}\equiv\frac{e^{-\beta\epsilon_s}}{\mathbb{Z}}, \label{lam20}$$ where $\mathbb{Z}$ is the partition function of the system, that is $$\mathbb{Z}\equiv\sum_{s=1}^{2N}e^{-\beta\epsilon_s}, \label{part}$$ and the parameter $\beta$ can be put into correspondence with an entanglement temperature $$T\equiv\frac{1}{\kappa\beta}, \label{tem}$$ where $\kappa$ is the Boltzmann constant. Since only the relative difference between energy eigenvalues has physical significance, we consider the energy eigenvalues in decreasing order and, without loss of generality, set $$\epsilon_1=\epsilon , \label{eney0}$$ $$\epsilon_{2N}=-\epsilon . \label{eney}$$ The value of $\epsilon$ can be determined from Eqs.(\[lam20\],\[eney0\],\[eney\]), $$\epsilon=\frac{1}{2\beta}\log\frac{\Lambda _{2N}}{\Lambda _{1}} .
\label{eney00}$$ The energy eigenvalues for the remaining values of $s$, $s=2,3,\ldots,2N-1$, are, using again Eq.(\[lam20\]), $$\epsilon_{s}=\epsilon -\frac{1}{\beta}\log\frac{\Lambda _{s}}{\Lambda _{1}}. \label{eneres}$$ Therefore the asymptotic density matrix of Eq.(\[dia\]) can be thought of as the density matrix of the canonical ensemble $${\varrho}=\frac{1}{\mathbb{Z}}
\begin{pmatrix}
e^{-\beta \epsilon_{1}} & 0 & 0 & . & . & 0 & 0 \\
0 & e^{-\beta \epsilon_{2}} & 0 & . & . & 0 & 0 \\
0 & 0 & e^{-\beta \epsilon_{3}} & . & . & 0 & 0 \\
. & . & . & . & . & . & . \\
. & . & . & . & . & . & . \\
0 & 0 & 0 & . & . & e^{-\beta \epsilon_{2N-1}} & 0 \\
0 & 0 & 0 & . & . & 0 & e^{-\beta \epsilon_{2N}}\end{pmatrix}. \label{dia2}$$
Starting from the partition function of the system given by Eq.(\[part\]), it is possible to build the thermodynamics of the QW entanglement. In particular, the Helmholtz free energy $A$ is given by $$A\equiv-\frac{1}{\beta}\log\mathbb{Z}=-\frac{1}{\beta}\log\sum_{s=1}^{2N}e^{-\beta\epsilon_s}, \label{free}$$ and the internal energy $U$ by $$U\equiv-\frac{1}{\mathbb{Z}}\frac{\partial\mathbb{Z}}{\partial\beta}=\frac{1}{\mathbb{Z}}\sum_{s=1}^{2N}\epsilon_s e^{-\beta\epsilon_s}. \label{free2}$$ Thus, the asymptotic entanglement entropy as a function of the eigenvalues $\Lambda _{s}$ is $$S=-\sum_{s=1}^{2N}\Lambda _{s} \log{\Lambda _{s}}. \label{s00}$$ Substituting Eq.(\[lam20\]) into Eq.(\[s00\]), after straightforward operations using Eqs.(\[free\],\[free2\]), we obtain the following expression for the asymptotic entanglement entropy $${S}= {\beta}(U- A). \label{termo}$$ As should be expected, this last equation agrees with the thermodynamic definition of the entropy.
Of course, in Eq.(\[eney00\]) only the ratio $\epsilon/T$ is well defined; however, we chose to introduce the temperature because this concept strengthens the idea of asymptotic equilibrium between the position and chirality degrees of freedom. Note that while the temperature makes sense only in the mentioned equilibrium state, the entropy concept can be introduced without such a restriction. For all practical purposes we shall take $\epsilon=\kappa$; then the entanglement temperature is determined by $$T=\frac{2}{\log\left({\Lambda _{2N}}/{\Lambda _{1}}\right)}, \label{tem00}$$ and the energy eigenvalues by $$\epsilon_{s}=1-2\frac{\log\left({\Lambda _{s}}/{\Lambda _{1}}\right)}{\log\left({\Lambda _{2N}}/{\Lambda _{1}}\right)}. \label{enefin}$$
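The passage from the eigenvalues $\Lambda_s$ to the thermodynamic quantities is summarized in the following sketch, which implements Eqs.(\[s00\]), (\[free\]), (\[free2\]), (\[tem00\]) and (\[enefin\]) with $\epsilon=\kappa=1$; the eigenvalues used in the example are hypothetical.

```python
import numpy as np

def entanglement_thermodynamics(Lam):
    """Temperature, energy levels, free energy, internal energy and entropy
    from the eigenvalues Lam of the asymptotic reduced density matrix."""
    Lam = np.sort(np.asarray(Lam, dtype=float))        # Lambda_1 <= ... <= Lambda_{2N}
    T = 2.0 / np.log(Lam[-1] / Lam[0])                 # Eq. (tem00)
    beta = 1.0 / T
    eps = 1.0 - 2.0 * np.log(Lam / Lam[0]) / np.log(Lam[-1] / Lam[0])  # Eq. (enefin)
    Z = np.sum(np.exp(-beta * eps))                    # Eq. (part)
    A = -np.log(Z) / beta                              # Eq. (free)
    U = np.sum(eps * np.exp(-beta * eps)) / Z          # Eq. (free2)
    S = -np.sum(Lam * np.log(Lam))                     # Eq. (s00)
    return T, eps, Z, A, U, S

# Hypothetical eigenvalues for a 2-dimensional walk (2N = 4 levels):
T, eps, Z, A, U, S = entanglement_thermodynamics([0.4, 0.3, 0.2, 0.1])
print(T, S, (U - A) / T)      # the last two coincide, checking Eq. (termo)
```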
Initial conditions. {#initial}
===================
We now discuss the consequences of choosing different initial conditions on the thermal evolution of the system. We are interested in characterizing the long-time coin-position entanglement generated by the evolution of the $N$-dimensional QW. First we consider the case of a separable coin-position initial state. More specifically, we take initial chirality conditions of the form $$\left\vert {\psi _{\mathbf{x},0}}\right\rangle ={\xi _{\mathbf{x},0}}\left\vert {\chi }\right\rangle , \label{inipsi3}$$where ${\xi _{\mathbf{x},0}}$ is a generic position wave function and $$\left\vert {\chi }\right\rangle =\cos {(\gamma /2)}\left\vert {Z_+ }\right\rangle +e^{i\varphi }\sin {(\gamma /2)}\left\vert {Z_- }\right\rangle,
\label{inipsi4}$$ with $$\left\vert {Z_\pm }\right\rangle \equiv\frac{1}{\sqrt{N}}\sum_{\alpha =1}^{N}\left\vert \ \alpha _{\pm}\right\rangle .
\label{z1}$$ The two parameters $\gamma \in \left[ 0,\pi \right] $ and $\varphi \in\left[
0,2\pi \right] $ define the initial point on the generalized Bloch’s sphere. The DFT of Eq.(\[inipsi3\]) is $$\left\vert \tilde{\psi}_{\mathbf{k,}0}\right\rangle =\sum_{\mathbf{x}}e^{-i\mathbf{k}\cdot \mathbf{x}}\left\vert \psi _{\mathbf{x},0}\right\rangle
=\sum_{\mathbf{x}}e^{-i\mathbf{k}\cdot \mathbf{x}}\xi _{\mathbf{x},0}\left\vert {\chi }\right\rangle . \label{DFT_k0}$$In order to obtain a closed equation for $\Lambda _{s}$ we consider in detail the simple case where the amplitudes ${\xi _{\mathbf{x},0}}$ have an isotropic Gaussian position distribution multiplied by the plane wave $e^{i\mathbf{k_{0}}\cdot \mathbf{x}}$, that is $${\xi _{\mathbf{x},0}}\propto e^{i\mathbf{k_{0}}\cdot \mathbf{x}}\frac{1}{\sigma ^{N/2}}\exp {\left( -\frac{\mathbf{x}\cdot \mathbf{x}}{\sigma ^{2}}\right) }, \label{gauss1}$$where $\sigma >0$ is a characteristic width and $\mathbf{k_{0}}$ is a particular initial momentum that characterizes the initial condition. We will deal with sufficiently large values of $\sigma $ for the Gaussian, so as to make possible the connection of the DFT with the continuum limit. Then, for such values of $\sigma $, Eq.(\[DFT\_k0\]) can be written as $$\left\vert \tilde{\psi}_{\mathbf{k,}0}\right\rangle \propto \sigma
^{N/2}\sum_{\mathbf{x}}e^{-\frac{\sigma ^{2}}{2}{\left( \mathbf{k-k_{0}+2\pi
\mathbf{x}}\right) ^{2}}}\left\vert {\chi }\right\rangle , \label{gauss2}$$ see Appendix A. If we want to simulate a uniform initial distribution for the $N$-dimensional QW we can take $\sigma
\to \infty $ in Eq.(\[gauss2\]). In this case we can use the following mathematical property for the Dirac delta, $$\lim_{\sigma \to \infty }\left( \frac{\sigma }{\sqrt{\pi }}\right)
^{N}e^{-\sigma ^{2}{\left( \mathbf{k-k_{0}+2\pi \mathbf{x}}\right) ^{2}}}\equiv \delta ^{N}\left( \mathbf{k-k_{0}+2\pi \mathbf{x}}\right) .
\label{fsk3}$$Eq.(\[gauss2\]) can then be expressed as $$\left\vert \tilde{\psi}_{\mathbf{k,}0}\right\rangle \propto \left[ \sum_{\mathbf{x}}\delta ^{N/2}\left( \mathbf{k-k_{0}+2\pi \mathbf{x}}\right) \right] \left\vert {\chi }\right\rangle . \label{gauss3}$$We shall now assume that the components of $\mathbf{k_{0}}$ belong to the interval $\left( -\pi ,\pi \right) $, then in the sum of Eq.(\[gauss3\]) the only term that survives is the one for $\mathbf{x}=\mathbf{0}$. This is due to the fact that all components of $\mathbf{k}$ lie within the interval $\left[ -\pi ,\pi \right] $, and that the vector $\mathbf{x}$ has only discrete components. Then using Eq.(\[fsk\]), Eq.(\[gauss3\]) and the normalization condition, we have $$\left\vert \tilde{f}_{\mathbf{k}}^{\left( s\right) }\right\vert ^{2}={\left(
2\pi \right) ^{N}}\delta ^{N}\left( \mathbf{k-k_{0}}\right) \left\vert
\left\langle \phi _{\mathbf{k}}^{\left( s\right) }\right. \left\vert \chi
\right\rangle \right\vert ^{2}. \label{fsk4}$$Therefore in this case, from Eq.(\[evolrhoinfo\]), it is straightforward to obtain the eigenvalues for the asymptotic density matrix, $$\Lambda _{s}={\left\vert \left\langle \phi _{\mathbf{k_{0}}}^{\left(
s\right) }\right. \left\vert \chi \right\rangle \right\vert ^{2}},
\label{lam2}$$and their respective eigenfunctions, $$\left\vert \Phi _{s}\right\rangle =\left\vert \phi _{\mathbf{k_{0}}}^{\left(
s\right) }\right\rangle . \label{eigen}$$
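Note that, since the eigenvectors $\left\{ \left\vert \phi _{\mathbf{k_{0}}}^{\left( s\right) }\right\rangle \right\} _{s=1}^{2N}$ form an orthonormal basis of the coin space, the spectrum of Eq.(\[lam2\]) is automatically normalized, $\sum_{s=1}^{2N}\Lambda _{s}=\sum_{s}\left\vert \left\langle \phi _{\mathbf{k_{0}}}^{\left( s\right) }\right. \left\vert \chi \right\rangle \right\vert ^{2}=\left\langle \chi \right. \left\vert \chi \right\rangle =1$, as required for the asymptotic reduced density operator.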
As a second example we consider the case of a non separable coin-position initial state. In particular we take $$\left\vert \psi _{\mathbf{x},0}\right\rangle =\int \frac{\mathrm{d}^{N}\mathbf{k}}{\left( 2\pi \right) ^{N}}\exp \left[ {i\left( \mathbf{k}\cdot
\mathbf{x}\right) }\right] \left\vert \tilde{\psi}_{\mathbf{k,}0}\right\rangle , \label{inidos}$$with $$\begin{aligned}
\left\vert \tilde{\psi}_{\mathbf{k,}0}\right\rangle & =\cos {(\gamma /2)}\frac{1}{\sqrt{N}}\sum_{s=1}^{N}\left\vert {\phi _{\mathbf{k}}^{\left(
s\right) }}\right\rangle \notag \\
& +e^{i\varphi }\sin {(\gamma /2)}\frac{1}{\sqrt{N}}\sum_{s=N+1}^{2N}\left\vert {\phi _{\mathbf{k}}^{\left( s\right) }}\right\rangle , \label{renato}\end{aligned}$$and then $$\begin{aligned}
\label{renato2}
\left\vert \tilde{f}_{\mathbf{k}}^{\left( s\right) }\right\vert ^{2}
&=&\left\vert \left\langle \phi _{\mathbf{k}}^{\left( s\right) }\right.
\left\vert \tilde{\psi}_{\mathbf{k},0}\right\rangle \right\vert ^{2} \notag
\\
&=&\frac{1}{N}\left\{
\begin{array}{ll}
\cos ^{2}{(\gamma /2)}, & \text{for }s=1,2,\ldots,N \\
\sin ^{2}{(\gamma /2)}, & \text{for }s=N+1,N+2,\ldots,2N\end{array}\right. \\\end{aligned}$$Therefore the eigenvalues $\Lambda _{s}$ are the eigenvalues of the matrix associated to the following operator, see Eq.(\[evolrhoinfo\]) $$\begin{aligned}
&&\frac{1}{N}\int \frac{\mathrm{d}^{N}\mathbf{k}}{\left( 2\pi \right) ^{N}}\left\{ \cos ^{2}{(\gamma /2)}\sum_{s=1}^{N}\left\vert \phi _{\mathbf{k}}^{\left( s\right) }\right\rangle \left\langle \phi _{\mathbf{k}}^{\left(
s\right) }\right\vert \right. \notag \\
&&+\sin ^{2}{(\gamma /2)}\left. \sum_{s=N+1}^{2N}\left\vert \phi _{\mathbf{k}}^{\left( s\right) }\right\rangle \left\langle \phi _{\mathbf{k}}^{\left(
s\right) }\right\vert \right\} . \label{renato3}\end{aligned}$$ As a third example, we take $$\begin{aligned}
\left\vert \tilde{\psi}_{\mathbf{k,}0}\right\rangle & =\cos {(\gamma /2)}\frac{1}{\sqrt{N}}\sum_{s=1}^{N}\left\vert {\phi _{\mathbf{k}}^{\left(
2s\right) }}\right\rangle \notag \\
& +e^{i\varphi }\sin {(\gamma /2)}\frac{1}{\sqrt{N}}\sum_{s=1}^{N}\left\vert
{\phi _{\mathbf{k}}^{\left( 2s-1\right) }}\right\rangle , \label{iniuno}\end{aligned}$$and then $$\left\vert \tilde{f}_{\mathbf{k}}^{\left( s\right) }\right\vert
^{2}=\left\vert \left\langle \phi _{\mathbf{k}}^{\left( s\right) }\right.
\left\vert \tilde{\psi}_{\mathbf{k},0}\right\rangle \right\vert ^{2}=\frac{1}{N}\left\{
\begin{array}{ll}
\cos ^{2}{(\gamma /2)}, & \text{for }s\text{ even} \\
\sin ^{2}{(\gamma /2)}, & \text{for }s\text{ odd}\end{array}\right. . \label{fsk2}$$Finally, using Eq.(\[evolrhoinfo\]), the eigenvalues $\Lambda _{s}$ are the eigenvalues of the matrix associated to the operator $$\begin{aligned}
&&\frac{1}{N}\int \frac{\mathrm{d}^{N}\mathbf{k}}{\left( 2\pi \right) ^{N}}\left\{ \cos ^{2}{(\gamma /2)}\sum_{s=1}^{N}\left\vert \phi _{\mathbf{k}}^{\left( 2s\right) }\right\rangle \left\langle \phi _{\mathbf{k}}^{\left(
2s\right) }\right\vert \right. \notag \\
&&+\sin ^{2}{(\gamma /2)}\left. \sum_{s=1}^{N}\left\vert \phi _{\mathbf{k}}^{\left( 2s-1\right) }\right\rangle \left\langle \phi _{\mathbf{k}}^{\left(
2s-1\right) }\right\vert \right\} . \label{lam0}\end{aligned}$$
Application to the 2D quantum walk {#examples2D}
==================================
In this Section we illustrate the general treatment introduced above in the special case of the $2D$ quantum walk. References [@Inui; @Watabe08] introduced a one-parameter family of quantum-walk models on $2D$ as a generalization of Grover’s model by specifying the corresponding matrix $C_{\mathbf{k}}$, see Eq.(\[Ck\_elements\]), as $$C_{\mathbf{k}}=\begin{pmatrix}
-pe^{ik_{1}} & qe^{ik_{1}} & \sqrt{pq}e^{ik_{1}} & \sqrt{pq}e^{ik_{1}} \\
qe^{-ik_{1}} & -pe^{-ik_{1}} & \sqrt{pq}e^{-ik_{1}} & \sqrt{pq}e^{-ik_{1}}
\\
\sqrt{pq}e^{ik_{2}} & \sqrt{pq}e^{ik_{2}} & -qe^{ik_{2}} & pe^{ik_{2}} \\
\sqrt{pq}e^{-ik_{2}} & \sqrt{pq}e^{-ik_{2}} & pe^{-ik_{2}} & -qe^{-ik_{2}}\end{pmatrix}, \label{G_k}$$ where the parameter $p\in[0,1]$, $q=1-p$ and $\mathbf{k}=\left(k_{1},k_{2}\right)$ is the quasi-momentum vector. If $p=q=1/2$ we have the Grover coin. From now on we take this to be the case.
Eq.(\[G\_k\]) has four eigenvalues $\lambda_{s},\;s=1,2,3,4$, $$\begin{aligned}
\lambda_{1}=1,\lambda_{2}=-1,\lambda_{3}=e^{i\omega\left(k_{1},k_{2}\right)},\lambda_{4}=e^{-i\omega\left(k_{1},k_{2}\right)}, \label{auto}\end{aligned}$$ where $$\cos\omega\left(k_{1},k_{2}\right)=-\frac{1}{2}\left(\cos{k_{1}}+\cos{k_{2}}\right). \label{dis}$$ The eigenvectors corresponding to the eigenvalues are given by the following column vectors $$\left\vert \phi_{\mathbf{k}}^{\left(s\right)} \right\rangle=\frac{1}{\mathcal{N}_{\mathbf{k}}^{(s)}}
\begin{pmatrix}
\left(1+e^{-\mathrm{i}k_{1}}\lambda_{\mathbf{k}}^{\left(s\right)}\right)^{-1}
\\
\left(1+e^{+\mathrm{i}k_{1}}\lambda_{\mathbf{k}}^{\left(s\right)}\right)^{-1}
\\
\left(1+e^{-\mathrm{i}k_{2}}\lambda_{\mathbf{k}}^{\left(s\right)}\right)^{-1}
\\
\left(1+e^{+\mathrm{i}k_{2}}\lambda_{\mathbf{k}}^{\left(s\right)}\right)^{-1}\end{pmatrix}, \label{eig_gen}$$ where the normalization factors $\mathcal{N}_{\mathbf{k}}^{(s)}$ are given by $$\begin{aligned}
\mathcal{N}_{\mathbf{k}}^{(1)}=\sqrt{\frac{1}{1+\cos{k_1}}+\frac{1}{1+\cos{k_2}}} \notag \\
\mathcal{N}_{\mathbf{k}}^{(2)}=\sqrt{\frac{1}{1-\cos{k_1}}+\frac{1}{1-\cos{k_2}}} \notag \\
\mathcal{N}_{\mathbf{k}}^{(3)}=\mathcal{N}_{\mathbf{k}}^{(4)}= \sqrt{2\,\frac{4-\left(\cos{k_1}+\cos{k_2}\right)^2}{\left(\cos{k_1}-\cos{k_2}\right)^2}}. \label{norm}\end{aligned}$$
From Eq.(\[auto\]), we see that the first two eigenvalues $\lambda_{1}=1$ and $\lambda_{2}=-1$ do not depend on $\mathbf{k}$, and the last two eigenvalues are complex conjugates of each other. Equation (\[dis\]) is the dispersion relation of the system. The frequency satisfies $\omega\left(k_{1},k_{2}\right)\in\left[0,2\pi\right]$, and when $k_{1}=0$ and $k_{2}=0$ the system is degenerate because three eigenvalues coincide, $\lambda_{2}=\lambda_{3}=\lambda_{4}=-1$, see Eqs.(\[auto\], \[dis\]). Due to this degeneracy, the frequencies $\pm\omega\left(k_{1},k_{2}\right)$, as functions of $k_{1}$ and $k_{2}$, have a diabolo shape. These degenerate points are called “diabolical points" [@german2013].
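As an illustrative cross-check (not part of the original text; a short Python/NumPy sketch with arbitrarily chosen quasi-momenta), one can build the matrix of Eq.(\[G\_k\]) with $p=q=1/2$ and verify numerically that it is unitary and that its spectrum is $\{1,-1,e^{i\omega},e^{-i\omega}\}$ with $\cos\omega=-(\cos k_{1}+\cos k_{2})/2$, as stated in Eqs.(\[auto\],\[dis\]).

```python
import numpy as np

# Grover coin (p = q = 1/2) in quasi-momentum space, Eq. (G_k), at an arbitrary (k1, k2).
p = q = 0.5
k1, k2 = 0.7, -1.3
r = np.sqrt(p * q)
Ck = np.array([
    [-p * np.exp(1j * k1),   q * np.exp(1j * k1),   r * np.exp(1j * k1),   r * np.exp(1j * k1)],
    [ q * np.exp(-1j * k1), -p * np.exp(-1j * k1),  r * np.exp(-1j * k1),  r * np.exp(-1j * k1)],
    [ r * np.exp(1j * k2),   r * np.exp(1j * k2),  -q * np.exp(1j * k2),   p * np.exp(1j * k2)],
    [ r * np.exp(-1j * k2),  r * np.exp(-1j * k2),  p * np.exp(-1j * k2), -q * np.exp(-1j * k2)],
])

print(np.allclose(Ck @ Ck.conj().T, np.eye(4)))     # unitarity
omega = np.arccos(-(np.cos(k1) + np.cos(k2)) / 2)   # dispersion relation, Eq. (dis)
expected = np.array([1, -1, np.exp(1j * omega), np.exp(-1j * omega)])
eigs = np.linalg.eigvals(Ck)
print(all(np.isclose(eigs, z).any() for z in expected))   # spectrum of Eq. (auto)
```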
![The eigenvalues of the reduced density matrix, Eqs.(\[lam1\],\[lam21\],\[lam34\]), as a function of the parameter $x=\sin{\protect\gamma}\cos{\protect\varphi}$, with $\protect\theta=\protect\pi$. $\Lambda_1$ in full line, $\Lambda_2$ in dashed line and $\Lambda_3$ in dot-dashed line.[]{data-label="fig:a"}](lambda.eps){width="0.7\columnwidth"}
![Entanglement temperature, see Eq.(\[temk0\]), as a function of the dimensionless parameter $x=\sin{\protect\gamma}\cos{\protect\varphi}$, with $\protect\theta=\protect\pi$.[]{data-label="fig:b"}](temp2.eps){width="0.7\columnwidth"}
QW’s temperature for a separable coin-position initial state {#exampleA}
------------------------------------------------------------
In order to calculate $\Lambda_s$, Eq.(\[lam2\]), we select the diabolical point $\mathbf{k_0}=\mathbf{0}$, and we must be very careful because the calculation of the eigenvectors, Eq.(\[eig\_gen\]), has indeterminacies there. The eigenvectors of the $2D$ Grover walk matrix are given by Eq.(\[eig\_gen\]). Whenever $\mathbf{k}$ is not close to a diabolical point these eigenvectors vary smoothly around $\mathbf{k}$. However, we want to study the behavior of the eigenvectors close to the diabolical point at $\mathbf{k}=\mathbf{k}_{\mathrm{0}}\equiv\left(0,0\right)$. We find it convenient to use polar coordinates $\left(k_{1},k_{2}\right)=\left(k\cos\theta,k\sin\theta\right)$. Taking the limit of Eq.(\[eig\_gen\]) for $k\rightarrow0$ we find $$\begin{aligned}
\left\vert\phi_{\mathbf{k_0}}^{\left(1\right)}\right\rangle & =\frac{1}{2}\left(\begin{array}{c}
1 \\
1 \\
1 \\
1\end{array}\right), \label{eigen_close1} \\
\left\vert\phi_{\mathbf{k_0}}^{\left(2\right)}\right\rangle & =\frac{i}{\sqrt{2}}\left(\begin{array}{c}
-\sin\theta \\
+\sin\theta \\
-\cos\theta \\
+\cos\theta\end{array}\right), \label{eigen_close2} \\
\left\vert\phi_{\mathbf{k_0}}^{\left(3\right)}\right\rangle & =\frac{i}{2\sqrt{2}}\left(\begin{array}{c}
1-\sqrt{2}\cos\theta \\
1+\sqrt{2}\cos\theta \\
-1+\sqrt{2}\sin\theta \\
-1-\sqrt{2}\sin\theta\end{array}\right), \label{eigen_close3} \\
\left\vert\phi_{\mathbf{k_0}}^{\left(4\right)}\right\rangle & =\frac{i}{2\sqrt{2}}\left(\begin{array}{c}
-1-\sqrt{2}\cos\theta \\
-1+\sqrt{2}\cos\theta \\
1+\sqrt{2}\sin\theta \\
1-\sqrt{2}\sin\theta\end{array}\right). \label{eigen_close4}\end{aligned}$$ Taking the two-dimensional expression of $\left\vert{\chi}\right\rangle$, see Eq.(\[inipsi4\]), in its matrix shape $$\left\vert{\chi}\right\rangle=\frac{1}{\sqrt{2}}
\begin{pmatrix}
\cos{(\gamma/2)} \\
e^{i\varphi}\sin{(\gamma/2)} \\
\cos{(\gamma/2)} \\
e^{i\varphi}\sin{(\gamma/2)}\end{pmatrix}, \label{matini}$$ we can evaluate $\Lambda_s$, see Eq.(\[lam2\]), that is $$\Lambda_1=\frac{1}{2}\left(1+\sin{\gamma}\cos{\varphi}\right), \label{lam1}$$ $$\Lambda_2=\frac{1}{4}\left(1+\sin{2\theta}\right)\left(1-\sin{\gamma}\cos{\varphi}\right), \label{lam21}$$ $$\Lambda_3=\Lambda_4=\frac{1}{8}\left(1-\sin{2\theta}\right)\left(1-\sin{\gamma}\cos{\varphi}\right). \label{lam34}$$ Figure \[fig:a\] shows the dependence of $\Lambda_s, s=1,2,3,4$ on the initial conditions through the parameter $$x\equiv\sin{\gamma}\cos{\varphi}. \label{xx}$$
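These closed forms can be checked directly. The following short numerical sketch (an illustration with hypothetical values of $\gamma$, $\varphi$ and of the approach angle $\theta$; Python/NumPy assumed) evaluates $\Lambda_s=|\langle\phi_{\mathbf{k_0}}^{(s)}|\chi\rangle|^2$ from the limiting eigenvectors above and Eq.(\[matini\]), and compares the result with Eqs.(\[lam1\],\[lam21\],\[lam34\]).

```python
import numpy as np

gamma, varphi, theta = 1.1, 0.4, np.pi   # hypothetical initial condition and approach angle
c, s = np.cos(theta), np.sin(theta)

# Limiting eigenvectors at the diabolical point, Eqs. (eigen_close1)-(eigen_close4).
phi1 = 0.5 * np.array([1, 1, 1, 1])
phi2 = (1j / np.sqrt(2)) * np.array([-s, s, -c, c])
phi3 = (1j / (2 * np.sqrt(2))) * np.array([1 - np.sqrt(2) * c, 1 + np.sqrt(2) * c,
                                           -1 + np.sqrt(2) * s, -1 - np.sqrt(2) * s])
phi4 = (1j / (2 * np.sqrt(2))) * np.array([-1 - np.sqrt(2) * c, -1 + np.sqrt(2) * c,
                                           1 + np.sqrt(2) * s, 1 - np.sqrt(2) * s])

# Initial chirality state in matrix form, Eq. (matini).
chi = np.array([np.cos(gamma / 2), np.exp(1j * varphi) * np.sin(gamma / 2)] * 2) / np.sqrt(2)

Lam = [abs(np.vdot(v, chi))**2 for v in (phi1, phi2, phi3, phi4)]
x = np.sin(gamma) * np.cos(varphi)
closed = [(1 + x) / 2,
          (1 + np.sin(2 * theta)) * (1 - x) / 4,
          (1 - np.sin(2 * theta)) * (1 - x) / 8,
          (1 - np.sin(2 * theta)) * (1 - x) / 8]
print(np.allclose(Lam, closed), np.isclose(sum(Lam), 1.0))
```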
From Eq.(\[tem00\]), the entanglement temperature in the diabolical point is $$T={2/\log\left(\frac{\Lambda_{max}}{\Lambda_{min}} \right)}, \label{temk0}$$ where $\Lambda_{max}$ and $\Lambda_{min}$ are respectively the maximum and minimum value of $\Lambda$ given by Eqs.(\[lam1\],\[lam21\],\[lam34\]).
Equation (\[temk0\]) shows that the QW initial conditions $\gamma,\varphi$ and $\theta$ ($\mathbf{k_0}$) determine the entanglement temperature and for a fixed $\theta$ the isothermal lines as a function of the initial conditions are determined by the following equation $$x=\sin{\gamma}\cos{\varphi}=\mathcal{C}, \label{iso1}$$ where $\mathcal{C}$ is a constant.
In Fig. \[fig:b\] we see that the temperature as a function of $x$ increases from $T=0$ for $x=-1$ to the constant value $T_0=2/\log2$ in the $x$ interval $[-3/5,-1/3]$, and then decreases gradually, reaching $T=0$ at $x=1$. The isotherms are the intersections of the Bloch sphere with the planes $x=\mathrm{constant}$.
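A quick numerical illustration of this profile (with hypothetical sampling values; Python/NumPy assumed, built from Eqs.(\[lam1\],\[lam21\],\[lam34\],\[temk0\]) with $\theta=\pi$) is the following sketch, which reproduces the plateau $T_0=2/\log 2$ on $x\in[-3/5,-1/3]$ and lower temperatures outside it.

```python
import numpy as np

def T_of_x(x, theta=np.pi):
    # Eigenvalues at the diabolical point, Eqs. (lam1), (lam21), (lam34), and Eq. (temk0).
    lam = np.array([(1 + x) / 2,
                    (1 + np.sin(2 * theta)) * (1 - x) / 4,
                    (1 - np.sin(2 * theta)) * (1 - x) / 8,
                    (1 - np.sin(2 * theta)) * (1 - x) / 8])
    return 2.0 / np.log(lam.max() / lam.min())

T0 = 2.0 / np.log(2.0)
print(all(np.isclose(T_of_x(x), T0) for x in np.linspace(-3/5, -1/3, 7)))   # plateau
print(T_of_x(-0.9) < T0, T_of_x(0.5) < T0)                                   # outside the plateau
```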
![[(Color online) Isotherms on the Bloch sphere. $\left\vert {Z_+ }\right\rangle$ and $\left\vert
{Z_-}\right\rangle$ are the North and South Pole, respectively. The two black points (“cold points", corresponding to $T=0$) on the sphere are the points $\frac{1}{\sqrt{2}}(\left\vert {Z_+}\right\rangle+\left\vert {Z_- }\right\rangle)$ and $\frac{1}{\sqrt{2}}(\left\vert {Z_+ }\right\rangle-\left\vert {Z_-}\right\rangle)$. The light (yellow) zone is the “hot zone" $T=T_0$.]{}[]{data-label="fig:bloch"}](blochsphere3.eps){width="0.7\columnwidth"}
Figure \[fig:bloch\] shows the isotherms for the entanglement temperature as a function of the QW initial position, defined on the Bloch sphere. The figure shows three regions: two dark zones on the left and right, corresponding to temperatures $0<T<T_0$, and a light one corresponding to the constant temperature $T=T_0$.
QW’s temperature for a non separable coin-position initial state I {#exampleB}
------------------------------------------------------------------
Taking the initial state given by Eqs.(\[inidos\],\[renato\]) and evaluating Eq.(\[renato3\]), it is easy to show that $\widehat{\varrho }$ reduces to $$\widehat{\varrho }=\frac{1}{4}\left(
\begin{array}{cccc}
1 & a & b & b \\
\noalign{\medskip}a & 1 & b & b \\
\noalign{\medskip}b & b & 1 & a \\
\noalign{\medskip}b & b & a & 1\end{array}\right) , \label{renato11}$$where$$\begin{aligned}
a &=&\left( 1-4/{\pi }\right) \cos \left( \gamma \right) , \label{re1} \\
b &=&\left( 1-2/{\pi }\right) \cos \left( \gamma \right) . \label{re2}\end{aligned}$$The eigenvalues of Eq.(\[renato11\]) are $$\begin{aligned}
\Lambda _{1} &=&[1-\cos (\gamma )]/4, \label{r1} \\
\Lambda _{2} &=&[1-(3-8/\pi)\cos (\gamma )]/4, \label{rr2} \\
\Lambda _{3} &=&[1-(1-4/\pi)\cos (\gamma )]/4\\
\Lambda _{4} &=&\Lambda _{3}.\end{aligned}$$The entanglement temperature Eq.(\[tem00\]) is thus given by $$T=\frac{2}{\left\vert \ln \frac{1+\left( \frac{4}{\pi }-1\right) \cos \gamma}{1-\cos \gamma}\right\vert}. \label{rena}$$Figure \[fig:1\] shows that the temperature as a function of $\gamma$ increases from $T=0$ for $\gamma=0$, to infinity for $\gamma=\pi/2$, and then decreases gradually to $T={2}/{\left\vert \ln \left( 1-2/\pi \right) \right\vert }$ at $\gamma=\pi$.
![Entanglement temperature, see Eq. (\[rena\]), as a function of the dimensionless parameter $\protect\gamma$.[]{data-label="fig:1"}](tempvsgamma.eps){width="0.7\columnwidth"}
In order to take the initial condition on the generalized Bloch sphere, we redefine $$\left\vert {Z_+ }\right\rangle \equiv \frac{1}{\sqrt{N}}\sum_{s=1}^{N}\left\vert {\phi _{\mathbf{k}}^{\left( s\right)
}}\right\rangle , \label{z1p}$$ $$\left\vert {Z_- }\right\rangle \equiv \frac{1}{\sqrt{N}}\sum_{s=N+1}^{2N}\left\vert {\phi _{\mathbf{k}}^{\left( s\right) }}\right\rangle . \label{z2p}$$ Then the initial state Eq.(\[renato\]) takes the following form $$\begin{aligned}
\left\vert \tilde{\psi}_{\mathbf{k,}0}\right\rangle =\cos {(\gamma /2)}\left\vert {Z_+ }\right\rangle
+e^{i\varphi }\sin {(\gamma /2)} \left\vert {Z_- }\right\rangle, \label{renato000000}\end{aligned}$$ where $\gamma$ and $\varphi$ define a point on the unit Bloch sphere. In this case the isotherms have a rotational symmetry around the axis defined by the points $\left\vert {Z_+}\right\rangle$ and $\left\vert {Z_-}\right\rangle$, the North and South poles respectively. Therefore the isotherms are the parallels $z=\mathrm{constant}$ on the Bloch sphere. In the northern hemisphere the temperature of the isotherms increases from $T=0$ at the North pole to infinity at the Equator, and in the southern hemisphere it decreases from infinity at the Equator to the finite value $T={2}/{\left\vert \ln \left( 1-2/\pi \right) \right\vert }$ at the South pole.
QW’s temperature for non separable coin-position initial state II {#exampleC}
-----------------------------------------------------------------
For the 2D case, taking the initial state given by Eq.(\[iniuno\]), after some lengthy but straightforward operations we can evaluate $\Lambda _{s}$, obtaining $$\Lambda _{s}=\frac{1}{4}, \quad\mathrm{for}~~ s=1,2,3,4, \label{cuatro1}$$ which, according to Eq.(\[tem00\]), indicates that the temperature is infinite all over the Bloch sphere, representing a degenerate case. The symmetries of the Grover coin suggest that $\widehat{\varrho }=\frac{\hat{I}}{2N}$ for $N>2$ when we use the initial condition Eq.(\[iniuno\]).
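Note that in this degenerate case the entanglement entropy of Eq.(\[s00\]) attains its maximal value, $S=-\sum_{s=1}^{4}\frac{1}{4}\log \frac{1}{4}=\log 4$ (and $S=\log (2N)$ whenever $\widehat{\varrho }=\hat{I}/2N$), consistently with an infinite entanglement temperature.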
Conclusion
==========
During the last thirty years, several technological advances have made it possible to construct and preserve quantum states. They have also increased the prospects of building quantum computing devices. Therefore, the study of the dynamics of open quantum systems becomes relevant both for the development of these technologies and for the algorithms that will run on those future quantum computers. The quantum walk has emerged as a useful theoretical tool to study many fundamental aspects of quantum dynamics. It provides a framework in which to study, among other effects, the entanglement between its degrees of freedom, in a simple setting that often allows for a full analytical treatment of the problem. The study of this kind of entanglement is important in order to understand the asymptotic equilibrium between its internal degrees of freedom.
In this paper we have studied the asymptotic regime of the $N$-dimensional quantum walk. We have focused on the asymptotic entanglement between the chirality and position degrees of freedom, and have shown that the system establishes a stationary entanglement between the coin and the position that allows a thermodynamic theory to be developed. We were thus able to generalize previous results obtained in references [@alejo2012; @gustavo]. The asymptotic reduced density operator was used to introduce the entanglement thermodynamic functions in the canonical equilibrium. These thermodynamic functions characterize the asymptotic entanglement, and the system can be seen as a particle coupled to an infinite bath, the $|x\rangle$ position states. It was shown that the QW initial condition determines the system’s temperature, as well as the other thermodynamic functions. A map for the isotherms was analytically built for arbitrary localized initial conditions. The behavior of the reduced density operator looks diffusive, although it depends on the initial conditions and the global evolution of the system is unitary. Then, if an observer only had access to the chirality degrees of freedom, it would be very difficult to recognize the unitary character of the quantum evolution. In general, from this simple model we can conclude that if the dynamics of a quantum system occurs in a composite Hilbert space, then the behavior of operators that act on only one subspace can camouflage the unitary character of the global evolution.
The development of experimental techniques has made possible the trapping of samples of atoms using resonant exchanges of momentum and energy between atoms and laser light. However, it is not yet possible to prepare a system with a particular initial chirality. Therefore, the averaged thermodynamic functions could be more meaningful from an experimental point of view. It is interesting to point out that, for a given family of initial conditions such as that given by Eq.(\[inipsi4\]), the explicit dependence of the thermodynamic functions on the initial position on the Bloch sphere, $\gamma$ and $\varphi$, can be eliminated if we take the average of $\Lambda_s$ over all initial conditions. Then each family could be characterized by a single asymptotic average temperature.
We acknowledge the support from PEDECIBA and ANII (FCE-2-211-1-6281, Uruguay), CNPq and LNCC (Brazil), and the CAPES-UdelaR collaboration program. FLM acknowledges financial support from FAPERJ/APQ1, CNPq/Universal and CAPES/AUXPE grants.
Here we derive Eq.(\[gauss2\]). We employ the well known Poisson summation formula $$\sum_{n=-\infty}^{n=\infty}g(n)=
\sum_{n=-\infty}^{n=\infty}\int_{-\infty}^{\infty}g(x)e^{-i2\pi n x} dx,
\label{Poisson}$$ which, together with Eqs.(\[DFT\_k0\],\[gauss1\]), lead to $$\sum_{\mathbf{x}}e^{-i\mathbf{\left(k-k_0\right)}\cdot\mathbf{x}} \exp\left(-\frac{\mathbf{x}\cdot\mathbf{x}}{2\sigma^{2}}\right)= \notag$$ $$\sum_{\mathbf{x}}\int_{-\infty}^{\infty}\ldots \int_{-\infty}^{\infty}e^{-i\mathbf{\left(k-k_0\right)}\cdot\mathbf{y}} \exp\left(-\frac{\mathbf{y}\cdot\mathbf{y}}{2\sigma^{2}}\right)e^{-i2\pi\mathbf{x\cdot y}}\mathbf{dy}=
\notag$$ $$\sum_{\mathbf{x}}\int_{-\infty}^{\infty}\ldots \int_{-\infty}^{\infty}e^{-i\mathbf{\left(k-k_0+2\pi\mathbf{x}\right)}\cdot\mathbf{y}} \exp\left(-\frac{\mathbf{y}\cdot\mathbf{y}}{2\sigma^{2}}\right)\mathbf{dy} . \label{dos}$$ The last integrals can be evaluated using $$\int_{-\infty}^{\infty} e^{-p^{2}x^{2}\pm q x}{dx}=\frac{\sqrt{\pi}}{p}\exp\left(\frac{q^{2}}{4p^{2}}\right), \label{tres}$$ where $p> 0$. In this way we obtain $$\sum_{\mathbf{x}}e^{-i\mathbf{\left(k-k_0\right)}\cdot\mathbf{x}} \exp\left(-\frac{\mathbf{x}\cdot\mathbf{x}}{2\sigma^{2}}\right)= \notag$$ $$\left(\sqrt{2\pi}\sigma\right)^{N}\sum_{\mathbf{x}}e^{-\frac{\sigma^{2}}{2}\left(\mathbf{k-k_0+2\pi\mathbf{x}}\right)^{2}}. \label{cuatro2}$$
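As a sanity check of the above resummation (an illustrative one-dimensional numerical test, not part of the original appendix; Python/NumPy assumed), one can verify that $\sum_{x}e^{-i\kappa x}e^{-x^{2}/2\sigma ^{2}}=\sqrt{2\pi }\sigma \sum_{m}e^{-\sigma ^{2}\left( \kappa +2\pi m\right) ^{2}/2}$ for $\kappa =k-k_{0}$:

```python
import numpy as np

sigma, kappa = 3.0, 0.7                  # hypothetical width and momentum offset
x = np.arange(-200, 201)                 # lattice sum (rapidly convergent for this sigma)
lhs = np.sum(np.exp(-1j * kappa * x - x**2 / (2 * sigma**2)))

m = np.arange(-5, 6)                     # Poisson-resummed side, Eq. (cuatro2) in 1D
rhs = np.sqrt(2 * np.pi) * sigma * np.sum(np.exp(-sigma**2 * (kappa + 2 * np.pi * m)**2 / 2))

print(np.isclose(lhs.real, rhs), abs(lhs.imag) < 1e-10)
```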
[99]{}
Y. Aharonov, L. Davidovich, and N. Zagury, *Phys. Rev. A* **48**, 1687 (1993).
D. Aharonov, A. Ambainis, J. Kempe, and U. Vazirani. Quantum walks on graphs. In [*Proc. 33th STOC*]{}, pages 50–59, New York, NY, 2001.
T. D. Mackay, S. D. Bartlett, L. T. Stephenson, and B. C. Sanders. Quantum walks in higher dimensions. *J. Phys. A: Math. Gen.* **35**, 2745 (2002).
B. Tregenna, W. Flanagan, R. Maile, and V. Kendon, *New J. Phys.* **5** 83 (2003).
A.C. Oliveira, R. Portugal, and R. Donangelo. Decoherence in two-dimensional quantum walks. *Phys. Rev. A* **74**, 012312 (2006).
K. Watabe, N. Kobayashi, M. Katori, and N. Konno, Limit distributions of two-dimensional quantum walks, Phys. Rev. A **77**, 062331 (2008).
David Aldous and James A. Fill. *Reversible Markov Chains and Random Walks on Graphs*. Monograph at http://www.stat.berkeley.edu/~aldous/RWG/book.html, 2002.
N. Shenvi, J. Kempe, and K.B. Whaley. A quantum random walk search algorithm. *Phys. Rev. A* **67**, 052307 (2003).
A. Ambainis, J. Kempe, and A. Rivosh. Coins make quantum walks faster. In [*Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA*]{}, pages 1099–1108, 2005.
Renato Portugal. *Quantum Walks and Search Algorithms*. Quantum Science and Technology. Springer, New York, 2013.
E. Farhi and S. Gutmann. Quantum computation and decision trees. *Phys. Rev. A* **58**, 915 (1998).
A. Patel, K. S. Raghunathan, and P. Rungta. Quantum random walks do not need a coin toss. *Phys. Rev. A* **71**, 032347 (2005).
A. Ambainis, R. Portugal, and N. Nahimovs. Spatial Search on Grids with Minimum Memory. , 2013.
A. Romanelli, *Phys. Rev. A* **81**, 062349 (2010).
A. Romanelli, *Phys. Rev. A* **85**, 012319 (2012).
W. H. Zurek, Phys. Rev. D **24** 1516 (1981); Phys. Rev. D **26**, 1862 (1982).
D. A. Meyer, e-print quant-ph/9804023.
Carl M. Bender and Steven A. Orszag. *Advanced Mathematical Methods for Scientists and Engineers*. International series in pure and applied mathematics, McGraw-Hill, New York, 1978.
M. Hinarejos, A. Pérez, Eugenio Roldán, A. Romanelli, G.J. de Valcárcel, *New J. Phys.* **15**, 073041 (2013).
G. Grimmett, S. Janson, P.F. Scudo, *Phys. Rev. E* **69**, 026119 (2004).
A. Nayak and A. Vishwanath, e-print quant-ph/0010117
M. Hor-Meyll, J. O. de Almeida, G. B. Lemos, P. H. Souto Ribeiro, S. P. Walborn, *Phys. Rev. Letters* **112**, 053602 (2014).
I. Carneiro, M. Loo, X. Xu, M. Girerd, V. M. Kendon, and P. L. Knight, *New J. Phys.* **7**, 56 (2005).
G. Abal, R. Siri, A. Romanelli, and R. Donangelo,*Phys. Rev. A* **73**, 042302, 069905(E) (2006).
S. Salimi, R. Yosefjani, *Int. J of Mod. Phys. B*, **26**, 1250112 (2012).
M. Annabestani, M. R. Abolhasani and, G. Abal, *J.Phys. A: Math. Theor.* **43**, 075301 (2010).
Y. Omar, N. Paunkovic, L. Sheridan, and S. Bose, *Phys. Rev. A*, **74**, 042304 (2006)
P. K. Pathak, and G. S. Agarwal, *Phys. Rev. A*, **75**, 032351 (2007)
C. Liu, and N. Petulante, *Phys. Rev. A* **79**, 032312 (2009).
S. E. Venegas-Andraca, J.L. Ball, K. Burnett, and S. Bose, *New J. Phys.*, **7**, 221 (2005).
J. Endrejat, H. Büttner, *J. Phys. A: Math. Gen*. **38**, 9289 (2005).
A.J. Bracken, D. Ellinas, and I. Tsohantjis, *J. Phys. A: Math. Gen.* **37**, L91 (2004).
D. Ellinas, and A.J. Bracken, *Phys. Rev. A* **78**, 052106 (2008).
O. Maloyer, and V. Kendon, *New J. Phys.*, **9**, 87 (2007).
R. Kubo, M. Toda, and N. Hashitsume *Statistical Physics II, Nonequilibrium Statistical Mechanics*, Springer-Verlag, Berlin Heidelberg New York Tokyo; ISBN 3 540 11461 0, (1985).
N. Inui, Y. Konishi, and N. Konno, *Phys. Rev. A* **69**, 052323 (2004).
A. Romanelli, G. Segundo, *Physica A* **393**, 646 (2014).
---
abstract: 'Invariance under local unitary operations is a fundamental property that must be obeyed by every proper measure of quantum entanglement. However, this is not the only aspect of entanglement theory where local unitaries play a relevant role. In the present work we show that the application of suitable local unitary operations defines a family of bipartite entanglement monotones, collectively referred to as “mirror entanglement”. They are constructed by first considering the (squared) Hilbert-Schmidt distance of the state from the set of states obtained by applying to it a given local unitary. To the action of each different local unitary there corresponds a different distance. We then minimize these distances over the sets of local unitaries with different spectra, obtaining an entire family of different entanglement monotones. We show that these mirror entanglement monotones are organized in a hierarchical structure, and we establish the conditions that need to be imposed on the spectrum of a local unitary for the associated mirror entanglement to be faithful, i.e. to vanish on and only on separable pure states. We analyze in detail the properties of one particularly relevant member of the family, the “stellar mirror entanglement” associated to traceless local unitaries with nondegenerate spectrum and equispaced eigenvalues in the complex plane. This particular measure generalizes the original analysis of \[Giampaolo and Illuminati, Phys. Rev. A [**76**]{}, 042301 (2007)\], valid for qubits and qutrits. We prove that the stellar entanglement is a faithful bipartite entanglement monotone in any dimension, and that it is bounded from below by a function proportional to the linear entropy and from above by the linear entropy itself, coinciding with it in two- and three-dimensional spaces.'
author:
- 'A. Monras'
- 'G. Adesso'
- 'S. M. Giampaolo'
- 'G. Gualdi'
- 'G. B. Davies'
- 'F. Illuminati'
date: 'May 16, 2011'
title: Entanglement quantification by local unitaries
---
Introduction {#secIntro}
============
Achieving a satisfactory understanding of the nature and structure of quantum correlations is of paramount importance in quantum information theory [@hororev] as well as in the study of complex quantum systems [@faziorev]; very recent speculations even hint at possible fundamental roles played by quantum entanglement in biological systems and processes with enhanced properties of quantum coherence [@biology]. For bipartite quantum systems prepared in globally pure states, there is universal consensus on the fact that quantum correlations identify with entanglement, and they can be signaled by either one of several features that distinguish them from the classical ones. For instance, their non-local character, or their operational significance in quantum informational primitives such as entanglement creation and distillation, or the performance they enable as resources for quantum communication protocols such as teleportation [@hororev]. Entanglement in this case is simply measured by how much information one is missing on the state of the global system by accessing only one part of it, hence capturing the correlations between the two parties. Mathematically, such an information is quantified by the [*entropy of entanglement*]{}, i.e. the von Neumann entropy (or any monotonically increasing function of it) of the reduced density matrix of either of the two subsystems [@Bennett96], although it is worth remarking that other inequivalent measures of pure-state bipartite entanglement can be introduced beyond the von Neumann entropy, such as the infinite set of Rényi entanglement entropies of the reduced density matrix (entanglement spectrum) [@Hastings] and the bipartite geometric entanglement [@Blasone], that are particularly relevant in the investigation of ground-state properties of condensed matter systems and in the theory of quantum information with continuous variables [@Adesso].
For bipartite mixed states and for general multipartite states (pure or mixed), several inequivalent entanglement monotones (as well as a number of measures of more general types of quantum correlations beyond entanglement in mixed states [@discord]) have been proposed, each apt to capture a different signature of quantumness and/or possessing a different operational meaning, especially in relation to different informational tasks [@virmani]. Whilst all entanglement monotones must vanish on separable states, there can be some that vanish also on some entangled states, if the latter fail to encode the particular resource character associated to a given entanglement measure. This is the case, for example, of bound entangled states, which have a nonzero entanglement cost but a vanishing distillable entanglement, as no entangled singlets can be extracted from them by local operations and classical communication (LOCC) in the asymptotic regime [@boundent]. Fundamental requirements for any [*bona fide*]{} entanglement monotone are then monotonicity under LOCC – operations that cannot increase bipartite entanglement on average – and invariance under local unitary (LU) operations [@vidal]. The latter is in fact a basic requirement for any measure of correlation in general, since the choice of the local basis in which the density matrix of a system is expressed cannot of course affect the information encoded in shared correlations between two (or more) subsystems. This fact has motivated the search for the simplest LU-invariant “normal forms” of quantum states in discrete as well as continuous variable systems [@lindenetal], so as to reduce as much as possible the number of state parameters needed for a complete evaluation of a particular entanglement monotone.
Invariance of entanglement under LUs has however more far-reaching consequences, especially, and not entirely without surprise, in its quantification. Given a generic state (pure or mixed) of a bipartite system, Li-Bin Fu investigated the consequences of applying a local cyclic operation on one of the subsystems [@Fu]. Fu denoted by local cyclic operation any LU that leaves the corresponding reduced state invariant. Consider a bipartite quantum system $(A|B)$ in a global pure state ${|\Psi\rangle}$ with reduced density matrices $\rho_A$ and $\rho_B$ respectively for subsystems $A$ and $B$, and denote by $U_A$ a LU acting on $\rho_A$ only. Then, the LU $U_A$ is cyclic if $[U_A,\rho_A] = 0$. Although local, and thus leaving the entanglement unchanged, such an operation changes the global state, yielding a nonlocal effect that can be detected only by measuring the two subsystems jointly. Employing the fidelity induced by the Hilbert-Schmidt metric, Fu identified such a nonlocal effect by the distance between the initial and the final state. Recently, Fu’s pioneering work has been greatly extended by Gharibian, Kampermann, and Bruss [@Bruss]. They focused on the [*maximization*]{} of the Fu distance for bipartite states in Hilbert spaces of arbitrary dimension as a possible indicator of non-local properties, and derived and discussed closed formulae for the maximal Fu distance in three relevant cases: (pseudo)pure quantum states, Werner states, and two-qubit states. In between, two of us (S.M.G. and F.I.) investigated independently the consequences of LUs on global pure states of $2 \times D$ and $3 \times D$ bipartite systems [@GiampaoloIlluminati]. They attacked the problem from the opposite side, investigating the [*minimization*]{} of the (squared) Fu distance, and proved, somewhat surprisingly, that in these two particular cases the minimum (squared) Fu distance is a full bipartite entanglement monotone, coinciding with the linear entropy of entanglement (also known as tangle for qubit systems [@ckw; @osborne]). A similar analysis was performed to define a geometric LU-based entanglement measure for Gaussian states of continuous variable systems, where subsystem $A$ comprises a single bosonic mode [@squoCV].
In the present paper we will provide a complete generalization of the analysis carried out in Ref. [@GiampaoloIlluminati] to all pure states of bipartite quantum systems with Hilbert space ${\cal H_{AB}}$ of arbitrary finite dimension. We will show that one can construct an entire family of bipartite entanglement monotones that capture quantum correlations as quantified by the action of minimally perturbing LUs on global pure states. Specifically, we prove that the (squared) Hilbert-Schmidt distance between a pure bipartite state ${|\psi\rangle}_{AB} \in {\cal H_{AB}} = {\cal H}_A \otimes {\cal H}_B$ and the pure state $U_A{|\psi\rangle}_{AB}$ obtained from it by applying a LU operation $U_A$ on subsystem $A$ only, once suitably optimized (minimized) over all LUs with fixed spectrum, defines a hierarchy of bipartite entanglement monotones. Moreover, the cyclic condition $[U_A,\rho_A] = 0$, rather than being imposed [*a priori*]{}, is derived as a consequence of the minimization (optimization) procedure. We denote any such pure-state bipartite entanglement monotone as “[*mirror entanglement*]{}”. This is pictorially reminiscent of someone mirroring herself/himself in a mirror (which is curved, symbolizing the action of a LU): in the absence of entanglement, the mirror image of the original pure state under a LU (the mirror) is a perfect reflection. Vice versa, the more entanglement is contained in the state of the system, the more distorted is the image that is reflected by the mirror.
As just stated, no constraints on the employed LUs need to be imposed [*ab initio*]{}: all the mirror entanglement measures are proven to be LOCC-monotones for arbitrary pure states of bipartite systems in any dimension. However, we will show that by imposing, as originally done in Ref. [@GiampaoloIlluminati], specific requirements on $U_A$, namely a fully nondegenerate spectrum, one restricts the class further to [*faithful*]{} mirror entanglement monotones that vanish if and only if the state ${|\psi\rangle}_{AB}$ is a product state. Upon further restricting the admissible LUs by requiring that they be traceless and with equispaced, nondegenerate eigenvalues (thus with a pattern resembling a star in the complex plane), we will single out a special faithful mirror entanglement monotone, that we will name “[*stellar mirror entanglement*]{}”. We will prove that the stellar mirror entanglement enjoys the property of being a lower bound to the linear entropy in any dimension, reducing to the latter in the special cases ${\cal H}_A={\mathbb{C}}^2$ and ${\cal H}_A={\mathbb{C}}^3$ originally considered in [@GiampaoloIlluminati]. Moreover, it is an upper bound to a function proportional to the linear entropy, with the proportionality constant being a simple function of the dimension of the reduced Hilbert space ${\cal H}_A$.
We remark that, by construction, the class of mirror entanglement measures is experimentally accessible by means of interferometric schemes [@interfero] involving at least two copies of a given bipartite state, one of which needs to be rotated by suitable LUs. In principle, each mirror entanglement monotone can be straightforwardly extended to mixed bipartite states via the conventional convex roof construction. Solving the convex roof optimization problem is of course in general a formidable task. However, as the mirror and stellar entanglement measures are defined in terms of distances (in partial analogy with the construction of the geometric measures of entanglement [@Wei; @Blasone]), in the conclusions we will briefly discuss how it might be possible to envisage alternative strategies to compute their mixed-state extension without resorting directly to the convex roof construction.
The paper is organized as follows. In Sec. \[secAlex\] we define the class of mirror entanglement measures for all pure states of bipartite quantum systems, prove their monotonicity under LOCC, characterize their hierarchic structure, and determine the conditions for faithfulness. In Sec. \[secGary\] we focus on the stellar mirror entanglement and investigate its relationship with the linear entropy of entanglement, providing the exact lower and upper bounds that relate the two quantities (The detailed proofs of the bounds are reported in two Appendices). Finally, in Sec. \[secDiscuss\] we briefly discuss some of the implications of our results and some possible future research directions concerning the extension to mixed states and the problem of identifying total and partial pure-state factorization (separability) in quantum many-body systems.
Mirror entanglement: definition and monotonicity {#secAlex}
================================================
Let us consider a bipartite system in a pure quantum state ${|\psi\rangle}\equiv{|\psi\rangle}_{AB}$ belonging to a Hilbert space ${\cal{H}}_{AB} = {\cal H}_A \otimes {\cal H}_B \equiv {\mathbb{C}}^{d_A} \otimes {\mathbb{C}}^{d_B}$. We will assume without loss of generality that $d \equiv d_A \le d_B$. Let us consider general LUs acting on ${\cal H}_A$ of the form $$\label{squdos}
W_{\Lambda,A} \equiv W_{\Lambda} = \sum_j \lambda_j {|\phi_j\rangle}{\langle\phi_j|},$$ where $$\label{lambda}
\Lambda = \{\lambda_j\}\equiv\{e^{i\theta_j}\} \quad (j=1,\ldots,d)\,,$$ denotes the spectrum of the eigenvalues of $W_{\Lambda}$. The maximal fidelity (squared overlap) between state ${|\psi\rangle}$ and the LU-transformed state $W_{\Lambda}{|\psi\rangle}$ is $$\label{Fpsi}
F^\Lambda_\psi=\max_{W_{\Lambda}}|{\langle\psi|}W_{\Lambda}{|\psi\rangle}|^2 \; ,$$ and it takes non-negative real values in the interval $[0,1]$.
\[def1\] The bipartite $\Lambda$-[*mirror entanglement*]{} ($\Lambda$ME) between subsystems $A$ and $B$ in the state ${|\psi\rangle}$ is defined as the square of the minimum Euclidean distance between ${|\psi\rangle}$ and the set of transformed states $W_{\Lambda}{|\psi\rangle}$ obtained by the action of LUs of the form [Eq. (\[squdos\])]{} with spectrum $\Lambda$ on subsystem $A$: $$\label{se}
{\cal E}_\Lambda(\psi) \doteq \min_{W_{\Lambda}} \left(1-|{\langle\psi|}W_{\Lambda}{|\psi\rangle}|^2\right) = 1-F^\Lambda_\psi \; .$$
Consider the reduced density matrix of subsystem $A$: $$\varrho \equiv \varrho_A={{\rm Tr}}_B[{|\psi\rangle}{\langle\psi|}] \; .
\label{Reduced}$$ Definition (\[se\]) is recast in terms of the reduced state (\[Reduced\]) by rewriting Eq. (\[Fpsi\]) as $$\begin{aligned}
F^\Lambda_\psi&=\max_{W_{\Lambda}} |{{\rm Tr}}[{W_{\Lambda}} {|\psi\rangle}{\langle\psi|}]|^2\\&=\max_{W_{\Lambda}} |{{\rm Tr}}_A[{W_{\Lambda}}\,{{\rm Tr}}_B[{|\psi\rangle}{\langle\psi|}]]|^2\\&=\max_{W_{\Lambda}}|{{\rm Tr}}[{W_{\Lambda}}\varrho]|^2
\; .\end{aligned}$$ By the monotonicity of the square function one can write $\sqrt {F^\Lambda_\psi}=\max_{W_{\Lambda}} |{{\rm Tr}}[W_{\Lambda}\varrho]|$. Let ${|i\rangle}$ be the eigenbasis of $\varrho$ and $p_i$ its eigenvalues, so that one has the spectral decomposition $\varrho=\sum_i p_i {|i\rangle}{\langlei|}$. The set of allowed LUs, that is, the set of unitary matrices acting on ${\cal H}_A$ with spectrum $\Lambda$ \[[Eq. (\[lambda\])]{}\], can be written in terms of $V_\Lambda=\sum_i \lambda_i{|i\rangle}{\langlei|}$ as $$\label{WUV}
{W_{\Lambda}}=U V_\Lambda U^\dagger=\sum_i \lambda_i U{|i\rangle}{\langlei|}U^\dagger \; ,$$ where $U$ rotates the eigenbasis of $\varrho$ into the eigenbasis of ${W_{\Lambda}}$, ${|\phi_i\rangle}=U{|i\rangle}$. In principle, $U$ can be any $SU(d)$ unitary matrix. We can write $\sqrt{F^\Lambda_\psi}$ as $$\begin{aligned}
\label{expr1}
\sqrt{F^\Lambda_\psi}&=&\max_{U\in SU(d)} \left|{{\rm Tr}}\left[U{V_\Lambda}U^\dagger \varrho\right]\right| \nonumber\\
&=&\max_{U\in SU(d)}\bigg|\sum_i \lambda_i\,{{\rm Tr}}\big[U{|i\rangle}{\langlei|}U^\dagger \sum_j p_j {|j\rangle}{\langlej|}\big] \bigg| \nonumber\\
&=&\max_{U\in SU(d)}\bigg|\sum_i \lambda_i\sum_j p_j |u_{ij}|^2 \bigg| \; ,\end{aligned}$$ where $u_{ij}={\langlei|}U{|j\rangle}$.
We will now show (Theorems \[TVanSep\] and \[TLOCC\] below) that the $\Lambda$ME measures are indeed legitimate pure-state entanglement monotones.
\[TVanSep\] The ME vanishes on pure separable (i.e., product) bipartite states ${|\psi^\otimes\rangle} = {|\psi_A\rangle}\otimes {|\psi_B\rangle}$.
[*Proof.*]{} For a product state, $\varrho$ is a rank-one matrix, with eigenvalues $p_j= \delta_{jk}$ for some index $k$. Choosing $U={\mathbbm{1}}$ one has $|\sum_i \lambda_i\sum_j p_j |u_{ij}|^2| = |\sum_i \lambda_i\sum_j \delta_{jk} \delta_{ij}|=|\lambda_k|=|e^{i \theta_k}|=1\equiv\sqrt{F^\Lambda_{\psi^{\otimes}}}$. Therefore, from [Eq. (\[se\])]{} ${\cal E}_\Lambda(\psi^\otimes)=0$ for any $\Lambda$. $\Box$
Before tackling the monotonicity of the ME under LOCC, we first prove an auxiliary lemma that simplifies the optimization problem involved in the definition of the ME.
\[lemmaPerm\] The maximizing unitary $U$ in [Eq. (\[expr1\])]{} is a permutation matrix.
[*Proof.*]{} Eq. can be written as $$\sqrt{F^\Lambda_\psi}=\max_{U\in SU(d)}\left|{{\rm Tr}}[M^\Lambda_\psi B(U)]\right|\,,$$ where $[B(U)]_{ij}=|u_{ij}|^2$ and $(M^\Lambda_\psi)_{ij}=p_i \lambda_j$. Noticing that $B(U)$ is a unistochastic matrix, we can write $$\sqrt{F_\psi^\Lambda}=\max_{B\ \!\textrm{unistoch.}}\left|{{\rm Tr}}[M^\Lambda_\psi B]\right|\leq\max_{B\in \textsf{B}_d}\left|{{\rm Tr}}[M^\Lambda_\psi B]\right|\,,$$ where we have enlarged the optimization domain to the whole set $\textsf{B}_d$ of all $d\times d$ doubly stochastic matrices, i.e. the $d$-dimensional Birkhoff polytope. By the Birkhoff-von Neumann theorem, $\textsf{B}_d$ is the convex hull of the set $\textsf{S}_d$ of $d \times d$ permutation matrices (that is, the permutation matrices in $\textsf{S}_d$ are the extreme points of $\textsf{B}_d$). We can thus write $B=\sum_k q_k S_k$, where $S_k \in \textsf{S}_d$, and $\vec q = \{q_k\}$ is a $d!$-dimensional probability vector. The maximal fidelity becomes $\sqrt{F^\Lambda_\psi}\leq\max_{\vec q}\left|\sum_k q_k{{\rm Tr}}[M^\Lambda_\psi S_k]\right|
\leq \max_{\vec q}\sum_k q_k \left|{{\rm Tr}}[M^\Lambda_\psi S_k]\right|$, where we have used the triangle inequality. Let $S_{\max}$ be the permutation matrix that maximizes $\left|{{\rm Tr}}[M^\Lambda_\psi S]\right|$. Then $\sqrt{F^\Lambda_\psi} \leq \left(\max_{\vec q}\sum_k q_k\right) \left|{{\rm Tr}}[M^\Lambda_\psi S_{\max}]\right|=\left|{{\rm Tr}}[M^\Lambda_\psi S_{\max}]\right|$. We are left to show that $S_{\max}=B(U)$ for some $U$. This is achieved by noticing that all permutation matrices, including $S_{\max}$, are orthogonal and hence unitary, and that $B(S_{\max})=S_{\max}$. This concludes the proof.
As a corollary of Lemma \[lemmaPerm\], we find that the optimal LU operation ${W_{\Lambda}}=S_{\max}V_{\Lambda}S_{\max}^\dagger$ that maximizes $F^\Lambda_\psi$ \[[Eq. (\[Fpsi\])]{}\] commutes with the reduced state $\varrho$. To see this, let us write $S_{\max}=\sum_i{|{\sigma_i}\rangle}{\langlei|}$, where $\sigma$ is the permutation described by the matrix $S_{\max}$. Then $$\begin{aligned}
\label{squo}
{W_{\Lambda}}&=&\left(\sum_{i}{|{\sigma_i}\rangle}{\langlei|}\right)\,V_\Lambda\,\left(\sum_j {|j\rangle}{\langle\sigma_j|}\right) \nonumber\\
&=&\sum_{i}\lambda_i\,{|{\sigma_i}\rangle}{\langle{\sigma_i}|}\\
&=&\sum_{i}\lambda_{\sigma_i^{-1}}\,{|i\rangle}{\langlei|} \nonumber \; ,\end{aligned}$$ which is diagonal in the same basis as $\varrho$, and therefore $[\varrho,{W_{\Lambda}}]=0$. The last result shows that the eigenvectors of the optimal LU $W_{\Lambda}$ that solves the minimization in the definition of the $\Lambda$ME \[[Eq. (\[se\])]{}\] are just obtained by a reordering of the eigenvectors of the reduced state $\varrho$. Collecting the previous findings, $\sqrt{F^\Lambda_\psi}$ can be written as $$\label{eq:permutation_form}
\sqrt{F^\Lambda_\psi}=\max_{S\in \textsf{S}_d}\left|{{\rm Tr}}[M^\Lambda_\psi S]\right|\,.$$
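The optimization in Eq. (\[eq:permutation\_form\]) is a finite search over the $d!$ permutations, which makes the $\Lambda$ME straightforward to evaluate numerically for small $d$. The following brute-force sketch (an illustration, not part of the original text; Python/NumPy assumed, with an arbitrarily chosen nondegenerate spectrum and hypothetical Schmidt vectors) computes ${\cal E}_\Lambda(\psi)=1-F^\Lambda_\psi$ from the Schmidt vector $\vec p$ of a pure state.

```python
import numpy as np
from itertools import permutations

def mirror_entanglement(p, lam):
    # E_Lambda = 1 - max_sigma |sum_i p_i lambda_{sigma(i)}|^2, cf. Eq. (eq:permutation_form).
    F = max(abs(np.dot(p, np.array(perm)))**2 for perm in permutations(lam))
    return 1.0 - F

lam = np.exp(1j * np.array([0.0, 2.0, 4.0]))   # arbitrary nondegenerate spectrum, d = 3
p_product = np.array([1.0, 0.0, 0.0])          # Schmidt vector of a product state
p_entangled = np.array([0.5, 0.3, 0.2])        # Schmidt vector of an entangled state

print(np.isclose(mirror_entanglement(p_product, lam), 0.0))   # vanishes on product states
print(mirror_entanglement(p_entangled, lam) > 0.0)
```

For a product state any permutation already attains $\left|{{\rm Tr}}[M^\Lambda_\psi S]\right|=1$, so the sketch simply re-obtains the content of Theorem \[TVanSep\].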
We are now ready to prove the following important result.
\[TLOCC\] The ME is monotonically non-increasing under LOCC operations, i.e., is a full pure-state bipartite entanglement monotone.
[*Proof.*]{} We will prove[^2] that $F_\psi$ is monotonically increasing under LOCC, which implies the statement. An arbitrary pure state ${|\psi\rangle}$ is ensemble-transformed under LOCC according to: ${|\psi\rangle}\rightarrow \{p_i,{|\psi_i\rangle}\}$, where each LOCC-transformed state of the ensemble reads: $$\sqrt{p_i} {|\psi_i\rangle}=(A_i\otimes\openone_B) {|\psi\rangle} \; .$$ The positive weights $\{ p_i \}$ satisfy the normalization condition $\sum_{i} p_i = 1$, while the Kraus operators associated to the local dynamics satisfy the POVM (positive operator valued measure) completeness relation: $\sum_i A_i^\dagger A_i=\openone$. Let the reduced state be $\rho={{\rm Tr}}_B[{|\psi\rangle}{\langle\psi|}]$ and likewise the reduced LOCC-transformed states be $\rho_i={{\rm Tr}}_B[{|\psi_i\rangle}{\langle\psi_i|}]$. For each of the latter, the local dynamics yields: $$p_i\rho_i=A_i\rho A_i^\dagger \; .$$ Then, in order to prove that ${\cal E}(\psi) \geq \sum_{i} p_i {\cal E}(\psi_i)$, it is sufficient to prove that $\sum_i p_i\sqrt{F(\psi_i)}\geq\sqrt{F(\psi)}$. Let $$\label{polar}
A_i \sqrt\rho=\sqrt{A_i\rho A_i^\dagger}V_i=\sqrt{p_i}\sqrt\rho_i V_i$$ be the polar decomposition of $A_i \sqrt\rho$, where $V_i$ is a suitable unitary matrix. Exploiting the properties of $F$, we can write: $$\begin{aligned}
\nonumber
\sum_i p_i\sqrt{F(\psi_i)}=&\sum_ip_i \max_{W_i}\left|{{\rm Tr}}[\rho_i W_i]\right|\\
\nonumber
=&\sum_ip_i \max_{W_i}\left|{{\rm Tr}}[\rho_i V_i W_iV_i^\dagger]\right|\\
\nonumber
=&\sum_i p_i\max_{W_i}\left|{{\rm Tr}}[(V_i^\dagger\sqrt\rho_i)(\sqrt\rho_i V_i) W_i]\right|\\
\nonumber
=&\sum_i\max_{W_i}\left|{{\rm Tr}}[(\sqrt\rho A_i^\dagger)( A_i \sqrt\rho) W_i]\right|\\
\nonumber
\geq&\max_{W}\sum_i\left|{{\rm Tr}}[\sqrt\rho A_i^\dagger A_i \sqrt\rho W]\right|\\
\nonumber
\geq&\max_{W}\left|{{\rm Tr}}[\sqrt\rho\left(\sum_i A_i^\dagger A_i \right)\sqrt\rho W]\right|\\
\nonumber
\geq&\max_{W}\left|{{\rm Tr}}[\rho W]\right|\\
\label{line8}
=&\sqrt{F(\psi)}.\end{aligned}$$ This concludes the proof that local operations on party $A$ do not increase $1-F$. The proof for operations on party $B$ follows trivially. We have: $$\begin{aligned}
\nonumber
\sum_i p_i\sqrt{F(\psi_i)}=&\sum_ip_i \max_{W_i}\left|{\langle\psi_i|} W_i\otimes\openone{|\psi_i\rangle}\right|\\
\nonumber
=&\sum_i\max_{W_i}\left|{\langle\psi|} W_i\otimes B^\dagger_iB_i{|\psi\rangle}\right|\\
\nonumber
\geq &\max_W\left|{\langle\psi|} W\otimes\sum_iB^\dagger_iB_i{|\psi\rangle}\right|\\
\label{line14}
=&\sqrt{F(\psi)} \; .\end{aligned}$$ Therefore, since ${\cal E}(\psi) = 1-F_\psi$ and all local operations do not increase $1-F$, we have proved that the ME is non-increasing under LOCC. $\Box$
We have thus introduced a family of pure-state bipartite entanglement monotones, that satisfy the three fundamental axiomatic properties of vanishing on separable states, being invariant under local unitaries, and being monotonic under LOCC [@Nielsen; @vidal; @hororev; @virmani; @geometry]. They are associated to the minimum distance between a quantum state ${|\psi\rangle}$ and its image after suitable unitary operations with fixed spectrum performed on one subsystem only. Surprisingly enough, one sees that a LU operation on one part of a bipartite system, while leaving the entanglement invariant, leads nevertheless to a state alteration whose proper quantification is itself an entanglement measure. The properties of the spectrum $\Lambda$ define the shape, or reflectivity, of a fictitious mirror which produces the image of the quantum state after the action of a LU. In the absence of entanglement, there exists always one such LU that leaves the state invariant, yielding a perfect reflection from the mirror. If the state ${|\psi\rangle}$ is entangled, the action of the minimal or least perturbing LU with spectrum $\Lambda$ necessarily results in a transformed state which has a nonmaximal fidelity with ${|\psi\rangle}$; in turn, this distortion quantifies the amount of bipartite entanglement present in ${|\psi\rangle}$.
The class of $\Lambda$ME exhibits a hierarchical structure depending on the characteristics of the spectrum $\Lambda=\{e^{i \theta_j}\}$. One extreme case is represented by $\theta_j=0$ $\forall j$. In this case the identity is clearly the extremal LU operation that defines the ${\mathbbm{1}}$ME according to [Eq. (\[se\])]{}, and the ensuing entanglement measure ${\cal E}_{\mathbbm{1}}(\psi)$ is trivially zero for all quantum states ${|\psi\rangle}$. Progressing in the hierarchy, if $\Lambda$ contains an $r$-degenerate eigenvalue, then the corresponding ME vanishes on entangled states ${|\psi\rangle}$ of Schmidt rank $\leq r$. On the opposite extreme, if the eigenvalues in $\Lambda$ are all nondegenerate, one obtains the most sensitive measure of mirror entanglement $\Lambda$ME. This classification is summarized by the following theorem:
\[teorango\] Let $W_{\Lambda}$ be a LU and let the associated spectrum $\Lambda$ have degeneracy $r$, i.e. let its most repeated eigenvalue appear exactly $r$ times. Then ${\cal E}_\Lambda(\psi)=0$ if and only if the Schmidt rank of ${|\psi\rangle}$, $\mathrm{SR}(\psi)$, is no larger than $r$.
[*Proof.*]{} The Schmidt rank of ${|\psi\rangle}$ amounts to the rank of the reduced density matrix $\varrho$, or the number of nonvanishing elements in the probability vector $\vec p$. We first prove sufficiency and then necessity:
I\) \[$\mathrm{SR}(\psi)\leq r\Rightarrow {\cal E}_\Lambda(\psi)=0 $\].\
If the Schmidt rank of $\psi$ is $s\leq r$ then $|{{\rm Tr}}[S V_\Lambda S^\dagger \varrho]|=1$ can be attained by any permutation matrix $S_{\max}$ such that $S_{\max}^{\dagger}$ maps the $s$-dimensional domain of $\varrho$ into the $r$-fold degenerate subspace of $V_\Lambda$, so that $|{{\rm Tr}}[S V_\Lambda S^\dagger \varrho]|=|\sum_{i=1}^s \lambda_i p_i|=1$, where $\lambda_1=\ldots=\lambda_r$. In this way one has ${\cal E}_\Lambda(\psi)=0$.
II\) \[${\cal E}_\Lambda(\psi)=0 \Rightarrow \mathrm{SR}(\psi)\leq r $\].\
To prove the inverse implication, let $\sigma$ be the maximizing permutation in Eq. , $\bar\lambda_i=\lambda_{\sigma^{-1}_i}$, and let $\Sigma$ be the set of indices for which $p_i\neq0$. Then $s=|\Sigma|$. Expanding $F_\psi^\Lambda$ we get
$$\begin{aligned}
F_\psi^\Lambda&=&\Big| \sum_{i\in \Sigma} \bar\lambda_i\, p_i\Big|^2\\
&=&\sum_{i\in \Sigma} p_i^2+\sum_{\{i\neq j\}\in \Sigma} \bar\lambda_i^* \bar\lambda_jp_i p_j\\
&=&\sum_{i\in \Sigma} p_i^2+\sum_{\{i<j\}\in \Sigma}(\bar\lambda_i^* \bar\lambda_j+ \bar\lambda_j^* \bar\lambda_i)p_i p_j\\
&\leq&\sum_{i\in \Sigma} p_i^2+2\sum_{\{i<j\}\in \Sigma}p_i p_j\\
&=&\Big(\sum_{i\in \Sigma} p_i\Big)^2=1 \; .\end{aligned}$$
The inequality follows from $(a^* b+b^* a)\leq 2|a||b|$, and it is saturated if and only if ${\textrm{Re}}[a^*b]=1$, thus for each pair $i,j\in\Sigma$ we have $$\bar\lambda_i=\bar\lambda_j \quad\forall i,j\in\Sigma \; .$$ By assumption, there are at most $r$ equal values of $\bar\lambda$, thus $|\Sigma|\leq r$. This completes the proof.$\Box$
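As a simple illustration of Theorem \[teorango\], consider $d=3$ and the partially degenerate spectrum $\Lambda=\{1,1,e^{i\theta}\}$ with $\theta\neq 0$, so that $r=2$. For any pure state with Schmidt vector $\vec p=(p_1,p_2,0)$ the identity permutation already yields $\left|{{\rm Tr}}[M^\Lambda_\psi S]\right|=|p_1+p_2|=1$, hence ${\cal E}_\Lambda(\psi)=0$, whereas for the maximally entangled state $\vec p=(1/3,1/3,1/3)$ every permutation gives $\left|\frac{2}{3}+\frac{1}{3}e^{i\theta}\right|<1$, so that the corresponding $\Lambda$ME is strictly positive.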
![\[fig:phases\] Parameterization of the spectrum $\Lambda$ of the LU $W_{\Lambda}$, given by the normalized phases $\theta_i$ \[[Eq. (\[lambda\])]{}\] or the gaps $\varphi_i$ between phases (in units of $2\pi$ rad). A spectrum is degenerate whenever a $\varphi_k$ equals zero, which corresponds to the boundary of the simplex \[see Fig. \[fig:simplex\]\].](phases.eps){width=".35\textwidth"}
By proper parameterization one can obtain a pictorial representation of all the $\Lambda$ME monotones. To do so, consider again the LU spectrum $\Lambda=\{e^{i\theta_i}\}$. The $\Lambda$ME is clearly invariant under permutations of the phases $\theta_i$, therefore one can restrict to $0\leq\theta_1\leq \theta_2\leq\ldots\leq\theta_d\leq2\pi$ without loss of generality. Furthermore, every monotone ${\cal E}_\Lambda$ is invariant under global phase shifts of the eigenvalues, thus $\theta_1=0$ can be assumed throughout. Finally, one can equivalently re-parameterize the phases by the set of gaps $\varphi_k=(\theta_{k+1}-\theta_k)/2\pi$, with $\varphi_d=1-\theta_d/2\pi$. Since the phases are ordered, one has $\varphi_k\geq0$ and clearly $\sum_{k=1}^d\varphi_k=1$ \[see Fig. \[fig:phases\]\]. Thus, each $\Lambda$ME is in correspondence with a $d$-dimensional probability distribution $\{\varphi_k\}$. This means that all the $\Lambda$ME monotones can be represented in a simplex, as shown in Fig. \[fig:simplex\].
As a Corollary of Theorem \[teorango\], one can easily identify all the $\Lambda$ME monotones which are [*faithful*]{}, i.e., vanish on and only on product states. These correspond to fully nondegenerate spectra $\Lambda$, and fill the entire shaded region in Fig. \[fig:simplex\]. A particular case of a faithful measure is obtained when the eigenvalues constituting the LU spectrum $\Lambda$ are equispaced in the complex plane and add up to zero, as it is the case, e.g., when they are taken to be the $d$th roots of $(-1)^{d-1}$. Since their representation in the Argand diagram resembles a star (or a regular polygon), we name the associated entanglement monotone “stellar mirror entanglement” (${{\mbox{\ding{75}}}}$ME), or “stellar entanglement" for brevity. The properties of this particularly important monotone are discussed in the forthcoming Section \[secGary\].
![\[fig:simplex\] (Color online) Representation of the set of $\Lambda$-mirror entanglement monotones for $d=3$ in the three-dimensional simplex of the gaps $\{\varphi_k\}$ associated to the spectrum of $\Lambda$. The stellar monotone ${\cal E}_{{\mbox{\ding{75}}}}$ is placed at the center of the simplex ($\varphi_i=1/3$) and denoted by a full (red) circle. The trivial monotones, i.e. those that are identically vanishing, are associated to spectra for which two out of three $\varphi_i$’s are zero. These monotones fall at the extremal points (corners) of the simplex and are represented by full (white) circles (white dots). Partially degenerate monotones ${\cal E}_\Lambda$ are those for which one or more $\varphi_i$ is vanishing. They fall at the boundaries of the simplex. For instance, the particular case reported by a full (green) circle on one side of the simplex represents a partially degenerate monotone that is nonvanishing only on entangled states with Schmidt rank SR=3.](3dsimplex.eps){width=".4\textwidth"}
For every [*mixed*]{} state $\varrho_{AB}$ of a bipartite quantum system, and for any $\Lambda$, the $\Lambda$ME can be defined via the convex roof construction $$\label{croof}
{\cal E}_\Lambda(\varrho_{AB}) \doteq \inf_{\{p_i,\psi_i\}} \sum_i p_i {\cal E}_\Lambda(\psi_i)\,,$$ where the infimum runs over all pure-state decompositions of $\varrho_{AB}= \sum_i p_i {|\psi_i\rangle}\!{\langle\psi_i|}$. The convex roof construction ensures, by definition, that the ME is still a full LOCC monotone, and hence a valid measure of entanglement, even in the general case of mixed states. It is obvious that, like in the case of most other measures of entanglement, the actual computation of the convex roof on generic mixed quantum states is a formidable task. In the conclusions we will briefly discuss how the ME measures, being defined in terms of geometric distances in Hilbert space, might be extended to the general mixed-state case via alternative procedures that bypass and avoid the explicit evaluation of the convex roof construction.
Stellar mirror entanglement {#secGary}
===========================
In this Section we study in more detail a special instance of the $\Lambda$ME monotones introduced in the previous Section, characterized by a LU with spectrum $\Lambda$ \[[Eq. (\[lambda\])]{}\] obeying precise constraints, defined in the following. Observing that the LUs entering in the optimization for the $\Lambda$ME can always be represented as ${W_{\Lambda}}=U V_\Lambda U^\dagger$, as in [Eq. (\[WUV\])]{}, we then define the stellar spectrum $\Lambda={{\mbox{\ding{75}}}}$ as the one such that $$V_{{\mbox{\ding{75}}}}=\exp\left[i \frac{2\pi}{d} \hat{S}_z\right]\,,$$ where the matrix $\hat{S}_z$ represents the $z$ component of the spin-$J$ operator with $J=(d-1)/2$, $\hat{S}_z = \text{diag}\{J, J-1, \ldots, -(J-1), -J\}$. Explicitly, the stellar spectrum is given by the diagonal of $V_{{\mbox{\ding{75}}}}$, $$\label{stellarspec}
{{\mbox{\ding{75}}}}=\{e^{i \theta_j}\} \; , \quad \theta_j = \frac{d-2j+1}{d}\pi \quad (j=1,\ldots,d) \, .$$
The [*stellar mirror entanglement*]{} (${{\mbox{\ding{75}}}}$ME) ${\cal E}_{{\mbox{\ding{75}}}}(\psi)$ is defined as the $\Lambda$ME \[see Definition \[def1\], Eq. (\[se\])\] with $\Lambda={{\mbox{\ding{75}}}}$.
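For concreteness, the following minimal numerical sketch (ours) builds the stellar spectrum of Eq. (\[stellarspec\]) and checks the two properties used below, namely that its eigenvalues are the $d$th roots of $(-1)^{d-1}$ and that they sum to zero:

```python
import numpy as np

def stellar_spectrum(d):
    """Eigenvalues exp(i theta_j) with theta_j = (d - 2j + 1) pi / d, j = 1, ..., d."""
    j = np.arange(1, d + 1)
    return np.exp(1j * (d - 2 * j + 1) * np.pi / d)

for d in range(2, 7):
    lam = stellar_spectrum(d)
    roots_ok = np.allclose(lam ** d, (-1.0) ** (d - 1))   # d-th roots of (-1)^(d-1)
    traceless = np.isclose(lam.sum(), 0.0)                # hence Tr[W_star] = 0
    print(d, roots_ok, traceless)
```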
Recalling that the local dimension $d$ is defined as $d=\min\{d_A,d_B\}$, the eigenvalues in the stellar spectrum correspond to the $d$th complex roots of $(-1)^{d-1}$, do not give rise to any degeneracy, and are equispaced in the complex plane (resembling a star once connected by rays to the origin). It is straightforward to show that $\sum_{j=1}^d e^{i \theta_j} = 0$; as a consequence, all the LUs $W_{{{\mbox{\ding{75}}}}}$ with spectrum ${{\mbox{\ding{75}}}}$ are traceless: ${{\rm Tr}}[W_{{{\mbox{\ding{75}}}}}]=0$. According to Theorem \[teorango\] proved above, the stellar mirror entanglement (or, for brevity, stellar entanglement) ${\cal{E}}_{{{\mbox{\ding{75}}}}}(\psi)$ vanishes if and only if ${|\psi\rangle}$ is a separable (product) state; thus, the ${{\mbox{\ding{75}}}}$ME is a faithful entanglement monotone. This result includes and extends to any dimension $d$ the corresponding finding of Ref. [@GiampaoloIlluminati], valid for $d=2,3$. It also generalizes to any dimension the LU-based separability criterion that was established in that same paper for $d=2,3$. Indeed, consider the optimal LU $W_{{{\mbox{\ding{75}}}}}^{opt}$, i.e. the Single-QUDit Unitary Operation (SQUDUO) that realizes the stellar entanglement by minimizing the (squared) Euclidean distance over the entire set of LUs with stellar spectrum: ${\cal{E}}_{{{\mbox{\ding{75}}}}}(\psi) = \min_{W_{{{\mbox{\ding{75}}}}}}\left(1-|{\langle\psi|}W_{{{\mbox{\ding{75}}}}}{|\psi\rangle}|^{2}\right)$. It is straightforward to show that the faithfulness of the stellar entanglement ${\cal{E}}_{{{\mbox{\ding{75}}}}}(\psi)$ implies the following LU-based separability criterion: A pure state ${|\psi\rangle}$ of a bipartite system is separable (product) if and only if the optimal SQUDUO $W_{{{\mbox{\ding{75}}}}}^{opt}$ leaves it invariant: $W_{{{\mbox{\ding{75}}}}}^{opt}{|\psi\rangle} = {|\psi\rangle}$. The faithfulness of what we here call ${{\mbox{\ding{75}}}}$ME, when restricted to the qubit case $d=2$ [@GiampaoloIlluminati], has played a key role in the development of a general theory for the exact and rigorous detection and characterization of fully factorized ground states in several classes of non-exactly solvable spin-$\frac12$ models on translationally-invariant lattices [@ourfactoriz1] as well as in more general geometries [@ourfactoriz2] with arbitrary spatial dimensions, both at finite size and in the thermodynamic limit. Based on the general results of the present work, in the concluding Section \[secDiscuss\] we will discuss some possible guidelines for the extensions of such methods to the problem of the occurrence of total and partial factorizations (such as dimerization, trimerization, and polymerization) in models with local spin variables of arbitrary dimension.
At this stage, we notice that by using the results of Lemma \[lemmaPerm\], according to which the optimal change-of-basis matrix $U$ is a permutation matrix, we can write the following compact expression for the ${{\mbox{\ding{75}}}}$ME: $$\label{ssecomp}
{\cal E}_{{\mbox{\ding{75}}}}(\psi) = \min_\sigma \left(1- \sum_{i,j=1}^d \cos\left[\frac{2\pi(i-j)}{d}\right] p_{\sigma_i} p_{\sigma_j}\right) \,,$$ where the optimization is over all permutations $\sigma$ encoding a reordering of the eigenvalues $p_k$ of the reduced state $\varrho$. Equipped with this expression, we can proceed to investigate the relation of the ${{\mbox{\ding{75}}}}$ME with other measures of entanglement that are expressed as sums of products of eigenvalues of the reduced density matrix. The foremost measure of this kind is the [*linear entropy of entanglement*]{} $E_L(\psi)$, defined as the linear entropy of the reduced density matrix $\varrho$ [@geometry]: $$\label{tangle}
E_L(\psi) = S_L(\varrho) \equiv \frac{d}{d-1}(1-{{\rm Tr}}[\varrho^2]) = \frac{d}{d-1}\bigg(1-\sum_{i=1}^d p_i^2\bigg)\!.$$ It has been observed in Ref. [@GiampaoloIlluminati] that the ${{\mbox{\ding{75}}}}$ME and the linear entropy of entanglement coincide exactly on all pure states ${|\psi\rangle}$ of bipartite systems with reduced states $\varrho$ of local dimension $d=2$ or $d=3$ (qubit or qutrit). This can be easily verified by comparing Eqs. (\[ssecomp\]) and (\[tangle\]) and recalling the normalization condition ${{\rm Tr}}\varrho=\sum_i p_i = 1$. In general, however, the two measures can differ, resulting in an inequivalent ordering imposed on the set of pure entangled quantum states ${|\psi\rangle}$ with respect to a bipartition involving at least a qu$d$it with $d \ge 4$. It is interesting to illustrate explicitly the discrepancy between the linear entropy and the stellar entanglement in the case $d=4$. In Fig. \[figrandello\] we report ${\cal E}_{{\mbox{\ding{75}}}}$ versus $E_L$ for a sample of 20000 randomly generated states $\psi \in {\mathbb{C}}^{d_A} \otimes {\mathbb{C}}^{d_B}$ with $d_A=4$ and arbitrary $d_B \ge 4$ (upon applying the Schmidt decomposition, the effective dimension of each subsystem is reduced to $d=\min\{d_A,d_B\}$ which amounts to $4$ in this example). We find that, although physical qu$d$it states span a two-dimensional surface in the $(E_L,{\cal E}_{{\mbox{\ding{75}}}})$ plane, nonetheless for a given value of $E_L$ there exist sharp upper and lower bounds on ${\cal E}_{{\mbox{\ding{75}}}}$. In fact, we will see below that these bounds are general and admit an exact analytical expression in any dimension. The classes of entangled states that saturate them in the case $d=4$ are specified in the caption of Fig. \[figrandello\].
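As an illustrative complement to this comparison (our own sketch, with hypothetical helper names), Eqs. (\[ssecomp\]) and (\[tangle\]) can be evaluated by brute force on the reduced spectrum $\{p_i\}$, which makes the coincidence for $d=2,3$ and the $d=4$ bounds easy to check on random spectra:

```python
import numpy as np
from itertools import permutations

def stellar_me(p):
    """E_star(p) from Eq. (ssecomp): brute-force minimum over permutations."""
    d = len(p)
    omega = np.exp(2j * np.pi * np.arange(d) / d)
    return min(1 - abs(np.dot(omega, np.asarray(p)[list(s)])) ** 2
               for s in permutations(range(d)))

def linear_entropy(p):
    """E_L(p) from Eq. (tangle)."""
    d = len(p)
    return d / (d - 1) * (1 - np.sum(np.asarray(p) ** 2))

rng = np.random.default_rng(0)
for d in (2, 3, 4):
    for _ in range(200):
        p = rng.dirichlet(np.ones(d))                # a random reduced spectrum
        es, el = stellar_me(p), linear_entropy(p)
        if d in (2, 3):
            assert np.isclose(es, el)                # exact coincidence for qubits and qutrits
        else:
            # the bounds (3/4) E_L <= E_star <= E_L observed for d = 4
            assert 0.75 * el - 1e-9 <= es <= el + 1e-9
```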
![(Color online) Behavior of the stellar entanglement ${\cal E}_{{\mbox{\ding{75}}}}$ plotted against the linear entropy of entanglement $E_L$ for 20000 random pure bipartite states $\psi \in {\mathbb{C}}^{d_A} \otimes {\mathbb{C}}^{d_B}$ with local dimension $d=\min\{d_A,d_B\}=4$. The upper boundary (black online) is given by the bisectrix ${\cal E}_{{\mbox{\ding{75}}}}=E_L$. The states that saturate it, for which the stellar entanglement coincides with the linear entropy, are characterized by a three-fold degenerate eigenvalue in the spectrum of the reduced state $\varrho$: $p_1=p_2=p_3=\frac{p}{3}$, $p_4=1-p$ (with $0 \le p \le 1$). The lower boundary is branched into two different segments. For $0 \le E_L \le 2/3$, the minimum ${\cal E}_{{\mbox{\ding{75}}}}$ satisfies ${\cal E}_{{\mbox{\ding{75}}}}= \frac{3}{4} E_L$; this boundary (red online) accommodates states ${|\psi\rangle}$ with rank-2 marginal states $\varrho$: $p_1=p$, $p_2=1-p$, $p_3=p_4=0$. The second branch (blue online) of the lower boundary, defined for $2/3 < E_L \le 1$, accommodates states satisfying ${\cal E}_{{\mbox{\ding{75}}}}=\frac32 E_L-\frac12$. These states have reduced density matrix $\varrho$ with doubly degenerate spectrum of the form: $p_1=p_2 = \frac{1+p}{4}$, $p_3=p_4=\frac{1-p}{4}$. All the quantities plotted are dimensionless.[]{data-label="figrandello"}](stellarvstangle.eps){width="7.5cm"}
The pattern observed for local dimension $d \le 4$ indeed extends to arbitrary values of the local dimension $d$. In particular, the hierarchical relationship ${\cal E}_{{\mbox{\ding{75}}}}\le E_L$ always holds (with equality for local dimension $d=2,3$), and a rigorous proof is provided below. Moreover, the stellar entanglement ${{\mbox{\ding{75}}}}$ME presents a structured, multi-branched [*lower*]{} bound as a function of the linear entropy of entanglement, with the number of branches growing with local dimension $d \geq 4$. Without aiming at a characterization of the complete lower boundary for states in Hilbert spaces of arbitrary dimension, we can nevertheless show that a simple rescaling of the linear entropy $E_L$ allows one to derive an exact analytical lower bound on ${\cal E}_{{\mbox{\ding{75}}}}$ that holds in general for arbitrary dimension. In the particular case of local dimension $d=4$ it corresponds to the bottommost branch of the lower boundary in Fig. \[figrandello\]. Indeed, one can prove that the following holds:
\[Tlower\] The ${{\mbox{\ding{75}}}}$ME is a lower bound to the linear entropy of entanglement and is an upper bound to a rescaling of it on all pure bipartite states in Hilbert spaces of arbitrary finite dimension: $$\label{eqlower}
\frac{2(d-1) \sin^2(\pi/d)}{d}E_L(\psi) \; \le {\cal E}_{{\mbox{\ding{75}}}}(\psi) \; \le E_L(\psi) \; .$$
The rightmost inequality is always tight and the states ${|\psi\rangle}$ that saturate it are those for which $|\sum_i\lambda_ip_i|$, with $\lambda_i$ being the eigenvalues of the stellar spectrum, [Eq. (\[stellarspec\])]{}, is invariant under permutations. Incidentally, we note that for local dimension $d=2,3$, all pure states have such permutational invariance. The proof of the rightmost inequality is provided in the Appendix \[app:upperbound\].
The leftmost inequality is generally tight for $E_L \le d/[2(d-1)]$ and is saturated by states ${|\psi\rangle}$ with rank-2 marginal states $\varrho$: $p_{\sigma_1}=p$, $p_{\sigma_2}=1-p$, $p_{\sigma_i}=0$ ($i=3,\ldots,d$), for any local dimension $d$. For $d=2,3$, the lower and upper bounds in [Eq. (\[eqlower\])]{} coincide. The leftmost inequality is proven in Appendix \[app:lowerbound\].
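The two saturating families quoted in the caption of Fig. \[figrandello\] for $d=4$ can also be confirmed numerically; the short check below (ours) reuses the brute-force routines `stellar_me` and `linear_entropy` from the sketch following Eq. (\[tangle\]):

```python
import numpy as np

# assumes stellar_me() and linear_entropy() from the previous sketch
for p in np.linspace(0.01, 0.99, 25):
    upper = [p / 3, p / 3, p / 3, 1 - p]      # three-fold degenerate spectrum: E_star = E_L
    lower = [p, 1 - p, 0.0, 0.0]              # rank-2 spectrum: E_star = (3/4) E_L
    assert np.isclose(stellar_me(upper), linear_entropy(upper))
    assert np.isclose(stellar_me(lower), 0.75 * linear_entropy(lower))
```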
The attainable region for ${\cal E}_{{\mbox{\ding{75}}}}$ at a fixed $E_L$ increases with increasing local dimension $d$, and the lower bound vanishes asymptotically as $d \rightarrow \infty$, showing that there can exist pure bipartite entangled states of two qu$d$its with local dimension $d \gg 1$, whose linear entropy of entanglement lies in the range $0 < E_L \le \frac12$, and yet possess an infinitesimal degree of stellar entanglement ${{\mbox{\ding{75}}}}$ME. The situation may change again and the trend may be reversed in the infinite-dimensional case. Indeed, in a previous study, some of us have shown that restricting to Gaussian pure bipartite states of two continuous variable modes, and to Gaussian LU operations, one can define a specific Gaussian counterpart to the stellar entanglement ${{\mbox{\ding{75}}}}$ME that amounts to a simple monotonically increasing function of the linear entropy of entanglement [@squoCV], thus restoring the analogy with the case of low-dimensional qu$d$its ($d=2,3$) [@GiampaoloIlluminati]. In conclusion, the analysis reported in this Section shows that the stellar entanglement ${{\mbox{\ding{75}}}}$ME, a prominent representative of the $\Lambda$ME mirror entanglement monotones, provides in general an independent characterization of bipartite entanglement in pure states of arbitrary finite dimension and is endowed with an intrinsic geometric origin depending entirely upon the global non-local effects induced only by the action of suitably identified classes of LU operations.
Discussion and conclusion {#secDiscuss}
=========================
In the present work we have introduced a geometric framework to derive bipartite entanglement monotones, including faithful ones, that apply to all pure states in Hilbert spaces of arbitrary dimension. The measures of mirror entanglement and the faithful mirror stellar entanglement that we have introduced are defined in terms of the minimal (squared) distance from a pure state to the pure state obtained from it by the action of suitably optimized LUs acting only on one part of the bipartite quantum system under consideration. We identified a hierarchy of these LU-based entanglement monotones, studied their properties, and determined conditions for their faithfulness. Among the faithful mirror entanglement measures, we focused on a special instance, the stellar mirror entanglement, characterized by additional symmetries in the spectrum of the associated LUs. We proved that the stellar mirror entanglement obeys upper and lower bounds as a function of the linear entropy of entanglement in any dimension, reducing to the latter for local dimension $d=2,3$. Our results generalize an earlier study limited to pure states with reduced density matrices of lower local dimension $d=2,3$ [@GiampaoloIlluminati]. Our work goes along a complementary direction compared to other studies that have investigated the nature of non-local effects when the distance from a state and its image under the action of LUs is maximized rather than minimized [@Bruss]. It is remarkable and somewhat surprising that looking at such a simple structure of LUs from opposite ends can provide so much insight both on the structure of pure state bipartite entanglement and, at the same time, on the patterns of non-local effects generated from the application of LU operations. This interplay, yielding a host of complementary results, might hide yet undiscovered features common to the two approaches. Both rely on the natural physical intuition of the operational approach to the study of physical systems: looking at the response to a given action as a basic diagnostic tool of physical properties. It might then be worthwhile to compare the two different situations in terms of a classification/ordering of least-disturbing and/or maximally disturbing LUs.
An interesting and important subject for future research is the generalization of the entanglement structure generated by LUs to include mixed states. It has been recently shown [@Streltsov] that the problem of computing the convex roof of a prototype distance measure of entanglement such as the global geometric entanglement [@Wei] can in fact be recast in terms of determining the maximal fidelity to a separable state. It is tempting to speculate that this important result might perhaps be adapted and generalized to other classes of distance measures of entanglement, such as the mirror entanglement and the stellar entanglement introduced in the present work, or the mixed bipartite-multipartite geometric measures of entanglement defined as the hierarchy of distances from the sets of $K$-separable states [@Blasone].
Extensions of the present investigation to the qualification and quantification of multipartite entanglement and the characterization of monogamy constraints on its distribution appear to be challenging, even when restricted to pure states. Here the problem appears of course to be that of understanding the nature of generalized “local operations” in the multipartite setting, and their ordering according to the associated local dimension. Vice versa, a more readily exploitable application of our results is concerned with the factorization of quantum ground states in cooperative qu$d$it systems, a phenomenon that is currently receiving significant attention from both the quantum information and condensed matter communities [@faziorev; @kurmannetal; @ourfactoriz1; @ourfactoriz2]. Indeed, the variety of quantum states belonging to one and the same LU-equivalence class may anyway have rather distinct features that become relevant especially in the context of many-body physics. For instance, if we consider a spin chain in a perfectly ferromagnetic state ${|\uparrow\uparrow\ldots\uparrow\rangle}$, flipping every other spin amounts to a LU operation that has no effect on the entanglement, yet results in a totally different ordered phase with a vanishing magnetization[^3]. The specific form of the ground state of a many-body Hamiltonian, then, is important as well as its entanglement content and distribution in the form of bipartite or multipartite quantum correlations [@faziorev]. In this context, the formalism of LU-based geometric entanglement for states with reduced density matrices of lower local dimension $d=2,3$ has already been applied to define energy witnesses as efficient diagnostic tools of factorization, relating it to the vanishing of faithful entanglement measures such as the stellar mirror entanglement. However, this LU-based theory of ground-state factorization had to be restricted so far to spin-1/2 systems [@giampiverruca] and limited to investigating only total factorization into products of single-spin states [@ourfactoriz1; @ourfactoriz2], as the extension to higher spin systems and partial factorizations (such as dimerization) required considering higher local dimensions $d$. The results of the present work make it possible, in principle, to extend the LU-based methods, originally developed for spin-1/2 models and total factorization, to the study of total as well as partial factorization points in models of higher-dimensional spins and in spin-1/2 models with frustration. In the latter case of frustrated spin-1/2 systems, the LU-based entanglement formalism has been recently exploited to relate the existence of fully separable ground states (totally factorized in the tensor product of single-spin states) to the absence of frustration [@ourfrust]. Equipped with the general proof of equivalence between pure-state factorization and invariance under LU spin operations with stellar (or, generally non-degenerate) spectrum, one may now investigate the possible existence of dimerized quantum phases (i.e. ground states composed of factorized singlets: tensor products of $d=4$ units, each of which is internally entangled) by looking for candidate ground states invariant under stellar LUs in $d=4$.
For spin-1/2 chains in the thermodynamic limit or in $2D$ lattices one may expect that a hierarchy of compatible types of ground state correlations may arise, ranging from full factorization into products of single-spin states, to dimerization, up to genuine multipartite-entangled phases, as the degree of frustration increases as a function of a given tunable external magnetic field. The exploration of these intriguing scenarios, as enabled by the general analysis developed in the present work, will be the subject of future studies.
Proof of ${\cal E}_{{\mbox{\ding{75}}}}(\psi) \le E_L(\psi)$ {#app:upperbound}
============================================================
In order to prove the theorem, we will use Dirac bracket notation throughout. It is convenient to establish some definitions and facts. Let us denote the maximally mixed vector by ${|\openone\rangle}=(1/d,1/d,\ldots,1/d)^{\dagger}$. With this, $\langle\openone|\openone\rangle=1/d$ and we denote the projector onto the ${|\openone\rangle}$ subspace by $P^\parallel=d{|\openone\rangle}{\langle\openone|}$, the complement being $P^\perp=\openone-P^\parallel$. The eigenvalues $\lambda_i$ can be arranged in the vector ${|\lambda\rangle}$, with complex conjugate ${|\lambda^*\rangle}$. We have $\langle\lambda|\lambda\rangle=\langle \lambda^*|\lambda^*\rangle=d$ and for the stellar monotone, also $\langle\lambda|\openone\rangle=0$.
Define $M_\sigma=\frac{1}{2}\sigma^\top ({|\lambda\rangle}{\langle\lambda|}+{|\lambda^*\rangle}{\langle\lambda^*|})\sigma$. Denoting the identity permutation by $e$, we can write $M\equiv M_e={\textrm{Re}}\,{|\lambda\rangle}{\langle\lambda|}$, and $M_\sigma=\sigma^\top M\sigma$. It is clear that $M\geq0$ and thus $M_\sigma\ge0$ for all $\sigma$. Let $$g_\sigma(p)=1-\big|\sum_i\lambda_i p_{\sigma_i}\big|^2=1-{\langlep|}M_\sigma{|p\rangle}.$$ For any $0< s\leq 1$ and any permutation $\sigma$, let ${\cal G}_{\sigma,s} =\{p~|~g_\sigma(p) \geq s\}$. Proceed now to define ${\cal C}_s=\{p~|~{\cal E}_{{\mbox{\ding{75}}}}(p)\geq s\}$. Then ${\cal C}_s=\cap_{\sigma\in S_d}{\cal G}_{\sigma,s}$.
Before proceeding any further, we will find the following relation useful, $$\sum_{\sigma\in\textsf{S}_d}M_\sigma=d!\frac{d}{d-1}P^\perp,$$ which can be easily derived by use of Schur’s lemma.
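This relation is also easy to confirm numerically for the stellar spectrum in low dimension; a small sketch (ours, purely illustrative):

```python
import numpy as np
from itertools import permutations
from math import factorial

d = 4
lam = np.exp(1j * (d - 2 * np.arange(1, d + 1) + 1) * np.pi / d)   # stellar spectrum
M = np.real(np.outer(lam, lam.conj()))                              # M = Re |lambda><lambda|
total = np.zeros((d, d))
for s in permutations(range(d)):
    P = np.eye(d)[list(s)]                                          # permutation matrix
    total += P.T @ M @ P
P_perp = np.eye(d) - np.ones((d, d)) / d                            # projector orthogonal to |1>
assert np.allclose(total, factorial(d) * d / (d - 1) * P_perp)
```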
The proof consists in establishing a lower bound on $E_L$ over the set ${\cal C}_{s}$. Equivalently, we may search for the minimum of the objective function $f(p)=1-\langle p|p\rangle$, $$\begin{aligned}
\label{eq:program}
\textrm{minimize~}& f(p)\\
\nonumber
\textrm{subject to}~&p\in{\cal C}_s.\end{aligned}$$ Let us denote the solution to this problem by $p^*$ and the achieved value $f(p^*)$ by $f^*_s=f(p^*)$.\
[**Lemma**]{}: The solution to the optimization in Eq. (\[eq:program\]) satisfies $$f^*_s\leq s\frac{d-1}{d}.$$ [*Proof*]{}. Consider the vector ${|q\rangle}=(1-\sqrt{1-s}){|\openone\rangle}+\sqrt{1-s}{|e_1\rangle}$, where ${|e_1\rangle}=(1,0,0,\ldots)^{\dagger}$. This vector has $g_\sigma(q)=s$, for all $\sigma$, thus $q\in{\cal C}_s$, and $f(q)=s\frac{d-1}{d}$. Therefore $f^*_s\leq f(q)=s\frac{d-1}{d}$. $\Box$\
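The two properties of the witness vector ${|q\rangle}$ invoked in this proof can be verified directly; a short numerical check (ours, with an arbitrary choice of $d$ and $s$):

```python
import numpy as np
from itertools import permutations

d, s = 5, 0.37
lam = np.exp(1j * (d - 2 * np.arange(1, d + 1) + 1) * np.pi / d)   # stellar spectrum
one = np.full(d, 1.0 / d)                                          # |1> = (1/d, ..., 1/d)
e1 = np.zeros(d); e1[0] = 1.0
q = (1 - np.sqrt(1 - s)) * one + np.sqrt(1 - s) * e1

g = [1 - abs(np.dot(lam, q[list(sig)])) ** 2 for sig in permutations(range(d))]
assert np.allclose(g, s)                                           # g_sigma(q) = s for every sigma
assert np.isclose(1 - np.dot(q, q), s * (d - 1) / d)               # f(q) = s (d - 1) / d
```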
[**Lemma**]{}: The solution to the optimization in Eq. (\[eq:program\]) satisfies $$f^*_s\geq s\frac{d-1}{d}.$$ [*Proof*]{}. The program is spelled out as $$\begin{aligned}
\nonumber
\textrm{minimize~~~}& f(p)\\
\label{eq:primal}
\textrm{subject to~~~}&\left\{
\begin{aligned}
s-g_\sigma(p)&\leq0\\
-p_i&\leq0\\
\sum_ip_i-1&=0
\end{aligned}\right.,\end{aligned}$$ and the Lagrangian is $$\begin{gathered}
L(p,\mu,\alpha,\nu)=\\
f(p)+\sum_\sigma \mu_\sigma(s-g_\sigma(p))-\langle \alpha|p\rangle-\nu(1-d\langle \openone|p\rangle).\end{gathered}$$ Consider the dual program [@boyd_convex; @convexanalysis] to that of Eq. (\[eq:primal\]). The dual Lagrangian $L_D$ is $$\label{eq:dual}
L_D(\mu,\alpha,\nu)=\inf_{p}L(p,\mu,\alpha,\nu) \, .$$ In order to have the dual Lagrangian $L_D(\mu_\sigma,\alpha,\nu)>-\infty$ it is convenient to write the objective function as $$f(p)={\langlep|}(d^2{|\openone\rangle}{\langle\openone|}-\openone){|p\rangle}.$$ $L(p,\mu,\alpha,\nu)$ can then be written as $$\begin{aligned}
L(p,\mu,\alpha,\nu)=&\,{\langlep|}\left(d^2{|\openone\rangle}{\langle\openone|}-\openone+\sum_\sigma\mu_\sigma M_\sigma\right){|p\rangle}\\
\nonumber
&-\left({\langle\alpha|}-\nu d{\langle\openone|}\right){|p\rangle}-(1-s)\sum_\sigma\mu_\sigma-\nu.\end{aligned}$$ Hence, $L$ is bounded from below whenever $H\equiv d^2{|\openone\rangle}{\langle\openone|}-\openone+\sum_\sigma\mu_\sigma M_\sigma>0$, where $H$ is, up to a factor $2$, the Hessian $\nabla^2_p L$. In that case, the infimum in Eq. (\[eq:dual\]) can be readily computed by solving $\nabla_p L=0$, $${|p\rangle}=\frac{1}{2}H^{-1}\left[{|\alpha\rangle}-\nu d{|\openone\rangle}\right],$$ yielding $$\begin{aligned}
\nonumber
L_D(\mu,\alpha,\nu)=&-\frac{1}{4}\left({\langle\alpha|}-\nu d{\langle\openone|}\right)H^{-1}\left({|\alpha\rangle}-\nu d{|\openone\rangle}\right)\\
&-(1-s)\sum_\sigma\mu_\sigma-\nu \, .\end{aligned}$$ At this point, any value of $L_D$ with $\{\mu_\sigma\}$ satisfying $H>0$ yields a lower bound to $f^*(s)$. Moreover, observe that $\sum_\sigma\mu_\sigma={{\rm Tr}}[H]/d$, $$\begin{aligned}
\nonumber
L_D(\mu,\alpha,\nu)=&-\frac{1}{4}\left({\langle\alpha|}-\nu d{\langle\openone|}\right)H^{-1}\left({|\alpha\rangle}-\nu d{|\openone\rangle}\right)\\
&-(1-s){{\rm Tr}}[H]/d-\nu.\end{aligned}$$ We can choose to evaluate $L_D(\mu,\alpha,\nu)$ at $\mu_\sigma=\mu$ so that $H$ becomes $$H=\left(\mu\frac{d!d}{d-1}-1\right) P^\perp+(d-1)P^\parallel \, .$$ Defining $x\equiv \mu\frac{d!d}{d-1}-1$, we have
$$\begin{aligned}
H&=&x P^\perp+(d-1)P^\parallel\\
H^{-1}&=&x^{-1}P^\perp+(d-1)^{-1}P^\parallel \, ,\end{aligned}$$
while the $H> 0$ condition reduces to $x>0$ or $\mu>\frac{d-1}{d!d}$. Moreover, $\sum_\sigma \mu_\sigma={{\rm Tr}}[H]/d=(1+x)\frac{d-1}{d}$. Thus, we obtain $$\begin{aligned}
\nonumber
L_D(x,\alpha,\nu)=&-\frac{1}{4x}\|{|\alpha^\perp\rangle}\|^2-(1-s)\frac{d-1}d x\\
\nonumber
&-\frac{1}{4(d-1)}\|{|\alpha^\parallel\rangle}-\nu d{|\openone\rangle} \|^2\\
&-(1-s)\frac{d-1}{d}-\nu.\end{aligned}$$ Maximizing over $x$ we obtain $x^*=\frac{\|{|\alpha^\perp\rangle}\|}{2\sqrt{1-s}}\sqrt{\frac{d}{d-1}}$ and $$\begin{aligned}
\nonumber
L_D(x^*,\alpha,\nu)&=&-\|{|\alpha^\perp\rangle}\|\sqrt{(1-s)\frac{d-1}{d}}\\
\nonumber
&&-\frac{1}{4(d-1)}\|{|\alpha^\parallel\rangle}-\nu d{|\openone\rangle} \|^2\\
&&-(1-s)\frac{d-1}{d}-\nu.\end{aligned}$$ The term proportional to $\|{|\alpha^\perp\rangle}\|$ can only be negative, thus we set it to zero by choosing ${|\alpha\rangle}={|\alpha^\parallel\rangle}=\alpha \sqrt d{|\openone\rangle}$ ($\alpha\geq0$), leaving $$\begin{aligned}
L_D(x^*,\alpha,\nu)=-\frac{(\alpha-\nu\sqrt d)^2}{4(d-1)}-\nu-(1-s)\frac{d-1}{d}.~~~\end{aligned}$$ Maximizing over $\nu$ we get $\nu^*=\frac{\alpha}{\sqrt d}-2\frac{d-1}{d}$ and $$L_D(x^*,\alpha,\nu^*)=s\frac{d-1}{d}-\frac{\alpha}{\sqrt d}.$$ Since $\alpha\geq0$ the maximum is achieved at $\alpha^*=0$, yielding $$L_D(x^*,\alpha^*,\nu^*)=s\frac{d-1}{d} \, ,$$ which shows that $$f^*_s\geq s\frac{d-1}{d}.$$ $\Box$
Combining the last two lemmas, we have $$f^*_s=s\frac{d-1}{d}.$$ From this we have the following relation, $${\cal E}_{{\mbox{\ding{75}}}}(p)\geq s\Rightarrow f(p)\geq s\frac{d-1}{d},$$ or equivalently $${\cal E}_{{\mbox{\ding{75}}}}(p)\geq s\Rightarrow E_L(p)\geq s.$$ In particular, for points with ${\cal E}_{{\mbox{\ding{75}}}}(p)=s$ we have $${\cal E}_{{\mbox{\ding{75}}}}(p)=s\leq E_L(p) \, .$$ This completes the proof of the rightmost inequality in Theorem \[Tlower\].$\Box$
Proof of $2\frac{d-1}d \sin^2(\pi/d)E_L(\psi) \le {\cal E}_{{\mbox{\ding{75}}}}(\psi)$ {#app:lowerbound}
======================================================================================
For this proof we will use the same notation as in Appendix \[app:upperbound\]. Additionally, let $\Lambda={|\lambda\rangle}{\langle\lambda|}$. The approach of the proof is the following. Since all entanglement monotones can be expressed as function of the spectrum of the reduced density matrix, we will talk about [*probability vectors*]{} instead of quantum states. We will show that any probability vector ${|p\rangle}$ can be obtained by a series of transformations starting from the pure vector ${|q\rangle}=(1,0,0\ldots)^{\dagger}$ such that the increments in both entanglement monotones under the action of these transformations always verify $$\label{eq:incineq}
\Delta {\cal E}_{{\mbox{\ding{75}}}}\geq 2\frac{d-1}d \sin^2(\pi/d)\Delta E_L.$$ This, combined with the fact that both monotones vanish on $(1,0,0,\ldots)$, completes the proof. We now proceed to prove Eq. (\[eq:incineq\]). An essential ingredient in the proof consists in obtaining a sequence of transformations that will bring $q$ to $p$ while still having a manageable form of $\Delta {\cal E}_{{\mbox{\ding{75}}}}$. To this end, we realize that the most disturbing elements in the expression of $\Delta{\cal E}_{{\mbox{\ding{75}}}}$ are the optimizations over the set of permutation matrices for the initial and final states. We can avoid this complication by showing that a sequence of transformations can be devised such that for each one of them, the initial and final states have at least one optimal permutation matrix in common. This will allow us to get rid of the independent optimization for the initial and final values ${\cal E}_{{\mbox{\ding{75}}}}(\psi)$.
The spectrum $\{p_i\}$ of the reduced density matrix $\varrho$ of any quantum state is majorized by the pure-state spectrum $q=(1,0,\ldots)$, which means that $$\label{eq:ttransf}
{|p\rangle}=T_{d-1}\cdots T_2T_1{|q\rangle} \, ,$$\
where the $T_k$’s are T-transforms \[[*i.e.*]{}, matrices of the form $T(t)=(1-t) \openone+t W$ where $W$ is a transposition of two particular elements and $0\leq t\leq 1$\], and $t_k$ are their respective arguments. We will say that $T(t)=(1-t)\openone+t W$ is a T-transform [*of the kind*]{} $W$. T-transforms with $t\geq1/2$ can be reduced to T-transforms with $t\leq 1/2$ by prepending the $W$ permutation: $T(t)W=T(1-t)$. Thus, Eq. (\[eq:ttransf\]) can be cast as $$\label{eq:ttransf2}
{|p\rangle}=T_{d-1}W_{d-1}\cdots T_2 W_2T_1W_1{|q\rangle} \, ,$$\
where $W_1,W_2,\ldots$ are appropriate permutation matrices (could be the identity) and now all T-transforms have arguments $t_k\leq1/2$.
Moreover, any T-transform with $0\leq t\leq1/2$ can be split into an arbitrary number of intermediate transforms of the same kind $T(t'),T(t''),\ldots$ with $0\leq t',t'',\ldots\leq1/2$. This can be seen by re-parameterizing the set of T-transforms of any given kind \[$0\leq t\leq1/2$\] by $$T(s)=\frac{1+e^{-s}}{2} \openone+\frac{1-e^{-s}}2W \, ,$$ where $T(0)=\openone$ and $0\leq s<\infty$. Observing that $T(s_1)T(s_2)=T(s_1+s_2)$ shows that any splitting of $s=\sum_i s_i$ with positive summands can be accomplished. This allows to decompose each $T_k$ into a product of $T_k^{n_k}\cdots T_k^2T_k^1$ such that the states $(T_k^{i-1}\cdots T_k^1)\,T_{k-1}\cdots T_2 T_1{|q\rangle}$ and $(T_k^iT_k^{i-1}\cdots T_k^1)\,T_{k-1}\cdots T_2 T_1{|q\rangle}$ share at least one optimal permutation matrix, for all $i,k$. Thus, finally the target state ${|p\rangle}$ can be written as $${|p\rangle}= \prod_{k=1}^{d-1}\left[\left(\prod_{i=1}^{n_k} T_k^i \right)W_k\right]{|q\rangle} \, .$$
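To make Eq. (\[eq:ttransf\]) concrete, the following minimal Python sketch (ours, not part of the proof) exhibits one explicit choice of $d-1$ T-transforms carrying $q=(1,0,\ldots,0)$ to an arbitrary probability vector $p$; the finer splitting into sub-transforms and the interleaved permutations introduced above are not needed for this basic illustration.

```python
import numpy as np

def t_transform(d, i, j, t):
    """The T-transform (1 - t) * I + t * W, with W the transposition of coordinates i, j."""
    W = np.eye(d)
    W[i, i] = W[j, j] = 0.0
    W[i, j] = W[j, i] = 1.0
    return (1 - t) * np.eye(d) + t * W

def chain(p):
    """One explicit realization of Eq. (eq:ttransf): T_{d-1} ... T_1 q = p."""
    p = np.asarray(p, dtype=float)
    d = len(p)
    tails = np.append(np.cumsum(p[::-1])[::-1], 0.0)   # tails[k] = p_k + ... + p_{d-1}
    return [t_transform(d, k, k + 1, tails[k + 1] / tails[k] if tails[k] > 0 else 0.0)
            for k in range(d - 1)]

p = np.array([0.4, 0.3, 0.2, 0.1])
x = np.zeros(len(p)); x[0] = 1.0                       # the vector q = (1, 0, ..., 0)
for T in chain(p):
    x = T @ x
assert np.allclose(x, p)
```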
We are now ready to show that Eq. (\[eq:incineq\]) holds for any transformation ${|p\rangle}\rightarrow T_k^l {|p\rangle}$. Let $\sigma$ be a common optimal permutation matrix for both states, and $\Lambda_\sigma=\sigma^\top\Lambda\sigma$, and $T_k^l=(1-t) \openone+tW$, with $t\leq1/2$. The increment in ${\cal E}_{{\mbox{\ding{75}}}}$ can be cast as $$\begin{aligned}
\nonumber
\Delta &{\cal E}_{{\mbox{\ding{75}}}}(p)\\
\nonumber
=&\,{\langlep|} \Lambda_\sigma{|p\rangle}-{\langlep|} T(t)^\top\Lambda_\sigma T(t){|p\rangle}\\
\nonumber
=&\,{\langlep|}\left((1-(1-t)^2)\Lambda_\sigma-t(1-t)\left[\Lambda_\sigma W+W\Lambda_\sigma\right]\right){|p\rangle}\\
\nonumber
&\,-t^2{\langlep|}W\Lambda_\sigma W{|p\rangle}\\
\label{eq:bound1}
\geq&\,2 t(1-t){\langlep|} \left[\Lambda_\sigma-\frac{1}{2}(W\Lambda_\sigma+\Lambda_\sigma W)\right]{|p\rangle}.\end{aligned}$$ In the last inequality we have exploited the assumption that $\sigma$ is the optimal permutation, hence ${\langlep|}\Lambda_\sigma{|p\rangle}\geq {\langlep|}W\Lambda_\sigma W{|p\rangle}$.
Evaluating expression (\[eq:bound1\]) we find $${\langlep|} \left[\Lambda_\sigma-\frac{1}{2}(W\Lambda_\sigma+\Lambda_\sigma W)\right]{|p\rangle}=2\Delta p^2\sin^2\frac{\Delta\theta}{2},$$ where $\Delta p^2=(p_i-p_j)^2$, with $i,j$ the indices swapped by $W$ (that is, the subspace where $T_k^l$ acts non-trivially), and $\Delta \theta=\theta_{\sigma^{-1}_i}-\theta_{\sigma^{-1}_j}=\frac{2\pi}{d}(\sigma^{-1}_i-\sigma^{-1}_j)$. Then, since $$\sin^2\frac{\Delta\theta}{2}\geq\sin^2\frac{\pi}{d} \; ,$$ we have $$\Delta {\cal E}_{{\mbox{\ding{75}}}}(p)\geq4 t(1-t)\Delta p^2\sin^2\frac{\pi}{d} \; .$$
It is then finally straightforward to verify that $4t(1-t) \Delta p^2=2\frac{d-1}{d}\Delta E_L$. $\Box$
[99]{}
R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Rev. Mod. Phys. [**81**]{}, 865 (2009).
L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Rev. Mod. Phys. [**80**]{}, 517 (2008).
D. Abbott, P. C. W. Davies, and A. K. Pati (eds.), [*Quantum Aspects of Life*]{} (Imperial College Press, London, 2008); M. Sarovar, A. Ishizaki, G. R. Fleming, and K. B. Whaley, Nature Phys. [**6**]{}, 462 (2010).
C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher, Phys. Rev. A **53**, 2046 (1996).
M. B. Hastings, I. González, A. B. Kallin, and R. G. Melko, Phys. Rev. Lett. [**104**]{}, 157201 (2010), and references therein.
M. Blasone, F. Dell’Anno, S. De Siena, and F. Illuminati, Phys. Rev. A [**77**]{}, 062304 (2008).
G. Adesso, A. Serafini, and F. Illuminati, Phys. Rev. A [**70**]{}, 022318 (2004).
H. Ollivier and W. H. Zurek, Phys. Rev. Lett. [**88**]{}, 017901 (2001); L. Henderson and V. Vedral, J. Phys. A [**34**]{}, 6899 (2001); M. Piani, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. [**100**]{}, 090502 (2008).
M. B. Plenio and S. Virmani, Quant. Inf. Comp. **7**, 1 (2007).
M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. [**80**]{}, 5239 (1998); D. Yang, M. Horodecki, R. Horodecki, and B. [Synak-Radtke]{}, Phys. Rev. Lett. **95**, 190501 (2005).
G. Vidal, J. Mod. Opt. **47**, 355 (2000).
N. Linden, S. Popescu, and A. Sudbery, Phys. Rev. Lett. [**83**]{}, 243 (1999); G. Adesso, Phys. Rev. Lett. [**97**]{}, 130502 (2006).
L.-B. Fu, Europhysics Lett. [**75**]{}, 1 (2006).
S. Gharibian, H. Kampermann, and D. Bru[ß]{}, Quant. Inf. Comp. **9**, 1013 (2009).
S. M. Giampaolo and F. Illuminati, Phys. Rev. A [**76**]{}, 042301 (2007).
V. Coffman, J. Kundu, and W. K. Wootters, Phys. Rev. A [**61**]{}, 052306 (2000).
T. J. Osborne and F. Verstraete, Phys. Rev. Lett. [**96**]{}, 220503 (2006).
G. Adesso, S. M. Giampaolo, and F. Illuminati, Phys. Rev. A [**76**]{}, 042334 (2007).
A. K. Ekert, C. M. Alves, D. K. L. Oi, M. Horodecki, P. Horodecki, and L. C. Kwek, Phys. Rev. Lett. [**88**]{}, 217901 (2002); D. K. L. Oi and J. [Å]{}berg, Phys. Rev. Lett. [**97**]{}, 220404 (2006); F. A. Bovino, G. Castagnoli, A. Ekert, P. Horodecki, C. M. Alves, and A. V. Sergienko, Phys. Rev. Lett. [**95**]{}, 240407 (2005); F. Mintert, M. Kus, and A. Buchleitner, Phys. Rev. Lett. [**92**]{}, 167902 (2004); S. P. Walborn, P. H. Souto Ribeiro, L. Davidovich, F. Mintert, and A. Buchleitner, Nature [**440**]{}, 1022 (2006).
T.-C. Wei and P. M. Goldbart, Phys. Rev. A [**68**]{}, 042307 (2003); H. Barnum and N. Linden, J. Phys. A: Math. Gen. [**34**]{}, 6787 (2001); A. Shimony, Ann. NY. Acad. Sci. [**755**]{}, 675 (1995).
M. A. Nielsen, Phys. Rev. Lett. [**83**]{}, 436 (1999).
I. Bengtsson and K. Žyczkowski, [*Geometry of Quantum States*]{} (Cambridge University Press, Cambridge, 2008).
S. M. Giampaolo, G. Adesso, and F. Illuminati, Phys. Rev. Lett. [**100**]{}, 197201 (2008).
S. M. Giampaolo, G. Adesso, and F. Illuminati, Phys. Rev. B [**79**]{}, 224434 (2009).
A. Streltsov, H. Kampermann, and D. Bru[ß]{}, New J. Phys. [**12**]{}, 123004 (2010).
J. Kurmann, H. Thomas, and G. Müller, Physica A (Amsterdam) [**112**]{}, 235 (1982); C. Hoeger, G. von Gehlen, and V. Rittenberg, J. Phys. A: Math. Gen. [**18**]{}, 1813 (1985); S. Dusuel and J. Vidal, Phys. Rev. B [**71**]{}, 224420 (2005); L. Amico [*et al.*]{}, Phys. Rev. A [**74**]{}, 022322 (2006); R. Rossignoli, N. Canosa, and J. M. Matera, Phys. Rev. A [**77**]{}, 052322 (2008); [*ibid.*]{} [**80**]{}, 062325 (2009).
S. M. Giampaolo, F. Illuminati, P. Verrucchi, and S. De Siena, Phys. Rev. A [**77**]{}, 012319 (2008).
S. M. Giampaolo, G. Adesso, and F. Illuminati, Phys. Rev. Lett. [**104**]{}, 207202 (2010).
S. Boyd and L. Vandenberghe, [*Convex Optimization*]{} (Cambridge University Press, Cambridge, 2004).
R. T. Rockafellar, [*Convex Analysis*]{} (Princeton University Press, Princeton, 1970).
[^1]: Corresponding author. Electronic address: [email protected]
[^2]: Here and in the following steps of the proof we omit the label $\Lambda$ for ease of notation.
[^3]: In this context, we should also notice the fact that the states in the above example are fully factorized (fully separable), while the condensed matter terminology identifies them as strongly correlated, referring to the behavior of the two-point correlation functions; here we will adhere to the conventions of entanglement theory.
---
abstract: 'Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms. Roughly speaking, a sample compression scheme of size $k$ means that given an arbitrary list of labeled examples, one can retain only $k$ of them in a way that allows to recover the labels of all other examples in the list. They showed that compression implies PAC learnability for binary-labeled classes, and asked whether the other direction holds. We answer their question and show that every concept class $C$ with VC dimension $d$ has a sample compression scheme of size exponential in $d$. The proof uses an approximate minimax phenomenon for binary matrices of low VC dimension, which may be of interest in the context of game theory.'
author:
- 'Shay Moran[^1]'
- 'Amir Yehudayoff[^2]'
bibliography:
- 'compRef.bib'
title: Sample compression schemes for VC classes
---
Introduction
============
Learning and compression are known to be deeply related to each other. Learning procedures perform compression, and compression is evidence of, and is useful in, learning. For example, support vector machines, which are commonly applied to solve classification problems, perform compression (see Chapter 6 in [@Cristianini00a]). Another example is the use of compression to boost the accuracy of learning procedures (see [@littleWarm; @DBLP:journals/iandc/Freund95] and Chapter 4 in [@schapire2012boosting]).
About thirty years ago, Littlestone and Warmuth [@littleWarm] provided a mathematical framework for studying compression in the context of learning theory. In a nutshell, they showed that compression indeed implies learnability and asked whether learnability implies compression.
Learning
--------
Here we provide a brief description of standard learning terminology. For more information, see the books [@KearnsVazirani94; @schapire2012boosting; @Cristianini00a].
Imagine a student who wishes to learn a concept $c : X \to \{0,1\}$ by observing some training examples. In order to eliminate measurability issues, we focus on the case that $X$ is a finite or countable set (although the arguments we use are more general). The high level goal of the student is to come up with an hypothesis $h : X \to \{0,1\}$ that is close to the unknown concept $c$ using the least number of training examples. There are many possible ways to formally define the student’s objective. An important one is Valiant’s probably approximately correct (PAC) learning model [@zbMATH03943062], which is closely related to an earlier work of Vapnik and Chervonenkis [@zbMATH03391742]. This model is defined as follows.
The training examples are modeled as a pair $(Y,y)$ where $Y \subseteq X$ is the multiset of points the student observes and $y = c|_Y$ is their labels according to $c$. The collection of all possible training examples is defined as follows. Let $C\subseteq \{0,1\}^X$ be a concept class. A $C$-labeled sample is a pair $(Y,y)$, where $Y \subseteq X$ is a multiset and $y = c|_Y$ for some $c \in C$. The size of a labeled sample $(Y,y)$ is the size of $Y$ as a multiset. For an integer $k$, denote by $L_C(k)$ the set of $C$-labeled samples of size at most $k$. Denote by $L_C(\infty)$ the set of all $C$-labeled samples of finite size.
The concept class $C$ is PAC learnable with $d$ samples, generalization error ${\epsilon}$, and probability of success $1-\delta$ if there is a learning map $H:L_C(d) \to \{0,1\}^X$ so that the hypothesis $H$ generates is accurate with high probability. Formally, for every $c \in C$ and for every probability distribution $\mu$ on $X$, $$\Pr_{\mu^d} \Big[ \left\{ Y \in X^d : \mu(\{x \in X : h_Y(x) \neq c(x)\}) \leq {\epsilon}\right\} \Big] \geq 1-\delta ,$$ where $h_Y = H(Y,c|_Y)$. In this text, when the parameters ${\epsilon},\delta$ are not explicitly stated we mean that their value is $1/3$. If the image of $H$ is contained in $C$, we say that $C$ is properly PAC learnable.
A fundamental question that emerges is characterizing the sample complexity of PAC learning. The work of Blumer, Ehrenfeucht, Haussler, and Warmuth [@zbMATH04143473], which is based on [@zbMATH03391742], provides such a characterization. The characterization is based on the Vapnik-Chervonenkis (VC) dimension of $C$, which is defined as follows. A set $Y \subseteq X$ is $C$-shattered if for every $Z \subseteq Y$ there is $c \in C$ so that $c(x)=1$ for all $x \in Z$ and $c(x)=0$ for all $x \in Y-Z$. The VC dimension of $C$, denoted ${\text{VC}}(C)$, is the maximum size of a $C$-shattered set (it may be infinite). They proved that the sample complexity of PAC learning $C$ is ${\text{VC}}(C)$, up to constant factors[^3].
\[thm:BlumerPAC\] If $C \subseteq \{0,1\}^X$ has VC dimension $d$, then $C$ is properly PAC learnable with $O((d\log (2/{\epsilon})+\log(2/\delta))/{\epsilon})$ samples, generalization error ${\epsilon}$ and success probability $1-\delta$.
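For small finite classes the VC dimension can be computed directly from its definition; the brute-force Python sketch below (ours, purely illustrative) represents a concept by the set of points it labels $1$:

```python
from itertools import combinations

def is_shattered(concepts, Y):
    """Y is shattered if the concepts realize all 2^|Y| labelings of Y."""
    Y = frozenset(Y)
    return len({frozenset(c) & Y for c in concepts}) == 2 ** len(Y)

def vc_dimension(concepts, domain):
    d = 0
    for k in range(1, len(domain) + 1):
        if any(is_shattered(concepts, Y) for Y in combinations(domain, k)):
            d = k           # some set of size k is shattered
        else:
            break           # subsets of shattered sets are shattered, so we may stop
    return d

X = range(1, 6)
thresholds = [frozenset(x for x in X if x >= t) for t in range(1, 7)]   # c_t(x) = 1 iff x >= t
intervals = [frozenset(x for x in X if a <= x <= b) for a in X for b in X]
print(vc_dimension(thresholds, list(X)), vc_dimension(intervals, list(X)))   # prints 1 and 2
```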
Compression
-----------
Littlestone and Warmuth [@littleWarm] defined sample compression schemes as a natural abstraction that captures a common property of many learning procedures, like procedures for learning geometric shapes or algebraic structures (see also [@DBLP:conf/colt/Floyd89; @DBLP:journals/ml/FloydW95]).
#### Definition.
A sample compression scheme takes a long list of samples and compresses it to a short sub-list of samples in a way that allows to invert the compression. Formally, a sample compression scheme for $C$ with kernel size $k$ and side information $I$, where $I$ is a finite set, consists of two maps $\kappa,\rho$ for which the following hold:
(${\kappa}$)
: The [*compression map*]{} $$\kappa: L_C(\infty) \to L_C(k) \times I$$ takes $(Y,y)$ to $((Z,z),i)$ with $Z \subseteq Y$ and $z = y|_Z$.
($\rho$)
: The [*reconstruction map*]{} $$\rho : L_C(k) \times I \to \{0,1\}^X$$ is so that for all $(Y,y)$ in $L_C(\infty)$, $$\rho(\kappa(Y,y))|_Y = y.$$
The size of the scheme is[^4] $k + \log (|I|)$. In the language of coding theory, the side information $I$ can be thought of as list decoding; the map $\rho$ has a short list of possible reconstructions of a given $(Z,z)$, and the information $i \in I$ indicates which element in the list is the correct one. See [@DBLP:conf/colt/Floyd89; @DBLP:journals/ml/FloydW95; @MSWY15] for more discussions of this definition, and some insightful examples.
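As a toy illustration of this definition (our own sketch; the function names are hypothetical), thresholds on the real line, $c_t(x)=1$ if and only if $x\geq t$, admit a sample compression scheme of kernel size $1$ with no side information: the label of the single retained example tells the reconstruction map which rule to apply.

```python
def kappa(sample):
    """Compress a labeled sample (list of (x, y) pairs) consistent with some threshold."""
    positives = [x for x, y in sample if y == 1]
    if positives:
        return (min(positives), 1)              # keep the leftmost positive example
    return (max(x for x, _ in sample), 0)       # otherwise keep the rightmost (negative) example

def rho(kept):
    """Reconstruct a hypothesis from the single retained labeled example."""
    x0, y0 = kept
    return (lambda x: int(x >= x0)) if y0 == 1 else (lambda x: int(x > x0))

sample = [(0.2, 0), (1.5, 1), (0.9, 0), (2.4, 1)]   # consistent with the threshold t = 1.0
h = rho(kappa(sample))
assert all(h(x) == y for x, y in sample)            # the reconstruction recovers all labels
```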
#### Motivation and background.
Littlestone and Warmuth showed that every compression scheme yields a natural learning procedure: Given a labeled sample $(Y,y)$, the learner compresses it to $\kappa(Y,y)$ and outputs the hypothesis $h = \rho(\kappa(Y,y))$. They proved that this is indeed a PAC learner.
\[thm:LWPAC\] Let $C \subseteq \{0,1\}^X$, and let $\kappa,\rho$ be a sample compression scheme for $C$ of size $k$. Let $d \geq 8 \big(k\log(2/{\epsilon})+\log(1/\delta)\big)/{\epsilon}$. Then, the learning map $H: L_C(d) \to \{0,1\}^X$ defined by $H(Y,y) = \rho(\kappa(Y,y))$ is PAC learning $C$ with $d$ samples, generalization error ${\epsilon}$ and success probability $1-\delta$.
Let $\mu$ be a distribution on $X$, and $x_1,\ldots,x_d$ be $d$ independent samples from $\mu$. There are $\sum_{j=0}^{k} {d \choose j}$ subsets $T$ of $[d]$ of size at most $k$. There are $|I|$ choices for information $i \in I$. Every fixing of $T,i$ yields a random function $h_{T,i}
= \rho((T,c|_T),i)$ that is measurable with respect to $x_T = (x_t : t \in T)$. The random function $h_{T,i}$ is independent of $x_{[d]-T}$. For every fixed $T,i,x_T$, therefore, if $\mu(\{x \in X: h_{T,i}(x) \neq c(x)\}) > {\epsilon}$ then the probability that $h_{T,i}$ agrees with $c$ on all samples in $[d]-T$ is less than $(1-{\epsilon})^{d-|T|}$. The function $h$ is one of the functions in the random set $\{h_{T,i} : |T|\leq k,i\in I\}$, and it satisfies $h|_Y = c|_Y$. The union bound completes the proof.
Littlestone and Warmuth also asked whether the other direction holds: [*“Are there concept classes with finite dimension for which there is no scheme with bounded kernel size and bounded additional information?”*]{}
Further motivation for considering compression schemes comes from the problem of boosting a weak learner to a strong learner. Boosting is a central theme in learning theory that was initiated by Kearns and Valiant [@Kearns88; @DBLP:conf/stoc/KearnsV89]. The boosting question, roughly speaking, is: given a learning algorithm with generalization error $0.49$, can we use it to get an algorithm with generalization error ${\epsilon}$ of our choice? Theorem \[thm:LWPAC\] implies that if the learning algorithm yields a sample compression scheme, then boosting follows with a multiplicative overhead of roughly $1/{\epsilon}$ in the sample size. In other words, efficient compression schemes immediately yield boosting.
Schapire [@DBLP:journals/ml/Schapire90] and later on Freund [@DBLP:journals/iandc/Freund95] solved the boosting problem, and showed how to efficiently boost the generalization error of PAC learners. They showed that if $C$ is PAC learnable with $d$ samples and generalization error $0.49$, then $C$ is PAC learnable with $O(d \log^2(d/{\epsilon}) /{\epsilon})$ samples and generalization error ${\epsilon}$ (see e.g. Corollary 3.3 in [@DBLP:journals/iandc/Freund95]). Interestingly, their boosting is based on a weak type of compression. They showed how to compress a sample of size $m$ to a sample of size roughly $d \log m$, and that such compression already implies boosting (see Section \[sec:LiC\] below for more details).
Additional motivation for studying sample compression schemes relates to feature selection, which is about identifying meaningful features of the underlying domain that are sufficient for learning purposes (see e.g. [@DBLP:journals/jmlr/GuyonE03]). The existence of efficient compression schemes, loosely speaking, shows that in any arbitrarily large data set there is a small set of features that already contains all the relevant information. More concretely, a construction of an efficient compression scheme provides tools that may be helpful for feature selection.
#### Previous constructions.
Littlestone and Warmuth’s question and variants of it lead to a rich body of work that revealed profound properties of VC dimension and learning. Floyd and Warmuth [@DBLP:conf/colt/Floyd89; @DBLP:journals/ml/FloydW95] constructed sample compression schemes of size $\log |C|$ for every finite concept class $C$. They also constructed optimal compression schemes of size $d$ for maximum classes[^5] of VC dimension $d$, as a first step towards solving the general question. As the study of sample compression schemes deepened, many insightful and optimal schemes for special cases have been constructed: Floyd [@DBLP:conf/colt/Floyd89], Helmbold et al. [@DBLP:journals/siamcomp/HelmboldSW92], Floyd and Warmuth [@DBLP:journals/ml/FloydW95], Ben-David and Litman [@DBLP:journals/dam/Ben-DavidL98], Chernikov and Simon [@chernikovS], Kuzmin and Warmuth [@DBLP:journals/jmlr/KuzminW07], Rubinstein et al. [@DBLP:journals/jcss/RubinsteinBR09], Rubinstein and Rubinstein [@DBLP:journals/jmlr/RubinsteinR12], Livni and Simon [@DBLP:conf/colt/LivniS13] and more. These works discovered and utilized connections between sample compression schemes, and model theory, topology, combinatorics, and geometry. Finally, in our recent work with Shpilka and Wigderson [@MSWY15], we constructed sample compression schemes of size roughly $2^{O(d)} \cdot \log \log|C|$ for every finite concept class $C$ of VC dimension $d$.
Our contribution {#sec:LiC}
----------------
Our main theorem states that VC classes have sample compression schemes of finite size. The key property of this compression is that its size does not depend on the size of the given sample $(Y,y)$.
\[thm:compressionVC\] If $C\subseteq \{0,1\}^X$ has VC dimension $d$, then $C$ has a sample compression scheme of size $2^{O(d)}$.
Our construction (see Section \[sec:const\]) of sample compression schemes is overall quite short and simple. It is inspired by Freund’s work [@DBLP:journals/iandc/Freund95] where majority is used to boost the accuracy of learning procedures. It also uses several known properties of PAC learnability and VC dimension, together with von Neumann’s minimax theorem, and it reveals approximate but efficient equilibrium strategies for zero-sum games of low VC dimension (see Section \[sec:press\] below).
The construction is even more efficient when the dual class is also under control. The dual concept class $C^*\subseteq \{0,1\}^C$ of $C$ is defined as the set of all functions $f_x:C\rightarrow \{0,1\}$ defined by $f_x(c) = c(x)$. If we think of $C$ as a binary matrix whose rows are concepts in $C$ and columns are elements of $X$, then $C^*$ corresponds to the distinct rows of the transposed matrix.
\[thm:mainVC\*\] If $C\subseteq \{0,1\}^X$ has VC dimension $d >0$ and $C^*$ has VC dimension $d^*>0$, then $C$ has a sample compression scheme of size $k \log k$ with $k = O(d^* \cdot d)$.
Theorem \[thm:compressionVC\] follows from Theorem \[thm:mainVC\*\] via the following bound, which was observed by Assouad [@Assouad].
\[clm:assou\] If ${\text{VC}}(C) \leq d$, then ${\text{VC}}(C^*) < 2^{d+1}$.
A natural example for which the dual class is well behaved is geometrically defined classes. Assume, for example, that $C$ represents the incidence relation among halfspaces and points in $r$-dimensional real space (a.k.a. sign rank or Dudley dimension $r$). That is, for every $c \in C$ there is a vector $a_c \in {\mathbb R}^r$ and for every $x \in X$ there is a vector $b_x \in {\mathbb R}^r$ so that $c(x) = 1$ if and only if the inner product $\langle a_c, b_x \rangle = \sum_{j=1}^r a_c(j) b_x(j)$ is positive. It follows that ${\text{VC}}(C) \leq r$, but the symmetric structure also implies that ${\text{VC}}(C^*) \leq r$. So, the compression scheme constructed here for this $C$ actually has size $O(r^2 \log r)$ and not $2^{O(r)}$.
#### Proof background and overview.
Freund [@DBLP:journals/iandc/Freund95] and later on Freund and Schapire [@DBLP:journals/jcss/FreundS97] showed that for every class $C$ that is PAC learnable with $d$ samples, there exists a compression scheme that compresses a $C$-labeled sample $(Y,y)$ of size $m$ to a sub-sample of size $k = O(d\log m)$ with additional information of $k \log k$ bits (for a more detailed discussion, see Sections 1.2 and 13.1.5 in [@schapire2012boosting]). Their constructive proof is iterative: In each iteration $t$, a distribution $\mu_t$ on $Y$ is carefully and adaptively chosen. Then, $d$ independent points from $Y$ are drawn according to $\mu_t$, and fed into the learning map to produce an hypothesis $h_t$. They showed that after $T=O(\log(1/{\epsilon}))$ iterations, the majority vote $h$ over $h_1,\ldots,h_T$ is an ${\epsilon}$-approximation of $y$ with respect to the uniform measure on $Y$. In particular, if we choose ${\epsilon}< 1/m$, then $h$ completely agrees with $y$ on $Y$. This makes $T = O(\log m)$ and gives a sample compression scheme from a sample of size $m$ to a sub-sample of size $d \cdot T = O(d\log m)$.
The size of Freund and Schapire’s compression scheme is not uniformly bounded, it depends on $|Y|$. A first step towards removing this dependence is observing that their proof can be replaced by a combination of von Neumann’s minimax theorem and a Chernoff bound. In this argument, the $\log m$ factor eventually comes from a union bound over the $m$ samples. The compression scheme presented in this text replaces the union bound with a more accurate analysis that utilizes the VC dimension of the dual class. This analysis ultimately replaces the $\log m$ factor by a $d^*$ factor.
Preliminaries {#sec:press}
=============
#### Approximations.
The following theorem shows that every distribution can be approximated by a distribution of small support, when the statistical tests belong to a class of small VC dimension. This phenomenon was first proved by Vapnik and Chervonenkis [@zbMATH03391742], and was later quantitively improved in [@Li:2000:IBS:338219.338267; @zbMATH00567173].
\[thm:VC\] Let $C\subseteq\{0,1\}^X$ be a class of VC dimension $d$. Let $\mu$ be a distribution on $X$. For all ${\epsilon}>0$, there exists a multiset $Y\subseteq X$ of size $|Y|\leq O(d/{\epsilon}^2)$ such that for all $c\in C$, $$\left| \mu(\{x \in X : c(x) =1 \}) - \frac{|\{x \in Y : c(x) =1 \}|}{|Y|} \right| \leq{\epsilon}.$$
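While the theorem asserts the existence of a suitable multiset $Y$, an i.i.d. sample from $\mu$ already achieves this with high probability; the following small synthetic experiment (ours; the random class below has no controlled VC dimension and serves only to illustrate the shrinking discrepancy) makes the statement tangible:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 60
C = rng.integers(0, 2, size=(m, n))      # rows are concepts c, columns are points of X
mu = rng.random(n); mu /= mu.sum()       # a distribution mu on X

true = C @ mu                            # mu({x : c(x) = 1}) for every concept c
for k in (50, 500, 5000):
    Y = rng.choice(n, size=k, p=mu)      # a multiset Y of k i.i.d. samples from mu
    emp = C[:, Y].mean(axis=1)           # empirical frequencies over Y
    print(k, np.abs(true - emp).max())   # the worst-case deviation shrinks as k grows
```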
#### Carathéodory’s theorem.
The following simple lemma can be thought of as an approximate and combinatorial version of Carathéodory’s theorem from convex geometry. Let $C\subseteq\{0,1\}^n \subset {\mathbb R}^n$ and denote by $K$ the convex hull of $C$ in ${\mathbb R}^n$. Carathéodory’s theorem says that every point $p\in K$ is a convex combination of at most $n+1$ points from $C$. The lemma says that if ${\text{VC}}(C^*)$ is small then every $p \in K$ can be approximated by a convex combination with a small support.
\[lem:VCsample\] Let $C \subseteq \{0,1\}^X$ and let $d^* = {\text{VC}}(C^*)$. Let $p$ be a distribution on $C$ and let ${\epsilon}> 0$. Then, $p$ can be ${\epsilon}$-approximated in $L^\infty$ by an average of at most $O(d^*/{\epsilon}^2)$ points from $C$. That is, there is a multiset $F \subseteq C$ of size $|F| \leq O(d^*/{\epsilon}^2)$ so that for every $x \in X$, $$\left| p ( \{ c \in C : c(x) = 1\}) - \frac{|\{ f \in F : f(x) = 1 \}|}{|F|} \right|
\leq {\epsilon}.$$
Every $x \in X$ corresponds to a concept in $C^*$. The distribution $p$ is a distribution on the domain of the functions in $C^*$. The lemma follows by Theorem \[thm:VC\] applied to $C^*$.
#### Minimax.
Von Neumann’s minimax theorem [@Neumann1928] is a seminal result in game theory (see e.g. the textbook [@owen1995game]). Assume that there are 2 players[^6], a row player and a column player. A pure strategy of the row player is $r \in [m]$ and a pure strategy of the column player is $j \in [n]$. A mixed strategy is a distribution on pure strategies. Let $M$ be a binary matrix so that $M(r,j) = 1$ if and only if the row player wins the game when the pure strategies $r,j$ are played.
The minimax theorem says that if for every mixed strategy $q$ of the column player, there is a mixed strategy $p$ of the row player that guarantees that the row player wins with probability at least $V$, then there is a mixed strategy $p^*$ of the row player so that for all mixed strategies $q$ of the column player, the row player wins with probability at least $V$. A similar statement holds for the column player. This implies that there is a pair of mixed strategies $p^*,q^*$ that form a Nash equilibrium for the zero-sum game $M$ defines (see [@owen1995game]).
\[thm:minmax\] Let $M\in\mathbb{R}^{m\times n}$ be a real matrix. Then, $$\min_{p\in \Delta^m}\max_{q\in \Delta^n} \ p^tMq =
\max_{q\in \Delta^n}\min_{p\in\Delta^m} \ p^tMq,$$ where $\Delta^\ell$ is the set of distributions on $[\ell]$.
The arguments in the proof of Theorem \[thm:mainVC\*\] below imply the following variant of the minimax theorem, which may be of interest in the context of game theory. The minimax theorem holds for a general matrix $M$. In other words, there is no assumption on the set of winning/losing states in the game.
We observe that a combinatorial restriction on the winning/losing states in the game implies that there is an approximate efficient equilibrium state. Namely, if the rows of $M$ have VC dimension $d$ and the columns of $M$ have VC dimension $d^*$, then for every ${\epsilon}>0$, there is a multiset of $O(d^*/{\epsilon}^2)$ pure strategies $R \subseteq [m]$ for the row player, and a multiset of $O(d/{\epsilon}^2)$ pure strategies $J \subseteq [n]$ for the column player, so that a uniformly random choice from $R,J$ guarantees the players a gain that is ${\epsilon}$-close to the gain in the equilibrium strategy. Such a pair of mixed strategies is called an ${\epsilon}$-Nash equilibrium. Lipton and Young [@DBLP:journals/corr/cs-CC-0205035] showed that in every zero-sum game there are ${\epsilon}$-Nash equilibriums with logarithmic support[^7]. The ideas presented here show that if, say, the rows of the matrix of the game have constant VC dimension, then there are ${\epsilon}$-Nash equilibriums with constant support.
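This phenomenon is easy to probe numerically: compute an optimal mixed strategy for the row player by linear programming, then replace it by the uniform distribution on a small i.i.d. sample of pure strategies. The sketch below is ours; since it uses a generic random matrix, the support size it needs reflects the generic $\log(n)/{\epsilon}^2$ behaviour of [@DBLP:journals/corr/cs-CC-0205035] rather than the VC-dimension bound discussed here.

```python
import numpy as np
from scipy.optimize import linprog

def row_optimal(M):
    """An optimal mixed strategy p* and the value V of the zero-sum game M
    (the row player gains M(r, j)): maximize V s.t. (M^T p)_j >= V, p a distribution."""
    m, n = M.shape
    c = np.zeros(m + 1); c[-1] = -1.0                        # variables (p, V), minimize -V
    A_ub = np.hstack([-M.T, np.ones((n, 1))])                # V - (M^T p)_j <= 0 for every column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])    # sum_i p_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

rng = np.random.default_rng(1)
M = rng.integers(0, 2, size=(200, 200)).astype(float)        # a synthetic 0/1 game matrix
p_star, V = row_optimal(M)
p_star = np.clip(p_star, 0, None); p_star /= p_star.sum()    # guard against LP round-off

T = 400                                                      # support size of the approximation
R = rng.choice(len(p_star), size=T, p=p_star)                # T pure strategies drawn from p*
p_hat = np.bincount(R, minlength=len(p_star)) / T            # uniform mixture on the multiset R
print(V, (M.T @ p_hat).min())                                # the guaranteed gain barely degrades
```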
A sample compression scheme {#sec:const}
===========================
We start with a high level description of the compression process (Theorem \[thm:mainVC\*\]). Given a sample of the form $(Y,y)$, the compression identifies $T \leq O(d^*)$ subsets $Z_1,\ldots,Z_T$ of $Y$, each of size at most $d$. It then compresses $(Y,y)$ to $(Z,z)$ with $Z = \bigcup_{t \in [T]} Z_t$ and $z = y|_Z$. The additional information $i \in I$ allows to recover $Z_1,\ldots,Z_T$ from $Z$. The reconstruction process uses the information $i \in I$ to recover $Z_1,\ldots,Z_T$ from $Z$, and then uses the PAC learning map $H$ to generate $T$ hypotheses $h_1,\ldots,h_T$ defined as $h_t = H(Z_t,z|_{Z_t})$. The final reconstruction hypothesis $h = \rho((Z,z),i)$ is the majority vote over $h_1,\ldots,h_T$.
Since the VC dimension of $C$ is $d$, by Theorem \[thm:BlumerPAC\], there is $s=O(d)$ and a proper learning map $H:L_C(s) \to C$ so that for every $c \in C$ and for every probability distribution $q$ on $X$, there is $Z \subseteq \text{supp}(q)$ of size $|Z| \leq s$ so that $q(\{x \in X : h_Z(x) \neq c(x)\}) \leq 1/3$ where $h_Z = H(Z,c|_Z)$.
#### Compression.
Let $(Y,y)\in L_C(\infty)$. Let $${\cal H} = {\cal H}_{Y,y} = \{H(Z,z) : Z\subseteq Y, |Z|\leq s, z=y|_Z\}
\subseteq C.$$ The compression is based on the following claim.
\[clm:ThereIsF\] There are $T \leq O(d^*)$ sets $Z_1,Z_2,\ldots,Z_T \subseteq Y$, each of size at most $s$, so that the following holds. For $t \in [T]$, let $$\begin{aligned}
\label{eqn:Ft}
f_t = H(Z_t,y|_{Z_t}).\end{aligned}$$ Then, for every $x \in Y$, $$\begin{aligned}
\label{eqn:MajOfF}
|\{ t \in [T] : f_t(x) = y(x)\}| > T/2 .\end{aligned}$$
Given the claim, the compression $\kappa(Y,y)$ is defined as $$Z = \bigcup_{t \in [T]} Z_t \ \ \text{and} \ \ z = y|_Z.$$ The additional information $i \in I$ allows one to recover the sets $Z_1,\ldots,Z_T$ from the set $Z$. There are many possible ways to encode this information, but the size of $I$ can be chosen to be at most $k^{k}$ with $k = 1+ O(d^*) \cdot s \leq O(d^*\cdot d)$.
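One concrete (and by no means canonical) way to encode this information, sketched below in Python under the assumption that domain points can be sorted, is to record for each $Z_t$ the positions of its elements inside the sorted compression set $Z$; since fewer than $k$ position indices are recorded and each index is smaller than $k$, such an encoding indeed fits into a set $I$ of size at most $k^k$.

```python
def encode_subsets(Z, subsets):
    # The side information i: for each Z_t, the positions of its elements
    # within the sorted compression set Z.
    position = {x: k for k, x in enumerate(sorted(Z))}
    return [tuple(sorted(position[x] for x in Zt)) for Zt in subsets]

def decode_subsets(Z, i):
    # Inverse map, used by the reconstruction function.
    points = sorted(Z)
    return [{points[k] for k in positions} for positions in i]
```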
By choice of $H$, for every distribution $q$ on $Y$, there is $h \in {\cal H}$ so that $$q\left(\{x \in Y : h(x) = y(x) \} \right)\geq 2/3.$$ By Theorem \[thm:minmax\], there is a distribution $p$ on ${\cal H}$ such that for every $x\in Y$, $$\begin{aligned}
p(\{h \in {\cal H} : h(x) = y(x)\}) \geq 2/3.\end{aligned}$$ By Lemma \[lem:VCsample\] applied to ${\cal H}$ and $p$ with ${\epsilon}=1/8$, there is a multiset $F = \{f_1,f_2,\ldots,f_T\} \subseteq {\cal H}$ of size $T \leq O(d^*)$ so that for every $x \in Y$, $$\begin{aligned}
\frac{|\{ t \in [T] : f_t(x) = y(x) \}|}{T}
\geq p(\{h \in {\cal H} : h(x) = y(x) \}) - 1/8 > 1/2.\end{aligned}$$ For every $t \in [T]$, let $Z_t$ be a subset of $Y$ of size $|Z_t| \leq s$ so that $$\begin{aligned}
H(Z_t,y|_{Z_t}) = f_t.\end{aligned}$$
#### Reconstruction.
Given $((Z,z),i)$, the information $i$ is interpreted as a list of $T$ subsets $Z_1,\ldots,Z_T$ of $Z$, each of size at most $s$. For $t \in [T]$, let $$h_t = H(Z_t,z|_{Z_t}).$$ Define $h = \rho((Z,z),i)$ as follows: For every $x \in X$, let $h(x)$ be a symbol in $\{0,1\}$ that appears most often in the list $$\lambda_x((Z,z),i) = (h_1(x),h_2(x),\ldots,h_T(x)),$$ where ties are arbitrarily broken.
#### Correctness.
Fix $(Y,y) \in L_C(\infty)$. Let $((Z,z),i) = \kappa(Y,y)$ and $h = \rho((Z,z),i)$. For $x \in Y$, consider the list $$\phi_x(Y,y) = (f_1(x),f_2(x),\ldots,f_T(x))$$ defined in the compression process of $(Y,y)$. The list $\phi_x(Y,y)$ is identical to the list $\lambda_x((Z,z),i)$ due to the following three reasons: Equation (\[eqn:Ft\]); the information $i$ allows one to correctly recover $Z_1,\ldots,Z_T$; and $y|_{Z_t} = z|_{Z_t}$ for all $t \in [T]$. Finally, by (\[eqn:MajOfF\]), for every $x \in Y$, the symbol $y(x)$ appears in more than half of the list $\lambda_x((Z,z),i)$, so indeed $h(x) = y(x)$.
Concluding remarks and questions
================================
We have shown that every VC class admits a sample compression scheme with size exponential in its VC dimension. This is the first bound that depends only on the VC dimension, and holds for all binary-labeled classes. It is worth noting that many of the known compression schemes for special cases, like [@DBLP:journals/ml/FloydW95; @DBLP:journals/dam/Ben-DavidL98; @DBLP:journals/jmlr/KuzminW07; @DBLP:journals/jmlr/RubinsteinR12; @DBLP:conf/colt/LivniS13], have size $d$ or $O(d)$ which is essentially optimal. In many of these cases, our construction is in fact of size polynomial in $d$, since the VC dimension of the dual class is small as well. Nevertheless, Floyd and Warmuth’s question [@DBLP:journals/ml/FloydW95; @DBLP:conf/colt/Warmuth03] whether sample compression schemes of size $O(d)$ always exist remains open.
#### Multi-labeled classes.
Unlike VC dimension, sample compression schemes as well as the fact that they imply PAC learnability naturally generalize to multi-labeled concept classes (see e.g. [@DBLP:conf/alt/SameiYZ14]). Littlestone and Warmuth’s question is therefore an instance of a more general question: Does the size of an optimal sample compression scheme for a given class capture the sample complexity of PAC learning of this class? A positive answer to this question will yield a universal and natural parameter that captures the sample complexity of PAC learning.
There are many generalizations of VC dimension to multi-labeled concept classes $C\subseteq \Sigma^X$, see [@BenDavid95] and references therein. An example that naturally comes up in our analysis is the distinguishing dimension ${\text{DD}}(C)$: For every $c \in C$, define a binary concept class $B_c \subseteq \{0,1\}^X$ as the set of all $b_h$, for $h \in C$, defined by $b_h(x) = 1$ if and only if $h(x) = c(x)$. Define $${\text{DD}}(C) = \sup \{ {\text{VC}}(B_{c}) : c \in C \}.$$ If $C$ is binary then ${\text{VC}}(C) = {\text{DD}}(C)$. This definition of dimension is similar to notions used in [@Natarajan89; @Dudley87; @BenDavid95]. It can be verified that if $C$ is multi-labeled then our compression scheme for $C$ has size exponential in ${\text{DD}}(C)$. However, although $\Omega({\text{VC}}(C))$ is a lower bound on the sample complexity of PAC learning for a binary-labeled $C$, the distinguishing dimension ${\text{DD}}(C)$ is not a lower bound on the sample complexity of PAC learning for a multi-labeled $C$. Indeed, an example constructed by Danieli and Shalev-Schwartz [@DBLP:conf/colt/DanielyS14] implies that there is a concept class $C \subseteq \Sigma^X$ that is properly PAC learnable with $O(1)$ samples but ${\text{DD}}(C) \geq \Omega(\log|\Sigma|)$.
#### Learners’ complexity.
The efficiency of our construction relies on the fact that every binary-labeled concept class $C$ has a proper learner with optimal sample complexity. A closer look at the proof reveals that it is valid even if the learner is not proper; it suffices that the set of hypotheses produced by the learner has low VC dimension.
This motivates the following natural question: Is it true that for every learning map $H$ for $C \subseteq \{0,1\}^X$ with ${\text{VC}}(C)=d$ and for every $c \in C$, the set of hypotheses that $H$ outputs when learning $c$ has VC dimension $O(d)$ as well?
The answer is negative; some students learn although they make things more complicated than necessary. Here is an example. Let $n$ be a power of $2$, and consider the concept class $C = \{(00\ldots 0)\} \subset \{0,1\}^{X}$ with $X=[n+3 \log n]$ consisting only of the all zero concept. The learning map $H$ gets as input a labeled sample $(Y,y) \in L_C(3)$ of size $3$, and outputs the following hypothesis $h$. If $Y \not \subseteq [n]$ then $h$ is defined to be $0$ everywhere. Otherwise, $h$ is defined as $0$ on $[n]$ and on the last $3 \log n$ coordinates $h$ is defined as $\psi(Y)$, where $\psi$ is a bijection from $[n]^3$ to $\{0,1\}^{[3 \log n]}$. First, the image of $H$ has VC dimension $3 \log n$ since the last $3 \log n$ coordinates are shattered by it. Second, the map $H$ is a PAC learner for $C$. Indeed, let $\mu$ be a distribution on $X$. If $\mu([n]) \geq 2/3$ then the error of $h$ is always at most $1/3$. If $\mu([n]) < 2/3$ then the only case in which $h$ has positive error is when $Y \subseteq [n]$, which happens with probability less than $(2/3)^3 < 1/3$.
A variation of the question above is: Does every multi-labeled class $C$ have a learner $H$ that uses a nearly optimal number of samples and whose image is not much more complicated than $C$?
The answer for binary-labeled classes is affirmative; $C$ has a nearly optimal proper learner. Danieli and Shalev-Schwartz [@DBLP:conf/colt/DanielyS14] showed that there are multi-labeled concept classes that are PAC learnable with $O(1)$ samples but are not properly PAC learnable with $O(1)$ samples. In their example, however, the image of $H$ has just one more concept than $C$. This question therefore remains open.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Amir Shpilka and Avi Wigderson for helpful discussions. We also thank Ben Lee Volk and Manfred Warmuth for comments on an earlier version of this text.
[^1]: Departments of Computer Science, Technion-IIT, Israel and Max Planck Institute for Informatics, Saarbrücken, Germany. [[email protected].]{} Research is supported by ISF and BSF.
[^2]: Department of Mathematics, Technion-IIT, Israel. [[email protected].]{} Horev fellow – supported by the Taub foundation. Research is also supported by ISF and BSF.
[^3]: Big $O$ and $\Omega$ notation means up to absolute constants.
[^4]: Logarithms in this text are base $2$.
[^5]: That is, $C \subseteq \{0,1\}^X$ of size $|C| = \sum_{j=0}^d {|X| \choose j}$ with $d = {\text{VC}}(C)$.
[^6]: We focus on the case of zero-sum games.
[^7]: Lipton, Markakis and Mehta [@Lipton:2003:PLG:779928.779933] proved a similar statement for general games.
---
abstract: |
In this article, a large data set containing every course taken by every undergraduate student in a major university in Canada over 10 years is analysed. Modern machine learning algorithms can use large data sets to build useful tools for the data provider, in this case, the university. In this article, two classifiers are constructed using random forests. To begin, the first two semesters of courses completed by a student are used to predict if they will obtain an undergraduate degree. Secondly, for the students that completed a program, their major is predicted using once again the first few courses they have registered to. A classification tree is an intuitive and powerful classifier and building a random forest of trees improves this classifier. Random forests also allow for reliable variable importance measurements. These measures explain what variables are useful to the classifiers and can be used to better understand what is statistically related to the students’ situation. The results are two accurate classifiers and a variable importance analysis that provides useful information to university administrations.
**Keywords** : Higher Education, Student Retention, Academic Success, Machine Learning, Classification Tree, Random Forest, Variable Importance
author:
- Cédric Beaulac
- 'Jeffrey S. Rosenthal'
bibliography:
- 'mybibfile.bib'
title: 'Predicting University Students’ Academic Success and Major using Random Forests'
---
Introduction {#intro}
============
Being able to predict if a student is at risk of not completing their program is valuable for universities that would like to intervene and help those students move forward. Predicting the major that will be completed by students is also important in order to understand as soon as possible which programs attract more students and to allocate resources accordingly. Since gathering data can be an expensive procedure, it would be useful to be able to predict both of these things using data the university already possesses, such as student records. Understanding which variables are useful in both of these predictions is important, as it might help understand what drives students to take specific classes.
Formally, these two prediction problems are classification ones. To solve these, a popular machine learning algorithm is used, a classification tree. A classification tree is an easy to interpret classification procedure that naturally allows interactions of high degree across predictors. The classification tree uses the first few courses attempted and grades obtained by students in order to classify them. To improve this classifier, multiple trees are grown and the result is a random forest. A random forest can also be used to assess variable importance in a reliable manner.
The University of Toronto provided a large data set containing individual-level student grades for all undergraduate students enrolled at the Faculty of Arts and Science at the University of Toronto - St. George campus between 2000 and 2010. The data set contains over 1 600 000 grades and over 65 000 students. This data set was studied by Bailey et al. [-@Bailey16] and was used to build an adjusted GPA that considers course difficulty levels. Here, random forest classifiers are built upon this data set and these classifiers are later tested.
The contribution in this article is two-fold. First, classifiers are built and the prediction accuracy of those classifiers exceeds the accuracy of the linear classifiers thus making them useful for universities that would like to predict where their resources need to be allocated. Second, the variable importance analysis contains a lot of interesting information. Among many things, the high importance of grades in low-grading departments was noted and might be a symptom of grade inflation.
Literature review {#sec:2}
=================
Predicting success
------------------
In this article a statistical learning model is established to predict if a student succeeds at completing an undergraduate program and to predict what major was completed. This statistical analysis of a higher education data set shares similarities with recent articles by Chen and Desjardins [-@Chen08; -@Chen10] and Leeds and DesJardins [-@Leeds15] as a new statistical approach will be introduced, a data set will be presented and policy making implications will be discussed. The task of predicting student academic success has already been undertaken by many researchers. Recently Kappe and van des Flier [-@Kappe12] tried to predict academic success using personality traits. In the meanwhile, Glaesser and Cooper [-@Glaesser12] were interested in the role of parents’ education, gender and other socio-economic metrics in predicting high school success.
While the articles mentioned above use socio-economic status and personality traits to predict academic success, many researchers are looking at academic-related metrics to predict graduation rates. Johnson and Stage [-@Johnson18] use High-Impact Practices, such as undergraduate research, freshman seminars, internships and collaborative assignments to predict academic success. Using regression models, they noted that freshman seminars and internships were significant predictors. Niessen et al. [-@Niessen16] discuss the significance of a trial-studying test in predicting student dropouts. This test was designed to simulate a representative first-year course and students would take it before admission. The authors noted that this test was consistently the best academic achievement predictor.
More recently, Aulck et al. [-@Aulck16] used various machine learning methods to analyse a rather large data set containing both socio-economic and academic metrics to predict dropouts. They noted similar performances for the three methods compared: logistic regression, k-nearest neighbours and random forests. The proposed analysis differs from the above-mentioned as it takes on the challenge to predict academic success and major using strictly academic information available in student records. The benefits of having classifiers built upon data they already own are huge for university administrations. It means universities would not need to force students to take entry tests or rely on outside firms in order to predict success rates and majors, which is useful in order to prevent dropout or to allocate resources among departments. As noted by Aulck et al. [-@Aulck16], machine learning analysis of academic data has potential and the use of random forests in the following article aims at exploiting this potential.
Identifying important predictors {#gi}
--------------------------------
Identifying and interpreting the variables that are useful to those predictions are important problems as well. It can provide university administrators with interesting information. The precise effect of grades on student motivation has led to many debates and publications over the years (more recently [@Mills00; @Ost10]). Because grades should be indicators of a student’s abilities, evaluating the predictive power of grades in various departments is important. University administrators might want to know if grades in a department are better predictors than grades in other departments. Continuing on the point, it is also important to understand what makes the evaluations in a department a better indicator of students’ success. Random forest mechanisms lead to variable importance assessment techniques that will be useful to understand the predictive power of grade variables.
Understanding the importance ranking of grades in various departments can also enlighten us regarding the phenomenon of *grade inflation*. This problem and some of its effects have already been discussed in many papers ([@Sabot91; @Johnson03; @Bar09]) and it is consensual that this inflation differs from one department to another. According to Sabot and Wakeman-Linn [-@Sabot91], this is problematic since grades serve as incentives for students’ course choices and those incentives are now distorted by the grade inflation. As a consequence of the different growths in grades, they noted that in many universities there exists a chasm in grading policies, creating high-grading departments and low-grading departments. Economics, Chemistry and Mathematics are examples of low-grading departments while English, Philosophy and Political Science are considered high-grading.
As Johnson mentions [@Johnson03], students are aware of these differences in grading, openly discuss them and this may affect the courses they select. This inconsistency in course difficulty is also considered by Bailey et al. [-@Bailey16] as they built an adjusted GPA that considers course difficulty levels. The accuracy of that adjusted GPA in predicting uniform test results is a great demonstration that courses do vary in difficulty. If some departments suffer from grade inflation, the grades assigned in those departments should be less tied to the actual student ability and therefore they should be less predictive of student success. A thorough variable importance analysis will be performed in order to test this assumption.
Understanding which predictors are important can also provide university administrators with feedback. For example, some of the High-Impact Practices identified by Randall Johnson and King Stage [-@Johnson18] are part of the University of Toronto’s program. The variable importance analysis could be a useful tool to assess the effect of such practices.
Methodology {#sec:3}
===========
Data
----
The data set provided by the University of Toronto contains 1 656 977 data points, where each observation represents the grade of one student in one course. A data point is a 7-dimensional observation containing the student ID, the course title, the department of the course, the semester, the credit value of the course and finally the numerical grade obtained by the student. As this is the only data obtained, some pre-processing is required in order for algorithms to be trained. The [**first research question**]{} is whether it is possible to design an algorithm which accurately predicts whether or not a student will complete their program. The [**second research question**]{} is whether it is possible to design an algorithm which accurately predicts, for students who complete their program, which major they will complete. These two predictions will be based upon first-year student records.
The data has been pre-processed for the needs of the analyses. At the University of Toronto, a student must complete 20 credits in order to obtain an Honours B.A. or B.Sc [@UofT2017]. A student must also either complete 1 Specialist, 2 Majors or 1 Major and 2 Minors. The first five credits attempted by a student roughly represent one year of courses. Therefore, for each student, every semester up to the point where the student reaches 5 attempted credits is used for prediction. It means that for some students, the predictors represent exactly 5 attempted credits and for some other students, a bit more. The set of predictors consists of the number of credits a student attempted in every department and the average grade across all courses taken by the student in each department. Since courses were taken by students in 71 different departments, the predictor vector is of length 142. Of course, many other predictors could also be computed from the data set, but these are the most appropriate ones for the purpose of the variable importance analysis.
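As an illustration only (the preprocessing code itself is not reproduced here, the toy records and column names below are hypothetical, and the restriction to the first five attempted credits is omitted), the construction of the 142-dimensional predictor vector could be sketched in Python as follows.

```python
import pandas as pd

# Hypothetical records: one row per (student, course) pair.
records = pd.DataFrame({
    "student_id": [1, 1, 1, 2],
    "department": ["MAT", "MAT", "CHM", "ENG"],
    "credit":     [1.0, 0.5, 1.0, 1.0],
    "grade":      [82, 71, 74, 68],
})

# Number of credits attempted per department ...
credits = records.pivot_table(index="student_id", columns="department",
                              values="credit", aggfunc="sum", fill_value=0)
# ... and average grade per department (missing when no course was taken).
grades = records.pivot_table(index="student_id", columns="department",
                             values="grade", aggfunc="mean")

# One row per student, one column per department code (e.g. "CHM") for the
# credits and one (e.g. "CHM G") for the average grade, as in the figures.
X = credits.join(grades, rsuffix=" G")
```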
To answer the first research question, a binary response indicating whether or not a student completed their program is needed. Students that completed 18 credits were labelled as students who completed their program. Students who registered to 5 credits worth of courses, succeeded at fewer than 18 credits worth of courses and stopped taking courses for 3 consecutive semesters are considered students who began a program but did not complete it. All other students were left out of the analysis. Since some students take classes in other faculties or universities, 18 credits was deemed a reasonable threshold. It is possible that some students did not complete their program even though they completed 18 credits, but it is more likely that they took courses in other faculties or universities. To be considered dropouts, only students who registered to at least 5 credits worth of courses were considered. It was assumed that students that registered to fewer credits were registered in another faculty, campus, university or were simply auditing students. After this pre-processing was performed, the data set contains 38 842 students of which 26 488 completed an undergraduate program and 12 294 did not.
To answer the second research question a categorical response representing the major completed by the student is required. To do so, the 26 448 students who completed a program are kept. The response will represent the major completed by the student. Since this information is not available in the data set, the department in which the student completed the largest number of credits is considered the program they majored in. Therefore, the response variable is a categorical variable that can take 71 possible values. This formatting choice might be a problem for students who completed more than 1 major. Some recommendations to fix that problem can be found in the conclusion.
Regarding the various grading policies of this university it was noticed that Mathematics, Chemistry and Economics are the three departments with the lowest average grades. As grades do vary widely across the data set there is no statistically significant difference between the departments but it is still interesting to observe that departments that were defined as low-grading departments in many papers do appear as the lowest grading departments in this data set too. Finally, the data set was divided in three parts as is it usually done. The algorithm is trained upon the training set, which contains 90% of the observations in order to learn from a large portion of the data set. 5% of the data set is assigned to the validation set which is utilized to select various optimization parameters. Finally, the rest of the data set is assigned to the test set, which is a data set totally left aside during training and later used to test the performances of the trained classifier.
Classification Tree {#sectree}
-------------------
A typical supervised statistical learning problem is defined when the relationship between a response variable and an associated set of predictors (used interchangeably with inputs) is of interest. The response is what needs prediction, such as the program completion, and the predictors, such as the grades, are used to predict the response. When the response variable is categorical, this problem is defined as a classification problem. One challenge in classification problems is to use a data set in order to construct a classifier. A classifier is built to emit a class prediction for any new observation with unknown response. In this analysis, classifiers are built upon the data set described in section \[data\] to predict if a new student will complete their program and what major will be completed, using information related to their first year of courses.
A classification tree [@Breiman84] is a model that classifies new observations based on a set of conditions related to the predictors. For example, a classification tree could predict that a student is on their way to complete a program because they attempted more than 2 Mathematics courses, obtained an average grade in Mathematics above 80 and attempted fewer than 2 Psychology courses. The set of conditions established by a decision tree partitions the space defined by the possible predictor values into multiple regions. Intuitively, a classification tree forms regions defined by some predictor values and assigns a response label to new observations that would belong in those regions. Figure \[figtree\] illustrates an example of a predictor space partition, its associated regions and its associated classification tree for observations defined by two predictors. The final set of regions can be defined as leaves in a tree as represented in Figure \[figtree\], hence the name classification trees.
Now that the model has been established, an algorithm that creates the classification tree using a training set of labelled observations needs to be defined. The algorithm creates the regions by recursively establishing the conditions. It aims at building regions that contain a high concentration of observations of the same class. Usually a measure of impurity is defined; the further the region is from containing only observations with the same label, the bigger this measure is. Intuitively, it is desired to obtain a set of conditions under which all students either completed their programs or not. Therefore, the algorithm analyses how mixed the labels are according to all possible conditions and selects the condition that minimizes the measure of impurity. For example, the algorithm will look at all conditions of the form: “did the student attempt more or fewer than 1 Mathematics course?” and select the condition that best divides students that completed a program from students that did not.
Once a condition is selected, the training observations are effectively divided into two sets of training observations based upon the condition. The process is repeatedly applied on the two resulting training sets. The algorithm divides the training observations into smaller sets until each resulting set contains few observations. When the partitioning process is completed, each region is labelled with the class representing the majority of observations respecting the conditions defining the region. A more formal definition of the algorithm is included in the appendix.
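A minimal sketch of this greedy split search, written here in Python purely for illustration (the function and variable names are ours), could look as follows; it scans every predictor and every candidate threshold and keeps the split that minimizes the total Gini impurity of the two children.

```python
import numpy as np

def gini(labels):
    # Gini impurity of a set of class labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    # Return the (predictor j, threshold s) minimizing the total impurity
    # n_1*Q_1 + n_2*Q_2 of the two resulting children sets.
    best = (None, None, np.inf)
    for j in range(X.shape[1]):
        for s in np.unique(X[:, j])[:-1]:          # candidate thresholds
            left, right = y[X[:, j] <= s], y[X[:, j] > s]
            total = len(left) * gini(left) + len(right) * gini(right)
            if total < best[2]:
                best = (j, s, total)
    return best[:2]
```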
Random Forest {#secforest}
-------------
By constructing a decision tree, a powerful and easy to interpret classifier is obtained. As will be demonstrated in this section, one way to improve this classifier is to build a set of classifiers using samples of the training set.
Suppose there is a way to obtain a set of classifiers. The goal is to find a technique that uses the entire set of classifiers to get a new classifier that is better than any of them individually. One method of aggregating the class predictions is by *voting*: the predicted class for a new observation is the most picked class among individual classifier. A critical factor in whether the aggregating procedure will improve the accuracy or not is the stability of the individual classifiers. If a small variation in the training set has almost no effect on the classifier, this classifier is said to be stable, and utilizing a set of classifiers based upon similar training sets will result in a set of almost identical classifiers. For unstable procedures, the classifiers in the set are going to be very different from one another. For such classifiers, the aggregation will greatly improve both the stability and accuracy of the procedure. Procedure stability was studied by Breiman [-@Breiman96a]; classification trees are unstable.
Bootstrap aggregating (*bagging*) was introduced by Breiman [-@Breiman96] as a way to improve unstable classifiers. In bagging, each classifier in the set is built upon a different bootstrap sample of the training set. A bootstrap sample is simply a random sample of the original training set. Each of the samples is drawn at random with replacement from the original training set and is of the same size. Doing so will produce a set of different training sets. For each of these training sets a decision tree is fitted and together they form a random forest. Overfitting is a problem caused when a classifier identifies a structure that corresponds too closely to the training set and generalizes poorly to new observations. By generating multiple training sets, fitting multiple trees and building a forest out of these tree classifiers, the chances of overfitting are greatly reduced. Breiman [-@Breiman01] defines a *random forest* as a classifier consisting of a set of tree-structured classifiers where each tree casts a unit vote for the most popular class at one input.
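The bootstrap-and-vote procedure can be summarized by the following illustrative Python sketch, which assumes nonnegative integer class labels and a generic `fit_tree` routine returning a prediction function; it is a schematic outline, not the implementation used in this study.

```python
import numpy as np

def bagging(fit_tree, X, y, n_trees=200, rng=np.random.default_rng(0)):
    # Grow one tree per bootstrap sample (drawn with replacement, same size
    # as the training set) and aggregate the class votes of all trees.
    n = len(y)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)            # bootstrap sample
        trees.append(fit_tree(X[idx], y[idx]))

    def predict(X_new):
        votes = np.stack([t(X_new) for t in trees])          # (n_trees, n_obs)
        return np.array([np.bincount(col).argmax() for col in votes.T])

    return predict
```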
Breiman introduced in 2001 random forests with random inputs [@Breiman01] which is the most commonly used random forest classifier. The novelty of this random forest model is in the tree-growing procedure. Instead of finding the best condition among all the predictors, the algorithm will now randomly select a subset of predictors and will find the best condition among these, this modification greatly improved the accuracy of random forests.
Random forests are easy to use and are stable classifiers with many interesting properties. One of these interesting properties is that they allow for powerful variable importance computations that evaluate the importance of individual predictors throughout the entire prediction process.
Variable Importance in Random Forests {#VISec}
-------------------------------------
A variable importance analysis aims at understanding the effect of individual predictors on the classifier output. A predictor with a great effect is considered an important predictor. A random forest provides multiple interesting variable importance computations. The *Gini decrease importance* sums the total impurity measure decrease caused by partitioning upon a predictor throughout an entire tree and then computes the average of this measure across all trees in a forest. This technique is tightly related to the construction process of the tree itself and is pretty easy to obtain as it is non-demanding computationally.
The *permutation decrease importance* was introduced by Breiman [-@Breiman01]. Intuitively, if a predictor has a significant effect on the response, the algorithm should lose a lot of prediction accuracy if the values of that predictor are mixed up in the data set. One way to disrupt the predictor values is by permutations. The procedure first computes the prediction accuracy on the untouched test set. Then, it permutes the values of one predictor, $j$, across all observations, runs this permuted data through the forest and computes the new accuracy. If the input $j$ is important, the algorithm should lose a lot of its prediction accuracy by permuting the values of $j$ in the test set. The process is repeated for all predictors, then it is averaged across all trees and the averaged prediction accuracy decreases are compared. The larger the decrease in accuracy, the more important the variable is considered.
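In sketch form (Python for illustration, assuming a fitted classifier object with a `predict` method and a numeric test matrix), the permutation decrease importance amounts to the following.

```python
import numpy as np

def permutation_importance(model, X_test, y_test, rng=np.random.default_rng(0)):
    # Decrease of test-set accuracy caused by shuffling one predictor at a
    # time; a larger decrease means a more important variable.
    base = np.mean(model.predict(X_test) == y_test)
    drops = []
    for j in range(X_test.shape[1]):
        X_perm = X_test.copy()
        rng.shuffle(X_perm[:, j])          # destroy the link with predictor j
        drops.append(base - np.mean(model.predict(X_perm) == y_test))
    return np.array(drops)
```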
Strobl et al. [-@Strobl07] recently published an article where these techniques are analysed and compared. According to this paper, the selection bias of the decision tree procedure might lead to misleading variable importance. Numerous papers [@Breiman84; @Loh01; @Kononenko95] noticed a selection bias within the decision tree procedure when the predictors are of different nature. The simulation studies produced by Strobl et al. [-@Strobl07] show that the Gini decrease importance is not a reliable variable importance measure when predictors are of varying types. The Gini decrease importance measure tends to overestimate the importance of continuous variables.
It is also shown [@Strobl07] that the variable importance techniques described above can give misleading results due to the replacement when drawing bootstrap samples. It is recommended that researchers build random forests with bootstrap samples drawn without replacement and use an unbiased tree-building procedure [@Loh97; @Loh01; @Loh02; @Hothorn12]. If a classic tree-building procedure is used, predictors should be of the same type or only the permutation decrease importance is reliable.
Algorithms
----------
A classification tree using the Gini impurity as split measurement was coded in the C++ language using the Rcpp library [@Eddelbuettel11]. The code is available upon request from the first author. The algorithm proceeds as explained in Section \[sectree\], the tree it produces is unpruned and training sets are partitioned until they contain only 50 observations. Three versions of the random forest algorithm are going to be used. Even though one of these models will outperform the two others in terms of prediction accuracy, the variable importance analyses of all three models will be considered and aggregated. For clarity and conciseness purposes, only the best model’s performance will be assessed. **Random forest \# 1** consists of 200 trees and can split upon every variable in each region. Bootstrap samples are drawn without replacement and contain 63% of the original training set. **Random forest \# 2** fits 200 trees but randomly selects the variable to be partitioned upon in each region.
Finally, the popular R RandomForest package [@Liaw02] was also used. It is an easy to use and reliable package that can fit random forests and produce variable importance plots. Using this package, **random forest \# 3** was built. It contains 200 trees. Once again, bootstrap samples are drawn without replacement and contain about 63% of the size of the original training set. By default, this algorithm randomly selects a subset of inputs for each region. Regarding the impurity measure, the Gini impurity was selected because it has interesting theoretical properties, such as being differentiable, and has been performing well empirically.
Linear models were trained for both of the classification problems serving as benchmarks. In order for the comparison to be as direct as possible, the linear model classifiers were constructed upon the same set of predictors; it may be possible to improve both the random forest and the linear model with different predictors. As the problems are two classification ones, the linear models selected were logistic regression models and details regarding their parametrizations are included in the appendix.
Results {#secres}
=======
First research question : Predicting program completion
-------------------------------------------------------
**Random forest \# 3** produced the best accuracy on the test set. Among the students who completed their program in the test set, the classifier achieves a 91.19% accuracy. Out of the 418 students who did not complete their program, the classifier achieves a 52.95% accuracy. The combined result is a 78.84% accuracy over the complete test set.
Obviously this is a higher accuracy than if all students were classified as students who completed their program, which would result in a 68.08% accuracy. The random forest accuracy is also slightly higher than the 74.21% accuracy achieved with a logistic regression based upon the same predictors. These predictions can be useful for university administrations that would like to predict the number of second-year students and prepare accordingly with a sufficient margin. About 75% of students identified as dropouts by the random forest classifier are true dropouts. Therefore students identified as dropouts by the algorithm could be considered higher-risk students and these predictions could be useful in order to target students in need of more support to succeed. The relatively high accuracy of the classifier is also an indicator that the variable importance analysis is reliable.
Variable importance is determined by the average decrease in accuracy in the test set caused by a random permutation of the predictor. This technique has been selected since it is more reliable as explained in Section \[VISec\]. The top 15 variables according to the permutation decrease were kept and ordered in Figures \[FoN\_VI\_RF\],\[FoN\_VI\_RFRI\] and \[FoN\_VI\_RFPack\]. Since variable importance varies from one model to another, the three variable importance plots were included and the results will be aggregated.
![Variables importance boxplots for the **random forest \# 1**.[]{data-label="FoN_VI_RF"}](Better011.pdf){width="14cm" height="10cm"}
![Variables importance boxplots for the **random forest \# 2**.[]{data-label="FoN_VI_RFRI"}](Better022.pdf){width="14cm" height="10cm"}
![Variable importance plot produced by the RandomForest package for the **random forest \# 3**.[]{data-label="FoN_VI_RFPack"}](Better033.pdf){width="14cm" height="10cm"}
In Figures \[FoN\_VI\_RF\],\[FoN\_VI\_RFRI\] and \[FoN\_VI\_RFPack\] and for all the following figures, the variable representing the number of credits in a department is identified by the department code, i.e. the number of credits in Chemistry is identified by CHM. The variable representing the averaged grade in a department is identified by the department code followed by the letter G, i.e CHM G represents the averaged grade in Chemistry.
To begin, it was also noted that the variance for the grade variables was larger. Across all three random forests, the grades in Mathematics (MAT), Finance (COMPG), Economics (ECO) are consistently among the most important grade variables. These departments are considered low-grading departments and perhaps the strict marking of these departments helps to better distinguish students among themselves. A possible explanation is that the grade inflation suffered by the high-grading departments caused the grades to no longer be a reliable tool to distinguish students among themselves, which could be a symptom of grade inflation as suggested in section \[gi\]. Other factors could have caused this phenomenon, such as fewer sequential courses in Human Science fields, larger class sizes or reduced access to a professor. It is impossible to claim for sure that these results are caused by the grade inflation problem, but these results could indicate such a thing. Therefore, universities could use such a technique to verify if grades in a department have more predictive power than grades in other departments and act accordingly since grades should represent students’ abilities.
It is also important to notice the importance of ASSEM in the three variable importance plots. The ASSEM code represents a special type of first year seminar course. It seems that the students that register in these courses are easy to classify, as both the grades and the number of credits are considered important. This result agrees with the result obtained by Johnson and Stage [-@Johnson18] about the importance of first year seminar courses. The first year seminar courses (ASSEM) were brand new at the University of Toronto and the analysis performed provided evidence of the merit of such courses in order to establish a student’s profile and to predict success. In other words, such variable importance analysis could help university administrations assess the usefulness of new programs and courses.
Second research question : Predicting the major
-----------------------------------------------
The second task at hand is to build a random forest that predicts the student’s major. Once again, from a prediction accuracy perspective, **random forest \# 3** offered better performances with a 47.41% accuracy in predicting the major completed. This appears slightly lower than expected, but considering there are 71 different programs, being able to pin down the right program for about half of the students seems successful. This is a better result than the meager 4.75% obtained by assigning majors with probabilities weighted by the proportion of the majors completed. The 47.41% accuracy of the random forest is also above the 42.63% accuracy obtained by the multinomial logistic regression benchmark. For classification purposes, these classifiers could help individual departments predict the number of students registering to second, third or fourth year courses and graduate programs. Predicting the major could also help university administrations to allocate the financial resources among the departments or to decide the programs that require more advertisements.
Variable importance is also interesting for that research question. Here are the variable importance analyses produced by the three random forests; once again, the 15 most important predictors are displayed. The importance of a predictor is determined by the average decrease in accuracy in the test set caused by a random permutation of the predictor.
![Variables importance boxplots for the **random forest \# 1**. []{data-label="MC_VI_RF"}](Better111.pdf){width="14cm" height="10cm"}
![Variables importance boxplots for the **random forest \# 2**. []{data-label="MC_VI_RFRI"}](Better122.pdf){width="14cm" height="10cm"}
![Variable importance plot produced by the RandomForest package for the **random forest \# 3**.[]{data-label="MC_VI_RFPack"}](Better133.pdf){width="14cm" height="10cm"}
A decrease in importance for the grade variables is noted in Figures \[MC\_VI\_RF\], \[MC\_VI\_RFRI\] and \[MC\_VI\_RFPack\]. This was to be expected because of how the data was formatted. Since the department in which the highest amount of credit was obtained is considered the major completed by the student, these variable importance measures are not surprising. Actually, if all the courses were included, instead of only the first year, the amount of credit in every department precisely defines the response variable. Considering this weakness in the data formatting, the grades still have a relatively high importance. It seems hard to see any effect of grading policies in the predictive power of grades regarding that research question.
It seems like for some departments, such as English (ENG) and Computer Science (CSC), it is easy to predict students that will complete a major in those departments by almost solely looking at the number of courses attempted in those departments during the first year. This is caused by the fact that a vast majority of students that take courses in Computer Science or English during their first year end up completing an undergraduate program in these departments respectively. From a policy-making perspective, departments could use this information as they might want to adapt the content of their first-year courses now that they know more about the audience of these courses.
Conclusion
==========
The first year’s worth of courses and grades were used to build two classifiers; one that predicts if a student will complete their undergraduate program, the other that predicts the major of a student who completed a program. Random forests were used to build those classifiers. Random forests are easy to use with most statistical computing languages, fast to train, and they outperform linear logistic models in terms of prediction accuracy. For practitioners, random forests could be an alternative to typical linear models for various prediction tasks; to predict the number of students registered in second-year courses, the distribution of students across the many programs or to identify students at risk of failing or dropping out.
Evaluating the importance of each predictor is also something that random forests offer in comparison to the benchmark model. In this study, it was observed in Section \[secres\] that grades were important for predicting if a student will complete their program. Grades in departments that were considered low-grading departments in some grade inflation research articles, like Mathematics, Economics and Finance, are consistently among the most important variables. These results indicate that a strong relationship exists between the grades in low-grading departments and the chance of succeeding at an undergraduate program, although this does not necessarily indicate a [*causal*]{} connection. Grades were somewhat less important predictors for predicting the students’ major but even though they were less important, grades in Mathematics, Finance, Economics and Psychology (PSY) were still frequently significantly important.
Finally, for potential improvements in the data analysis, it is to be noted that some students might have completed more than one major or specialization. This might explain the relatively low accuracy for major choice prediction. Allowing for multiple major choices is a potential improvement for this model. This is in fact a multi-label classification problem and some solutions have already been proposed to adapt decision trees to accommodate this more complicated problem [@Clare01; @Chen03; @Chou05]. Some departments also share a great deal of similarities and might be considered equivalent by the university, thus combining some of them might increase the prediction accuracy. The missing values in the predictors were also problematic. Ideally, the algorithm would consider splitting on the grade variables for a certain department only to classify students who took courses in that department. Developing a new decision tree algorithm where new variables are added to the pool of potential split variables depending on previous partitioning should be a great way to improve the current model in certain scenarios. Overall, implementing a new tree-building procedure where variables are added or discarded based upon previous partitioning and considering a multi-label classifier as suggested by Chen et al. [-@Chen03] could be great improvements for future work on that data set.
Acknowledgement {#acknowledgement .unnumbered}
===============
We are very grateful to Glenn Loney and Sinisa Markovic of the University of Toronto for providing us with students grade data. The authors also gratefully acknowledge the financial support from the NSERC of Canada.
Appendix {#append}
========
The following section contains some mathematical notations and definitions for readers who are interested in a more thorough explanation of the content of sections \[sectree\] and \[secforest\]. Full understanding of the appendix is not needed in order to grasp the essentials of the article, but it serves as a brief but precise introduction to the mathematical formulation of decision trees and random forests.
Rigorously, a typical supervised statistical learning problem is defined when the relationship between a response variable $\mathbf{Y}$ and an associated $m$-dimensional predictor vector $\mathbf{X} = (X_1,...,X_m)$ is of interest. When the response variable is categorical and takes $k$ different possible values, this problem is defined as a $k$-class classification problem. One challenge in classification problems is to use a data set $D = \{ (Y_i,X_{1,i},...,X_{m,i}) ; i = 1,...,n \}$ in order to construct a classifier $\varphi(D)$. A classifier is built to emit a class prediction for any new data point $\mathbf{X}$ that belongs in the feature space $\mathcal{X} = \mathcal{X}_1 \times ... \times \mathcal{X}_m$. Therefore a classifier divides the feature space $\mathcal{X}$ into $k$ disjoint regions such that $\cup_{j =1}^k B_j = \mathcal{X}$, i.e. $\varphi(D,\mathbf{X}) = \sum_{j=1}^k j \mathbf{1}\{ \mathbf{X} \in B_j\}$.
As explained in section \[sectree\] a classification tree [@Breiman84] is an algorithm that forms these regions by recursively dividing the feature space $\mathcal{X}$ until a stopping rule is applied. Most algorithms stop the partitioning process whenever every terminal node of the tree contains less than $\beta$ observations. This $\beta$ is a tuning parameter that can be established by cross-validation. Let $p_{rk}$ be the proportion of the class $k$ in the region $r$, if the region $r$ contains $n_r$ observations then :
$$p_{rk}= \frac{1}{n_r} \sum_{x_i \in R_r} \mathbf{1}\{y_i = k\}.$$
The class prediction for a new observation that shall fall in the region $r$ is the majority class in that region, i.e. if $\mathbf{X} \in R_r$, $\varphi(D,\mathbf{X}) = \textrm{argmax}_k (p_{rk})$. When splitting a region into two new regions $R_1$ and $R_2$ the algorithm will compute the total impurity of the new regions, $ n_{1} Q_1 + n_2 Q_2$, and will pick the split variable $j$ and split location $s$ that minimizes that total impurity. If the predictor $j$ is continuous, the possible splits are of the form $X_{j} \leq s$ and $X_j > s$ which usually results in $n_r-1$ possible splits. For a categorical predictor having $q$ possible values, it is common to consider all of the $2^{q-1} -1$ possible splits. Hastie et al. [-@Hastie09] introduce many possible region impurity measurements $Q_r$; in this project, the *Gini index* has been chosen:
$$Q_r = \sum_{j=1}^k p_{rj}(1-p_{rj}).$$
Here is a pseudo-code of the algorithm :
**Algorithm** : DT($D$,$\beta$)
---------------------------------------------------------------------------------
1\. Start with the entire data set $D$ as the first set of observations $r$.
2\. Check ($n_r$ > $\beta$).
3\. **if** (false) :
Assign a label to the node and exit.
**else** :
**for** ($j$ in all predictors):
**for** ($s$ in all possible splits) :
Compute total impurity measure.
Select variable $j$ and split $s$ with minimum impurity measure and split
the set $r$ into two children sets of observations.
Repeat steps 2 & 3 on the two resulting sets.
Since decision trees are unstable procedures [@Breiman96a] they greatly benefit from bootstrap aggregating (bagging) [@Breiman96]. In classifier aggregating, the goal is to find a way to use an entire set of classifiers $\{ \varphi(D_q) \}$ to get a new classifier $\varphi_a$ that is better than any of them individually. One method of aggregating the class predictions $\{ \varphi(D_q,\mathbf{X}) \}$ is by *voting*: the predicted class for the input $\mathbf{X}$ is the most picked class among the classifiers. More precisely, let $T_k = | \{ q : \varphi(D_q, \mathbf{X}) = k \} |$ then, the aggregating classifier becomes $\varphi_a(\mathbf{X}) = \textrm{argmax}_k (T_k)$.
One way to form a set of classifiers is to draw bootstrap samples of the data set $D$, which forms a set of learning sets $\{ D_b \}$. Each of the bootstrap samples will be of size $n$ drawn at random with replacement from the original training set $D$. For each of these learning sets a classifier $\varphi(D_b)$ is constructed and the resulting set of classifiers $\{ \varphi(D_b) \}$ can be used to create an aggregating classifier. If the classifier is an unpruned tree then the aggregating classifier is a random forest.
A random forest classifier is more precise than a single classification tree in the sense that it has lower mean-squared prediction error [@Breiman96]. By bagging a classifier, the bias will remain the same but the variance will decrease. One way to further decrease the variance of the random forest is by constructing trees that are as uncorrelated as possible. Breiman introduced in 2001 random forests with random inputs [@Breiman01]. In these forests, instead of finding the best variable and partitioning among all the variables, the algorithm will now randomly select $p < m$ random covariates and will find the best condition among those $p$ covariates.
The fitted random forest classifiers were compared to two logistic regression models. A simple logistic model is used to predict if a student completes their program or not, with the following parametrization:
$$P(Y_i =1) = \frac{\exp(\sum_{i=0}^m \beta_i x_i)}{1+\exp(\sum_{i=0}^m \beta_i x_i)},$$
where $Y_i=1$ means student $i$ completed its program, $m$ is the number of predictors, $\beta's$ the parameters and $x_i's$ the predictor values. To predict the major completed, a generalization of the logistic regression, the multinomial logistic regression is used with the following parametrization :
$$P(Y_i = p) = \frac{\exp(\sum_{i=0}^m \beta_i^{(p)} x_i)}{1+\sum_{l=1}^{k-1}\exp(\sum_{i=0}^m \beta_i^{(l)} x_i)},$$
where $Y_i =p$ means the student $i$ completed the program $p$ and where $k$ is the number of programs.
Finally, here is a short example of code to fit random forests, get predictions for new observations and produce variable importance plots using the R language :
# Importing the randomForest package
require(randomForest)
# Fitting the random forest with 200 trees,
# using bootstrap samples drawn without replacement
Fit <- randomForest(x = X, y = as.factor(Y), importance = TRUE, ntree = 200,
                    replace = FALSE, sampsize = round(0.63 * nrow(X)))
# Predicting class labels for new observations newX
predictions <- predict(Fit, newX)
# Extracting the permutation decrease importance and plotting it
importance(Fit, type = 1)
varImpPlot(Fit, type = 1)
**The short-time Dynamics of the Critical Potts Model**
**L. Schülke and B. Zheng**
Universität-GH Siegen, D-57068 Siegen, Germany
March 1995
[PACS: 64.60.Ht, 02.70.Lq, 05.70.Jk, 82.20.Mj]{}
For years it has been known that there exist universality and scaling behaviour for statistical systems at criticality in equilibrium or near equilibrium, more or less due to the [*infinite* ]{} spatial and time correlation length. Recently it has been observed that universality and scaling may also be present far from equilibrium. One of the examples is that the Ising model, initially at a very high temperature, is suddenly quenched to the critical temperature and then evolves with the dynamics of model A. According to an argument of Janssen et al. with two-loop $\epsilon$-expansion [@jan89], besides the well-known long-time universal behaviour, there exists another universal stage in the earlier time, termed [*critical initial slip*]{}, which sets in right after the microscopic time scale. The characteristic behaviour of such a process is that, if a non-zero but [*small*]{} initial magnetization $m_0$ is generated in the system, the anomalous dimension of the operator $m_0$ gives rise to the critical increase of the magnetization $$M(t) \sim m_0 \, t^\theta,
\label{cis}$$ with $\theta$ being a new dynamic critical exponent. Detailed scaling analysis reveals [@die93] that the characteristic time scale for the critical initial slip is $t_0 \sim m_0 ^ {-z/x_0}$, where $x_0$ is the scaling dimension of $m_0$, which is related to $\theta$ by $x_0 = \theta z + \beta/\nu$. Interestingly enough, it was pointed out that the exponents $\beta$, $\nu$ and $z$ should be valued the same as those in the equilibrium or long-time stage of the relaxation. Previously $\theta$ has been measured with Monte Carlo simulation in two dimensions somehow [*indirectly*]{} from the power law decay of the autocorrelation [@hus89; @bra91], and recently in three dimensions [*directly*]{} from the power law increase of the magnetization [@li94]. They are in good agreement with the result from the $\epsilon$-expansion. Furthermore, based on the scaling relation in the initial stage of the time evolution, a new promising way for measuring the exponents $z$, $\beta$ and $\nu$ from the finite size scaling has been proposed [@li94a]. This indicates a possible broad application of the short-time dynamics. Therefore more and deeper understanding of this phenomenon becomes urgent.
As far as we know, even though analytical perturbative calculations for the critical initial slip can be extended to the $O(N)$ vector model [@jan89; @die93], the numerical results are so far limited to the Ising model [@hus89; @bra91; @men94; @li94; @li94a]. The purpose of this letter is to report systematic Monte Carlo simulations of the short-time behaviour of the two dimensional Potts model at criticality relaxing from high temperature states. A refined measurement of the exponents $\theta$, $z$ and $\beta / \nu$ from the power law behaviour of the physical observables is presented. It relates the above mentioned indirect and direct measurements of $\theta$ to each other and provides a consistent test of the scaling relation for the Potts model. Finite size effects and the weak dependence of the measurement of $\theta$ on $m_0$ are discussed and the spatial correlation length is computed.
From the scaling analysis, the autocorrelation has the initial behaviour $$A(t) \sim t^{-d/z+\theta}
\label{auto}$$ in case of $m_0 = 0$. Most of the previous measurements of $\theta$ were based on this relation. A disadvantage is that one has to input $z$ to obtain $\theta$. Since $z$ is one order of magnitude bigger than $\theta$, a small relative error of $z$ will induce a big error in $\theta$. This becomes more severe when $\theta$ is getting smaller. A direct measurement from the power law increase of the order parameter can improve this situation [@li94]. As we will see later, for the Potts model we did observe the same initial increase of the order parameter shown in (\[cis\]). The weak dependence of the practical measurement of $\theta$ on $m_0$ appears, however, more visible in the Potts model than in the Ising model. Therefore in this letter we consider a correction of $\theta$ for finite $m_0$. Since $\theta$ for the Potts model is smaller than that for the 2D Ising model, its measurement from the auto-correlation is harder.
On the other hand, traditionally the dynamic exponent $z$ is defined and measured from the long-time exponential decay of the time correlation or the magnetization of the system [@wil85; @wan91]. Due to critical slowing down this is somewhat difficult. From the above discussion, however, it is easy to realize that with $\theta$ in hand, one can obtain $z$ quite accurately from the power law decay (\[auto\]) of the autocorrelation. Since the measurement is carried out at the beginning of the time evolution, it is efficient. This is an alternative way to measure $z$ from the short-time behaviour of the system. Compared with the method proposed in [@li94a], the advantage is that the measurement can be carried out on a single lattice rather than by comparing two lattices. Finally the static exponent $\beta / \nu$ can be obtained from the power law increase of the second moment $$M^{(2)}(t) \sim t ^ {(d-2\beta / \nu)/z}.
\label{m2}$$ Since $2\beta / \nu$ is one order of magnitude smaller than $z$, its measurement is quite sensitive to the error in $z$.
For the study of short-time dynamics, usually quite large lattices are used. In this letter, the finite size effects will be discussed. It turns out that for the measurement of $\theta$ the finite size effect is not very large. Furthermore, the spatial correlation length is measured and found to be very small compared with the lattice size. This indicates that the mechanism behind the universality and scaling in short-time dynamics should be different from that in or near equilibrium.
The Hamiltonian for the $q$-state Potts model is $$H=J \sum_{<ij>} \delta_{\sigma_i,\sigma_j},\qquad \sigma_i=1,...,q
\label{hami}$$ with $<ij>$ representing nearest neighbors. In this letter we only consider the three-state case. It is well known that for the three-state Potts model the critical point is located at $J_c=\log(1+\surd 3)$. As in the case of the Ising model, initially the Potts model is prepared in a random state with a sharp magnetization $m_0$. It is then released to evolve with the heat-bath algorithm at the critical temperature. We measure the time evolution of the magnetization, the second moment, the auto-correlation and the spatial correlation, respectively $$M(t)= \frac{3}{2}\frac{1}{N}\,
\left<\sum_i \left(\delta_{\sigma_i(t),1}-\frac{1}{3}\right)\right>,$$ $$M^{(2)}(t)= \frac{9}{4}\frac{1}{N^2}\,
\left<\left[\sum_i
\left(\delta_{\sigma_i(t),1}-\frac{1}{3}\right)\right]^2\right>,$$ $$A(t)=\frac{1}{N}\,
\left<\sum_i
\left(\delta_{\sigma_i(0),\sigma_i(t)}-\frac{1}{3}\right)\right>,$$ $$C(x,t)=\frac{1}{N}\,
\left<\sum_i
\left(\delta_{\sigma_i(t),\sigma_{i+x}(t)}-\frac{1}{3}\right)\right>,$$ where the average is taken over the independent initial configurations. Except for the magnetization $M(t)$, the above definitions are restricted here to the case of $m_0=0$. In spite of the lack of an analytical derivation, we assume that all the scaling properties, including the initial increase of the order parameter found for the Ising model, are valid also for the Potts model, and test them by numerical simulation. At the same time the related critical exponents will be determined.
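To make the protocol concrete, the following minimal Python sketch (not the code used for the production runs reported below) prepares an $L\times L$ three-state Potts configuration with a sharp initial magnetization $m_0$, evolves it with single-site heat-bath updates at $J_c=\log(1+\surd 3)$ and records $M(t)$. The lattice size, the random update order, the number of sweeps and the assumption that the Boltzmann weight is $\exp(J\sum_{<ij>}\delta_{\sigma_i,\sigma_j})$ are illustrative choices only; in practice $M(t)$ must be averaged over many independent initial configurations.

```python
import numpy as np

L, q = 36, 3                           # illustrative lattice size
Jc = np.log(1.0 + np.sqrt(3.0))        # critical coupling of the 3-state model
rng = np.random.default_rng(0)

def initial_state(m0):
    # M = (3/2)(f1 - 1/3) => fraction of spins in state 1 is f1 = 1/3 + 2*m0/3
    f1 = 1.0 / 3.0 + 2.0 * m0 / 3.0
    n1 = int(round(f1 * L * L))
    spins = np.concatenate([np.ones(n1, dtype=int),
                            rng.integers(2, q + 1, L * L - n1)])
    rng.shuffle(spins)
    return spins.reshape(L, L)

def heat_bath_sweep(s):
    # one sweep of single-site heat-bath updates at randomly chosen sites
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = [s[(i + 1) % L, j], s[(i - 1) % L, j],
              s[i, (j + 1) % L], s[i, (j - 1) % L]]
        w = np.array([np.exp(Jc * nn.count(k)) for k in range(1, q + 1)])
        s[i, j] = rng.choice(np.arange(1, q + 1), p=w / w.sum())

def magnetization(s):
    return 1.5 * (np.mean(s == 1) - 1.0 / 3.0)

s = initial_state(m0=0.08)
for t in range(1, 16):
    heat_bath_sweep(s)
    print(t, magnetization(s))
```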
In Fig. 1, as an example, the time evolution of the magnetization with the initial value $m_0=0.08$ for different lattice sizes $L$ is displayed on a double-log scale to show the power law increase. It is remarkable that the power law increase starts from the very beginning of the time evolution $t=1$, as it does in the three dimensional Ising model. $\theta$ can be estimated from the slope of the curves. It is clearly seen that $\theta$ converges to a definite value when the lattice size $L\geq 36$. In other words, for the measurement of $\theta$ the finite size effect is already quite small for a lattice size $L=36$. In comparison, to observe the power law decay of the auto-correlation one needs much bigger lattices, as will be seen later. In Tab. 1, $\theta$ for $L=72$ measured from $t=1$ to $t=15$ for different initial magnetizations is summarized. The total number of samples of independent initial configurations is $80,000$ for the bigger $m_0$ and $480,000$ for the smaller $m_0$. The errors are estimated by dividing the data into four or six groups, respectively. In contrast to the Ising model, the measured $\theta$ shows slightly more dependence on $m_0$. Therefore, according to its definition, a linear extrapolation of $\theta$ to the fixed point $m_0=0$ is carried out. This leads to the value $$\theta=0.0815(27).$$
$$\begin{array}{|c|l|l|l|l|l|}
\hline
m_0 &\qquad 0.10 &\quad 0.08 &\quad 0.06 &\quad 0.04 &\quad 0.00\\
\hline
\theta & 0.1076(08) & 0.1036(12) & 0.0980(06) & 0.0925(14) & 0.0815(27) \\
\hline
\end{array}$$
$$\begin{array}{|c|r|r|r|r|}
\hline
L &\qquad 72 &\quad 144 &\quad 288 &\quad \infty\\
\hline
-d/z+\theta & -0.8510(08) & -0.8387(09)& -0.8335(09) & -0.8283(20)\\
(d-2\beta/\nu)/z & 0.7921(11) & 0.7881(14) & 0.7875(28) & 0.7878(16)\\
\hline
\end{array}$$
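For illustration only, the extrapolation of $\theta$ to $m_0=0$ can be reproduced from the finite-$m_0$ entries of Tab. 1 with a simple linear fit. An unweighted fit such as the one below gives an intercept of about $0.083$, compatible within the quoted uncertainty with the value $\theta=0.0815(27)$ given above, which presumably also takes the statistical errors of the individual points into account.

```python
import numpy as np

m0    = np.array([0.10, 0.08, 0.06, 0.04])
theta = np.array([0.1076, 0.1036, 0.0980, 0.0925])   # from Tab. 1, L = 72

slope, intercept = np.polyfit(m0, theta, 1)
print(intercept)   # ~0.083, to be compared with the quoted 0.0815(27)
```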
Now we set $m_0=0$ and measure the auto-correlation. In Fig. 2, the dependence of the auto-correlation on $L$ is presented. Obviously at $L=36$ no power law behaviour is observed. The convergence to a power law behaviour only starts around $L=144$. It is clear that the regime displaying the power law grows when the lattice size increases. It is interesting that, somewhat differently from $M(t)$, the first time steps apparently deviate slightly from the power law. In Tab. 2 the corresponding values for $-d/z+\theta$ measured from $t=5$ to $t=50$ are given. We stop the measurement at $t=50$ due to the obviously bigger finite size effects and statistical errors. The total number of samples is $40,000$ for $L=144$ and $16,000$ for $L=288$. If we only intended to obtain $z$ by taking $\theta$ as input, we could already be satisfied with these lattice sizes, since the results from $L=144$ and $L=288$ are very close. However, in order to obtain a better $\beta/\nu$ later by inputting the $z$ measured here, we perform a linear extrapolation of $-d/z+\theta$ in $1/L$ to $L=\infty$ and obtain $$z=2.1983(81).$$ Compared to the values of $z$ distributed between $z=2.2$ and $z=2.7$ from different numerical measurements [@a; @TANG87; @b; @c], our result supports the relatively small $z$ [@a; @TANG87].
In Fig. 3, the power law increase of the second moment $M^{(2)}(t)$ is shown. The measurements of $(d-2\beta/\nu)/z$ from $t=5$ to $t=50$ are also listed in Tab. 2. From the data we can see that for $L=144$ and $L=288$ the finite size effects are already less prominent than the statistical errors. Therefore the value $0.7878(22)$ of $(d-2\beta/\nu)/z$ at $L=\infty$ is simply an average of them. Here we get $$2\beta/\nu=0.2682(73),$$ which is in good agreement with the exact value $4/15\approx 0.2667$ [@bax82]. Such agreement provides strong support for scaling in the short-time dynamics.
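The arithmetic connecting the quoted slopes and exponents can be checked directly; the short snippet below (pure arithmetic, illustrative only) reproduces $z$ and $2\beta/\nu$ from the extrapolated $\theta$ and the $L=\infty$ entries of Tab. 2.

```python
d        = 2          # spatial dimension
theta    = 0.0815     # from the m0 -> 0 extrapolation
slope_A  = -0.8283    # -d/z + theta, autocorrelation slope at L = infinity
slope_M2 = 0.7878     # (d - 2*beta/nu)/z, second-moment slope at L = infinity

z = d / (theta - slope_A)
two_beta_over_nu = d - slope_M2 * z
print(z)                  # ~2.198, cf. z = 2.1983(81)
print(two_beta_over_nu)   # ~0.268, cf. the exact value 4/15 ~ 0.2667
```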
Finally we have also measured the correlation length $\xi(t)$ from the spatial correlation function $C(x,t)$. For example, for $L=288$, $\xi(t=96) \approx 6.0$, which is much smaller than the lattice size $L$.
In conclusion, by observing the power law behaviour of the magnetization $M(t)$, the second moment $M^{(2)}(t)$ and the auto-correlation $A(t)$, we confidently confirm the scaling properties of the Potts model in the short-time dynamics and obtain the related critical exponents $\theta$, $z$ and $\beta/\nu$. This is the first measurement of $\theta$ for the Potts model. Our way of determining $z$ is efficient. Such an investigation for models in other universality classes should be carried out.
[*Acknowledgement:*]{} The authors would like to thank K. Untch for the help in maintaining our workstations.
---
abstract: 'Wind tunnel experiments were conducted to study the impact of atmospheric stratification on flow and dispersion within and over a regular array of rectangular buildings. Three stable and two convective incoming boundary layers were tested with a Richardson number ranging from $-$1.5 to 0.29. Dispersion measurements were carried out using a fast response flame ionisation detector. The results show that the stratification effect on the plume width is significantly lower than the effect on the vertical profiles. Stable stratification did not affect the plume central axis inside the canopy, but in the unstable case the axis appeared to deviate from the neutral case direction. Above the canopy both stratification types caused an increase in the plume deflection angle compared to the neutral case. Measured mean concentrations in stable stratification were up to two times larger in the canopy compared to the neutral case, while in convective conditions they were up to three times smaller. The proportionality between the vertical turbulent fluxes and the vertical mean concentration gradient was also confirmed in the stratified cases. The high-quality experimental data produced during this work may help develop new mathematical models and parametrisations for non-neutral stratified conditions, as well as validate existing and future numerical simulations.'
address: 'EnFlo, Department of Mechanical Engineering Sciences, University of Surrey, Guildford, Surrey GU2 7XH, UK'
author:
- Davide Marucci
- Matteo Carpentieri
bibliography:
- 'StratEnFlo\_array\_dispersion.bib'
title: Dispersion in an array of buildings in stable and convective atmospheric conditions
---
Stable boundary layer, Convective boundary layer, Wind tunnel, Array of cuboids, Dispersion
Introduction
============
Atmospheric stratification can have a significant impact on pollutant dispersion in urban areas, but there are still many uncertainties in quantifying its effect, mainly because of the difficulties of studying non-neutral conditions in the laboratory and the field. Urban dispersion models generally discard stratification effects based on the fact that in cities, due to their large aerodynamic roughness length, mechanically-generated turbulence tends to dominate over buoyancy effects [@Britter2003]. This seems a sensible assumption, but it is largely unsupported by observations. @Wood2010, for example, found that either stable or convective conditions represent a large majority of cases in a large urban area.
Nevertheless, laboratory studies in non-neutrally stratified conditions are very rare, especially when dealing with large urban building arrays. The case of stable and unstable incoming flow over either an aligned or staggered array of cubes has been investigated by @Uehara2000 and @Kanda2016, respectively. The former focused on a cross section downstream of a block, with just one vertical profile scanning the entire boundary layer depth. Moreover, neither heat fluxes nor pollutant concentration measurements were attempted. @Kanda2016 expanded on this with measurements of heat fluxes and mean concentration for a point source release, but only one full-height vertical profile was acquired and no concentration fluctuations or fluxes were sampled. Moreover, only one stable and one unstable case were considered. The concentration and turbulence measurements in and above the canopy revealed important effects of the stratification, encouraging further studies in this direction. In particular, the plume depth and width were affected by stratification, being both smaller in the SBL case and larger in the CBL one, compared to the NBL reference.
Slightly more abundant are the numerical studies, especially involving large eddy simulations [LES, @Inagaki2012; @Park2013; @Xie2013; @Boppana2014]. @Tomas2016 simulated the effect of stable stratification on flow and dispersion from a line source over an array of aligned cubes. They found that under a weak SBL (bulk Richardson number based on the boundary-layer depth, $Ri_\delta=0.15$) the depth of the internal boundary layer (IBL) after 24 rows of cubes was 14% shallower compared to a NBL, while the turbulent kinetic energy (TKE) was reduced by 21%. On the other hand, the area-averaged street concentration level in a SBL was found to be 17% larger than for the NBL thanks to the decreased streamwise advection and pollutant trapping in the IBL.
@Shen2017 simulated a SBL developing over an array of aligned cubes. Their model was validated using results from @Kanda2016. Different plan area densities ($\lambda_p$) were investigated, ranging from isolated roughness to skimming flow regimes. A point-source ground-level pollutant release was also considered. Results showed that the reduced advection velocity in the SBL is the cause of the larger concentration in the canopy. @Jiang2018 employed the same array of aligned cubes but with a weaker CBL case (bulk Richardson number based on the cubes’ height, $Ri_H = -0.15$) and a line source. Results showed that a primary recirculation region was formed inside the canopy, similar to the one observed in bi-dimensional street canyons [see, e.g., @Cheng2011 in this regard]. The turbulent pollutant fluxes were found to contribute considerably to the pollutant transport into the “canyon”, especially at the side ends of the streets, while no inflow due to turbulence was detected vertically from the top section. On the other hand, turbulent fluxes were found to be the main contributor to the pollutant leaving the “canyon” through the top surface.
The work presented in this paper is part of the StratEnFlo project, funded by the UK Engineering and Physical Sciences Research Council (EPSRC). It was a first attempt to bridge the gap identified in the literature regarding the lack of experimental data in non-neutral conditions. Initially, new methodologies were developed and optimised to simulate either stable or convective conditions in a meteorological wind tunnel, producing a boundary layer that was thick enough for urban studies [@Marucci2018]. The non-neutral boundary layers produced in that first phase were then applied to a single heated/cooled street canyon [@Marucci2019] and to an array of rectangular buildings [@Marucci2019flow]. The latter, in particular, studied the effects of several incoming SBLs and CBLs on the flow over and within the urban array (using a wind direction of 45$^\circ$), finding that the modifications to the flow and turbulence fields caused by even the weak stratification levels tested were significant. The experiments designed by @Marucci2019flow also included dispersion measurements, but results were not discussed in that manuscript.
[@Sessa2018; @Sessa2019] employed the dataset produced in the present study (but with 0$^\circ$ wind direction) to validate their LES simulations for a rectangular array of buildings with different levels of SBL (ranging from $Ri_\delta$ 0.21 to 1.0). Pollutant release from either a linear or a point source was also modelled. Mean velocity, Reynolds stresses and mean concentrations were in good agreement with the wind tunnel experiments. The mean concentration below the canopy in the case of a line source for $Ri_H = 1$ was twice as large as the one for $Ri_H = 0.2$, while for the same stratification cases the concentration from the point source was four times larger. This was partially attributed to the simultaneous decrease of both lateral and vertical scalar spreading in the case of a point source release. The vertical turbulent fluxes from the line source release at several streamwise locations confirmed the decrease of the vertical scalar mixing for increasing stratification. They also observed a reduction, with increasing stratification, of the height at which the vertical flux becomes negligible.
This paper reports the results of the dispersion experiments mentioned by @Marucci2019flow, with a detailed analysis of the tracer concentration measurements and a discussion on their significance in terms of urban pollution. Section \[sec:methods\] describes the employed facilities and the experimental settings, as well as the urban model used for this study. The flow characteristics and approaching flow conditions, reported in detail by [@Marucci2019flow], are summarised in section \[sec:flow\]. Results and discussion about the plume characteristics are reported in section \[sec:plume\], while section \[sec:flux\] analyses the mass flux results in more details. Conclusions are reported in section \[sec:Conclusion\].
Experimental methodology {#sec:methods}
========================
The EnFlo meteorological wind tunnel at the University of Surrey is an open-circuit suction boundary-layer wind tunnel with a working section size of 20 m$\times$3.5 m$\times$1.5 m. A turbulent boundary layer was generated using two sets of Irwin spires [@Irwin1981], one for the SBL study and one for the CBL, and roughness elements covering the floor upstream of the model [see, e.g. @Marucci2018; @Marucci2019flow for more details]. A vertical inlet temperature profile can be imposed when working in stratified conditions and the wind tunnel floor can be either cooled or heated depending on the atmospheric conditions to be studied. The optimised techniques to generate either stable or convective boundary layers in this wind tunnel have been fully described by @Marucci2018.
The nominal reference velocity ($U_{REF}$) was used as a target for the closed-loop system controlling the two fans at the outlet of the wind tunnel, based on the measurements by an ultrasonic anemometer placed 5 m downstream of the inlet section, 1 m from the wind tunnel centre line (laterally) and 1 m high. The coordinate system used in this paper is aligned with the urban array model, originating at the centre of the wind tunnel turntable (14 m downstream of the inlet). When the wind direction was set to 0$^\circ$ the $x$-axis was aligned with the tunnel centre line, the $y$-axis was in the lateral direction and the $z$-axis was the vertical one.
The model used in this study was originally developed for the DIPLOS project [see @Castro2017; @Fuka2018; @Hertwig2018] and includes more than 350 rectangular blocks with dimensions $H\times2H\times H$ (width$\times$length$\times$height) regularly spaced (spacing $H = 70$ mm). This geometry is regular, yet it is more complex than the classical cubical array and typical street canyon features start to show up [@Castro2017], especially in non-aligned configurations (i.e. when the wind direction is not aligned with the streets). For this reason all the experiments reported here were carried out using a 45$^\circ$ model rotation. In order to validate LES numerical results [@Sessa2018], the data set also includes some experiments at 0$^\circ$, but those results are not reported here.
In Fig. \[fig:UrbanArrayModel\] a photo and a schematic of the employed urban array model are displayed. All the experiments reported here were performed using a wind direction of 45 degrees. Dispersion experiments were carried out by using a tracer gas released from a circular source (diameter 22 mm) located at ground level at the centre of the street canyon created by the long edge of a building close to the centre of the model. The tracer was a mixture of propane (not exceeding 1.8%) in air with an exit velocity maintained low, at $0.03U_{REF}$, in order to simulate a passive emission.
![Urban array in the wind tunnel and schematics of the model centre. The source location is indicated by the black dot.[]{data-label="fig:UrbanArrayModel"}](UrbanArrayModel){width="\linewidth"}
The measurement setup is described in detail by @Marucci2018, @Marucci2019 and @Marucci2019flow. Temperatures, concentrations and two components of velocity were measured simultaneously using, respectively, a fast-response cold-wire probe (CW), a fast-response flame ionisation detector (FFID) and a laser Doppler anemometer (LDA). The LDA target acquisition frequency was set to 100 Hz, while both temperatures and concentrations were sampled at 1000 Hz. Given the irregular nature of the LDA measurements and the different frequencies, a resampling and synchronisation of the three signals was necessary for computing heat and mass fluxes [@Marucci2019].
Each measurement point was sampled for 2.5 minutes, following previous experiments in neutral [@Castro2017] and non-neutral [@Marucci2019] conditions. The standard errors for first and second order statistics were evaluated at each measurement point and deemed satisfactory for high-quality experiments [see, in particular, @Marucci2019flow]. As far as concentration measurements are concerned, in stable conditions standard errors for mean concentrations ($\overline{C}$) were below 10%, while variance ($\overline{c'^2}$) values were generally around 20%. Standard errors were, as expected, higher for neutral and convective conditions, suggesting that longer averaging times might be needed for the CBL cases in future experiments. Standard errors for covariance values ($\overline{u'c'}$, $\overline{v'c'}$ and $\overline{w'c'}$) were generally between 10 and 25%, with little sensitivity to different stratification conditions. In the previous discussion and throughout the paper, capital letters and overbars represent time averaged values, while lower-case letters and the prime symbol identify fluctuating components.
Approaching flow and boundary layer over the array {#sec:flow}
==================================================
Five different non-neutral boundary layers were generated in this study (3 SBLs and 2 CBLs), and they were compared with two neutral reference cases. Two NBLs were required as the non-stratified cases were reproduced using two sets of spires, matching the ones used in the corresponding stratified case (one for stable flows and one for convective). The different heights used for the spires are the main reason why some of the quantities in the reference neutral cases differ from each other. The measured and nominal properties in the five cases are summarised in Tab. \[table:aerpar\]. The nominal Richardson number for each experiment ($Ri_\delta^{app}$) is the desired value in the approach flow, which sometimes differs slightly from the actual value measured over the array ($Ri_\delta$), also reported in the table. The two types of bulk Richardson numbers used in this paper ($Ri_\delta$ and $Ri_H$) can be calculated as
$$\label{bulkRi}
Ri_\delta = \frac{g\left(\Theta_\delta - \Theta_0\right)\delta}{\Theta_0 U_\delta^2}, \ \
Ri_H = \frac{g\left(\Theta_H - \Theta_0\right)H}{\Theta_0 U_H^2}$$
where $\Theta$ symbols represent temperatures, $U$ velocities, the subscripts $\delta$ and $H$, respectively, the boundary-layer depth and the buildings’ height, $g$ is the gravitational acceleration and $\Theta_0$ is a reference temperature measured close to the floor (at $z=10$ mm).
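As a purely illustrative helper (not part of the data-processing software used for the experiments), the bulk Richardson numbers defined above can be evaluated as follows; the numbers in the example call are hypothetical and only indicate the expected units.

```python
G = 9.81  # gravitational acceleration [m s^-2]

def bulk_richardson(theta_top, theta_0, length_scale, u_top):
    """Bulk Richardson number based on a height `length_scale` (delta or H),
    the absolute temperature `theta_top` at that height, the near-floor
    reference temperature `theta_0` (both in kelvin) and the velocity
    `u_top` (m/s) at the same height."""
    return G * (theta_top - theta_0) * length_scale / (theta_0 * u_top**2)

# hypothetical stable example: 1 K excess over H = 0.07 m with U_H = 1 m/s
print(bulk_richardson(theta_top=294.0, theta_0=293.0, length_scale=0.07, u_top=1.0))
```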
\[table:aerpar\]
Stable boundary layers were generated by imposing a non-uniform inlet temperature profile, cooling the floor to a desired temperature and adjusting the maximum inlet temperature ($\Delta\Theta_{MAX}$ is defined as the difference between this maximum temperature and the floor temperature) and the reference velocity ($U_{REF}$) to set the required stratification strength [@Marucci2018]. It should be noted that $Ri_\delta^{app}$ in the table is the nominal (or desired) bulk Richardson number of the approaching flow, which sometimes differs slightly from the one actually measured (also reported in the table). Convective boundary layers were generated by setting a uniform inlet temperature profile capped by a linear inversion of roughly 10$^\circ$ C/m starting from 1 m upwards, heating the floor using an optimised layout for the heating panel mats and adjusting $\Delta\Theta_{MAX}$ and $U_{REF}$ [@Marucci2018].
The surface aerodynamic parameters (friction velocity $u_\ast$, roughness length $z_0$, displacement height $d$, BL depth $\delta$) and thermal parameters (scaling temperature $\theta_\ast=-\left( \overline{w'\theta'} \right)_0 / u_\ast$, thermal roughness length $z_{0h}$, thermal displacement height $d_h$) were estimated as described in detail by @Marucci2019 and @Marucci2019flow, by fitting the logarithmic profiles and the vertical shear stress profiles. Other values reported in the table are a reference temperature close to the floor ($\Theta_0$), the temperature at the boundary-layer height ($\Theta_\delta$), a velocity scale valid in the mixed layer of a CBL, defined as [@Kaimal1994]:
$$w_\ast = \left[ \frac{g}{\Theta_0}\left(\overline{w'\theta'}\right)_0 \delta\right]^{1/3}$$
the Monin-Obukhov length ($L$), the bulk Richardson numbers measured at the boundary-layer depth ($Ri_\delta$) and building height ($Ri_H$), the Reynolds number ($Re_\delta$) and roughness Reynolds number ($Re_\ast$).
A full analysis of the boundary layer flow, turbulence and temperature fields over the urban array in the five stratification cases considered here is reported by @Marucci2019flow.
Plume characteristics {#sec:plume}
=====================
Stable stratification
---------------------
In Fig. \[fig:ConcSBLcont\] contour plots of pollutant mean concentration are shown for the NBL and a SBL case ($Ri_\delta^{app} = 0.21$) both inside ($z/H = 0.5$) and above ($z/H = 1.5$) the urban canopy. The tracer was released from a ground level source located at $x/H = -1$ and $y/H = -1.5$. The plume central axis – defined as the straight line that minimises the distance from the mean values in the Gaussian fit of the lateral profiles (see equation \[gaussianCurveLat\]) – does not seem to be affected by the stable stratification inside the canopy. As a matter of fact, its axis appears to deviate from the free-stream wind direction due to a channelling effect by about 14.7$^\circ$ both in neutral and stable atmospheric conditions. The channelling is caused by the presence of the small street canyons and it is even more evident in the first $2H$ downstream of the source, where the plume axis is almost coincident with the long street centreline. Above the canopy the plume axis still presents a deflection from the free-stream wind direction, despite the fact that the flow field is already completely aligned with the tunnel axis [@Marucci2019flow]. The angles are slightly different, though (8.6$^\circ$ for the NBL and 10.8$^\circ$ for the SBL). Since the actual wind direction is already aligned with $45^\circ$ at $z/H = 1.5$ and above, the different plume angle is just a result of the different distribution of concentrations closer to the ground. In fact, pollutant concentrations in the canopy remain larger further away from the source in the case of stable stratification. It would be interesting to compare these results to cases with a different Richardson number, but unfortunately we do not have enough data to estimate the plume direction for other stratification levels.
![Contour plots of non-dimensional mean concentration for NBL and SBL inside and above the canopy for wind direction 45$^\circ$. Black line is plume centreline, yellow line is free-stream wind direction.[]{data-label="fig:ConcSBLcont"}](ConcSBLcont.pdf){width="\linewidth"}
The plume width does not appear to be significantly affected by the applied stratification inside the canopy, with just a small reduction. A similar statement can be made for the plume above. This can be better appreciated from the lateral profiles of mean concentration shown in Fig. \[fig:LateralPlumeSBL\], where the values for two other levels of stability are plotted as well.
![Lateral profiles of mean concentration inside and above the canopy for four levels of stability.[]{data-label="fig:LateralPlumeSBL"}](LateralPlumeSBL.pdf){width="0.9\linewidth"}
The mean concentration values, on the contrary, show a clear effect of the different stratification levels. In all the graphs shown in Fig. \[fig:LateralPlumeSBL\], the concentration – both inside and immediately above the canopy – appears larger in the SBL, increasing with $Ri_\delta$ up to about twice the neutral value. The only exception is in the upper region closer to the source, in which the trend is inverted. This behaviour is expected and is due to the reduced vertical displacement of the flow under a SBL.
The plume vertical depth is smaller under stable stratification, as shown in Fig. \[fig:VerticalPlumeSBL\]. It is also possible to note how all the SBL cases seem to behave similarly above 1.5$H$, showing the same plume depth reduction of up to 30% compared to the NBL. Within the canopy, the concentration level appears approximately constant with height, at least down to the lowest measured position (0.5$H$). All measured profiles show a similar behaviour with different levels of stratification (Fig. \[fig:VerticalPlumeSBL\]), confirming that the modifications induced by the stable boundary layer are independent of the particular location within the urban array. The chosen positions are indeed different in terms of mixing properties, with three of them at street intersections, one in a “long” street canyon and one in a “short” street canyon, yet the changes due to different levels of stratification seem to apply to all of them in a similar way.
![Vertical profiles of mean concentration approximately along the plume axis for four levels of stability. The star on the map at bottom-right corresponds to the source, while the other marks show the locations of the five vertical profiles.[]{data-label="fig:VerticalPlumeSBL"}](VerticalPlumeSBL.pdf){width="1\linewidth"}
![Plume axes reference system.[]{data-label="fig:PlumeAxisScheme"}](PlumeAxisScheme.pdf){width="0.6\linewidth"}
![$\sigma_h$ for SBL and NBL varying the distance from the source at $z/H$ of 0.5 (a) and 1.5 (b).[]{data-label="fig:SigmahSBL"}](SigmahSBL.pdf){width="1\linewidth"}
![$\sigma_z$ for SBL and NBL varying the distance from the source.[]{data-label="fig:SigmazSBL"}](SigmazSBL.pdf){width="0.6\linewidth"}
In order to better quantify the effect on the width and depth of the plume, a fitting was attempted with a Gaussian distribution. The following curve
$$\label{gaussianCurveLat}
\overline{C} = Ae^{-\frac{(y_{plume}-\mu)^2}{2\sigma_h^2}}$$
in which $A$, $\mu$ and $\sigma_h$ are free fitting parameters, was fitted by means of a non-linear least squares method to profiles extrapolated from the contour plots, perpendicular to the axis of the plume indicated in Fig. \[fig:ConcSBLcont\]. In this regard, two axes were defined, $x_{plume}$ which coincides with the plume axis, and $y_{plume}$, perpendicular to the former, as shown in Fig. \[fig:PlumeAxisScheme\]. The Gaussian fit was remarkably satisfactory for all measurement profiles, at all distances from the source. In Fig. \[fig:SigmahSBL\] the values obtained for $\sigma_h$ (representative of the plume width along $y_{plume}$) are displayed for the neutral reference and the $Ri_{\delta}^{app} = 0.21$ case for five $x_{plume}$ locations (the origin of the plume reference system was chosen so that $x_{plume}$ represented the distance of the lateral profiles from the source). The trend of $\sigma_h$ shows that inside the canopy the plume width is only very slightly reduced by the stable stratification, and only far from the source. Above, instead, a difference (but still very small) is discernible throughout the plume.
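A minimal sketch of this fitting step is given below: the profile is synthetic and only illustrates how equation \[gaussianCurveLat\] can be fitted with a standard non-linear least squares routine (the actual wind tunnel profiles are available from the data repository cited at the end of the paper).

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y_plume, A, mu, sigma_h):
    return A * np.exp(-(y_plume - mu)**2 / (2.0 * sigma_h**2))

y_plume = np.linspace(-5.0, 5.0, 41)        # lateral coordinate, in units of H
c_meas = gaussian(y_plume, 1.0, 0.3, 1.2) \
         + 0.02 * np.random.default_rng(1).normal(size=y_plume.size)

popt, pcov = curve_fit(gaussian, y_plume, c_meas, p0=[1.0, 0.0, 1.0])
A_fit, mu_fit, sigma_h_fit = popt
print(sigma_h_fit)    # the plume width parameter plotted in the sigma_h figures
```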
The $\sigma_z$ plot (Fig. \[fig:SigmazSBL\]) – obtained using the Gaussian fit on an equation similar to Eq. \[gaussianCurveLat\], but with $\sigma_h$ replaced by $\sigma_z$ and $y_{plume}$ by $z$ – confirms that the plume depth is very similar in the three considered stability cases, starting to differ only after 10$H$ from the source. It is possible to note that the values of $\sigma_z$ appear to be more sensitive to the stable stratification than $\sigma_h$. This is in agreement with what was observed by [@Briggs1973] in field experiments over urban roughness. On the contrary, [@Kanda2016] found the plume depth only slightly affected, while the width was noticeably reduced by the application of the stable stratification. A complete explanation of this peculiar behaviour was not given, but @Kanda2016 mentioned possible uncertainties due to small variations in depth and width.
The lateral concentration fluctuation profiles at 0.5 and 1.5$H$ (Fig. \[fig:VerticalPlumevarianceSBL\]) have a similar trend to the mean concentration, varying with stratification in the same manner. The behaviour of the vertical profile, though, is different up to $z/H = 2$, where the fluctuations increase to a maximum above the canopy, followed by a reduction further above. Nevertheless, the amplification or reduction of the variance values with the stratification is similar to what is experienced by the mean concentrations.
![Vertical profiles of concentration variance approximately along the plume axis for four levels of stability. The star on the map at bottom-right corresponds to the source, while the other marks show the locations of the five vertical profiles.[]{data-label="fig:VerticalPlumevarianceSBL"}](VerticalPlumevarianceSBL.pdf){width="1\linewidth"}
Unstable stratification
-----------------------
Fig. \[fig:ConcCBLcont\] shows contour plots of pollutant mean concentration for the NBL and a CBL case ($Ri_{\delta}^{app} = -1.5$) both inside ($z/H = 0.5$) and above ($z/H = 1.5$) the canopy. The same source location as for the stable cases has been used ($x/H = -1$, $y/H = -1.5$). Differently from the considered SBL cases, the plume central axis here appears modified by the unstable stratification also inside the canopy, with an angle increment of 20% with respect to the wind direction. The same percentage increase is found for the region above the canopy. The data from the weaker stratification ($Ri_{\delta}^{app} = -0.5$, not shown in the figure) lead to a remarkably similar result for the plume direction above the canopy, while the value within the urban model is close to the neutral reference case.
![Contour plots of non-dimensional mean concentration for NBL and CBL inside and above the canopy for wind direction 45$^\circ$. Black line is plume centreline, yellow line is free-stream wind direction.[]{data-label="fig:ConcCBLcont"}](ConcCBLcont.pdf){width="\linewidth"}
When comparing the mean concentration values, the effect of unstable stratification appears opposite to what was measured for the SBL. In this case, the concentration levels within the canopy are reduced almost everywhere (up to three times), as a consequence of the increased vertical exchange. This is more apparent in Fig. \[fig:LateralPlumeCBL\], where the lateral profiles of the two cases are shown, together with a case of intermediate instability. The results for the latter lie between the NBL and the stronger instability case. Fig. \[fig:SigmahCBL\] displays the computed values of $\sigma_h$, representative of the plume width. The trend here shows a clearer increase inside the canopy (after 9$H$), compared to the NBL. Above the canopy a difference is discernible throughout the plume, as it was for the SBL. The results for the intermediate instability case lie again between the NBL and the strongest instability.
![Lateral profiles of mean concentration inside and above the canopy for three levels of instability. The star on the maps at bottom corresponds to the source, while the other marks show the locations of the measurement points along the lateral profiles.[]{data-label="fig:LateralPlumeCBL"}](LateralPlumeCBL.pdf){width="0.8\linewidth"}
The plume depth starts differing from $x/H = 1$, as can be seen in the vertical profiles of mean concentration in Fig. \[fig:VerticalPlumeCBL\]. The plots clearly show lower concentrations within the canopy, compared with the neutral case (as already mentioned in the analysis of the lateral profiles), and higher concentrations further up. The plume, then, appears deeper, indicating that the pollutant tracer is able to penetrate deeper into the BL above the canopy, reaching a depth of more than 7$H$ at the farthest measured location, albeit with very low concentration values. Such a trend is expected, since the enhanced vertical exchange due to the buoyancy forces contributes to cleaning the air inside the canopy, facilitating the exchange with the region above. The $\sigma_z$ plot in Fig. \[fig:SigmazCBL\] confirms this behaviour, with the parameter showing a clear and progressive increase under unstable stratification, more evident than the variation in the plume width. Again this result is in accordance with [@Briggs1973] and in contrast with [@Kanda2016].
![Vertical profiles of mean concentration approximately along the plume axis for three levels of instability. The star on the map at bottom-right corresponds to the source, while the other marks show the locations of the five vertical profiles.[]{data-label="fig:VerticalPlumeCBL"}](VerticalPlumeCBL.pdf){width="1\linewidth"}
![$\sigma_h$ for CBL and NBL varying the distance from the source at $z/H$ of 0.5 (a) and 1.5 (b).[]{data-label="fig:SigmahCBL"}](SigmahCBL.pdf){width="1\linewidth"}
![$\sigma_z$ for CBL and NBL varying the distance from the source.[]{data-label="fig:SigmazCBL"}](SigmazCBL.pdf){width="0.6\linewidth"}
The concentration variance (Fig. \[fig:VerticalPlumevarianceCBL\]) seems to behave as described for the stable cases, varying according to the mean concentration levels.
![Vertical profiles of concentration variance approximately along the plume axis for three levels of instability. The star on the map at bottom-right corresponds to the source, while the other marks show the locations of the five vertical profiles.[]{data-label="fig:VerticalPlumevarianceCBL"}](VerticalPlumevarianceCBL.pdf){width="1\linewidth"}
Vertical pollutant fluxes {#sec:flux}
=========================
Fig. \[fig:wcSBL\] shows the vertical turbulent and total pollutant fluxes with varying stable and unstable stratification levels at a location at the centre of an intersection. For the SBL cases, inside the canopy the turbulent fluxes are close to zero (and slightly negative), while the total ones experience a peak at about 0.5$H$ (the lowest measured position), meaning that the mean pollutant fluxes are predominant there. In general, the total vertical fluxes follow the trend of the mean concentration profile, even when different levels of stratification are involved. Despite this, the turbulent fluxes experience a steep peak at roof level (or slightly above), reaching values similar to the mean fluxes. This is an important aspect because the roof level is critical in the exchange between the canopy and the upper region. Moreover, the total pollutant flux at roof level is not seen to be affected by the stratification, at least at the centre of the intersection. The fact that the total fluxes inside the canopy are larger in the stably-stratified cases despite the reduced vertical turbulence [see @Marucci2019flow] is indicative of the predominance of the mean fluxes over the turbulent ones. Above the canopy, however, both the total and turbulent fluxes appear to be reduced by stratification. In the CBL case the vertical velocity fluctuations are enhanced everywhere [@Marucci2019flow]. On the other hand, the concentration levels are reduced inside and above the canopy up to a point (which in the case of Fig. \[fig:VerticalPlumeCBL\]b is at about $2H$) after which the concentration starts being larger than in the NBL, hence making the plume deeper. In this situation, the vertical turbulent pollutant flux appears generally increased inside the canopy and above 1.5$H$. In the region immediately above the roof level, instead, a steep gradient seems to favour the neutral case. That said, inside the canopy the turbulent flux remains negligible compared to the mean values except, again, at roof level and above, where they have the same order of magnitude.
![Vertical profiles of turbulent and total vertical pollutant flux varying the stable (a, b) and unstable stratification (c, d) at the centre of an intersection ($x/H = 1$, $y/H = -6$).[]{data-label="fig:wcSBL"}](wc.pdf){width="1\linewidth"}
An interesting point to analyse is the proportionality between the vertical turbulent pollutant flux and the concentration gradient
$$K_z \frac{\partial \overline{C}}{\partial z} = -\overline{w'c'}$$
where $K_z$ is a constant of proportionality (called “eddy diffusivity”). Such behaviour was demonstrated by [@Dezso-Weidinger2003], confirmed by [@Carpentieri2012] for neutral stratification, and it is normally used in models to compute vertical turbulent pollutant fluxes (as, e.g., in SIRANE, see [@Soulhac2011]). Nevertheless, its validity in the SBL and CBL cases remained an open question. In Fig. \[fig:wcGrad\] profiles of vertical turbulent pollutant fluxes are plotted and compared with the concentration gradient profiles obtained from a Gaussian fit of the mean concentration. The proportionality in this case is evident, though the constant of proportionality seems to vary. In particular, it tends to increase with unstable stratification and decrease with stable stratification, ranging from 0.009 to 0.06. A variability depending on the location and mechanical turbulence was found by [@Carpentieri2012] and it is confirmed here (the constant reaching a value of 0.14 in the case of stronger stratification, see Tab. \[table:Kz\]). Of course, the analysis in this case is based on very specific locations at the centre of the intersection or the street canyons. The numerical simulation results by @Fuka2018 on the neutral case show that the eddy diffusivity can even be negative at certain locations.
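A hedged sketch of how such an eddy diffusivity can be extracted from measured vertical profiles is given below; the arrays are placeholders standing in for the wind tunnel data, and the pointwise estimate is of course ill-conditioned wherever the mean concentration gradient approaches zero.

```python
import numpy as np

z_over_H = np.array([0.5, 1.0, 1.5, 2.0, 3.0])        # measurement heights (placeholder)
C_mean   = np.array([2.0, 1.6, 0.9, 0.4, 0.1])        # dimensionless mean concentration
wc_turb  = np.array([0.00, 0.02, 0.04, 0.03, 0.01])   # vertical turbulent flux <w'c'>

dCdz = np.gradient(C_mean, z_over_H)
K_z = -wc_turb / dCdz       # pointwise estimate of the eddy diffusivity
print(K_z)
```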
![Vertical profiles of vertical turbulent pollutant fluxes ($x/H = 1$, $y/H = -6$) with varying stratification. The blue line is the gradient of dimensionless concentration over z/H obtained by a Gaussian fit of the mean concentration vertical profile.[]{data-label="fig:wcGrad"}](wcGrad.pdf){width="0.8\linewidth"}
\[table:Kz\]
In Fig. \[fig:KzVSL\] the values of the mean $K_z$ from Tab. \[table:Kz\] are plotted against $Ri_\delta$ and $\delta/L$. A parametrisation is attempted by means of a second-order polynomial fit (also shown in the figure)
$$K_z\left(\delta/L\right) = 0.0202\left(\delta/L\right)^2 - 0.0425\left(\delta/L\right) + 0.0306$$
$$K_z\left(Ri_\delta\right) = -0.0064 Ri_\delta^2 - 0.0839 Ri_\delta + 0.0294$$
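For reference, evaluating the $Ri_\delta$ parametrisation at the neutral condition and at the most stable and most unstable tested conditions gives mean $K_z$ values of the same order as those discussed above; the snippet below simply evaluates the fitted polynomial and carries no new information.

```python
def Kz_of_Ri(Ri):
    return -0.0064 * Ri**2 - 0.0839 * Ri + 0.0294

for Ri in (0.0, 0.29, -1.5):          # neutral, most stable and most unstable cases
    print(Ri, Kz_of_Ri(Ri))           # ~0.029, ~0.005 and ~0.14 respectively
```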
![Mean value of $K_z$ at three locations plotted against $Ri_\delta$ or $\delta/L$. Dotted lines are obtained by fitting the experimental data with a polynomial curve.[]{data-label="fig:KzVSL"}](KzVSL.pdf){width="\linewidth"}
Conclusion {#sec:Conclusion}
==========
Wind tunnel experiments were conducted to study the impact of atmospheric stratification on flow and dispersion within and over a regular array of rectangular buildings at a 45$^\circ$ wind angle. Three stable and two convective incoming boundary layers were tested with a Richardson number ranging from $-$1.5 to 0.29. Dispersion measurements were carried out using propane released from a point source within the urban model as tracer gas, sampled using a fast FID probe. Simultaneous velocity and temperature measurements were also taken [@Marucci2019flow]. The dispersion plume was sampled in and above the canopy by means of lateral and vertical profiles.
The results of the pollutant dispersion measurements show that the stratification (either stable or unstable) effect on the plume width is significantly lower than the effect on the vertical profiles (as also indicated by [@Briggs1973], but in contrast with the results by [@Kanda2016]). Stable stratification did not affect the plume central axis inside the canopy, but in the unstable case the axis appeared to deviate from the neutral case direction. Above the canopy both stratification types caused an increase in the plume deflection angle compared to the neutral case. Measured concentrations in stable stratification were up to two times larger in the canopy compared to the neutral case, while the opposite holds for convective stratification (up to three times lower). Vertical turbulent pollutant fluxes have been found to be only slightly affected by stratification, without significant changes in the general trend. Mean pollutant fluxes in the canopy remain predominant close to the source, even though at roof level and above turbulent and mean fluxes have the same order of magnitude. The proportionality between the vertical turbulent fluxes and the vertical mean concentration gradient (the basis of K-theory) is confirmed also in the stratified cases.
The experimental data produced during this work, to the authors’ knowledge, are the most comprehensive available so far for urban flow and dispersion studies in presence of atmospheric stratification and they may help developing new mathematical models and parametrisation, as well as validating existing and future numerical simulations.
The tested boundary layer stratification levels ranged from weakly stable to weakly unstable. Despite the fact that more extreme conditions may create more dramatic effects on the aerodynamic and dispersion properties, it should be noted that in urban areas extreme stratifications are normally quite uncommon (excluding locations at higher latitudes where very stable conditions may occur even in rural or urban areas). @Wood2010 showed, for example, that in London during a long experimental campaign, the most frequent cases are the ones characterised by lower stratification levels, with the range $-1<z'/L<1$ occurring for about 75% of the time, both during night and day (where the reference height $z'$ represents the difference between the measurement height, 190.6 m, and the displacement height over the city). Unfortunately, the boundary layer depth for each of these cases was not indicated by @Wood2010, so a comparison with the wind tunnel data is difficult, but considering a typical scaling ratio of 1/200 the resulting Monin-Obukhov length values at full scale for the experimental data in the present work are of the order of $\pm200$ m (hence approximately in the range $-1<z'/L<1$ compared to the London data, and so covering 75% of the actual cases).
Future experiments might include different wind directions and different urban geometries. Given the significant impact of stratification on the vertical spread of the pollutant plume, it would be particularly interesting to apply the methodology developed in this paper to urban geometries that include very tall buildings [@Fuka2018; @Hertwig2019; @Aristodemou2018].
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors are grateful for the financial support by the EPSRC (grant EP/P000029/1) and by the Department of Mechanical Engineering Sciences (University of Surrey).
Data availability {#data-availability .unnumbered}
=================
Wind tunnel data are available at <https://doi.org/10.6084/m9.figshare.8320007>.
---
abstract: 'With the help of the general theory of the Heun equation, this paper completes previous work by the authors and other groups on the explicit representation of the massive gravitino propagator in four-dimensional de Sitter space. As a result of our original contribution, all weight functions which multiply the geometric invariants in the gravitino propagator are expressed through Heun functions, and the resulting plots are displayed and discussed after resorting to a suitable truncation in the series expansion of the Heun function. It turns out that there exist two ranges of values of the independent variable in which the weight functions can be divided into a dominating and a sub-dominating family.'
author:
- 'Giampiero Esposito$^{1}$ [^1] Raju Roychowdhury$^{2,1}$ [^2]'
---
[<span style="font-variant:small-caps;">On the complete analytic structure of the massive gravitino propagator in four-dimensional de Sitter space</span>]{}
Introduction
============
The investigation of Green functions has always been at the heart of important developments in quantum field theory and quantum gravity [@DeWi65]. On the other hand, in recent years, developments in cosmology and string theory have led to renewed interest in supergravity theories in anti-de Sitter [@Witten98] and de Sitter space [@Witten01].
Thus, in our recent paper [@raju2], we performed a two-component spinor analysis of geometric invariants leading to the gravitino propagator in four-dimensional de Sitter spacetime, following the two-spinor language [@penrose] pioneered by Penrose. In that paper we also wrote down all the 10 different weight functions multiplying the invariants which occur in the massive gravitino propagator, relying upon the work by Anguelova and Langfelder [@anguelova]. It was also found there that, algebraically, one can write down 8 weight functions, denoted by $\alpha, \beta, \gamma, \delta, \varepsilon, \theta, \tau, \omega$, in terms of a pair denoted by $(\pi, \kappa)$, in the case of de Sitter space. Going one step further, we also expressed $\kappa$ in terms of $\pi$ and $\pi'$, where $\pi$ was defined in this fashion: $\pi (z) =
\sqrt{z} \, \tilde{\pi} (z)$ and $\tilde{\pi} (z)$ satisfies the Heun differential equation [@handbook; @heundiff], which has in general four singular points, i.e. $z_{0}=0,1,a,\infty$, and whose solutions are denoted by ${\rm Heun}(a,q;b,c,d,e;z)$ with properly defined arguments.
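Since standard numerical libraries do not provide Heun functions, a practical way to evaluate a local Heun solution, which is useful when plotting the weight functions discussed below, is to integrate the Heun equation from a point just off $z=0$. The Python sketch that follows uses the standard local normalisation ${\rm Hl}(0)=1$, ${\rm Hl}'(0)=q/(a\gamma)$ and purely illustrative parameter values; the correspondence with the notation ${\rm Heun}(a,q;b,c,d,e;z)$ used in this paper has to be read off from the definitions given in Sec. IV.

```python
import numpy as np
from scipy.integrate import solve_ivp

def heun_local(a, q, alpha, beta, gamma, delta, z_grid):
    eps = alpha + beta + 1.0 - gamma - delta        # Fuchsian condition fixes epsilon

    def rhs(z, y):
        w, wp = y
        p = gamma / z + delta / (z - 1.0) + eps / (z - a)
        r = (alpha * beta * z - q) / (z * (z - 1.0) * (z - a))
        return [wp, -p * wp - r * w]

    z0 = 1.0e-6                                     # start just off the singular point z = 0
    c1 = q / (a * gamma)
    w0 = [1.0 + c1 * z0, c1]                        # first-order Taylor data at z0
    sol = solve_ivp(rhs, (z0, z_grid[-1]), w0, t_eval=z_grid, rtol=1e-10, atol=1e-12)
    return sol.y[0]

z = np.linspace(1.0e-6, 0.9, 200)                   # stay below the singular point z = 1
print(heun_local(a=2.0, q=0.5, alpha=1.0, beta=2.0, gamma=1.5, delta=0.5, z_grid=z)[:3])
```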
In this paper we have explicitly written down, first, the algebraic expressions of all the 9 weight functions $\kappa,\alpha, \beta,
\gamma, \delta, \varepsilon, \theta, \tau, \omega$ in terms of $\pi(z)$ and $\pi'(z)$, where $z$ is defined as $z = \cos^2\frac{\mu}{2R}$, with $\mu(x,x')$ being the geodesic distance between $x$ and $x'$ as defined in [@raju2]. Finally, we will draw a few two-dimensional plots of these weight functions and classify their parameter space with respect to $z$ in the region of our choice.
The plan of this paper is as follows. In Sec. II we set up all symbols, basically recalling all relevant definitions of use in this paper from our previous one [@raju2]. Sec. III contains the explicit massive spin-3/2 propagator in four dimensions with all the ten invariant structures properly defined, along with the multiplicative weight functions written in terms of the $(\pi, \kappa)$ pair. In Sec. IV we give a crash course on Heun’s differential equations and write down several properties of the Heun function before showing that $\tilde{\pi}(z)$ satisfies a Heun equation with properly defined arguments that we will list there. Sec. V and the appendix are devoted to building a dictionary of all the 9 weight functions $\kappa,\alpha, \beta, \gamma, \delta, \varepsilon,
\theta, \tau, \omega$ written in terms of $\pi(z)$ and $\pi'(z)$ only, where the prime denotes the derivative with respect to $z$ rather than with respect to $\mu$, the geodesic distance function. Then in the last Sec. VI we display several two-dimensional plots showing the functional behavior of each of the 10 weight functions with respect to $z$. These show that there exist two ranges of values of $z$ in which the weight functions can be divided into dominating and sub-dominating families. Moreover, it appears helpful to have the result of a lengthy calculation completely worked out. Finally, we give further details on the plots in the section devoted to concluding remarks.
In light of recent mathematical developments in [@2009], it might be possible to expand the Heun functions in the gravitino propagator as a combination of finitely or infinitely many hypergeometric functions, which in turn occur in the more familiar formulae for bosonic propagators in de Sitter space [@1987]. Thus, our work might help to relate fermionic and bosonic propagators in four-dimensional de Sitter space through special-function techniques, double-checking the expectations from supersymmetry.
A review of a few useful definitions
====================================
It has been more than two decades since Allen and co-authors used intrinsic geometric objects to calculate correlation functions in maximally symmetric spaces; their results, here exploited, were presented in two papers [@allen1; @allen2]. In this section we would like to review, first, the elementary maximally symmetric bi-tensors which have been discussed previously by Allen and Jacobson [@allen1]. More recently, the calculation of the spinor parallel propagator has been carried out in arbitrary dimension [@mueck].
A maximally symmetric space is a topological manifold of dimension $n$, with a metric which has the maximum number of global Killing vector fields. This type of space looks exactly the same in every direction and at every point. The simplest examples are flat space and sphere, each of which has $\case{1}{2}n(n+1)$ independent Killing fields. For $S^n$ these generate all rotations, and for $\mathbb{R}^n$ they include both rotations and translations.
We consider a maximally symmetric space of dimension $n$ with constant scalar curvature $n(n-1)/R^2$. For the space $S^n$, the radius $R$ is real and positive, whereas for the hyperbolic space $H^n$, $R=il$ with $l$ positive, and in the flat case, $\mathbb{R}^n$, $R=\infty$. If we further consider two points $x$ and $x'$, which can be connected uniquely by a geodesic, with $\mu(x,x')$ being the geodesic distance between $x$ and $x'$, then $n^a(x,x')$ and $n^{a'}(x,x')$ are the tangents to the geodesic at $x$ and $x'$, and are given in terms of the geodesic distance as follows: $$\label{ndef}
n_{a}(x,x') = \nabla_{a}\mu(x,x') \quad \text{and} \quad
n_{a'}(x,x') = \nabla_{a'} \mu(x,x').$$ Furthermore, on denoting by $g^{a}_{\;b'}(x,x')$ the vector parallel propagator along the geodesic, one can then write $n^{b'} = -g^{b'}_{\;a} n^a$. Tensors that depend on two points, $x$ and $x'$, are bitensors [@Synge]. They may carry unprimed or primed indices that live on the tangent space at $x$ or $x'$.
These geometric objects $n^a$, $n^{a'}$ and $g^a_{\;b'}$ satisfy the following properties [@allen1]:
$$\begin{aligned}
\label{dn}
\nabla_a n_b &= A(g_{ab} -n_a n_b), \\
\label{dnprime}
\nabla_a n_{b'} &= C(g_{ab'} +n_a n_{b'}), \\
\label{dg}
\nabla_a g_{bc'} &= -(A+C) (g_{ab} n_{c'} +
g_{ac'} n_b),\end{aligned}$$
where $A$ and $C$ are functions of the geodesic distance $\mu$ and are given by [@allen1] $$\label{AC}
A = \frac1R \cot \frac{\mu}R \quad \text{and} \quad
C = -\frac1{R\sin(\mu/R)},$$ for de Sitter spacetime and thus they satisfy the relations $$\label{ACrel}
dA/d\mu =-C^2, \quad dC/d\mu =-AC \quad \text{and} \quad C^2-A^2
=1/R^2.$$ Lastly, with our conventions the covariant gamma matrices satisfy $$\{\Gamma^\mu,\Gamma^\nu\} =2 I g^{\mu\nu}.
\label{(3.5)}$$ In our previous work [@raju2], we followed the conventions for two-component spinors, as well as all signature and curvature conventions, of Allen and Lutken [@allen2], and hence we used dotted and undotted spinors instead of the primed and unprimed ones of Penrose and Rindler [@penrose]. In our work a primed index indicates instead that it lives in the tangent space at $x'$, while the unprimed ones live at $x$. The fundamental object to deal with is the bispinor $D_A^{\;A'}(x,x')$ which parallel transports a two-component spinor $\phi^A$ at the point $x$, along the geodesic to the point $x'$, yielding a new spinor $\chi^{A'}$ at $x'$, i.e. $$\label{def}
\chi^{A'}=\phi^{A} \; D_{A}^{\;A'}(x,x').$$ Complex conjugate spinors are similarly transported by the complex conjugate of $D_A^{\;A'}(x,x')$, which is $\overline{D}_{\dot{A}}^{\;{\dot{A}'}}(x,x')$. A few elementary properties of $D_A^{\;A'}$ were listed in Sec. IV of [@raju2]. It is worth mentioning that the covariant derivatives of the spinor parallel propagator were defined to be $$\label{defcovder}
\nabla_{A\dot{A}}D_B^{\;B'}= (A+C)\left[
\frac{1}{2}n_{A\dot{A}}D_B^{\;B'}-n_{B\dot{A}}D_A^{\;B'}\right],$$ where $A$ and $C$ are defined in (2.3).
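Since the functions $A$ and $C$ and the relations quoted above for them are used repeatedly in what follows, a short symbolic check of those relations may be useful; it is only a consistency test and plays no role in the derivation.

```python
import sympy as sp

mu, R = sp.symbols('mu R', positive=True)
A = sp.cot(mu / R) / R                  # de Sitter expressions for A and C
C = -1 / (R * sp.sin(mu / R))

print(sp.simplify(sp.diff(A, mu) + C**2))          # dA/dmu = -C^2  ->  0
print(sp.simplify(sp.diff(C, mu) + A * C))         # dC/dmu = -A C  ->  0
print(sp.simplify(C**2 - A**2 - 1 / R**2))         # C^2 - A^2 = 1/R^2  ->  0
```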
The two basic massive two-point functions for a spin-1/2 particle were defined by $$\label{defP}
P^{A{\dot{B}}'} \equiv \langle\phi^{A}(x)
\overline{\phi}^{{\dot{B}}'}(x')\rangle = f(\mu)D^A_{\;A'}n^{A'{\dot{B}}'},$$ $$\label{defQ}
Q_{\dot{A}}^{{\;\dot{B}}'} \equiv \langle
\overline\chi_{\dot{A}}(x)\overline{\phi}^{{\dot{B}}'}(x')
\rangle = g(\mu)\overline{D}_{\dot{A}}^{\;{\dot{B}'}}.$$ In de Sitter space they turned out to be [@raju2]:
P^{A{\dot{B}}'}_{(F)} = \lim_{\epsilon \to 0^{+}}f_{DS}
(Z+i\epsilon)D^A_{\;A'}n^{A'{\dot{B}}'},$$ $$\label{2ptfn2}
Q^{\dot{A}{\dot{B}}'}_{(F)} = \lim_{\epsilon \to 0^{+}}
g_{DS}(Z+i\epsilon)\overline{D}^{\dot{A}{\dot{B}}'},$$ where $(F)$ stands for the Feynman Green functions with $f_{DS}$ and $g_{DS}$ defined in this fashion: $$\label{soln1}
f_{DS} = N_{DS}(1-Z)^{1/2}F(a,b;c;Z),$$ $$\label{soln2}
g_{DS}= -iN_{DS}2^{-3/2}m|R|Z^{1/2}F(a,b;c+1;Z).$$ Moreover, after doing some algebra one can rewrite the final answer for the constant $N_{DS}$ as $$\label{NDSfinalform}
N_{DS} = \frac{-i|Rm|(1-m^{2}R^{2})}{8\sqrt{2}\pi|R|^{3}\sinh\pi|Rm|}.$$ We also note that $F(a,b;c;Z)$ and $F(a,b;c+1;Z)$ are two independent solutions of the Hypergeometric equation [@abra; @erdelyi] : $$\label{hypergeometric}
H(a,b,c;Z)w(Z) = 0,$$ where $H(a,b,c)$ is the hypergeometric operator $$\label{hypergeoop}
H(a,b,c;Z) = Z(1-Z)\frac{d^2}{dZ^2}+[c-(a+b+1)Z]\frac{d}{dZ}-ab.$$
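As a quick numerical illustration (our own sketch, not part of the original derivation), one can verify that SciPy's Gauss hypergeometric function is annihilated by the operator $H(a,b,c;Z)$ of (\[hypergeoop\]) up to finite-difference error; the parameter values below are arbitrary test values rather than those of the de Sitter problem:

```python
from scipy.special import hyp2f1

def apply_H(a, b, c, w, Z, h=1e-5):
    """Apply the hypergeometric operator H(a,b,c;Z) to a function w using central differences."""
    w0, wp, wm = w(Z), w(Z + h), w(Z - h)
    d1 = (wp - wm) / (2 * h)
    d2 = (wp - 2 * w0 + wm) / h**2
    return Z * (1 - Z) * d2 + (c - (a + b + 1) * Z) * d1 - a * b * w0

a, b, c = 0.3, 1.2, 1.7      # arbitrary (hypothetical) test parameters
Z = 0.4
residual = apply_H(a, b, c, lambda z: hyp2f1(a, b, c, z), Z)
print(residual)              # close to zero, limited only by the finite-difference error
```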
Massive spin-3/2 propagator
===========================
In this section we consider the propagator of the massive spin-3/2 field. Let us denote the gravitino field by $\Psi^{\alpha}_{\lambda} (x)$. In a maximally symmetric state $|\, s \rangle$ the propagator is $$\label{correlator}
S^{\alpha \beta^{\prime}}_{\lambda \nu^{\prime}}
(x, x^{\prime}) = \langle s\,| \Psi^{\alpha}_{\lambda} (x)
\Psi^{\beta^{\prime}}_{\nu^{\prime}} (x^{\prime}) |\,s \rangle .$$ The field equations imply that $S$ satisfies $$\label{EoM}
(\Gamma^{\mu \rho \lambda} D_{\rho} - m \,
\Gamma^{\mu \lambda})^{\alpha}{}_{\gamma}
S_{\lambda \nu^{\prime}}{}^{\gamma}{}_{\beta^{\prime}} =
\frac{\delta (x-x^{\prime})}{\sqrt{-g}}
g^{\mu}{}_{\nu^{\prime}} \, \delta^{\alpha}{}_{\beta^{\prime}}.$$
The ten gravitino invariants
----------------------------
It is very convenient to decompose the gravitino propagator in terms of independent structures constructed out of $n_\mu, n_{\nu'}, g_{\mu\nu'}$ and $\Lambda^\alpha_{~\beta'}$ [@raju2]. Thus, the propagator can be written in a geometric way following Anguelova et al. [@anguelova] (see also [@Basu]): $$\begin{aligned}
\label{ansatz}
S_{\lambda \nu^{\prime}}{}^{\alpha}{}_{\beta^{\prime}}
&=& \alpha (\mu) \, g_{\lambda \nu^{\prime}}
\Lambda^{\alpha}{}_{\beta^{\prime}} +
\beta (\mu) \, n_{\lambda} n_{\nu^{\prime}}
\Lambda^{\alpha}{}_{\beta^{\prime}} +
\gamma (\mu) \, g_{\lambda \nu^{\prime}} (n_{\sigma}
\Gamma^{\sigma} \Lambda)^{\alpha}{}_{\beta^{\prime}} \nonumber \\
&& + \delta (\mu) \, n_{\lambda} n_{\nu^{\prime}} (n_{\sigma}
\Gamma^{\sigma} \Lambda)^{\alpha}{}_{\beta^{\prime}} +
\varepsilon (\mu) \, n_{\lambda} (\Gamma_{\nu^{\prime}}
\Lambda)^{\alpha}{}_{\beta^{\prime}} +
\theta (\mu) \, n_{\nu^{\prime}} (\Gamma_{\lambda}
\Lambda)^{\alpha}{}_{\beta^{\prime}}
\nonumber \\
&& + \tau (\mu) \, n_{\lambda} (n_{\sigma} \Gamma^{\sigma}
\Gamma_{\nu^{\prime}} \Lambda)^{\alpha}{}_{\beta^{\prime}}
+ \omega(\mu)\, n_{\nu^{\prime}} (n_{\sigma} \Gamma^{\sigma}
\Gamma_{\lambda} \Lambda)^{\alpha}{}_{\beta^{\prime}}
\nonumber \\
&& + \pi (\mu) \, (\Gamma_{\lambda}
\Gamma_{\nu^{\prime}} \Lambda)^{\alpha}{}_{\beta^{\prime}}
+ \kappa (\mu)\, (n_{\sigma} \Gamma^{\sigma}
\Gamma_{\lambda} \Gamma_{\nu^{\prime}} \Lambda)^{\alpha}{}_{\beta^{\prime}}. \end{aligned}$$
The weight functions multiplying the invariants
-----------------------------------------------
A rather tedious but straightforward calculation gives a system of $10$ equations for the $10$ coefficient functions $\alpha, ..., \kappa$ in (\[ansatz\]) (see equations (3.6)-(3.15) in [@anguelova]). It was also found there that one can easily express the algebraic solutions for $\alpha, \beta, \gamma, \delta, \varepsilon, \theta, \tau, \omega$ in terms of the $(\pi, \kappa)$ pair in the case of de Sitter space, i.e. (hereafter we set $n=4$ in the general formulae of [@anguelova], since only in the four-dimensional case can the two-component-spinor formalism be applied) $$\begin{aligned}
\label{alg}
\omega &=& \frac{2mC \kappa + ((A+C)^{2}-m^2) \pi}
{(m^{2}+R^{-2})}, \nonumber\\
\theta &=& \frac{((A-C)^{2}-m^{2}) \kappa - 2mC \pi}
{(m^{2}+R^{-2})}, \nonumber\\
\tau &=& \frac{2mC \kappa + ((A+C)^{2}-m^2) \pi}
{(m^{2}+R^{-2})}, \nonumber\\
\varepsilon &=& \frac{-([(A-C)^2 + 2/R^2] +m^2)
\kappa + 2mC \pi}{(m^{2}+R^{-2})}, \nonumber\\
\alpha &=& - \tau - 4\pi , \nonumber\\
\beta &=& 2 \omega , \nonumber\\
\gamma &=& \varepsilon - 2 \kappa , \nonumber\\
\delta &=& 2\varepsilon + 4 (\kappa -\theta) , \end{aligned}$$ where we have used the relation $C^2 - A^2 = 1/R^2$.
Furthermore, from (\[alg\]) we can immediately see that $$\label{Sym}
\tau = \omega \qquad {\rm and} \qquad \varepsilon + \theta
= - 2 \kappa .$$
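As a small consistency check (a sketch of ours, with purely illustrative numerical values for $A$, $C$, $m$, $R$, $\pi$ and $\kappa$), the algebraic map (\[alg\]) can be coded directly and the relations (\[Sym\]) verified:

```python
import numpy as np

def weights_from_pi_kappa(pi, kappa, A, C, m, R):
    """Algebraic solutions (alg) for the eight weight functions in terms of the (pi, kappa) pair."""
    den = m**2 + 1.0 / R**2
    omega = (2*m*C*kappa + ((A + C)**2 - m**2) * pi) / den
    theta = (((A - C)**2 - m**2) * kappa - 2*m*C*pi) / den
    tau   = (2*m*C*kappa + ((A + C)**2 - m**2) * pi) / den
    eps   = (-(((A - C)**2 + 2.0/R**2) + m**2) * kappa + 2*m*C*pi) / den
    alpha = -tau - 4*pi
    beta  = 2*omega
    gamma = eps - 2*kappa
    delta = 2*eps + 4*(kappa - theta)
    return alpha, beta, gamma, delta, eps, theta, tau, omega

# illustrative (unphysical) test values
pi, kappa, A, C, m, R = 0.7, -0.3, 1.1, -0.4, 2.0, 1.5
alpha, beta, gamma, delta, eps, theta, tau, omega = weights_from_pi_kappa(pi, kappa, A, C, m, R)
assert np.isclose(tau, omega)              # first relation in (Sym)
assert np.isclose(eps + theta, -2*kappa)   # second relation in (Sym)
```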
Heun’s differential equation: a primer
======================================
The canonical form of the general Heun differential equation is given by ([@heundiff], [@kamke]) $$\label{ae}
{{{d^2}y}\over{dz^2}}+\left({{\gamma}\over{z}}+{{\delta}\over{z-1}}
+{{\epsilon}\over{z-a}}\right)
{{dy}\over{dz}}+{{{\alpha}{\beta}z-q}
\over{z(z-1)(z-a)}}y=0$$ The four regular singular points of the equation are located at $z=0,1,a,\infty$. Here $a\in\mathbb{C}$, the location of the fourth singular point, is a parameter ($a\neq0,1$), and $\alpha,\beta,\gamma,\delta,\epsilon\in\mathbb{C}$ are exponent-related parameters.
The solution space of the Heun differential equation is specified uniquely by the following Riemann $P$-symbol:
$$\label{eq:Psymbol}
P\left\{
\begin{array}{ccccc}
0 & 1 & a & \infty & \\
0 & 0 & 0 & \alpha & ;z \\
1-\gamma & 1-\delta & 1-\epsilon & \beta &
\end{array}
\right\}.$$
This does not uniquely specify the equation and its solutions, since it omits the accessory parameter $q\in\mathbb{C}$. The exponents are constrained by $$\label{eq:Pconstraint}
\alpha+\beta-\gamma-\delta-\epsilon+1 = 0.$$ This is a special case of Fuchs’s relation, according to which the sum of the $2n$ characteristic exponents of any second-order Fuchsian equation on $\mathbb{CP}^1$ with $n$ singular points must equal $n-2$ [@Poole36].
There are $2\times4=8$ local solutions of (4.1) in all: two per singular point. If $\gamma$ is not a nonpositive integer, the solution at $z=0$ belonging to the exponent zero will be analytic. When normalized to unity at $z=0$, it is called the local Heun function, and is denoted $Hl(a,q;\alpha,\beta,\gamma,\delta;z)$ [@heundiff]. It is the sum of a Heun series, which converges in a neighborhood of $z=0$ [@heundiff; @Snow52]. In general, $Hl(a,q;\alpha,\beta,\gamma,\delta;z)$ is not defined when $\gamma$ is a nonpositive integer.
If $\epsilon=0$ and $q=\alpha\beta a$, the Heun equation loses a singular point and becomes a hypergeometric equation. Similar losses occur if $\delta=0$, $q=\alpha\beta$, or $\gamma=0$, $q=0$. This paper will exclude the case when the Heun equation has fewer than four singular points. The case in which the solution of (4.1) can be reduced to quadratures will also be ruled out. If $\alpha\beta=0$ and $q=0$, the Heun equation (4.1) is said to be trivial. Triviality implies that one of the exponents at $z=\infty$ is zero (i.e., $\alpha\beta=0$), and is implied by the absence of the singular point at $z=\infty$ (i.e., $\alpha\beta=0$, $\alpha+\beta=1$, $q=0$).
Reducing Heun to hypergeometric
-------------------------------
The transformation to Heun ($\mathfrak{H}$) or hypergeometric ($\mathfrak{h}$) of a linear second-order Fuchsian differential equation with singular points at $z=0,1,a,\infty$ (resp. $z=0,1,\infty$), and with arbitrary exponents, is accomplished by certain linear changes of the dependent variable, called F-homotopies (see [@erdelyi] and [@heundiff], §A2 and Addendum, §1.8). If an equation with singular points at $z=0,1,a,\infty$ has dependent variable $u$, carrying out the substitution $\tilde u(z)=z^{-\rho}(z-1)^{-\sigma}(z-a)^{-\tau} u(z)$ will convert the equation to a new one, with the exponents at $z=0,1,a$ reduced by $\rho,\sigma,\tau$ respectively, and those at $z=\infty$ increased by $\rho+\sigma+\tau$. By this technique, one exponent at each finite singular point can be shifted to zero.
In fact, the Heun equation has a group of F-homotopic automorphisms isomorphic to $({\mathbb Z}_2)^3$, since at each of $z=0,1,a$, the exponents $0,\zeta$ can be shifted to $-\zeta,0$, i.e., to $0,-\zeta$. Similarly, the hypergeometric equation has a group of F-homotopic automorphisms isomorphic to $({\mathbb Z}_2)^2$. These groups act on the $6$ and $3$-dimensional parameter spaces, respectively. For example, one of the latter actions is $(a,b;c)\mapsto(c-a,c-b;c)$, which is induced by an F-homotopy at $z=1$. From this F-homotopy follows Euler’s transformation [@Andrews99 §2.2] $$\label{eq:flip}
{}_2F_1(a,\,b;\,c;\,z)= (1-z)^{c-a-b}{}_2F_1(c-a,\,c-b;\,c;\,z),$$ which holds because ${}_2F_1$ is a local solution at $z=0$, rather than at $z=1$. If the singular points of the differential equation are arbitrarily placed, transforming it to the Heun or hypergeometric equation will require a Möbius (i.e., projective linear or homographic) transformation, which repositions the singular points to the standard locations. A unique Möbius transformation maps any three distinct points in $\mathbb{CP}^1$ to any other three; but the same is not true of four points, which is why ($\mathfrak{H}$) has the singular point $a$ as a free parameter.
The cross-ratio orbit {#subsec:crossratio}
---------------------
The characterization of Heun equations that can be reduced to the hypergeometric equation will employ the cross-ratio orbit of $\{0,1,a,\infty\}$, defined as follows. If $A,B,C,D\in\mathbb{CP}^1$ are distinct, their cross-ratio is $$(A,B;C,D){\stackrel{\rm{def}}{=}}\frac{(C-A)(D-B)}{(D-A)(C-B)}\in\mathbb{CP}^1\setminus\{0,1,\infty\},$$ which is invariant under Möbius transformations. Permuting $A,B,C,D$ yields an action of the symmetric group $S_4$ on $\mathbb{CP}^1\setminus\{0,1,\infty\}$. The cross-ratio is invariant under interchange of the pairs $A,B$ and $C,D$, and also under simultaneous interchange of the two points in each pair. Thus, each orbit contains no more than $4!/4=6$ cross-ratios. The possible actions of $S_4$ on $s\in\mathbb{CP}^1\setminus\{0,1,\infty\}$ are generated by $s\mapsto1-s$ and $s\mapsto 1/s$, and the orbit of $s$ comprises $$s,\quad 1-s,\quad 1/s,\quad 1/(1-s),\quad s/(s-1),\quad (s-1)/s,$$ which may not be distinct. This is called the cross-ratio orbit of $s$; or, if $s=(A,B;\allowbreak C,D)$, the cross-ratio orbit of the unordered set $\{A,B,C,D\}\subset\mathbb{CP}^1$. Two sets of distinct points $\{A_i,B_i,C_i,D_i\}$ ($i=1,2$) have the same cross-ratio orbit iff they are related by a Möbius transformation.
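For concreteness, the orbit and the Möbius invariance of the cross-ratio are easy to check numerically; the following short sketch (function names are ours and purely illustrative) uses exact rational arithmetic:

```python
from fractions import Fraction

def cross_ratio(A, B, C, D):
    """(A,B;C,D) = (C-A)(D-B) / ((D-A)(C-B)) for four distinct finite points."""
    return ((C - A) * (D - B)) / ((D - A) * (C - B))

def cross_ratio_orbit(s):
    """The six values s, 1-s, 1/s, 1/(1-s), s/(s-1), (s-1)/s, as a set (repeats collapse)."""
    return {s, 1 - s, 1 / s, 1 / (1 - s), s / (s - 1), (s - 1) / s}

pts = [Fraction(0), Fraction(1), Fraction(3), Fraction(7)]
s = cross_ratio(*pts)
print(sorted(cross_ratio_orbit(s)))      # the (at most six) members of the orbit of s

def moebius(z):                          # an arbitrary Moebius map z -> (2z+1)/(z+3)
    return (2 * z + 1) / (z + 3)

assert cross_ratio(*map(moebius, pts)) == cross_ratio(*pts)   # invariance under Moebius maps
```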
Reminder of some of the properties of Heun’s function
-----------------------------------------------------
Our aim will be to obtain a representation of the Heun function as a Frobenius solution of the Heun equation, given in another form as follows [@heundiff]: $$\begin{aligned}
\label{Heunnew}
&&z(z-1)(z-a)y^{\prime \prime}(z) + \left\{\gamma (z-1)(z-a)
+\delta z(z-a)+ \epsilon z(z-1)\right\} y^{\prime} (z) \nonumber \\
&&+ (\alpha\beta\, z-q) y(z) = 0.\end{aligned}$$ The Frobenius solution, denoted $Hl(a,q;\alpha,\beta,\gamma,\delta;z)$, is the solution corresponding to the exponent zero at the point $z=0$. It admits the power series expansion $$\label{Heunseries}
Hl(a,q;\alpha,\beta,\gamma,\delta;z) \equiv
\sum_{n=0}^{\infty} c_{n}z^{n},$$ with $|z|<1$, $c_{0}=1$, $c_{1}=\frac{q}{\gamma a}$ and $\gamma \neq 0,-1,-2,\ldots$
The recursion relation is as follows: $$\begin{aligned}
\label{Recursion}
&&a(n+2)(n+1+\gamma)c_{n+2} \nonumber\\
&&= \Bigr[q+(n+1)(\alpha + \beta - \delta +(\gamma
+ \delta -1)a)+(n+1)^{2}(a+1)\Bigr]c_{n+1}\nonumber \\
&& - (n+\alpha)(n+\beta)c_{n} \ , \;\;\;\;\; n\geq 0.\end{aligned}$$ The function $Hl(a,q;\alpha,\beta,\gamma,\delta;z)$ is normalised by the relation $$Hl(a,q;\alpha,\beta,\gamma,\delta;0)=1.$$ It admits the following important particular cases ([@heundiff], p. 9, formula (1.3.9)): $$\begin{aligned}
\label{properties}
Hl(1,\alpha\beta;\alpha,\beta,\gamma,\delta;z)
= {}_2F_1(\alpha,\beta,\gamma;\,z) \;\;\;\;\forall
\delta \in\mathbb{C}\nonumber\\
Hl(0,0;\alpha,\beta,\gamma,\delta;z) = {}_2F_1(\alpha,\beta,\alpha
+\beta-\delta+1;\,z) \;\;\;\; \forall \gamma \in\mathbb{C}\nonumber\\
Hl(a,a\alpha\beta;\alpha,\beta,\gamma,\alpha
+\beta-\gamma+1;z) = {}_2F_1(\alpha,\beta,\gamma;\,z),\nonumber\\\end{aligned}$$ where ${}_2F_1(\alpha,\beta,\gamma;\,z)$ is the usual notation for the Gauss hypergeometric function.
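A minimal numerical sketch (ours) of the truncated Frobenius solution defined by (\[Heunseries\])-(\[Recursion\]) is given below; as a sanity check it reproduces the first of the particular cases (\[properties\]) against SciPy's ${}_2F_1$, with arbitrary test parameters:

```python
import numpy as np
from scipy.special import hyp2f1

def heun_coeffs(a, q, alpha, beta, gamma, delta, nterms=20):
    """Coefficients c_n of Hl(a,q;alpha,beta,gamma,delta;z) from the recursion (Recursion)."""
    c = [1.0 + 0j, q / (gamma * a)]
    for n in range(nterms - 2):
        rhs = ((q + (n + 1) * (alpha + beta - delta + (gamma + delta - 1) * a)
                + (n + 1)**2 * (a + 1)) * c[n + 1]
               - (n + alpha) * (n + beta) * c[n])
        c.append(rhs / (a * (n + 2) * (n + 1 + gamma)))
    return np.array(c)

def heun_local(a, q, alpha, beta, gamma, delta, z, nterms=20):
    """Truncated Heun series; valid well inside its disc of convergence around z = 0."""
    c = heun_coeffs(a, q, alpha, beta, gamma, delta, nterms)
    return np.polyval(c[::-1], z)

alpha, beta, gamma, delta, z = 0.3, 0.7, 1.1, 0.4, 0.2   # arbitrary test values
lhs = heun_local(1.0, alpha * beta, alpha, beta, gamma, delta, z, nterms=25)
assert abs(lhs - hyp2f1(alpha, beta, gamma, z)) < 1e-10   # Hl(1, ab; a,b,g,d; z) = 2F1(a,b;g;z)
```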
Application of Heun’s equation to our problem
---------------------------------------------
Finally we come to the punch line: why do we need all of this, and how does the Heun equation find an application to our problem? The answer goes along the following lines: on using (\[Sym\]), the differential equations for $\kappa$ and $\pi$, i.e. equations (3.14) and (3.15) of [@anguelova], acquire the form $$\begin{aligned}
\label{kp}
-(A+C) \theta + \kappa^{\prime} + \frac{1}{2} (A-C)
\kappa + m \pi &=& 0 , \nonumber \\
(C-A) \omega + \pi^{\prime} + \frac{1}{2} (A+C) \pi
+ m \kappa &=& 0 , \end{aligned}
$$ where $\theta$ and $\omega$ are given in (\[alg\]). Clearly one can solve algebraically the second equation for $\kappa$. By differentiating the result one obtains also $\kappa^{\prime}$ in terms of $\pi$, $\pi^{\prime}$ and $\pi^{\prime \prime}$, and substitution of these in the first equation yields a second order ODE for $\pi (\mu)$. Now let us look at the system (\[kp\]) in case of de Sitter spacetime. On inserting $A$ and $C$ from (\[AC\]) below and passing to the globally defined variable $z = \cos^2
\frac{\mu}{2R}$ (see Sec. III), we obtain the following differential equation for $\pi$: $$\label{pisol}
\left[P_{2}\frac{d^2}{dz^2}+ P_{1}\frac{d}{dz}+ P_{0}\right]\pi = 0,$$ where $P_{2}$ in (\[pisol\]) is a quartic polynomial in $z$, i.e. $$\label{P0}
P_{2} = 4 \left[m^{2} R^{2}+1 \right] z^4
-4(2 m^{2} R^{2}+3) z^3
+4(m^{2}R^{2}+2)z^{2}.$$ Similarly, $P_{1}$ in (\[pisol\]) is a cubic polynomial in $z$, $$\label{P1}
P_{1} = 16 \left[m^{2}R^{2}+1\right] z^3
-12 \left[2m^{2}R^{2}+5 \right]z^2
+ 8 \left(m^{2}R^{2}+2\right) z.$$ Last, $P_{0}$ in (\[pisol\]) is a quadratic polynomial in $z$, i.e. $$\label{P2}
P_{0} = \left(4m^{4}-19m^{2}
+32 m^{2}R^{2}+9\right) z^{2}
- \left(4m^{4}-14m^{2}+32m^{2}R^{2}+21\right)z
-3m^{2}R^{2}-6.$$ On making the substitution $\pi (z) = \sqrt{z} \, \tilde{\pi} (z)$, (\[pisol\]) becomes an equation of the type $$\begin{aligned}
\label{Heun}
&&z(z-1)(z-a)y^{\prime \prime}(z) + \left\{ (b+c+1)z^2 -
\left[b+c+1+a(d+e)-e\right]z +ad
\right\} y^{\prime} (z) \nonumber \\
&&+ (bc\, z-q) y(z) = 0.\end{aligned}$$ Written in canonical form it reads as follows: $$\label{aeapplication}
{{{d^2}y}\over{dz^2}}+\left({d\over{z}}+{e\over{z-1}}
+{(b+c+1)-(d+e)\over{z-a}}\right){{dy}\over{dz}}
+{{bc z-q}\over{z(z-1)(z-a)}}y=0,$$ where the parameters in (\[aeapplication\]) take the values $$\begin{aligned}
a &=& \frac{(m^{2}R^{2}+2)}{(m^{2}R^{2}+1)}, \nonumber\\
b &=& 2+imR, \nonumber\\
c &=& 2-imR, \nonumber\\
d &=& e = 3, \nonumber\\
q &=& -\frac{(m^{4}R^{4}+7m^{2}R^{2}+10)}{(m^{2}R^{2}+1)}.
\label{(6.40)}\end{aligned}$$ The equation (\[Heun\]) is known as Heun’s differential equation [@handbook; @heundiff]. Its solutions, here denoted by ${\rm Hl}(a,q;b,c,d,e;z)$, have in general four singular points as we said before, i.e. $z_{0}=0,1,a,\infty$. Near each singularity the function behaves as a combination of two terms that are powers of $(z-z_0)$ with the following exponents: $\{0, 1-d\}$ for $z_0 = 0$, $\{0, 1-e\}$ for $z_0=1$, $\{0, d+e-b-c\}$ for $z_0=a$, and $\{b,c\}$ (that is, $z^{-b}$ or $z^{-c}$) for $z\to \infty$.
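For orientation, the parameter set (6.40) and the Fuchs relation for the quoted exponents are easily tabulated; in this small sketch of ours the value $mR=1$ is purely illustrative:

```python
def heun_parameters(mR):
    """Parameters a, q, b, c, d, e of Eq. (6.40) as functions of the dimensionless product mR."""
    a = (mR**2 + 2.0) / (mR**2 + 1.0)
    b = 2.0 + 1j * mR
    c = 2.0 - 1j * mR
    d = e = 3.0
    q = -(mR**4 + 7.0 * mR**2 + 10.0) / (mR**2 + 1.0)
    return a, q, b, c, d, e

a, q, b, c, d, e = heun_parameters(1.0)
print(a, q)                                   # a = 1.5 and q = -9.0 for mR = 1
# exponents quoted above: {0, 1-d} at z=0, {0, 1-e} at z=1, {0, d+e-b-c} at z=a, {b, c} at infinity;
# for four regular singular points their sum must equal 2 (Fuchs relation)
print((1 - d) + (1 - e) + (d + e - b - c) + b + c)   # (2+0j)
```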
We now insert the first of Eqs. (3.4) into the second of Eqs. (4.12), finding eventually $$\label{kappaform}
\kappa=f^{-1} \left \{ \left[(A-C)((A+C)^{2}-m^{2})
-{1\over 2}(A+C)(m^{2}+R^{-2})\right] \pi
-(m^{2}+R^{-2})\pi' \right \},$$ where $$f \equiv m (m^{2}+R^{-2}+2C(C-A)),$$ and $\pi$ and $\pi'$ are meant to be expressed through the Heun function ${\rm Hl}(a,q;b,c,d,e;z)$. Eventually, we will show in the next section that all weight functions can therefore be expressed through such a Heun function. The material covered in the present section and in the previous two is not new, and most of it is appropriate only for a physics-oriented choice of four-dimensional de Sitter space.
Dictionary of weight functions for the gravitino propagator
===========================================================
Here we will explicitly list all the weight functions as functions of $z = \cos^2\frac{\mu}{2R}$, in order to analyze their qualitative behavior as a function of $z$ and of the de Sitter radius $R$ in the next section. Let us recall a few definitions in de Sitter space, where $A$ and $C$ are functions of the geodesic distance $\mu$ and are given by [@allen1] $$
A = \frac1R \cot \frac{\mu}R \quad \text{and} \quad
C = -\frac1{R\sin(\mu/R)} \ .$$ Since all other weight functions $\alpha, \beta, \gamma, \delta, \varepsilon, \theta, \tau, \omega$ can be written in terms of the $(\pi, \kappa)$ pair, and since in the last section we have seen that $\kappa$ can also be expressed in a form like (\[kappaform\]), it is evident that all nine weight functions other than $\pi$, i.e. $\alpha, \beta, \gamma, \delta, \varepsilon, \theta, \tau, \omega$ and $\kappa$, can be expressed in terms of $\pi(\mu)$ and $\pi'(\mu)$ only.
We can also express $\pi$ as a function of $z$ and $R$ only as $\pi = \pi(z)=\pi(\mu= \pm 2R {\rm cos}^{-1} \sqrt{z})$. Similarly, by using a few of the familiar trigonometric identities, one can transform $\pi'(\mu)$ as $$\label{piprimez}
\pi'(\mu) = \mp\frac{1}{R}\sqrt {z(1-z)} \pi'(z).$$ One can also write down the expressions of $(A+C)$ and $(A-C)$ in terms of $z$ and $R$ only as follows: $$\begin{aligned}
\label{ApmC}
&&A+C = -\frac{1}{R} \sqrt{\frac{1-z}{z}}, \nonumber\\
&&A-C = \frac{1}{R} \sqrt{\frac{z}{1-z}}.\end{aligned}$$ Another function appearing quite frequently in our evaluation of all the weight functions is $f$, which can be also expressed as a function of $z$ and $R$ only as follows: $$\label{f}
f = m(m^{2}+R^{-2}+R^{-2}(1-z)^{-1}).$$ Now we start by listing all the weight functions in terms of $\pi(z)$ and $\pi'(z)$, bearing in mind that $$\label{pifinalform}
\tilde{\pi} (z) = {\rm Hl}(a,q;b,c,d,e;z),$$ $$\label{pitildefinalform}
\pi (z) = \sqrt{z}\;\;{\rm Hl}(a,q;b,c,d,e;z),$$ where ${\rm Hl}(a,q;b,c,d,e;z)$ is the Heun function with arguments as defined before. One has therefore the lengthy formulae for all other weight functions written down in Eqs. (A1)–(A8) of the appendix.
Qualitative behaviors of the weight functions
=============================================
Now, using the series expansion (\[Heunseries\]) defined before, one can numerically study the behavior of each weight function by taking the first 10 terms of the infinite series (4.8). Indeed, dealing with an infinite number of terms is impossible, and one therefore has to resort to approximations by truncating the series. On taking fewer than 10 terms we have found minor departures from the pattern outlined below in Figures 1 to 9, whereas on taking 15 terms the pattern in those figures is essentially confirmed.
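The truncation just described can be reproduced with a few lines of Python (a sketch of ours; the value $mR=1$ is illustrative and the coefficients follow the recursion (\[Recursion\]) written with $(\alpha,\beta,\gamma,\delta)=(b,c,d,e)$); comparing two truncation orders gives a direct handle on the truncation error:

```python
import numpy as np

def heun_coeffs(a, q, b, c, d, e, nterms):
    """Series coefficients of Hl(a,q;b,c,d,e;z), i.e. Eq. (Recursion) with (alpha,beta,gamma,delta)=(b,c,d,e)."""
    coef = [1.0 + 0j, q / (d * a)]
    for n in range(nterms - 2):
        rhs = ((q + (n + 1) * (b + c - e + (d + e - 1) * a) + (n + 1)**2 * (a + 1)) * coef[n + 1]
               - (n + b) * (n + c) * coef[n])
        coef.append(rhs / (a * (n + 2) * (n + 1 + d)))
    return np.array(coef)

mR = 1.0                                       # illustrative value of the product m R
a = (mR**2 + 2) / (mR**2 + 1)
q = -(mR**4 + 7 * mR**2 + 10) / (mR**2 + 1)
b, c, d, e = 2 + 1j * mR, 2 - 1j * mR, 3.0, 3.0

z = np.linspace(0.05, 0.6, 12)
for nterms in (10, 15):
    coef = heun_coeffs(a, q, b, c, d, e, nterms)
    pi_tilde = np.array([np.polyval(coef[::-1], zz) for zz in z]).real
    print(nterms, np.round(pi_tilde[:4], 4))   # pi(z) itself is sqrt(z) * pi_tilde(z)
```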
We draw for example all these weight functions in a two-dimensional plot vs $z$, in the range (0,1). The plots, which also include ${\tilde \pi}(z)$, are as follows.
As one can see, for values of $z < 0.1$, the main contribution to the gravitino propagator results from the weight functions $\alpha(z),\beta(z), \tau(z)=\omega(z)$, whereas the other weight functions are sub-dominating. By contrast, when $z \in ]0.8,1[$, the dominating contribution to the gravitino propagator results from the weight functions $\gamma(z),\delta(z),\varepsilon(z),\theta(z),\pi(z)$, while the others remain sub-dominating.
Concluding remarks
==================
Our paper has obtained the complete analytic structure of massive gravitino propagators in de Sitter space. In Sec. VI we have plotted all weight functions $\alpha,\beta,\gamma,\delta,\epsilon,\theta,
\tau=\omega,\pi,\kappa$ occurring in the gravitino propagator (jointly with ${\tilde \pi}$) as a function of $z$ in a two-dimensional plot where $z=\cos^{2} (\mu /2R)$, $\mu$ being the geodesic distance between the points $x$ and $x'$, and $R$ is the de Sitter radius. Although the series (4.8) has been truncated, it remains true that Sec. VI is the first attempt to display a supersymmetric propagator in de Sitter via Heun functions. As we already said in Sec. I, further interest, from the point of view of mathematical methods, arises from the possibility to expand Heun functions in terms of hypergeometric functions [@2009]. As we said before, direct implications of our findings on the current understanding of the propagation of gravitinos in de Sitter space are as follows: there exist two ranges of values of $z$ in which the weight functions can be divided into dominating and sub-dominating family. In other words, when $z$ is smaller than $0.1$, the weight functions $\alpha,\beta,\tau=\omega$ are dominating while the others are sub-dominating. By contrast, when $z$ is very close to $1$, the weight functions $\gamma,\delta,\varepsilon,\theta,\pi$ take much larger values.
The plot range is between $0$ and $1$ for $z$, which is indeed the only admissible region, since the squared cosine function always lies between $0$ and $1$. Note that the plot of $\tilde {\pi}$ is basically nothing but the plot of the Heun function with properly defined coefficients, and the plot of $\pi$ is $\sqrt{z}$ times the Heun function.
The numerical analysis of Sec. VI, as we already said therein, has been performed by taking only the first $10$ terms of the infinite series representing the Heun function, obtained by applying the Frobenius method. If one takes more terms, one gets even more accurate results, but the qualitative features remain roughly the same. The task of plotting Heun functions is technical and not easy, since modern computer packages still run into difficulties. Thus, our efforts can be viewed as preparing the ground for a more systematic use of Heun functions in fundamental theoretical physics. The flat-space limit is instead a considerable simplification, since the functions $A$ and $C$ in (5.1) are then found to reduce to $A={1\over \mu}, C=-{1\over \mu}$, and the formulae in the appendix are therefore considerably simplified.
It also remains to be seen whether the familiarity acquired with Heun functions will prove useful in studying gravitino propagators in other backgrounds relevant for modern high energy physics.
Explicit form of the weight functions
=====================================
The weight functions obtained in Sec. V read, explicitly, $$\begin{aligned}
\label{alphafinalform}
&&\alpha(z)=-2mC(m^{2}+R^{-2})f^{-1}(z)\times \nonumber\\
&&\left \{ \left[(A-C)((A+C)^{2}-m^{2})
-{1\over 2}(A+C)(m^{2}+R^{-2})\right] \pi(z)
\pm(m^{2}+R^{-2})\frac{\sqrt{z(1-z)}}{R}\pi'(z) \right \} \nonumber\\
&& -((m^{2}+R^{-2})[(A+C)^{2}-m^{2}]-4)\pi(z),\end{aligned}$$ $$\begin{aligned}
\label{betafinalform}
&&\beta(z)=4mC(m^{2}+R^{-2})^{-1}f^{-1}(z)\times \nonumber\\
&&\left \{ \left[(A-C)((A+C)^{2}-m^{2})
-{1\over 2}(A+C)(m^{2}+R^{-2})\right] \pi(z)
\pm(m^{2}+R^{-2})\frac{\sqrt{z(1-z)}}{R}\pi'(z) \right \} \nonumber\\
&&+2(m^{2}+R^{-2})^{-1}\left[(A+C)^{2}-m^{2}\right]\pi(z),\end{aligned}$$ $$\begin{aligned}
\label{gammafinalform}
&&\gamma(z)=-(m^{2}+R^{-2})^{-1}\left[(A-C)^{2}
-m^{2}\right]f^{-1}(z)\times \nonumber\\
&&\left \{ \left[(A-C)((A+C)^{2}-m^{2})
-{1\over 2}(A+C)(m^{2}+R^{-2})\right] \pi(z)
\pm(m^{2}+R^{-2})\frac{\sqrt{z(1-z)}}{R}\pi'(z) \right \} \nonumber\\
&&-2mC(m^{2}+R^{-2})^{-1}\pi(z),\end{aligned}$$ $$\begin{aligned}
\label{deltafinalform}
&&\delta(z)=-6(m^{2}+R^{-2})^{-1}\left[(A-C)^{2}
-m^{2}\right]f^{-1}(z)\times \nonumber\\
&&\left \{ \left[(A-C)((A+C)^{2}-m^{2})
-{1\over 2}(A+C)(m^{2}+R^{-2})\right] \pi(z)
\pm(m^{2}+R^{-2})\frac{\sqrt{z(1-z)}}{R}\pi'(z) \right \} \nonumber\\
&&+12mC(m^{2}+R^{-2})^{-1}\pi(z),\end{aligned}$$ $$\begin{aligned}
\label{epsilonfinalform}
&&\varepsilon(z)=-(m^{2}+R^{-2})^{-1}\left[(A-C)^{2}
+\frac{2}{R^{2}}+m^{2}\right]f^{-1}(z)\times \nonumber\\
&&\left \{ \left[(A-C)((A+C)^{2}-m^{2})
-{1\over 2}(A+C)(m^{2}+R^{-2})\right] \pi(z)
\pm(m^{2}+R^{-2})\frac{\sqrt{z(1-z)}}{R}\pi'(z) \right \} \nonumber\\
&&+2mC(m^{2}+R^{-2})^{-1}\pi(z),\end{aligned}$$ $$\begin{aligned}
\label{thetafinalform}
&&\theta(z)=(m^{2}+R^{-2})^{-1}\left[(A-C)^{2}
-m^{2}\right]f^{-1}(z)\times \nonumber\\
&&\left \{ \left[(A-C)((A+C)^{2}-m^{2})
-{1\over 2}(A+C)(m^{2}+R^{-2})\right] \pi(z)
\pm(m^{2}+R^{-2})\frac{\sqrt{z(1-z)}}{R}\pi'(z) \right \} \nonumber\\
&&-2mC(m^{2}+R^{-2})^{-1}\pi(z),\end{aligned}$$ $$\begin{aligned}
\label{taumegafinalform}
&&\tau(z)=2mC(m^{2}+R^{-2})^{-1}f^{-1}(z)\times \nonumber\\
&&\left \{ \left[(A-C)((A+C)^{2}-m^{2})
-{1\over 2}(A+C)(m^{2}+R^{-2})\right] \pi(z)
\pm(m^{2}+R^{-2})\frac{\sqrt{z(1-z)}}{R}\pi'(z) \right \} \nonumber\\
&&+(m^{2}+R^{-2})^{-1}\left[(A+C)^{2}-m^{2}\right]\pi(z) = \omega(z),\end{aligned}$$ $$\begin{aligned}
\label{kappafinalform}
&&\kappa(z)=f^{-1}(z)\times\nonumber\\
&&\biggr \{ \left[(A-C)((A+C)^{2}-m^{2})
-{1\over 2}(A+C)(m^{2}+R^{-2})\right] \pi(z) \nonumber \\
& \pm & (m^{2}+R^{-2})\frac{\sqrt{z(1-z)}}{R}\pi'(z) \biggr \}.\end{aligned}$$ These exhaust all the weight functions multiplying the invariant structures present in the gravitino propagator, written explicitly in terms of a Heun function and its derivative.
The authors are grateful to the Dipartimento di Scienze Fisiche of Federico II University, Naples and INFN for hospitality and financial support. We also want to thank Ebrahim Karimi for his very valuable input regarding our Mathematica computations. One of us (G.E.) dedicates this work to Maria Gabriella.
De Witt, B.S.: Dynamical Theory of Groups and Fields. Gordon & Breach, New York (1965)

Witten, E.: Adv. Theor. Math. Phys. [**2**]{}, 253 (1998)

Witten, E.: hep-th/0106109

Esposito, G., Roychowdhury, R.: arXiv:0902.2098 \[hep-th\]

Penrose, R., Rindler, W.: Spinors and Space-Time. I. Cambridge University Press, Cambridge (1984)

Anguelova, L., Langfelder, P.: J. High Energy Phys. JHEP [**03**]{}, 057 (2003)

Handbook of Exact Solutions for Ordinary Differential Equations. CRC Press, Boca Raton (1995)

Ronveaux, A. (ed.): Heun’s Differential Equations. Oxford University Press, Oxford (1995)

Sokhoyan, R.S., Melikdzanian, D.Yu., Ishkhanyan, A.M.: arXiv:0909.1286 \[math-ph\]

Allen, B.: Nucl. Phys. B [**292**]{}, 813 (1987)

Allen, B., Jacobson, T.: Commun. Math. Phys. [**103**]{}, 669 (1986)

Allen, B., Lutken, C.A.: Commun. Math. Phys. [**106**]{}, 201 (1986)

Mück, W.: J. Phys. A [**33**]{}, 3021 (2000)

Synge, J.L.: Relativity: The General Theory. North–Holland, Amsterdam (1960)

Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. Dover, New York (1964)

Erdelyi, A.: Higher Transcendental Functions. Krieger, Malabar (1981)

Basu, A., Uruchurtu, L.I.: Class. Quantum Gravit. [**23**]{}, 6059 (2006)

Kamke, E.: Differentialgleichungen, Lösungsmethoden und Lösungen. Vol. 1. Chelsea, New York (1974)

Poole, E.G.C.: Linear Differential Equations. Oxford University Press, Oxford (1936)

Snow, C.: Hypergeometric and Legendre Functions with Applications to Integral Equations of Potential Theory, 2nd edition, No. 19 in Applied Mathematics Series. National Bureau of Standards, Washington DC (1952)

Andrews, G.E., Askey, R., Roy, R.: Special Functions, Vol. 71 of Encyclopedia of Mathematics and Its Applications. Cambridge University Press, Cambridge (1999)
[^1]: Electronic address: [email protected]
[^2]: Electronic address: [email protected]
---
abstract: 'Motivated by coronal mass ejection studies, we construct general relativistic models of a magnetar magnetosphere endowed with strong magnetic fields. The equilibrium states of the stationary, axisymmetric magnetic fields in the magnetar magnetosphere are obtained as solutions of the Grad-Shafranov equation in a Schwarzschild spacetime. To understand the magnetic energy buildup in the magnetar magnetosphere, a generalized magnetic virial theorem in the Schwarzschild metric is newly derived. We carefully address the question whether the magnetar magnetospheric magnetic field can build up sufficient magnetic energy to account for the work required to open up the magnetic field during magnetar giant flares. We point out the importance of the Aly-Sturrock constraint, which has been widely studied in solar corona mass ejections, as a reference state in understanding magnetar energy storage processes. We examine how the magnetic field can possess enough energy to overcome the Aly-Sturrock energy constraint and open up. In particular, general relativistic (GR) effects on the Aly-Sturrock energy constraint in the Schwarzschild spacetime are carefully investigated. It is found that, for magnetar outbursts, the Aly-Sturrock constraint is more stringent, i.e., the Aly-Sturrock energy threshold is enhanced due to the GR effects. In addition, neutron stars with greater mass have a higher Aly-Sturrock energy threshold and are more difficult to erupt. This indicates that magnetars are probably not neutron stars with extreme mass. For a typical neutron star with mass of $1-2 M_{\odot}$, we further explore the effects of cross-field current effects, caused by the mass loading, on the possibility of stored magnetic field energy exceeding the Aly-Sturrock threshold.'
author:
- Cong Yu
date: 'Apr. 13 2011'
title: Magnetic Energy Buildup for Relativistic Magnetar Giant Flares
---
INTRODUCTION
============
After the discovery of soft gamma repeaters and anomalous X-ray pulsars (Mazets et al. 1979; Mereghetti & Stella 1995), magnetar models of these sources were proposed to explain the relevant phenomena (Duncan & Thompson 1992; Thompson, Lyutikov & Kulkarni 2002). Magnetars are believed to be neutron stars with strong magnetic fields, $\sim 10^{14} - 10^{15}$G (Duncan & Thompson 1992). Magnetar outbursts, such as giant flares, occur with a huge release of magnetic energy, $\sim 10^{44} - 10^{46}$ ergs. The energy for magnetar outbursts is widely accepted to be supplied by the star’s magnetic field. However, the physical process by which the energy is stored and released is one of the great puzzles in high energy astrophysics. Two possibilities exist for the location where the magnetic energy is stored prior to an eruption: in the magnetar crust or in the magnetosphere. For the former possibility, a giant flare may be caused by a sudden untwisting of the magnetar interior magnetic field (Thompson & Duncan 2001). Subsequently, a sudden and brittle fracture of the crust leads to the giant flare. In this crust scenario, the energy stored in the external twist is limited by the tensile strength of the crust. Alternatively, based on the short timescale of the giant flare rise time, $\sim
0.25\mathrm{ms}$ (Palmer et al. 2005), the second possibility $-$ the magnetospheric storage model, was proposed by Lyutikov (2006). The energy released during an eruption is stored slowly (on a longer timescale than the timescale of giant flare) in the magnetar magnetosphere prior to the eruption. An abrupt reconfiguration and dissipation of the magnetic field due to a loss of confinement (Flyer et al. 2004) or a dynamical instability (Lyutikov 2003; Komissarov et al. 2007) produces the giant flare. This mechanism has the feature that the energy stored in the external twist may not be limited by the tensile strength of the crust, but instead by the total external magnetic field energy.
The magnetospheric storage model of magnetar giant flare shares similar magnetic energy buildup process to solar eruptions, such as coronal mass ejections (CMEs). In this model, the energy released during an eruption is stored in the magnetospheric magnetic field before the eruption. Large-scale eruptive CMEs often give rise to the opening up of magnetic field lines that were originally closed. The processes of magnetic fields opening up have been extensively investigated in the CME studies (Barnes & Sturrock 1972; Aly 1984; Mikic & Linker 1994). It is physically reasonable to assume that the preeruption closed state must possess more magnetic energy than the posteruption open state. As will be discussed in detail below, requiring the magnetic field to open imposes an extreme energy constraint on theories for CMEs. This energy requirement on solar CMEs has been under extensive theoretical studies in the past decades (Aly 1984; Sturrock 1991; Wolfson & Dlamini 1997; Zhang & Low 2005). The energy storage processes take place quasi-statically on a long timescale. When the magnetic field reaches a threshold, due to the instability or loss of confinement, the field erupts suddenly on a much shorter dynamical timescale. Analogous processes of magnetic field opening up are believed to occur in magnetar giant flares (Woods et al. 2001; Thompson et al. 2002; Beloborodov 2009). All these features of the storage model are in good agreement with the observations of magnetar giant flares (Lyutikov 2006).
The similarity between solar eruptions and magnetar giant flares (Lyutikov 2003) motivates this study on the energy buildup process in the magnetar magnetosphere. We note that there are important differences between solar eruptions and magnetar outbursts. For situations in the magnetar magnetosphere (Beloborodov & Thompson 2007), the location where the magnetic energy buildup occurs is quite near the neutron star surface ($\sim 1-2 R_{\mathrm{NS}}$). General relativistic (GR) effects near the neutron star surface are important (Ciolfi et al. 2009). General relativistic effects are currently, however, not taken into account in relevant energy storage processes. In this work we will investigate these processes with GR spacetime curvature effects considered. More specifically, we will ignore effects of magnetar rotation since they are slow rotators and describe the background geometry of the magnetar magnetosphere with the Schwarzschild metric. The virial theorem is a helpful tool for us to understand the energy properties in the magnetar magnetosphere. The flat spacetime magnetic virial theorem (Chandrasekhar 1961) has been extensively exploited in astrophysical researches (Aly 1984; Zhang & Low 2005). Attempts to get the GR virial theorem have been made by Chandrasekhar (1967), but he just considered a hydrostatic system, with the effects of magnetic fields completely ignored. In this study we establish the magnetic virial theorem in the Schwarzschild metric, which helps us to better understand GR effects on the physical behaviors of the magnetic energy buildup. For the magnetospheric storage model, an important question for the giant flare energetics is: Can the magnetospheric magnetic field store enough energy before an eruption? For the magnetic energy alone to power a magnetar giant flare, the energy must be sufficient to open up the magnetic field. However, for the nearly force-free magnetic field exterior to a sphere, a well-known result by Aly (1984, 1991) and Sturrock (1991) suggests that the energy of a fully open field is the upper limit on the energies of all the force-free fields in simple geometries[^1]. Thus the transition from a closed field configuration to an open one (which is actually required for a realistic eruption) is not energetically favored. Due to this Aly-Sturrock constraint, the initial magnetic field before eruption must have energy in excess of the threshold set by the Aly-Sturrock energy constraint. This Aly-Sturrock constraint is widely discussed in the solar CMEs studies. But its implications for magnetars are only briefly mentioned in Lyutkov (2006). Furthermore, GR effects on this important Aly-Sturrock constraint have not been considered in prior works. One purpose of this work is to clarify how GR effects influence the Aly-Sturrock constraint. The Aly-Sturrock constraint constitutes a bottleneck for the storage model of magnetar giant flares. There are a number of ways, however, to avoid the Aly-Sturrock constraint. A deviation from a perfectly force-free initial state might make a difference. In this scenario, it is expected that the cross-field electric currents are viable source of energy for the eruption. Detailed calculations about solar CMEs by Low & Smith (1993) suggested that a non-force-free magnetic field with cross field currents due to the mass loading of plasma can store more energy than the Aly-Sturrock field. Such mass loading effects are further discussed by Wolfson & Dlamini (1997) and Zhang & Low (2004). 
The mass loading of plasma in a non-force-free magnetic field acts like a rigid wall to confine the magnetic field, in other words, it would act as a lid that allows the magnetic energy to increase above the limit, and when the lid is suddenly removed, the field springs outward (Fan & Low 2003). By analogy, it is possible that in the magnetar magnetosphere, the mass loading plays the same role to compress the magnetic field. Consequently, the magnetic field can store magnetic energy above the Aly-Sturrock constraint. But no theoretical calculations were performed to corroborate this idea. In this work we will provide such a demonstration. Another possibility for the magnetic energy to exceed the Aly-Sturrock constraint is the formation of detached field lines from the magnetar surface (magnetic bubble or magnetic flux rope, e.g., Low & Smith 1993; Flyer et al. 2004), which will be further discussed in Yu et al. (2011, in prep).
This paper is organized as follows: in §2 we introduce the generalized magnetic virial theorem in the Schwarzschild metric. In §3 we will discuss how the Aly-Sturrock field energy is affected by general relativistic effects. We will explore the cross-field effects caused by the mass loading on the magnetic energy storage in §4. Conclusions and discussions are given in §5.
Generalized Virial Theorem in Schwarzschild Spacetime
=====================================================
The virial theorem is of vital importance for understanding the magnetic energy storage in the magnetar magnetosphere. In the flat spacetime, it was proposed by Chandrasekhar (1961) and has been used widely in solar physics research (e.g., Low & Smith 1993). Since we focus in this paper on the physical behavior near the magnetar surface, GR effects should be incorporated. In this section we establish the virial theorem in the Schwarzschild metric including effects of magnetic fields. We consider a steady state magnetosphere around magnetars. The metric $g_{\mu\nu}$ of Schwarzschild geometry reads (Misner, Thorne & Wheeler 1973) $$ds^{2} = g_{\mu\nu}dx^{\mu}dx^{\nu} = - \alpha^2 dt^2 +
\alpha^{-2} dr^2 + r^2 d\theta^2 + r^2\sin^{2}\theta d\phi^2 \ .$$ The factor of $\alpha$ is defined as $$\label{alphafactor}
\alpha(r) = \sqrt{1-\frac{2 r_g}{r}} \ ,$$ where $r_g = G \mathcal{M}_{\mathrm{ns}}/c^2$ is the gravitational radius, $G$ is the gravitational constant, $\cal{M}_{\mathrm{ns}}$ is the mass of the neutron star, and $c$ is the speed of light.
A plasma containing only a perfect fluid and an electromagnetic field is described by the energy-momentum tensor (Anile 1989) $$\label{energymomentum}
T^{\mu\nu} = T^{\mu\nu}_{\mathrm{fluid}} +
T^{\mu\nu}_{\mathrm{EM}} = \left( p+\rho + b^2 \right) u^{\mu}
u^{\nu} + \left( p + \frac{b^2}{2} \right) g^{\mu \nu} - b^{\mu}
b^{\nu} \ ,$$ where $p$ is the isotropic pressure, $\rho = \rho_0 +
\frac{p}{(\gamma-1)}$ is the energy density (including that due to the rest mass $\rho_0$) and $b^2 = b_{\mu} b^{\mu}$. A polytropic equation of state is adopted and we take $\gamma = 4/3$ throughout this paper. Here the Einstein summation rule is assumed and Greek letters take on the values $t$, $r$, $\theta$, and $\phi$. The magnetic field 4-vector is $$b^{\mu} =^*F^{\mu\nu} u_{\nu} \ ,$$ where $^*F^{\mu\nu}$ is the Maxwell tensor and $u_{\nu}$ is the four velocity of the comoving observer (Anton et al. 2006). The plasma is assumed to be in magnetostatic equilibrium, thus the four velocity is $ u^{\mu} = \left((-g_{tt})^{-1/2},0,0,0
\right)$. Under such circumstances, the condition $$\nabla_{\nu} T^{\mu\nu} = 0 \ ,$$ reduces to $$\label{derivevirial}
g^{\mu\nu}\frac{\partial \left( p + \frac{b^2}{2} \right)}{\partial x^{\nu}} + \frac{(p+\rho+b^2)}{2}g^{\mu\nu}\frac{\partial \ln(-g_{tt})}{\partial x^{\nu}} + \frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^{\nu}}\left( \sqrt{-g} \ b^{\mu}b^{\nu}\right) +\Gamma^{\mu}_{\lambda\sigma}b^{\lambda}b^{\sigma} = 0 \ , $$ where $g$ is the determinant of the metric $g_{\mu\nu}$ and the explicit expressions of the connection coefficients $\Gamma^{\mu}_{\lambda\sigma}$ (Weinberg 1972) are given in Appendix A. A re-arrangement of various terms of the above equation using Gauss theorem leads to the following generalized virial theorem in the Schwarzschild spacetime (Details are given in Appendix A), $$E + (3\gamma - 4) U = \int_{\partial \mathrm{V}} \alpha^2\left(\frac{ B^2}{2}+p\right) (\mathbf{r\cdot} d \mathbf{S}) $$ $$\label{virial}
- \int_{\partial \mathrm{V}} \alpha^2 (\mathbf{B \cdot r}) (\mathbf{B\cdot}d \mathbf{S}) + \int_{\mathrm{V}} \frac{\left( 1 - \alpha^2 \right)}{2} \left( B_r^2 + B^2 + \frac{5\gamma - 4}{\gamma -1} p \right) dV \ .$$ In this equation, $\mathbf{r}$ is the position vector. Here $d\mathbf{S}$ is a surface area element directed outwards and $d
V$ is a volume element, both measured by a locally inertial observer. The factor of $\alpha$ is given in equation (\[alphafactor\]). Note that the total energy $E$ is the sum of the magnetic, internal and gravitational potential energy, namely $$E = M + U + W \ ,$$ where $$\label{Mdefinition}
M = \int \frac{B^2}{2} \ dV \ ,$$ $$U = \int\frac{p}{\gamma - 1} \ dV \ ,$$ $$W = - \int\frac{\rho_0 G \mathcal{M}_{\mathrm{ns}}}{r} \ dV \ ,$$ are the magnetic, internal and gravitational potential energy, respectively. Here we have absorbed a $4\pi$ factor into the definition of the magnetic fields throughout this paper. In the above equations, the magnetic field $\mathbf{B}$ in the “ordinary" orthogonal basis (defined in Section 4) is used. The relation between $\mathbf{B}$ and the magnetic field 4-vector $b^{\mu}$ is given explicitly in Appendix A. Note that $B_r$ is the radial component of $\mathbf{B}$ and $B^2$ = $B_r^2 + B_{\theta}^2 + B_{\phi}^2$. Throughout this paper, we mainly work with the magnetic field $\mathbf{B}$. This choice is made mainly for the convenience of comparison between the results in the curved spacetime and the flat spacetime. The last integral on the right hand side in equation (\[virial\]) appears owing to general relativistic effects. This term disappears when taking the flat spacetime limit, i.e., $\alpha^2$ $\rightarrow$ 1. Note also that this equation becomes the usual virial theorem in the flat spacetime as $\alpha^2$ $\rightarrow$ 1 (Chandrasekhar 1961). In particular[^2], for the magnetically dominated force-free field, we arrive at $$\label{fffvirial}
M = \int_{\partial \mathrm{V}} \frac{\alpha^2 B^2}{2} \left( \mathbf{r\cdot} d \mathbf{S} \right) - \int_{\partial \mathrm{V}} \alpha^2 \left( \mathbf{B \cdot r}\right) \left(\mathbf{B\cdot}d \mathbf{S} \right) + \int_{\mathrm{V}} \frac{(1 - \alpha^2)}{2} \left( B_r^2 + B^2
\right) dV \ .$$ Assuming that the magnetic field vanishes sufficiently rapidly at large distances, we find that the energy of the force-free fields in the exterior $r>r_0$ of the neutron star is $$\label{Mdefinition2}
M = \pi r_0^3 \int \alpha^2 \left(B_r^2 - B_{\theta}^2\right)\bigg|_{r=r_0} \sin\theta d\theta + \int_{\mathrm{V}} \frac{r_g}{r} \left(2 B_r^2 + B_{\theta}^2
\right) dV \ ,$$ where $r_0$ is the radius of the neutron star.
We note that, in a flat spacetime, the second term on the right hand side of the above equation disappears and the total magnetic energy of a force-free magnetic field in the exterior region $r>r_0$ of a sphere is uniquely determined by the field values at the boundary $r=r_0$. However this is no longer the case for the curved spacetime, since additional terms proportional to $r_g$ appear on the right hand side of this equation. Close observation of equation (\[Mdefinition2\]) shows that, when GR effects are ignored, no force-free field that is completely detached from the stellar surface (i.e., $B_r = 0$ at $r = r_0$ in the exterior region $r \ge r_0$) can exist (Low 2001). However, such completely detached field configurations in the general relativistic magnetar magnetosphere, due to the spacetime curvature, may be in an equilibrium state[^3]. This suggests that, besides the normal flux at the magnetar surface, the GR spacetime curvature provides additional self-confining effects. As a result, more work needs to be done to open the magnetic field in the curved spacetime than in the flat spacetime. It is conceivable that when the magnetar mass increases, this effect becomes more evident (see Figure \[ratio\]). Such GR effects have important implications for the magnetic energy storage process in the magnetar magnetosphere. In the next section, we will quantitatively calculate their influences on the Aly-Sturrock constraint.
Aly-Sturrock Constraint For Magnetic Field Energy
=================================================
To discuss the magnetic energy in the magnetar magnetosphere, it is beneficial to introduce the potential field $\mathbf{B}_{\mathrm{pot}}$ in the Schwarzschild metric which satisfies (Uzdensky 2004) $$\label{fff}
\nabla \times (\alpha\mathbf{B}) = 0 \ ,$$ and the boundary condition $$\label{bc1}
r = r_0 \ , B_r = F(\theta) \ .$$ In this paper we mainly discuss the dipole field and its relevant open state. The explicit expression of the dipole field can be found in Appendix B. In this case the above boundary becomes $B_r
= C \cos\theta$, where $C$ is a constant. Note that the potential field now involves the spactime curvature term $\alpha$ in equation (\[fff\]). This is quite different from the flat spacetime definition (Komissarov 2004). Note that, as $\alpha$$\rightarrow$$1$, this potential field definition reduces to its flat spacetime form. The associated magnetic energy of the potential field is designated as $M_{\mathrm{pot}}$. For the force-free field in the magnetosphere, there exists one interesting energy reference state, the Aly-Sturrock state (Aly 1984,1991; Sturrock 1991). Imagine all force-free magnetic fields complying with the boundary condition (\[bc1\]), with one end of each line of force anchored to the star’s surface and the other out to infinity. Among all these fields, the one with the lowest energy is potential everywhere except for a current sheet at the equator (Aly 1984,1991; Sturrock 1991). This lowest energy state is the Aly-Sturrock state. Call this magnetic field configuration $\mathbf{B}_\mathrm{open}$. The total energy of this state is designated as $M_{\mathrm{open}}$. The well-known Aly-Sturrock conjecture claims that for any fully closed force-free field[^4] with the boundary condition (\[bc1\]), its total energy $M_{\mathrm{FF}}$ satisfies the following relation, $$\label{alysturrock}
M_{\mathrm{pot}} < M_{\mathrm{FF}} < M_{\mathrm{open}} \ .$$ The first half of this inequality means that a current-free potential field is the lowest energy state, and the second half suggests that the opening up of an initially closed force-free magnetic field requires a considerable amount of work to be done on the magnetic field. Of particular interest is whether the pre-eruption magnetic energy $M$ can exceed the threshold set by the Aly-Sturrock field. This is crucial for magnetically driven outbursts.
Some numerical experiments have recently demonstrated the validity of this conjecture (Antiochos, DeVore & Klimchuk 1999; Hu 2004). Due to the importance of the Aly-Sturrock constraint for the magnetic eruption, it is worthwhile to reconsider this problem when GR effects are important. Note that this Aly-Sturrock state is unique (Aly 1984; Sturrock 1991) and can be constructed by the following technique. Modify the boundary condition (\[bc1\]) to $$\label{bc2}
r = r_0 \ , B_r = |F(\theta)| \ .$$ After getting the field with this boundary condition and reversing the directions of those lines at the boundary $r=r_0$ where $B_r <
0$ of this field, we could get the Aly-Sturrock state (see also Low & Smith 1993). We have calculated the fully open field $\mathbf{B}_{\mathrm{open}}$ and the relevant energy $M_{\mathrm{open}}$ numerically. The details to obtain the Aly-Sturrock field and the magnetic energy $M_{\mathrm{open}}$ are discussed in Appendix C. In Figure \[Alyexample\], an illustrative example of the fully open Aly-Sturrock field is shown. The current sheet at the equator is shown by a thick solid line.
Dependence of $M_{\mathrm{open}}$ on Neutron Star Masses
---------------------------------------------------------
To investigate the spacetime curvature effects on the Aly-Sturrock constraint, we calculate the Aly-Sturrock threshold $M_{\mathrm{open}}$ for different magnetar masses. Throughout this paper we take the neutron star radius $r_0 = 1$, so for a neutron star mass of $1-3$ $M_{\odot}$, $r_g$ ranges from $0.15-0.45$ (For simplicity, we keep the neutron star radius fixed at 10 $\mathrm{km}$, though this is not the case in reality). In Figure \[ratio\], we show the variation of $M_{\mathrm{open}}$ (in units of $M_{\mathrm{pot}}$) with the neutron star mass. This figure shows that the more massive the magnetar, the higher the threshold is. For instance, for the dipole field with $r_g = 0.15$ (1 $M_{\odot}$), the energy of the fully open Aly-Sturrock field is $M_{\mathrm{open}}$ = $1.80 M_{\mathrm{pot}}$; when $r_g =
0.21$ (1.4 $M_{\odot}$), the energy becomes $M_{\mathrm{open}}$ = $1.88 M_{\mathrm{pot}}$. Consequently, it is more difficult for more massive neutron stars to surpass the Aly-Sturrock energy threshold. From this figure, we also note that as $r_g \rightarrow
0$ the Aly-Sturrock threshold approaches the flat spacetime limit $M_{\mathrm{open}} = 1.662 M_{\mathrm{pot}}$. This is consistent with our physical expectation.
This increase with mass of the Aly-Sturrock energy threshold stems entirely from the spacetime curvature self-confining effects mentioned in Section 2 and this behavior is quite different from the solar eruption in flat spacetime, in which the Aly-Sturrock field energy ($\sim$ 1.662 $M_{\mathrm{pot}}$) is independent of the star mass. It should be emphasized that in the magnetar outbursts, the Aly-Sturrock energy constraint is more stringent than for the solar CME-type eruptions. From Figure 2, we can infer that magnetars are probably not neutron stars with extreme mass $\sim 3
M_{\odot}$, as the Aly-Sturrock threshold could hardly be reached. For typical neutron star masses ($\sim 1-2 M_{\odot}$, $r_g \sim
0.15 - 0.3$), it is necessary to seek initial magnetic fields which possess magnetic energy in excess of the threshold set by the Aly-Sturrock energy $M_{\mathrm{open}}$. One possibility is due to the mass loading effects. The estimated ejected mass loading is about $10^{22}\mathrm{g}$ (Lyutikov 2006). This mass loading can be balanced by pressure forces in the vertical direction. The pressure gradient in the horizontal direction, however, requires magnetic forces associated with cross-field currents, i.e., $\mathbf{J} \times \mathbf{B} \neq 0$, to maintain the equilibrium state. The deviations from a strictly force-free magnetic fields, i.e., the cross-field contribution, are worth further investigations (Low & Smith 1993, Wolfson & Dlamini 1997). Physically speaking, the mass loading would act as a lid over the magnetic field. The field can be compressed globally by a sufficient amount of plasma. As a result, the energy of the compressed magnetic field increases as the total load increases, and eventually the magnetic energy exceeds $M_\mathrm{open}$. In other words, the cross field current densities provide additional sources of magnetic free energy which may be enough to enable the magnetic field to clear the threshold $M_\mathrm{open}$.
Axisymmetric Magnetostatic Magnetosphere with Cross-Field Currents
==================================================================
In this section we explore the cross-field effects on the magnetic energy storage properties in the magnetar magnetosphere. Similar investigations in solar CMEs have been carried out by Zhang & Low (2004). Specifically, we will focus on the question whether the magnetic energy in the magnetosphere can exceed the Aly-Sturrock threshold. In what follows, we consider that the magnetar magnetosphere evolves quasi-statically on sufficiently slow timescale that we can treat the magnetosphere as being essentially in magnetostatic equilibrium. A steady state axisymmetric, purely poloidal magnetic field in the Schwarzschild metric can be written as $${\bf B} = {\bf B}_{\rm pol} = \nabla \Psi \times \nabla\phi \ ,$$ where $\Psi(r,\theta)$ is the poloidal magnetic stream function. The “ordinary" orthogonal basis is used, where ${\bf
e}_{\hat{\mu}} = g_{\mu\mu}^{-1/2}\partial_{\mu}$ (Weinberg 1972, no summation rule over $\mu$ is used in this equation), namely, $${\bf e}_{\hat{r}} = \alpha\partial_r \ , {\bf e}_{\hat{\theta}} =
\frac{1}{r}\partial_{\theta} \ , {\bf e}_{\hat{\phi}} =
\frac{1}{r\sin\theta}\partial_{\phi} \ .$$ The poloidal magnetic field components are (Uzdensky 2004) $$\label{brbt}
{\bf B} = \frac{1}{r\sin\theta} \left( \frac{1}{r}\frac{\partial
\Psi}{\partial \theta} , \ - \alpha \frac{\partial \Psi}{\partial
r} \right) \ . $$ To account for the cross-field current effects induced by the mass loading, we must go beyond the force-free approximations (Yu 2011) and turn to the full magnetohydrodynamic (MHD) equation (\[derivevirial\]). This equation decomposes into the following two equations $$\label{gs}
\frac{\partial }{\partial r}\left(\alpha^2 \frac{\partial \Psi}{\partial r} \right) + \frac{\sin\theta}{r^2}\frac{\partial}{\partial \theta}\left( \frac{1}{\sin\theta}\frac{\partial \Psi}{\partial\theta} \right) + r^2\sin^2\theta\frac{\partial p}{\partial \Psi} = 0 \ ,$$ $$\label{equili2}
g^{rr}\frac{\partial p}{\partial r} + ( p + \rho) \frac{G \mathcal{M}_{\mathrm{ns}}}{r^2} = 0 \ , $$ for balance across and along the magnetic field (Low & Smith 1993). A simple solution to equation (\[equili2\]) reads $$\label{linear1}
p = \frac{P(\Psi)}{r^{m+1}} \ , $$ $$\label{linear2}
\rho_0 = \frac{1}{G\mathcal{M}_{\mathrm{ns}}} \frac{P(\Psi)}{r^{m}}\left[ m+1 - \left(2m + 2 + \frac{\gamma}{\gamma-1} \right)\frac{r_g}{r} \right] \ , $$ where $P(\Psi)$ is a free function of the magnetic stream function $\Psi$ and $m$ is a constant.
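This solution can be checked symbolically; the following SymPy sketch (ours) verifies that (\[linear1\])-(\[linear2\]) satisfy the along-field balance (\[equili2\]), treating $P(\Psi)$ as constant along a field line and using $G\mathcal{M}_{\mathrm{ns}} = r_g$ in units with $c=1$:

```python
import sympy as sp

r, rg, m, gam, P = sp.symbols('r r_g m gamma P', positive=True)

p    = P / r**(m + 1)                                                      # Eq. (linear1)
rho0 = (P / (rg * r**m)) * (m + 1 - (2*m + 2 + gam/(gam - 1)) * rg / r)    # Eq. (linear2), with G M_ns = r_g
rho  = rho0 + p / (gam - 1)                                                # energy density, as defined below (energymomentum)

grr     = 1 - 2*rg/r                                                       # g^{rr} = alpha^2
balance = grr * sp.diff(p, r) + (p + rho) * rg / r**2                      # left-hand side of Eq. (equili2)

print(sp.simplify(sp.expand(balance * r**(m + 3) / P)))                    # -> 0
```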
To keep the problem mathematically tractable, we take the free function $P$ to be linear in $\Psi$. Subsequently, equations (\[linear1\]) and (\[linear2\]) become $$\label{linear3}
p = \frac{\lambda (\Psi + \Psi_0) }{r^{m+1}} \ , $$ $$\label{linear4}
\rho_0 = \frac{1}{G\mathcal{M}_{\mathrm{ns}}} \frac{\lambda(\Psi + \Psi_0)}{r^{m}}\left[ m+1 - \left(2m + 2 + \frac{\gamma}{\gamma-1} \right)\frac{r_g}{r} \right] \ , $$ where $\Psi_0$ and $\lambda$ are constants. Substituting equation (\[linear3\]) into equation (\[gs\]), we obtain the following linear Grad-Shafranov equation $$\frac{\partial }{\partial r}\left(\alpha^2 \frac{\partial \Psi}{\partial r} \right) + \frac{\sin\theta}{r^2}\frac{\partial}{\partial \theta}\left( \frac{1}{\sin\theta}\frac{\partial \Psi}{\partial\theta} \right) + \lambda \frac{\sin^2\theta}{r^{m-1}} = 0 \ .$$ The general solution to the above equation can be written as $$\label{casestudy}
\Psi = f_m(r)\sin^2\theta + \Psi_{\mathrm{pot}} \ ,$$ where $\Psi_{\mathrm{pot}}$ is an arbitrary potential stream function satisfying (Ghosh 2000) $$\frac{\partial }{\partial r}\left(\alpha^2 \frac{\partial \Psi_{\mathrm{pot}}}{\partial r} \right) + \frac{\sin\theta}{r^2}\frac{\partial}{\partial \theta}\left( \frac{1}{\sin\theta}\frac{\partial \Psi_{\mathrm{pot}}}{\partial\theta} \right) = 0 \ .$$ This equation can be readily solved by the variable separation method (see Appendix B). The function $f_m(r)$ satisfies the following second order ordinary differential equation (ODE) $$\label{ODE}
\left( 1 - \frac{2 r_g}{r} \right) f^{\prime\prime} + \frac{2
r_g}{r^2} f^{\prime} - \frac{2}{r^2} f + \frac{\lambda}{r^{m-1}} =
0 \ ,$$ where a prime denotes a derivative with respect to $r$. The particular solutions can be readily obtained analytically. For $m=3, 4, 5, 6, 7,$ and $8$, the radial functions $f_m$ are given explicitly in Appendix D. The simple linear solutions given by the above equations cannot be expected to describe the magnetar magnetosphere in realistic detail. However, the solution given by equation (\[casestudy\]) with $\Psi_{\mathrm{pot}} = 0$ can be used to obtain a physical estimate of how much energy can be stored in the magnetosphere prior to eruptions.
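Alternatively, equation (\[ODE\]) is straightforward to integrate numerically. The sketch below (ours) integrates it outward from the stellar surface with illustrative, hypothetical initial data $f(r_0)=1$, $f'(r_0)=0$; the particular solutions actually used in this paper are the analytic ones of Appendix D:

```python
from scipy.integrate import solve_ivp

def f_rhs(r, y, m, rg, lam):
    """First-order form of Eq. (ODE) for y = (f, f')."""
    f, fp = y
    fpp = (-(2 * rg / r**2) * fp + (2 / r**2) * f - lam / r**(m - 1)) / (1 - 2 * rg / r)
    return [fp, fpp]

m, rg, lam, r0 = 8, 0.15, 1.0, 1.0          # index m, gravitational radius, lambda, stellar radius
sol = solve_ivp(f_rhs, (r0, 20.0), [1.0, 0.0], args=(m, rg, lam),
                dense_output=True, rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])                         # f at the outer edge of the integration interval
```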
The magnetic energies for different values of $m$ and $r_g$ are listed in Table 1. We find that, for $r_g = 0.15$ and $0.21$, the field configurations are able to sustain magnetic energy higher than the Aly-Sturrock threshold when $m \ge 8$.
| $m$ | $r_g = 0.15$ | $r_g = 0.21$ | $r_g = 0.3$ |
|----:|:------------:|:------------:|:-----------:|
|  3  | 2.19 | 2.30 | 2.55 |
|  4  | 1.00 | 1.00 | 1.00 |
|  5  | 1.15 | 1.13 | 1.10 |
|  6  | 1.44 | 1.38 | 1.30 |
|  7  | 1.77 | 1.67 | 1.52 |
|  8  | 2.12 | 1.98 | 1.76 |
|  9  | 2.48 | 2.29 | 2.00 |
| 10  | 2.84 | 2.61 | 2.25 |
The gravitational radius $r_g$ is taken as $0.15$, $0.21$ and $0.3$, which correspond to magnetar masses of $1.0$, $1.4$ and $2.0$ $M_{\odot}$. The Aly-Sturrock energy thresholds, shown in Figure 2 for the three values of the magnetar mass, are $M_{\mathrm{open}}
= 1.80 M_{\mathrm{pot}}$, $M_{\mathrm{open}} = 1.88
M_{\mathrm{pot}}$ and $M_{\mathrm{open}} = 2.06 M_{\mathrm{pot}}$, respectively. According to this table, we note that, when $m \ge
8$ for $r_g = 0.15, 0.21$ and $m \ge 10$ for $r_g = 0.3$, the magnetic energy in the magnetosphere could be higher than the Aly-Sturrock threshold.
Simple estimation shows that the total magnetic energy of a magnetar with magnetic field $\sim 10^{14} - 10^{15}$ G is approximately $ 10^{46}-10^{48}$ ergs. Given the actual giant flare energy release $\sim 10^{44} - 10^{46}$ ergs, we see that only a few percent of the total magnetic energy, stored in excess of the Aly-Sturrock threshold, needs to be released during a giant flare. This energy requirement can be fulfilled once $m$ reaches a critical value. For instance, we find that for $r_g = 0.15$ the magnetic energy $M$ with $m = 8$ is approximately 15 percent above the Aly-Sturrock threshold $M_{\mathrm{open}}$, which is enough to drive magnetar giant flares. In Figure 3, we show the $m=8$ solution with $r_g = 0.15$ $$\Psi = \frac{f_8(r)}{f_8(r_0)}\sin^2\theta \ , \ \rho_0 = \frac{1}{G\mathcal{M}_{\mathrm{ns}}} \frac{\lambda \Psi }{r^{m}}\left[ m+1 - \left(2m + 2 + \frac{\gamma}{\gamma-1} \right)\frac{r_g}{r}
\right] \ ,$$ where the stream flux is normalized to unity at $r=r_0$ and $\theta = \pi/2$ and we have set the constant $\Psi_0 = 0 $ in equation (\[linear4\]). The left panel in this figure shows the magnetic field lines and the right one shows the contour of the density departure[^5] from an arbitrary, spherically symmetric distribution. In this particular state, the magnetic energy $M$ is $2.12 M_{\mathrm{pot}}$, which is greater than the Aly-Sturrock state $M_{\mathrm{open}} = 1.80 M_{\mathrm{pot}}$.
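The field-line pattern of Figure 3 can be reproduced qualitatively from the closed form of $f_8$ quoted in Appendix D; the sketch below simply evaluates the normalised stream function $\Psi=f_8(r)\sin^2\theta/f_8(r_0)$ on a polar grid (the Appendix D expression is transcribed verbatim and taken on trust here) and draws its level curves, which trace the poloidal field lines.

```python
import numpy as np
import matplotlib.pyplot as plt

r_g, lam, r0 = 0.15, 1.0, 1.0             # illustrative values from the text

def f8(r):
    # f_8(r) as quoted in Appendix D (transcribed verbatim, taken on trust).
    L = np.log((r - 2 * r_g) / r)
    num = (30 * r**5 * r_g + 30 * r**4 * r_g**2 + 40 * r**3 * r_g**3
           + 60 * r**2 * r_g**4 + 96 * r * r_g**5 + 160 * r_g**6 + 15 * r**6 * L)
    return lam * num / (7680 * r**4 * r_g**7)

r = np.linspace(r0, 6.0, 300)
theta = np.linspace(0.0, np.pi, 300)
R, T = np.meshgrid(r, theta)
Psi = f8(R) / f8(r0) * np.sin(T)**2       # normalised to unity at (r0, pi/2)

plt.contour(R * np.sin(T), R * np.cos(T), Psi, levels=20)   # field lines = level curves of Psi
plt.gca().set_aspect("equal")
plt.xlabel("x")
plt.ylabel("z")
plt.show()
```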
The solution with $m=3$ and $\Psi_{\mathrm{pot}} = 0$ is a purely radial magnetic field, $$B_r = \lambda \frac{\cos\theta}{r^2} \ , B_{\theta} = 0 \ .$$ This solution has been extensively discussed in the context of the Blandford-Znajek process (e.g. Blandford & Znajek 1977) related to relativistic astrophysical jets. In our discussion, however, this state itself is of no particular interest as we are more concerned with the initial closed state. To introduce the closed field structures,
we add a dipole field to the $m=3$ purely radial magnetic field, i.e., $$\label{m3mix}
\Psi = \lambda \frac{f_3(r)}{f_3(r_0)} \sin^2\theta \pm
\Psi_{\mathrm{dipole}} \ ,$$ where the stream function is also normalized. The magnetic fields with $r_g = 0.15$ are shown in Figure 4. The left panel in this figure corresponds to the “$+$" sign, which approximately models the effects of the neutron star wind (Bucciantini et al. 2006). Such configurations are also discussed by Low & Tsinganos (1986) and applied to model the effects of the solar wind. Note that when $\lambda$ increases to $5.0$, the magnetic energy in the left panel is about $1.83 M_{\mathrm{pot}}$, exceeding the corresponding Aly-Sturrock energy $M_{\mathrm{open}}$ by about $2\%$, which suggests this state may support a giant flare. If $\lambda$ is increased further, even more magnetic energy can be stored. The right panel takes the “$-$" sign in the above equation. Though the right panel shows a state that is physically unacceptable, it is worth pointing out that the energy of the state with detached field lines ($\sim 2.86 M_{\mathrm{pot}}$) is much higher than the energy in the left panel. This also suggests that, when there are detached fields in the magnetosphere, the stored magnetic energy can be much larger than in configurations whose field lines are all anchored to the magnetar surface. This possibility to bypass the Aly-Sturrock constraint has been discussed by Flyer et al. (2004) for solar CMEs and will be further discussed for magnetar giant flares (Yu et al. in prep).
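A minimal sketch of the superposition in equation (\[m3mix\]) is given below. Since $f_3=\lambda/2$ is constant, the $m=3$ part contributes $\lambda\sin^2\theta$; the dipole part uses the Schwarzschild dipole stream function of Appendix B, normalised here (as an assumption, for convenience) to unity at $r=r_0$, $\theta=\pi/2$; $\lambda=5.0$ and $r_g=0.15$ are the illustrative values quoted above.

```python
import numpy as np

r_g, r0, lam = 0.15, 1.0, 5.0             # illustrative values quoted in the text

def R_dip(r):
    # Radial part of the Schwarzschild dipole (Appendix B).
    return 0.5 * r**2 * np.log(r / (r - 2 * r_g)) - r * r_g - r_g**2

def psi_mixed(r, theta, sign=+1):
    # Equation (m3mix): constant-in-r split-monopole part plus (or minus) a normalised dipole.
    psi_dipole = R_dip(r) / R_dip(r0) * np.sin(theta)**2
    return lam * np.sin(theta)**2 + sign * psi_dipole

r = np.linspace(r0, 10.0, 5)
print(psi_mixed(r, np.pi / 2, +1))        # "+" sign: wind-like configuration (left panel)
print(psi_mixed(r, np.pi / 2, -1))        # "-" sign: configuration with detached flux (right panel)
```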
Conclusions and Discussions {#sec:diss}
===========================
We construct general relativistic models of non-rotating neutron stars endowed with strong magnetic fields. The equilibrium states of axisymmetric force-free magnetic fields in magnetar magnetospheres are found as solutions of the Grad-Shafranov equations in the Schwarzschild geometry. A newly derived general relativistic magnetic virial theorem is presented in this work. Based on this magnetic virial theorem, we carefully examine the GR effects on the well-known Aly-Sturrock energy threshold. We find that this energy threshold increases with the magnetar mass. As a result, it is more difficult for massive magnetars to erupt. By this observation, we conclude that magnetars are probably not neutron stars with extreme mass. The non-force-free magnetic field induced by the mass loading is further investigated as a possibility to bypass the Aly-Sturrock constraint for typical magnetar masses around $\sim 1.4 M_{\odot}$.
We mainly discuss dipolar surface boundary conditions in this paper. This is the case for a magnetar’s large-scale fields. However, observations show a striking feature: the emergence of a strong four-peaked pattern in the light curve of the 1998 August 27 event from SGR 1900+14, seen in data from the Ulysses and Beppo-SAX gamma-ray detectors (Feroci et al. 2001). These remarkable data may imply that the geometry of the magnetic field was quite complicated in regions close to the star, where GR effects are important. As a result, complex boundary conditions should be important for the outburst of magnetars. Effects of different boundary conditions on the energy buildup in magnetars are worth further investigation (Antiochos et al. 1999).
For simplicity, we have neglected the relativistic wind from the neutron star surface. Actually, the wind from the neutron star (e.g. Bucciantini et al. 2006) may cause part of the magnetic field lines to be in an open state before eruption. Similar effects have been explored in solar CMEs (Low & Smith 1993; Wolfson 1993). It is interesting to investigate the effects of the neutron star wind on the magnetic energy storage properties. Helicity has been discussed extensively in solar physics (Zhang & Low 2005). CMEs are believed to be the unavoidable products of the coronal evolution as a result of magnetic helicity accumulation (Zhang et al. 2006). But helicity in the GR regime is not a well-explored issue. Finding a self-consistent definition of helicity in the curved spacetime and investigating the relevant helicity properties are interesting topics for further exploration.
The field topology change from a closed state to an open state must be accompanied by magnetic reconnection. After a certain threshold is reached, the dynamical instability sets in. The gradual quasi-static evolution of the magnetar’s magnetosphere will be replaced by the dynamical evolution of the field. This naturally explains the problem as to how a very slow buildup of the external shear (over an interval of $\sim$100 yr) could lead to the sudden release of external magnetic energy on a much shorter timescale (Lyutikov 2006). The magnetic energy dissipation in the strongly magnetized plasma is caused by the tearing mode instability (Lyutikov 2003, Komissarov et al. 2007). Reconnection induced by the relativistic tearing instability in the nonlinear regime needs further study to better understand magnetar outburst behaviors.
Our theoretical models cannot address the nonlinear dissipation processes that occur during giant flares. However, current GRMHD simulations provide a unique opportunity to study the dynamical outburst physics. The models constructed in this work are likely to be useful as initial states in GRMHD numerical simulations to explore the dynamics of magnetic eruptions (Gammie et al. 2003, Yu 2011).
We thank the anonymous referee for important comments and suggestions that improve this paper greatly. The research is supported by the Natural Science Foundation of China (Grant 10873033, 10703012, 10778702 and 10973034), the Western Light Young Scholar Program and the 973 Program (Grant 2009CB824800). The computation is performed at HPC Center, Kunming Institute of Botany, CAS, China.
Derivation of Virial Theorem in Schwarzschild Metric
====================================================
The four equations expressing conservation of energy momentum are $$\nabla_{\nu} T^{\mu\nu} = 0 \ , $$ where the Einstein summation rule is assumed and Greek letters take on the values $t$, $r$, $\theta$, and $\phi$. The four-velocity for a plasma in magnetostatic equilibrium is $$u^{t} = (- g_{tt})^{-1/2}, \ u^r=u^{\theta}=u^{\phi} = 0 \ .$$ Given the energy-momentum tensor in equation (\[energymomentum\]), the covariant derivative can be expanded as follows, $$\nabla_{\nu} T^{\mu\nu} = g^{\mu\nu}\frac{\partial}{\partial x^{\nu}}\left(p+\frac{b^2}{2}\right) + \Gamma^{\mu}_{\beta\sigma}(p + \rho + b^2)u^{\beta} u^{\sigma} $$ $$- \frac{1}{\sqrt{-g}}\frac{\partial (\sqrt{-g}\ b^{\mu} b^{\nu})}{\partial x^{\nu}} - \Gamma^{\mu}_{\beta\sigma} b^{\beta} b^{\sigma} \ .$$ The radial component of the above equation becomes (note that the connection coefficients $\Gamma^{\mu}_{tt} = -\frac{1}{2}
g^{\mu\nu}\frac{\partial g_{tt}}{\partial x^{\nu}} $) $$g^{rr} \frac{\partial }{\partial r}\left( p + \frac{b^2}{2}\right) + g^{rr}\frac{1}{2 g_{tt}}\left( p + \rho + b^2 \right) \frac{\partial g_{tt}}{\partial r} - \left( \frac{1}{\sqrt{-g}} \frac{\partial }{\partial r}\left(\sqrt{-g}\ b^r b^r \right) \right.$$ $$\label{radial}
\left.
+ \frac{1}{\sqrt{-g}} \frac{\partial }{\partial \theta}\left(\sqrt{-g}\ b^r b^{\theta} \right) + \Gamma^r_{rr} b^r b^r + \Gamma^r_{\theta\theta} b^{\theta} b^{\theta} + \Gamma^r_{\phi\phi} b^{\phi} b^{\phi} \right) = 0 \ ,$$ where $$\Gamma^r_{rr} = - \frac{r_g}{r(r - 2 r_g )} \ , \ \Gamma^r_{\theta\theta} = - (r - 2 r_g) \ , \ \Gamma^r_{\phi\phi} = - (r - 2 r_g) \sin^2\theta \ .$$ The “ordinary" component of the magnetic field $\mathbf{B}$ in the orthogonal basis (Weinberg 1972) is related to the magnetic field 4-vector $b^{\mu}$ by $$B_r = \sqrt{g_{rr}} \ b^{r} = \sqrt{g^{rr}} \ b_{r} \ , B_{\theta} = \sqrt{g_{\theta\theta}} \ b^{\theta} = \sqrt{g^{\theta\theta}} \ b_{\theta} \ , B_{\phi} = \sqrt{g_{\phi\phi}} \ b^{\phi} = \sqrt{g^{\phi\phi}} \ b_{\phi} \ , $$ and $$b^2 = b_{\mu} b^{\mu} = B^2=B_r^2 + B_{\theta}^2 + B_{\phi}^2 \ .$$ Multiplying the equation (\[radial\]) by $r$ and expressing the magnetic field 4-vector $b^{\mu}$ by the “ordinary" magnetic fields $\mathbf{B}$ in equation (\[radial\]), we may arrive at $$\alpha^2 r \frac{\partial }{\partial r}\left( p + \frac{B^2}{2}\right) + \frac{r_g}{ r}\left( p + \rho + B^2 \right) - \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^3 \alpha^2 B_r^2 \right) - \frac{\alpha}{\sin\theta}\frac{\partial}{\partial \theta}\left( \sin\theta B_r B_{\theta} \right) $$ $$+ \frac{r_g}{ r} B_r^2 + \alpha^2 (B_r^2 + B_{\theta}^2+
B_{\phi}^2) = 0 \ .$$ Performing the volume integral with the usage of Gauss’s theorem, the above equation can be re-arranged to give the generalized virial theorem, equation (\[virial\]) in the main text.
Dipole Field Boundary Conditions and Separable Solutions for Potential Fields
==============================================================================
To get the dipole field boundary conditions, we need to obtain the current-free potential field. To be self-contained, we describe the separable solutions of the homogeneous Grad-Shafranov equation, which are also the building blocks for the Aly-Sturrock fully open field. The homogeneous GS equation reads $$\frac{\partial}{\partial r} \left[ \left( 1 - \frac{2 r_g}{r}
\right) \frac{\partial \Psi}{\partial r} \right] +
\frac{\sin\theta}{r^2}\frac{\partial }{\partial\theta}
\left(\frac{1}{\sin\theta} \frac{\partial
\Psi}{\partial\theta}\right) = 0 \ . \label{potential}$$ Separable solutions of the above equation are of the form $$\Psi(r,\theta) = R(r)\Theta(\theta) \ .$$ Substituting the above equation into equation (\[potential\]), we obtain $$\label{theta}
\frac{d}{d\theta}\left( \frac{1}{\sin\theta} \frac{d \Theta}{d
\theta}\right) = - \lambda \frac{\Theta}{\sin\theta} \ ,$$ $$\label{R}
\frac{d}{d r}\left[ \left( 1- \frac{2 r_g}{r} \right) \frac{d R}{d
r}\right] = \lambda \frac{R}{r^2} \ ,$$ where $\lambda$ is the separation constant. The lowest-order solution is the special case $\lambda = 0$, obtained by setting $\lambda = 0$ in the above two equations. The solutions are then $$\Theta(\theta) = {\rm a} \cos\theta + {\rm b} \ ,$$ $$R(r) = {\rm c} \ ,$$ where a, b, and c are constants. This solution is the Schwarzschild monopole. The order of the solution is denoted by the ordinal number $m$ ($m=1$ corresponds to the dipole field), related to the constant $\lambda$ by $\lambda = m(m+1)$. Equations (\[theta\]) and (\[R\]) become (Ghosh 2000): $$(1 - \mu^2) \frac{d^2 \Theta}{d\mu^2} + m(m+1) \Theta = 0 \ ,
\label{mu}$$ $$(1 - z^2) \frac{d^2 R}{dz^2} - 2 \frac{d R}{d z}+ m(m+1) R = 0 \ ,
\label{Jacobi}$$ where $\mu = \cos\theta$ and $z = r/r_g - 1$.
The solution of equation (\[mu\]) is $$\Theta (\mu) = (1 - \mu^2) \frac{d P_{m}(\mu)}{d \mu} \ ,$$ where $P_{m}(\mu)$ is the Legendre polynomial. The solutions of equation (\[Jacobi\]) are $$R(r) = r^2 \left\{ \begin{array}{l}
{\cal P}^{(0,2)}_{m-1}(z) \\
{\cal Q}^{(0,2)}_{m-1}(z) \\
\end{array} \right. \ ,$$ where ${\cal P}^{(0,2)}_{m-1}(z)$ and ${\cal Q}_{m-1}^{(0,2)}(z)$ are Jacobi polynomial and Jacobi functions of the second kind, respectively. For $r\gg r_g$, the Jacobi polynomial and Jacobi function’s asymptotic behaviors are (Szeg$\mathrm{\ddot{o}}$ 1939) $${\cal P}^{(0,2)}_{m-1}(z) \sim r^{m-1} \ , \ {\cal
Q}^{(0,2)}_{m-1}(z) \sim r^{-m-2} \ .$$ The superscripts in the Jacobi polynomial and Jacobi function will be suppressed hereafter, as the values remain the same throughout this study. The explicit expressions for the Jacobi polynomials and Jacobi functions can be found in Gradshteyn & Ryzhik (1980).
Of particular interest is the dipole configuration ($m=1$) determined by $$\Psi = \left[(1-\mu^2)\frac{d P_1(\mu)}{d\mu}\right] r^2 {\cal Q}_0(z) = \left[ \frac{r^2}{2}\ln\left(\frac{r}{r - 2 r_g}\right) - r r_g - r_g^2 \right] \sin^2\theta \ .$$ This solution can be used as boundary conditions.
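As a quick consistency check, far from the star the radial factor $r^2{\cal Q}_0(z)$ should recover the flat-spacetime dipole fall-off $\propto 1/r$; expanding the logarithm gives $r\,R(r)\to 4r_g^3/3$ (a check stated here from that expansion, not a value quoted from the text), which the following sketch confirms numerically.

```python
import numpy as np

r_g = 0.15

def R_dip(r):
    # Radial part r^2 Q_0(z) of the Schwarzschild dipole given above.
    return 0.5 * r**2 * np.log(r / (r - 2 * r_g)) - r * r_g - r_g**2

for r in [10.0, 100.0, 1000.0]:
    # r * R(r) should approach 4 r_g^3 / 3 as r -> infinity.
    print(r, r * R_dip(r), 4 * r_g**3 / 3)
```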
Determination of the Aly-Sturrock Field
=======================================
To appreciate the Aly-Sturrock constraint on the availability of free magnetic energy, we need to determine the Aly-Sturrock state numerically. Following Low & Smith (1993), the boundary conditions of the Aly-Sturrock fully opened field can be obtained by flipping the flux function according to the boundary condition (\[bc2\]) $$\Psi_{\mathrm{modify}} = \left\{ \begin{array}{ll}
\Psi(r_0, \theta) & 0\leq \theta \leq \pi/2 \\
2\Psi(r_0,\pi/2) - \Psi(r_0, \theta) & \pi/2\leq \theta \leq \pi \\
\end{array} \right. \ .$$ Specifically, for the original dipole boundary condition, the modified boundary condition becomes $$\label{mbc}
\Psi_{\mathrm{modify}}(r_0,\theta) = B_0 A_1 \times \left\{\begin{array}{ll}
\sin^2\theta & 0\leq \theta \leq \pi/2 \\
2 - \sin^2\theta & \pi/2\leq \theta \leq \pi \\
\end{array} \right. \ ,$$ where $$A_1 = \frac{r_0^2}{2}\ln\left(\frac{r_0}{r_0 - 2 r_g} \right) -
r_0 r_g - r_g^2 \ ,$$ and $r_0$ is the magnetar radius. The solutions to the homogeneous Grad-Shafranov equation are of the form $$\Psi(r, \theta) = \sum_{n=1}^{\infty} a_{n} \left(r^2 {\cal
Q}_{n-1}(r) \right) \left( \sin^2\theta \frac{d P_{n}(\mu)}{d\mu}
\right) + \alpha_0 + \alpha_1 \cos\theta \ ,$$ where $\mu = \cos\theta$ and $\mathcal{Q}_{n-1}(r)$ is the Jacobi function of the second kind. It is clear that $$\alpha_0 = B_0 A_1 \ , \ \alpha_1 = - B_0 A_1 \ .$$ We define the following flux function as $$\Psi^{*}(r, \theta) = \Psi(r,\theta) - \alpha_0 - \alpha_1
\cos\theta \ .$$ The problem becomes to determine the coefficient $a_n$ $$\label{psidcmp}
\Psi^{*}(r, \theta) = \sum_{n=1}^{\infty} a_{n} \left(r^2 {\cal
Q}_{n-1} \right) \left( \sin^2\theta \frac{d P_{n}(\mu)}{d\mu}
\right) \ ,$$ subject to the modified boundary condition (\[mbc\]) $$\Psi^{*}(r_0, \theta) = \Psi(r_0,\theta) - \alpha_0 - \alpha_1 \cos\theta$$ $$= B_0 A_1 \times \left\{\begin{array}{ll}
\sin^2\theta -1 + \cos\theta & 0\leq \theta \leq \pi/2 \\
1 - \sin^2\theta + \cos\theta & \pi/2\leq \theta \leq \pi \\
\end{array} \right. .$$ According to the orthogonality of associated Legendre polynomials $ P_{n}^{1}(\mu) $, we have that $$a_n = - \frac{1}{r_0^2 {\cal Q}_{n-1}(r_0)} \frac{2n+1}{2n(n+1)}
\int^{\pi}_0 \Psi^{*}(r_0,\theta) P_{n}^{1}(\mu) d\theta \ ,$$ Note that $\Psi^{*}(r_0,\theta)$ is an odd function of $\theta$ in the integration range. When $n$ is an odd integer, the coefficients $a_n$’s vanish. The non-zero coefficients $a_{n}$’s ($n=2m$) can be written as $$a_{2m} = - \frac{B_0 A_1} {r_0^2 {\cal Q}_{2m-1}(r_0) }
\frac{4m+1}{2m(2m+1)} \int^{\pi/2}_{0} (\sin^2\theta -1 +
\cos\theta) P_{2m}^1(\cos\theta) d\theta \ , $$ where $P_{2m}^1(\cos\theta)$ is the associated Legendre polynomial. After some manipulations, we arrive at $$a_{2m} = \frac{B_0 A_1} {r_0^2 {\cal Q}_{2m-1}(r_0) }
\frac{4m+1}{m(2m+1)} \frac{(-1)^{m-1}(2m-2)!}{2^{2m}(m-1)!(m+1)!}
\equiv \frac{c_{2m}}{{\cal Q}_{2m -1}(r_0) }\ .$$ The radial and $\theta$ components of the magnetic field, according to equation (\[brbt\]), are $$B_r = \sum_{m=1}^{\infty} a_{2m} {\cal Q }_{2m-1}(r) \left[ 2m
(2m+1) P_{2m}\right] - \frac{\alpha_1}{r^2} \ ,$$ and $$B_{\theta} = -\sqrt{1 - \frac{2 r_g}{r}} \sum_{m=1}^{\infty}
a_{2m} \left[ 2{\cal Q }_{2m-1}(r) + r {\cal Q
}^{\prime}_{2m-1}(r) \right] \sin\theta \frac{ d P_{2m}(\mu)
}{d\mu} \ ,$$ where prime denotes derivative with respect to $r$. The magnetic energy of the open field, according to equation (\[Mdefinition2\]), is $$M_{\mathrm{open}} = \pi r_0^3 \left(1 - \frac{2 r_g}{r_0} \right)\left\{2 B_0^2 \frac{A_1^2}{r_0^4} + \sum_{m=1}^{\infty} c_{2m}^2
\frac{4m(2m+1)}{4m+1} \times \right.$$ $$\left. \left[ 2m(2m+1) - \left( 1 - \frac{2 r_g}{r_0} \right)
\left( 2 + \frac{ r_0 {\cal Q }^{\prime}_{2m-1}(r_0)}{{\cal
Q}_{2m-1}(r_0)}\right)^2 \right] \right\} +$$ $$2 \left( 4\pi r_g B_0^2 A_1^2 \int_{r_0}^{\infty} \frac{1}{r^3} dr + 4\pi r_g \sum_{m=1}^{\infty} c_{2m}^2 \frac{\left[2m(2m+1)\right]^2}{4m+1} \int_{r_0}^{\infty} \left(\frac{\mathcal{Q}_{2m-1}(r)}{\mathcal{Q}_{2m-1}(r_0)}\right)^2 r\ dr \right) $$ $$+ \ 2\pi r_g \sum_{m=1}^{\infty} c_{2m}^2\frac{4m(2m+1)}{4m+1}\int_{r_0}^{\infty} (r - 2r_g) \left( \frac{2 {\cal Q}_{2m-1}(r) + r {\cal Q}^{\prime}_{2m-1}(r)}{{\cal Q}_{2m-1}(r_0)} \right)^2 dr \ . $$ The potential dipole field energy $M_{\mathrm{pot}}$ can be calculated as follows, $$B_r = 2 B_0 g_r(r) \cos\theta \ ,$$ $$B_{\theta} = B_0 g_{\theta}(r) \sin\theta \ ,$$ where $$g_r(r) = \frac{1}{2}\ln\left( \frac{r}{r - 2 r_g}\right) -
\frac{r_g}{r} - \frac{r_g^2}{r^2} \ ,$$ $$g_{\theta}(r) = \sqrt{1 - \frac{2 r_g}{r}} \left[ \frac{2 r_g (r -
r_g)}{r (r - 2 r_g)} - \ln\left( \frac{r}{r - 2 r_g}\right)
\right] \ .$$ The potential dipole field energy is $$M_{\mathrm{pot}} = \frac{1}{2} \int B_0^2 \left( 4 g_r^2(r)
\cos^2\theta + g_{\theta}^2(r)\sin^2\theta \right) dV \ .$$
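The factorial expression for $a_{2m}$ obtained above can be cross-checked numerically; the sketch below compares the angular integral with the closed form, using `scipy`'s associated Legendre function `lpmv` (which follows the Condon–Shortley sign convention, assumed here to be the convention intended for $P^1_{2m}$).

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import lpmv

def angular_integral(m):
    # int_0^{pi/2} (sin^2 t - 1 + cos t) P^1_{2m}(cos t) dt
    integrand = lambda t: (np.sin(t)**2 - 1 + np.cos(t)) * lpmv(1, 2 * m, np.cos(t))
    return quad(integrand, 0.0, np.pi / 2)[0]

def closed_form(m):
    # Value implied by the factorial formula for a_{2m} given above.
    return 2 * (-1)**m * math.factorial(2 * m - 2) / (
        2**(2 * m) * math.factorial(m - 1) * math.factorial(m + 1))

for m in range(1, 6):
    print(m, angular_integral(m), closed_form(m))   # the two columns should agree
```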
Solutions for the Ordinary Differential Equation (\[ODE\])
==========================================================
For $m = 3, 4, 5, 6, 7,$ and 8, the functions $f_m(r)$’s are $$f_3 = \frac{\lambda}{2} \ ,$$ $$f_4 = \lambda \frac{2 r r_g + 2 r_g^2 + r^2\ln(r-2r_g) - r^2\ln r
}{8 r_g^3} \ ,$$ $$f_5 = \lambda \frac{6 r^2 r_g + 6 r r_g^2 + 8 r_g^3 + 3 r^3
\ln(r-2r_g) - 3 r^3 \ln r }{48 r r_g^4} \ ,$$ $$f_6 = \lambda \frac{6\, r^3 r_g + 6 r^2 r_g^2
+ 8 r r_g^3 + 12 r_g^4 + 3 r^4 \ln(r-2r_g) - 3 r^4
\ln r }{192 r^2 r_g^5} \ ,$$ $$f_7 = \lambda \frac{30 r^4 r_g + 30 r^3 {r_g}^2 + 40
r^2{r_g}^3 + 60 r{r_g}^4 + 96{r_g}^5 + 15 r^5 \ln \frac{r - 2
r_g}{r}}{2880 r^3 {r_g}^6} \ ,$$ $$f_8 = \lambda \frac{30 r^5 r_g + 30 r^4 {r_g}^2 + 40r^3{r_g}^3 +
60r^2{r_g}^4 + 96r{r_g}^5 + 160 {r_g}^6 + 15 r^6 \ln \frac{r - 2
r_g}{r}}{7680 r^4 {r_g}^7} \ ,$$ respectively. In the calculation of the magnetic energy in the exterior of the neutron star, the stream functions are normalized as $$\Psi = \frac{f_m(r)}{f_m(r_0)}\sin^2\theta \ .$$
[99]{} Aly, J. J., 1984, ApJ, 283, 349
Aly, J. J., 1991, ApJL, 375, 61
Anile, A. M., 1989, Relativistic Fluids and Magnetofluids (New York: Cambridge Univ. Press)
Antiochos, S. K., DeVore, C. R. & Klimchuk, J. A., 1999, ApJ, 510, 485 Anton, L., et al., 2006, ApJ, 637, 296 Barnes, C. W. & Sturrock, P. A., 1972, ApJ, 174, 659 Blandford, R. D., & Znajek, R. L., 1977, MNRAS, 179, 433
Beloborodov, A. M., 2009, ApJ, 703, 1044 Beloborodov, A. M., & Thompson, C., 2007, ApJ, 657, 967 Bucciantini, N., et al., 2006, MNRAS, 368, 1717 Chandrasekhar, S., 1961, Hydrodynamic and Hydromagnetic Stability (Oxford: Oxford Univ. Press)
Chandrasekhar, S., 1967, ApJ, 147, 383 Ciolfi, R., et al., 2009, MNRAS, 397, 913 Duncan, R. C., & Thompson, C., 1992, ApJL, 392, 9
Fan, Y. H., & Low, B. C., 2003, ASPC, 286, 347 Feroci, M., et al., 2001, ApJ, 549, 1021 Flyer, N., Fornberg, B., Thomas, S., & Low, B. C., 2004, ApJ, 606, 1210
Gammie, C. F., McKinney, J. C., & Toth, G., 2003, ApJ, 589, 444 Ghosh, P., 2000, MNRAS, 315, 89
Gradshteyn, I. S., & Ryzhik, I. M., 1980, Table of Integrals, Series, and Products (New York: Academic Press)
Hu, Y. Q., 2004, ApJ, 606, 1032 Komissarov, S. S., 2004, MNRAS, 350, 427 Komissarov, S. S., Barkov, M. & Lyutikov M., 2007, MNRAS, 374, 415 Low, B. C., 2001, JGR, 106, 25141-25163 Low, B. C., & Smith, D. F., 1993, ApJ, 410, 412 Low, B. C., & Tsinganos, K., 1986, ApJ, 302, 163 Lyutikov, M., 2003, MNRAS, 346, 540 Lyutikov, M., 2006, MNRAS, 367, 1602
Mazets, E. P., et al., 1979, Nature, 282, 587
Mereghetti, S., & Stella L., 1995, ApJL, 442, 17
Mikic, Z., & Linker, J. A., 1994, ApJ, 430, 898 Misner, C., Thorne, K., & Wheeler, J., 1973, Gravitation, (New York: Freeman)
Palmer, D. M., et al., 2005, Nature, 434, 1107
Sturrock, P. A., 1991, ApJ, 380, 655
Szeg$\mathrm{\ddot{o}}$ G., 1939, Orthogonal Polynomials, Am. Math. Soc., New York
Thompson, C., & Duncan, R. C., 2001, ApJ, 561, 980 Thompson, C., Lyutikov, M., & Kulkarni, S. R., 2002, ApJ, 574, 332
Uzdensky, D. A., 2004, ApJ, 603, 652 Weinberg, S., 1972, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity (New York: Wiley)
Wolfson, R., 1993, ApJ, 419, 382 Wolfson, R. & Dlamini B., 1997, ApJ, 483, 961 Woods, P. M., et al., 2001, ApJ, 552, 748 Yu, C., 2011, MNRAS, 411, 2461 Zhang, M., & Low, B. C., 2004, ApJ, 600, 1043 Zhang, M., & Low, B. C., 2005, ARA&A, 43, 103
Zhang, M., et al., 2006, ApJ, 644, 575
[^1]: Here simple geometries mean that the two ends of all field lines are anchored onto the neutron star surface.
[^2]: Although we will be treating a non-force-free magnetosphere in which cross-field effects (caused by mass loading) are important, the discussion here is restricted to magnetically dominated force-free fields. The relevance will become clear as we proceed.
[^3]: See the magnetic field configuration in Figure 8b of Low (2001), which cannot maintain equilibrium in flat spacetime. Such configurations can, however, be self-confined by spacetime curvature effects.
[^4]: Strictly speaking, this condition is not fulfilled since the field lines open at the light cylinder. Fortunately, magnetars are slow rotators, so the light cylinder is quite far away from the neutron star surface and this effect is negligible. For this reason, we focus in this paper on non-rotating neutron stars.
[^5]: Note that an arbitrary, spherically symmetric density distribution corresponds to the term that is proportional to the constant $\Psi_0$ in equation (\[linear4\]), which is ignored in this figure.
---
abstract: 'Correlation inequalities are presented for functionals of a ferromagnetic Potts model with external field, using the random-cluster representation. These results extend earlier inequalities of Ganikhodjaev–Razak and Schonmann, and yield also GKS-type inequalities when the spin-space is taken as the set of $q$th roots of unity.'
address: 'Statistical Laboratory, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, U.K.'
author:
- 'Geoffrey R. Grimmett'
bibliography:
- 'griffiths.bib'
date: 'first posted 29 July 2007, revised 7 January 2009'
title: |
Correlation inequalities of GKS type\
for the Potts model
---
Introduction {#sec:int}
============
Our purpose in this brief note is to derive certain correlation inequalities for a ferromagnetic Potts model. The main technique is the random-cluster representation of this model, and particularly the FKG inequality. Some, at least, of the arguments given here are probably known to others. Our results generalize the work of Ganikhodjaev and Razak, who have shown in [@GanR] how to formulate and prove GKS inequalities for the Potts model with a general number $q$ of local states. Furthermore, our Theorems \[mainthm\] and \[s2\] extend the correlation inequalities of Schonmann to be found in [@S88].
The inequalities {#sec:ineq}
================
Let $G=(V,E)$ be a finite graph, and let $J=(J_e: e\in E)$ and $h=(h_v: v \in V)$ be vectors of non-negative reals, and $q \in \{2,3,\dots\}$. We take as local state space for the $q$-state Potts model the set $\sQ=\{0,1,\dots,q-1\}$. The Potts measure on $G$ with parameters $J$, $h$, and $q$ has state space $\Si=\sQ^V$ and probability measure $$\pi(\s) = \frac1Z \exp\left\{ \sum_{e=\la x,y\ra\in E} J_e \de_e(\s) + \sum_{v\in V} h_v \de_v(\s)\right\},$$ for $\s=(\s_v:v\in V)\in\Si$, where $\de_e(\s) = \de_{\s_x,\s_y}$ and $\de_v(\s) = \de_{\s_v,0}$ are Kronecker delta functions, and $Z$ is the appropriate normalizing constant.
We shall make use of the random-cluster representation in this note, and we refer the reader to [@G-RC] for a recent account and bibliography. Consider a random-cluster model on the graph $G^+$ obtained by adding a ‘ghost’ vertex $g$, joined to each vertex $v \in V$ by a new edge $\la g,v\ra$. An edge $e \in E$ has parameter $p_e=1-e^{-J_e}$, and an edge $\la g,v\ra$ has parameter $p_v=1-e^{-h_v}$. With $\phi$ the corresponding random-cluster measure, we obtain the spin configuration as follows. The cluster $C_g$ containing $g$ has spin $0$. To each open cluster of $\om$ other than $C_g$, we allocate a uniformly chosen spin from $\sQ$, such that every vertex in the cluster receives this spin, and the spins of different clusters are independent. The ensuing spin vector $\s=\s(\om)$ has law $\pi$. See [@G-RC Thm 1.3] for a proof of this standard fact, and for references to the original work of Fortuin and Kasteleyn.
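The coupling just described is easily checked by brute force on a very small example. The following Python sketch (with arbitrary illustrative values of $q$, $J$ and $h$) enumerates all edge configurations of a triangle with a ghost vertex, colours the open clusters as prescribed, and compares the resulting spin distribution with the Potts measure $\pi$; the two should coincide up to numerical rounding.

```python
import itertools, math

q = 3
V, g = [0, 1, 2], 3                              # vertices and ghost vertex
E = [(0, 1), (1, 2), (0, 2)]                     # graph edges, couplings J_e
GE = [(v, g) for v in V]                         # ghost edges, fields h_v
J = {e: 0.7 for e in E}
h = {v: 0.4 for v in V}
p = {e: 1 - math.exp(-J[e]) for e in E}
p.update({e: 1 - math.exp(-h[e[0]]) for e in GE})
edges = E + GE

def clusters(open_edges):
    parent = {v: v for v in V + [g]}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in open_edges:
        parent[find(a)] = find(b)
    comp = {}
    for v in V + [g]:
        comp.setdefault(find(v), []).append(v)
    return list(comp.values())

# Spin distribution induced by the random-cluster measure and the colouring rule.
rc = {}
for omega in itertools.product([0, 1], repeat=len(edges)):
    w = math.prod(p[e] if o else 1 - p[e] for e, o in zip(edges, omega))
    comps = clusters([e for e, o in zip(edges, omega) if o])
    w *= q ** len(comps)                          # random-cluster weight
    free = [c for c in comps if g not in c]       # clusters not containing the ghost
    for colours in itertools.product(range(q), repeat=len(free)):
        spin = {v: 0 for v in V}                  # the ghost cluster carries spin 0
        for c, s in zip(free, colours):
            for v in c:
                spin[v] = s
        key = tuple(spin[v] for v in V)
        rc[key] = rc.get(key, 0.0) + w / q ** len(free)

# Direct Potts weights.
potts = {}
for s in itertools.product(range(q), repeat=len(V)):
    potts[s] = math.exp(sum(J[e] * (s[e[0]] == s[e[1]]) for e in E)
                        + sum(h[v] * (s[v] == 0) for v in V))

Zrc, Zp = sum(rc.values()), sum(potts.values())
print(max(abs(rc[s] / Zrc - potts[s] / Zp) for s in potts))   # should be ~0
```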
Let $f:\sQ\to\CC$. For $\s\in\Si$, let f()\^R=\_[vR]{} f(\_v),RV. \[sigprod\] Thinking of $\s$ as a random vector with law $\pi$, we write $\la f(\s)^R\ra$ for the mean value of $f(\s)^R$. Let $\fq$ be the set of all functions $f:\sQ\to\CC$ such that, for all integers $m,n\ge 0$: $$\begin{gathered}
E(f(X)^m) \text{ is real and non-negative},\label{2}\\
E(f(X)^{m+n}) \ge E(f(X)^m) E(f(X)^n),\label{3}\end{gathered}$$ where $X$ is a uniformly distributed random variable on $\sQ$. That is, $f \in \fq$ if each $S_m=\sum_{x\in\sQ}f(x)^m$ is real and non-negative, and $qS_{m+n} \ge S_m S_n$. For $i\in\sQ$, let $\fq^i$ be the subset of $\fq$ containing all $f$ such that $$f(i)=\max\{|f(x)|: x \in \sQ\}.
\label{1}$$ This condition entails that $f(i)$ is real and non-negative.
\[mainthm\] Let $f\in\fq^0$. For $R\subseteq V$, the mean $\la f(\s)^R\ra$ is real-valued and non-decreasing in the vectors $J$ and $h$, and satisfies $\la f(\s)^R\ra \ge 0$. For $R,S\subseteq V$, we have that $$\la f(\s)^R f(\s)^S\ra \ge \la f(\s)^R\ra \la f(\s)^S\ra.$$ If there is no external field, in that $h\equiv 0$, it suffices for the above that $f \in \fq$.
\[pi\] Let $q \ge 2$. The following functions belong to $\fq^0$.
- $f(x) = \frac12(q-1)-x$.
- $f(x) = e^{2\pi ix/q}$, a $q$th root of unity.
- $f:\sQ\to[0,\oo)$, with $f(x)\le f(0)$ for all $x$.
Case (a) gives us the inequalities of Ganikhodjaev and Razak, [@GanR]. When $q=2$, these reduce to the GKS inequalities for the Ising model, see [@Griff1; @Griff2; @KS]. We do not know if the implications of case (b) were known previously, or if they are useful. Perhaps they are elementary examples of the results of [@Gin70]. In case (c) with $f(x)=\de_{x,0}$, we obtain the first correlation inequality of Schonmann, [@S88].
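For readers who wish to see the inequalities in action, the following sketch verifies Theorem \[mainthm\] by exhaustive enumeration on a triangle, for the case (a) function $f(x)=\tfrac12(q-1)-x$; the couplings, field and sets $R$, $S$ are arbitrary illustrative choices.

```python
import itertools, math

q = 3
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
J = {e: 0.5 for e in E}
h = {v: 0.3 for v in V}
f = lambda x: 0.5 * (q - 1) - x                   # case (a) of Theorem [pi]

def mean(func):
    # Expectation under the Potts measure pi, by exhaustive enumeration.
    num = den = 0.0
    for s in itertools.product(range(q), repeat=len(V)):
        w = math.exp(sum(J[e] * (s[e[0]] == s[e[1]]) for e in E)
                     + sum(h[v] * (s[v] == 0) for v in V))
        num, den = num + w * func(s), den + w
    return num / den

R, S = [0, 1], [1, 2]
fR = lambda s: math.prod(f(s[v]) for v in R)
fS = lambda s: math.prod(f(s[v]) for v in S)
print(mean(fR) >= 0)                                           # expected True
print(mean(lambda s: fR(s) * fS(s)) >= mean(fR) * mean(fS))    # expected True
```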
\[s2\] Let $q\ge 2$, $f_0\in\fq^0$, and let $f_1:\sQ\to\CC$ satisfy . If $f_0$ and $f_1$ have disjoint support in that $f_0f_1\equiv 0$ then, for $R,S\subseteq V$, $$\la f_0(\s)^{R}f_1(\s)^{S}\ra \le \la f_0(\s)^{R}\ra \la f_1(\s)^{S}\ra.$$ If $h \equiv 0$, it is enough to assume $f_0\in\fq$.
Two correlation inequalities were proved in [@S88], a ‘positive’ inequality that is implied by Theorem \[pi\](c), and a ‘negative’ inequality that is obtained as a special case of the last theorem, on setting $f_0(x)=\de_{x,0}$ and $f_1(x) = \de_{x,1}$. We note that Schonmann’s inequalities were themselves (partial) generalizations of correlation inequalities proved in [@DMMR].
Amongst the feasible extensions of the above theorems that come to mind, we mention the classical space–time models used to study the quantum Ising/Potts models, see [@aizenman_nacht; @BjG; @CrI; @grimmett_stp].
Proof of Theorem \[mainthm\] {#sec:pf}
============================
We use the coupling of the and Potts model described in Section \[sec:ineq\]. Let $E^+$ be the edge-set of $G^+$, $\Om^+=\{0,1\}^{E^+}$, and $\om\in\Om^+$. Let $A_g,A_1,A_2,\dots,A_k$ be the vertex-sets of the open clusters of $\om$, where $A_g$ is that of the cluster $C_g$ containing $g$.
Let $R \subseteq V$, and let $f \in \fq^0$. By , $$f(\s)^R = f(0)^{|R\cap A_g|}\prod_{r=1}^{k} f(X_r)^{|R\cap A_r|},$$ where $X_r$ is the random spin assigned to $A_r$. This has conditional expectation $$g_R(\om) := E(f(\s)^R\mid\om) = f(0)^{|R\cap A_g|}\prod_{r=1}^k E(f(X)^{|R\cap A_r|}\mid\om).$$ By and , $g_R(\om)$ is real and non-negative, whence so is its mean $\phi(g_R) = \la f(\s)^R\ra$.
We show next that $g_R$ is a non-decreasing function on the partially ordered set $\Om^+$. It suffices to consider the case when the configuration $\om'$ is obtained from $\om$ by adding an edge between two clusters of $\om$. In this case, by –, $g_R(\om') \ge g_R(\om)$. That $\la f(\s)^R\ra = \phi(g_R)$ is non-decreasing in $J$ and $h$ follows by the appropriate comparison inequality for the measure $\phi$, see [@G-RC Thm 3.21].
Now, $$E(f(\s)^Rf(\s)^S\mid \om) =f(0)^{|R\cap A_g| + |S\cap A_g|}
\prod_{r=1}^k E\bigl(f(X)^{|R\cap A_r|+|S\cap A_r|}\bigmid\om\bigr).$$ By , $$E(f(\s)^Rf(\s)^S\mid \om) \ge g_R(\om) g_S(\om).$$ By the FKG property of $\phi$, see [@G-RC Thm 3.8], $$\la f(\s)^Rf(\s)^S\ra = \phi\bigl(E(f(\s)^Rf(\s)^S\mid\om)\bigr)
\ge \la f(\s)^R\ra \la f(\s)^S\ra,$$ as required.
When $h \equiv 0$, the terms in $f(0)$ do not appear in the above, and it therefore suffices that $f\in\fq$.
Proof of Theorem \[pi\]
=======================
We shall use the following elementary fact: if $T$ is a non-negative random variable, $$E(T^{m+n}) \ge E(T^m)E(T^n),\qquad m,n\ge 0. \label{triv}$$ This trivial inequality may be proved in several ways, of which one is the following. Let $T_1$, $T_2$ be independent copies of $T$. Clearly, $$\label{eq:3}
(T_1^m-T_2^m)(T_1^n-T_2^n) \ge 0,$$ since either $0\le T_1\le T_2$ or $0\le T_2 \le T_1$. Inequality follows by multiplying out and averaging.
*Case* (a). Inequality with $i=0$ is a triviality. Since $f(X)$ is real-valued, with the same distribution as $-f(X)$, $E(f(X)^m) =0$ when $m$ is odd, and is positive when $m$ is even. When $m+n$ is even, follows from with $T=f(X)^2$, and both sides of are $0$ otherwise.
*Case* (b). It is an easy calculation that $$E(f(X)^m) = 1\{q\text{ divides }m\},$$ where $1\{F\}$ is the indicator function of the set $F$, and – follow.
*Case* (c). Inequality follows by with $T=f(X)$.
Proof of Theorem \[s2\] {#s2pf}
=======================
We may as well assume that $f_0\nequiv 0$, so that $f_0(0)>0$ and $f_1(0)=0$. We use the notation of Section \[sec:pf\], and write $$\begin{aligned}
F_0(\om) &= f_0(0)^{|R\cap A_{g}|} \prod_{r=1}^k E(f_0(X)^{|R\cap A_r|}\mid\om),\label{mel4}\\
F_1(\om) &= \prod_{r=1}^k E(f_1(X)^{|S\cap A_r|}\mid\om).
\label{mel5}\end{aligned}$$ By , $F_0$ and $F_1$ are real-valued and non-negative. Since $f_0\in\fq^0$, $F_0$ is increasing.
Since $f_0f_1\equiv 0$, $$\begin{aligned}
E\bigl(f_0(\s)^R f_1(\s)^S\bigmid\om\bigr)
= 1_Z(\om) F_0(\om)F_1(\om),\end{aligned}$$ where $1_Z$ is the indicator function of the event $Z= \{S \nlra R\cup\{g\}\}$. Here, as usual, we write $U\lra V$ if there exists an open path from some vertex of $U$ to some vertex of $V$. Let $T$ be the subset of $V$ containing all vertices joined to $S$ by open paths, and write $\om_T$ for the configuration $\om$ restricted to $T$. Using conditional expectation, $$\begin{aligned}
\la f_0(\s)^Rf_1(\s)^S\ra &= \phi\bigl( 1_Z F_0 F_1\bigr)\label{m1}\\
&=\phi\bigl( 1_Z F_1 \phi(F_0\mid T,\,\om_T)\bigr),
\nonumber\end{aligned}$$ where we have used the fact that $1_Z$ and $F_1$ are functions of the pair $T$, $\om_T$ only. On the event $Z$, $F_0$ is an increasing function of the configuration restricted to $V \sm T$. Furthermore, given $T$, the conditional measure on $V \sm T$ is the corresponding random-cluster measure. It follows that $$\phi(F_0\mid T,\,\om_T) \le \phi(F_0)\quad \text{on}\quad Z,$$ by [@G-RC Thm 3.21]. By , $$\begin{aligned}
\la f_0(\s)^Rf_1(\s)^S\ra &\le \phi\bigl( 1_Z F_1 \phi(F_0)\bigr)\\
&\le \phi(F_0)\phi(F_1)
= \la f_0(\s)^R\ra\la f_1(\s)^S\ra ,\end{aligned}$$ and the theorem is proved.
When $h\equiv 0$, $A_g = \es$ in , and it suffices that $f_0 \in \fq$.
Acknowledgements {#acknowledgements .unnumbered}
================
The author is grateful to Jakob Björnberg, Chuck Newman, and Aernout van Enter for their comments and suggestions.
---
abstract: 'This study investigates the phase retrieval problem for wide-band signals. We solve the following problem: given $f\in L^2(\R)$ with Fourier transform in $L^2(\R,e^{2c|x|}\,\mbox{d}x)$, we find all functions $g\in L^2(\R)$ with Fourier transform in $L^2(\R,e^{2c|x|}\,\mbox{d}x)$, such that $|f(x)|=|g(x)|$ for all $x\in \R$. To do so, we first translate the problem to functions in the Hardy spaces on the disc via a conformal bijection, and take advantage of the inner-outer factorization. We also consider the same problem with additional constraints involving some transforms of $f$ and $g$, and determine if these constraints force uniqueness of the solution.'
address:
- 'Ph. Jaming, Univ. Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, F-33400, Talence, France'
- 'K. Kellay, Univ. Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, F-33400, Talence, France'
- 'R. Perez III , Univ. Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, F-33400, Talence, France'
- 'Institute of Mathematics, University of the Philippines Diliman, 1101 Quezon City, Philippines'
author:
- 'Philippe Jaming, Karim Kellay & Rolando Perez III'
title: Phase Retrieval for Wide Band Signals
---
[[Keywords: phase retrieval, Hardy spaces]{}]{}
Introduction
============
The phase retrieval problem refers to the recovery of the phase of a function $f$ using given data on its magnitude $|f|$ and a priori assumptions on $f$. These problems are widely studied because of their physical applications in which the quantities involved are identified by their magnitude and phase, where the phase is difficult to measure while the magnitude is easily obtainable. Some physical applications of phase retrieval problems include works related to astronomy [@Da], lens design [@Do], x-ray crystallography [@Mi], inverse scattering [@Sa], and optics [@Se]. More physical examples were given in the survey article of Klibanov [*et. al.*]{} [@Kli] and the book of Hurt [@Hu]. A more recent overview of the phase retrieval problem was given by the article of Grohs [*et. al.*]{} [@GKR], which discussed a more general formulation of the phase retrieval problem using Banach spaces and bounded linear functionals, as well as results related to the uniqueness and stability properties of the problem.
Phase retrieval problems have been given more interest because of progress in the discrete (finite-dimensional) case, starting with the work of Candès [*et. al.*]{} [@CLS] and of Waldspurger [*et. al.*]{} [@WAM], which formulated the phase retrieval problem as an optimization problem and used algorithms to determine the solutions. On the other hand, phase retrieval problems devoted to the continuous (infinite-dimensional) case have been solved in various settings, such as, for one-dimensional band-limited functions [@Aku1; @Aku2; @Wa], for functions in the Hardy space on the disc without singular parts [@Bo], for real-valued band-limited functions from the absolute values of their samples [@Th], for 2$\pi$-periodic time-limited signals from magnitude values on the unit circle [@FL], and for real-valued functions in the Sobolev spaces [@Han]. We refer the reader to [@ADGY] for more discussion and examples of continuous phase retrieval problems, in particular on the stability of the problem and other useful references. Our aim in this paper is to investigate the phase retrieval problem for *wide-band* functions, namely functions with mildly decreasing Fourier transforms.
Before we summarize our results, let us give a quick overview of the phase retrieval problem in the band-limited and narrow band cases: given a band-limited function (i.e. a function with compactly supported Fourier transform) $f\in L^2(\R)$, find all band-limited functions $g\in L^2(\R)$ such that $$|f(x)|=|g(x)|~~~\text{for all }x\in\R.
\label{eq:band1}$$ This problem in the class of compactly supported functions has been solved by Akutowicz [@Aku1; @Aku2] in the mid-1950’s, and independently by Walther [@Wa] in 1963. To solve the problem, they first used the Paley-Wiener Theorem which states that $f$ and $g$ extend to holomorphic functions in the complex plane that are of exponential type, that is, $f,g$ grow like $e^{a|z|}$. Next, they showed that this condition is then equivalent to $$f(z)\overline{f(\bar z)}=g(z)\overline{g(\bar z)}~~~\text{for all }z\in\C.
\label{eq:band2}$$ Observe that the latter is a reformulation of the former when $z$ is [*real*]{}, and that it is an equality between two holomorphic functions, so that it is valid for all $z\in\C$. Finally, they used the Hadamard Factorization Theorem which states that holomorphic functions of exponential type are characterized by their zeros. Now, implies that each zero of $g$ is either a zero of $f$ or the complex conjugate of such a zero. Thus, it follows that $g$ can be obtained from $f$ by changing arbitrarily many zeros of $f$ into their complex conjugates in its Hadamard factorization, and this is called [*zero-flipping*]{}.
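The mechanism is elementary: for real $x$ and any $a\in\C$ one has $|x-a|=|x-\bar a|$, so flipping zeros across the real axis does not change the modulus on $\R$. The short sketch below (with arbitrarily chosen zeros of a toy polynomial) illustrates this numerically.

```python
import numpy as np

zeros_f = [1 + 2j, -0.5 + 1j, 3 - 0.7j]              # arbitrary illustrative zeros
zeros_g = [np.conj(zeros_f[0])] + zeros_f[1:]        # flip the first zero

f = lambda z: np.prod([z - a for a in zeros_f], axis=0)
g = lambda z: np.prod([z - a for a in zeros_g], axis=0)

x = np.linspace(-10, 10, 1001)
print(np.max(np.abs(np.abs(f(x)) - np.abs(g(x)))))   # ~0: same modulus on the real line
print(np.max(np.abs(f(x) - g(x))))                   # > 0: yet the functions differ
```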
McDonald [@Mc] extended this proof to functions that have Fourier tranforms with very fast decrease at infinity. For instance, in the case of Gaussian decrease, if $
|\widehat{f}(\xi)|,|\widehat{g}(\xi)|\lesssim e^{-a|\xi|^2}$, $a>0$, then $f,g$ extend to holomorphic functions of order 2 so that Hadamard factorization can still be used. Thus, the solutions can also be obtained by zero-flipping. Furthermore, this proof extends to functions which satisfy an exponential decay condition of the form $$\label{eq:decay}
|\widehat{f}(\xi)|,|\widehat{g}(\xi)|\lesssim e^{-a|\xi|^\alpha}~~~\text{for }a>0\text{ and }\alpha>1,$$ but breaks down at $\alpha=1$. Therefore, the main goal of this work is to investigate the phase retrieval problem for functions satisfying but for $\alpha=1$, i.e. $|\widehat f(\xi)|,|\widehat g(\xi)|\lesssim e^{-a|\xi|}$. Functions with this decay are sometimes called wide-band signals in the engineering community. Here, the functions $f$ and $g$ only extend holomorphically to a horizontal strip $\ss_a=\{z\in\C\,:|\text{Im} z|<a\}$ in the complex plane so that the identity above only holds for $z\in \ss_a$, which implies that Hadamard factorization cannot be used. To overcome this difficulty, we first reduce the problem to the Hardy space on the disc using a conformal bijection. We then exploit the inner-outer-Blaschke factorization in the Hardy space on the disc. The solution is now more involved than in the band-limited case as, aside from zero-flipping, the singular inner function and the outer function also take part in the solution. For our main results, we go back to the strip to solve our initial problem using an equivalent inner-outer-Blaschke factorization on the strip, and provide analogs of the consequences shown for the phase retrieval problem on the disc.
Generally speaking, the solution set of a phase retrieval problem is very large. Thus, additional constraints are considered to reduce the solution set. For instance, Klibanov [*et. al.*]{} provided different examples of supplementary information to force uniqueness, or at least to reduce the solution set. We here follow the same aim of reducing the set of solutions by coupling two phase retrieval problems. For our first coupled problem, we add a condition involving a fixed reference signal $h$: $|g-h|=|f-h|.$ We use its geometric interpretation to show that this problem has exactly two solutions. We will also look at the problem with an additional condition involving the Fourier transforms: $|\widehat{g}|=|\widehat{f}|$. This coupled problem is also known as the Pauli problem. Here, we do not get uniqueness; in fact, we explicitly construct an uncountable family of solutions using Riesz products. Next, we look at the coupled problem with the condition $|Dg|=|Df|$ where $D$ is a derivation operator. Using the special properties of $D$, $f$ and $g$, we show that this coupled problem has exactly two solutions. Finally, for our last coupled problem, we add the condition $|g|=|f|$ on a segment of the strip. We show the uniqueness of the solution by using our main results.\
This work is organized as follows. Section 2 is a quick review of definitions and results on analysis and Hardy spaces. Section 3 is devoted to the solution of the phase retrieval problem in the wide-band case. We will look at the phase retrieval problem on the unit disc and on the strip. Section 4 is devoted to the coupled phase retrieval problems.
Preliminaries
=============
Notation
--------
For a domain $\Omega\subset\C$, $\hol(\Omega)$ is the set of holomorphic functions on $\Omega$. For $F\in\hol(\Omega)$ we denote by $Z(F)$ the set of zeros of $F$, counted with multiplicity. Write $\overline{\Omega}=\{\bar z:z\in\Omega\}$ and if $F\in\hol(\Omega)$, we denote by $F^*$ the function in $\hol(\overline{\Omega})$ defined by $F^*(z)=\overline{F(\bar z)}$. It will be convenient to denote the conjugation function by $C$, where $C(z)=\bar{z}$ for all $z\in\C$.
The unit disc $\mathbb D$ is defined as $\D=\{z\in\C:|z|<1\}$ and its boundary $\T$ is defined by $\T=\{z\in\C:|z|=1\}$. Let $c>0$ and $\mathcal{S}_c$ be the strip defined as $\mathcal{S}_c:=\{ z \in\mathbb C : |\text{Im} z|<c \}$, and $\mathcal{S}:=\mathcal{S}_1$.
For a nonnegative and locally integrable function $\omega$ on $\mathbb R$, the weighted $L^2$ space on $\R$ is given by $$L^2_{\omega}(\mathbb R)=L^2(\mathbb R, \omega \d t)=\left\{f\text{ is measurable}:||f||^2_{L^2_{\omega}(\mathbb R)}=\int_{\mathbb R}|f(t)|^2\omega(t)\,\d t<+\infty\right\}.$$ Finally, consider a measure space $(X_1,\mathcal A_1,\mu)$, a measurable space $(X_2,\mathcal A_2)$, and a measurable map $\psi:X_1\to X_2$. Recall that the pullback measure of $X_1$ by $\psi$ is given by $$\psi_*\mu(A) = \mu(\psi(A))$$ for $A\in\mathcal A_1$. Equivalently, if $h$ is a function such that $f\circ\psi$ is integrable on $X_1$ with respect to $\mu$, then we have the change of variables formula $$\int_{X_2}h\,\d(\psi_*\mu)=\int_{X_1}h\circ\psi\,\d\mu.$$
Hardy Spaces on the Disc
------------------------
Recall that the Hardy space on the disc $\mathbb D$ is defined as $$H^2(\mathbb D)=\left\{F\in \text{Hol}(\mathbb D): ||F||^2_{H^2(\mathbb D)}=\sup_{0\leq r<1}\dfrac{1}{2\pi}\int_{-\pi}^{\pi}|F(re^{i\theta})|^2~\d\theta<+\infty\right\},$$ and $$H^{\infty}(\D)=\left\{ F\in \hol (\D):||F||_{H^{\infty}(\D)}=\sup_{w\in\D}|F(w)|<+\infty \right\}.$$ We will need the following key facts. First, every $F\in H^2(\D)$ admits a radial limit $F(e^{i\theta})=\lim_{r\to1}F(re^{i\theta})$ for almost every $e^{i\theta}\in \T$ (see e.g [@Ma Lemma 3.10]) with $F\in L^2(\T)$, $\widehat{F}(n)=0$ for $n=-1,-2,...$, and $\log|F|\in L^1(\T)$. Furthermore [@Ma Section 7.6], every function $F\in H^2(\D)$ can be uniquely decomposed as $$F=e^{i\gamma}B_F S_F O_F$$ where $e^{i\gamma}\in\T$, $B_F$ is the Blaschke product formed from the zeros of $F$, $S_F$ is a singular inner function, and $O_F$ is the outer part of $F$. More precisely, the Blaschke product is defined for all $w\in\mathbb D$ as $$\label{eq:B-F}
B_F(w)=\prod_{ \alpha\in Z(F)}b_\alpha(w),$$ where $$b_\alpha(w)=\begin{cases}w&\mbox{if }\alpha=0\\
\dfrac{\alpha}{|\alpha|}\dfrac{\alpha-w}{1-\bar{\alpha}w}&\mbox{if }\alpha\not=0
\end{cases}.$$ The singular part is given by $$\label{eq:S-F}
S_F(w)=\exp\left(\int_{\mathbb T}\dfrac{w+e^{i\theta}}{w-e^{i\theta}}~\d\nu_F\left(e^{i\theta}\right)\right),$$ where $\nu_F$ is a finite positive singular measure (with respect to the Lebesgue measure). Finally, the outer part is determined by the modulus of the radial limit of $F$ $$\label{eq:O-F} O_F(w)=\exp\left(\dfrac{1}{2\pi}\int_{-\pi}^{\pi}\dfrac{w+e^{i\theta}}{w-e^{i\theta}}\log|F\left(e^{i\theta}\right)|~\d\theta\right).$$
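For an outer function, $\log|F|$ is harmonic and is recovered inside the disc from its boundary values through the Poisson integral (the real part of the Herglotz kernel appearing above). The sketch below checks this numerically for the simple outer function $F(w)=1+w/2$, an arbitrary illustrative choice with no zeros in the closed disc.

```python
import numpy as np

F = lambda w: 1 + 0.5 * w                 # outer: holomorphic and zero-free on the closed disc

def log_mod_from_boundary(w, n=4000):
    # Poisson integral of log|F(e^{it})| evaluated at a point w of the unit disc.
    t = np.linspace(-np.pi, np.pi, n, endpoint=False)
    P = (1 - abs(w)**2) / np.abs(np.exp(1j * t) - w)**2
    return np.mean(P * np.log(np.abs(F(np.exp(1j * t)))))

for w in [0.3, -0.5 + 0.2j, 0.1 - 0.7j]:
    print(abs(F(w)), np.exp(log_mod_from_boundary(w)))   # the two values should agree
```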
Hardy Spaces on the Strip
-------------------------
There are essentially two ways of defining the Hardy space on the strip $\ss$. To start, let us define the conformal bijection $\phi:\mathcal S\longrightarrow \mathbb D$ given by $$\phi(z):=\tanh\left(\frac{\pi}{4}z\right).$$ Observe that $\phi$ has the following properties: $\phi^*=\phi$, $\phi(\R)=(-1,1)$, and $\phi(\partial \ss\cap\C_{\pm})=\T_{\pm}$, where $\C_+,\C_-$ denote the upper and lower halves of $\C$ respectively, and $\T_+,\T_-$ denote the upper and lower halves of $\T$ respectively. On one hand we shall consider the following Hardy space defined by $$H^2(\mathcal S)=\left\{f\in \text{Hol}(\mathcal S): f\circ\phi^{-1}\in H^2(\mathbb D)\right\},$$ and $||f||_{H^2(\mathcal S)}=||f\circ\phi^{-1}||_{H^2(\mathbb D)}$. It can then be shown [@BK Theorem 2.2] that $H^2(\ss)=H^2_W(\ss)$ isometrically where $$H^2_{W}(\mathcal S)=\left\{f\in \text{Hol}(\mathcal S): ||f||^2_{H^2_W(\mathcal S)}=\sup_{|y|<1}\int_{\mathbb R}\dfrac{|f(t+iy)|^2}{|W(t+iy)|}~\d t<+\infty\right\},$$ and $W(z)=\dfrac{1}{4\cosh^2(\frac{\pi}{4}z)}=\dfrac{\phi'(z)}{\pi}$ for all $z\in\ss$.
Now this last space can be identified with the natural analogue of the Hardy space on the disc: $$H^2_{\tau}(\mathcal S)=\left\{f\in \text{Hol}(\mathcal S): ||f||^2_{H^2_{\tau}(\mathcal S)}=\sup_{|y|<1}\int_{\mathbb R}|f(t+iy)|^2~\d t<+\infty\right\}.$$ More precisely, $f\in H^2_\tau (\mathcal S)$ if and only if $W^{1/2}f\in H^2_W(\mathcal S)$ if and only if $\widehat{f}\in L^2(\R,e^{2|\xi|}d\xi)$.\
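A quick numerical check of the mapping properties used here is given below: it verifies that $\phi$ sends $\R$ into $(-1,1)$, sends the boundary line $\R+i$ onto the upper unit circle, and that $W=\phi'/\pi$ with the normalisation displayed above.

```python
import numpy as np

phi = lambda z: np.tanh(np.pi * z / 4)
dphi = lambda z: (np.pi / 4) / np.cosh(np.pi * z / 4)**2
W = lambda z: 1 / (4 * np.cosh(np.pi * z / 4)**2)

x = np.linspace(-5, 5, 11)
print(np.max(np.abs(phi(x))) < 1)                        # phi(R) lies in (-1, 1)
print(np.allclose(np.abs(phi(x + 1j)), 1.0))             # R + i is sent onto the unit circle...
print(bool(np.all(phi(x + 1j).imag > 0)))                # ...more precisely onto its upper half
print(np.allclose(W(x + 0.3j), dphi(x + 0.3j) / np.pi))  # W = phi'/pi on the strip
```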
Finally by using [@BK Theorem 2.1], we obtain the factorization on $H^2_\tau(\ss)$.
\[lem:fact-strip\] Let $f\in H^2_{\tau} (\mathcal S)$. Then the unique inner-outer factorization of $f$ is given by $$f(z)=\dfrac{e^{i\gamma}B_F(\phi(z))S_F(\phi(z))O_F(\phi(z))}{W(z)^{1/2}}$$ for all $z\in\mathcal S$ and for some $\gamma\in\R$. For all $z\in\ss$, the Blaschke product $B_f$ is given by $$\label{eq:B-f}
B_f(z)=\prod_{ \beta\in Z(f)}b_{\phi(\beta)}(\phi(z)),$$ while the singular inner function $S_f$ is given by $$\begin{aligned}
\label{eq:S-f}
\begin{split}
S_f(z)=\exp\Bigg(\int_{\mathbb \partial \mathcal S}\dfrac{\phi(z)+\phi(\zeta)}{\phi(z)-\phi(\zeta)}\,\d\mu_f\left(\zeta\right)\Bigg)\\
\end{split}
\end{aligned}$$ where $\mu_f={\phi^{-1}}_*\,\nu_F$ is the pullback measure of $\nu_F$ on $\partial \mathcal S$, and the outer function $O_f$ is given by $$\begin{aligned}
\label{eq:O-f}
\begin{split}
O_f(z)&=\exp\Bigg(\dfrac{-1}{2\pi i}\int_{\mathbb R}\dfrac{\phi(z)+\phi(x+i)}{\phi(z)-\phi(x+i)}\dfrac{\phi'(x+i)}{\phi(x+i)}\log|W(x+i)^{1/2}f(x+i)|\,\d x\\
&\quad+\dfrac{1}{2\pi i}\int_{\mathbb R}\dfrac{\phi(z)+\phi(x-i)}{\phi(z)-\phi(x-i)}\dfrac{\phi'(x-i)}{\phi(x-i)}\log|W(x-i)^{1/2}f(x-i)|\,\d x\Bigg).
\end{split}
\end{aligned}$$
For $F\in H^2(\mathbb D)$ and $z\in \mathcal S$, by Theorem 2.1 from [@BK] we have $F(\phi(z))=W^{1/2}(z)f(z)$ and equivalently, $$f(z)=\dfrac{F(\phi(z))}{W(z)^{1/2}}=\dfrac{e^{i\gamma}B_F(\phi(z))S_F(\phi(z))O_F(\phi(z))}{W(z)^{1/2}}.$$ Note that this is well-defined on $\mathcal S$ since $W(z)=\phi'(z)/\pi\neq 0$ for any $z\in \mathcal S$.
The formulas for the Blaschke product and singular inner function easily follow from $B_f(z)=B_F(\phi(z))$ and $S_f(z)=S_F(\phi(z))$. For the outer function, we need to split the integral since $\phi(\partial\ss\cap\C_\pm)=\T_\pm$. Hence, the outer function is given by $$\begin{aligned}
O_f(z)=O_F(\phi(z))&=\exp\Bigg(\dfrac{1}{2\pi}\int_{0}^{\pi}\dfrac{\phi(z)+e^{i\theta}}{\phi(z)-e^{i\theta}}\log|F\left(e^{i\theta}\right)|\,\d\theta\\
&\qquad+\dfrac{1}{2\pi}\int_{-\pi}^0\dfrac{\phi(z)+e^{i\theta}}{\phi(z)-e^{i\theta}}\log|F\left(e^{i\theta}\right)|\,\d\theta\Bigg)
\end{aligned}$$ for all $z\in\mathcal S$. By applying the substitutions $e^{i\theta}=\phi(x+i),~\theta\in[0,\pi]$ on $\mathbb T^+$ and $e^{i\theta}=\phi(x-i),~\theta\in[-\pi,0]$ on $\mathbb T^-$, we get .
Phase Retrieval in $H_\tau^2(\ss)$
==================================
Reduction of the Problem
------------------------
In this section, we consider $f,g\in L^2(\mathbb R)$ with $\widehat{f},\widehat{g}\in L^2(\mathbb R, e^{2c|\xi|}\d\xi)$ such that $|f(x)|=|g(x)|$ for every $x\in\R$. Our goal is to determine, for a given $f$, all possible $g$’s.
To do so, let us write $f_c(x)=f(cx)$ and $g_c(x)=g(cx)$ so that $f_c,g_c\in L^2(\mathbb R)$ with $\widehat{f_c},\widehat{g_c}\in L^2(\mathbb R, e^{2|\xi|}d\xi)$ and $|f_c(x)|=|g_c(x)|$ for every $x\in\R$ so that it is enough to consider the case $c=1$.
Note that $\widehat{f},\widehat{g}\in L^2(\R, e^{2|\xi|}\d\xi)$ if and only if $f,g\in H^2_\tau(\ss)$. Thus, $f$ and $g$ extend holomorphically to $\ss$ and $|f(x)|=|g(x)|$ for every $x\in\R$ can be written as $$\label{eq:phase}
f(x)\overline{f(\bar x)}=g(x)\overline{g(\bar x)}~~~\text{for all }x\in \R.$$ But now, is an equality between two holomorphic functions on $\R$ so that it is valid also for all $x\in\ss$. In other words, we are now trying to solve the following problem: given $f\in H^2_\tau(\ss)$, find all $g\in H^2_\tau(\ss)$ such that $$\label{eq:h2tau}
f(z)f^*(z)=g(z)g^*(z)~~~\text{for all }z\in \ss.$$
It turns out that this problem is easier to solve when transferring the problem to the disc. Multiplying both sides by $W^{1/2}(z)\,\overline{W^{1/2}(\bar z)}$, we obtain $$(W^{1/2}f)(z)\overline{(W^{1/2}f)(\bar{z})}=(W^{1/2}g)(z)\overline{(W^{1/2}g)(\bar{z})}$$ for all $z\in\ss$. According to [@BK], the functions $F=W^{1/2}f\circ \phi^{-1}$ and $G=W^{1/2}g\circ\phi^{-1}$ are in $H^2(\mathbb D)$. Hence, by applying the substitution $z=\phi^{-1}(w)$ and $\bar{z}=\phi^{-1}(\bar{w})$ to the previous equation, we get $$\label{eq:h2d}
F(w)F^*(w)=G(w)G^*(w)~~~\text{for all }w\in\D.$$ Therefore, we have translated the equality on the strip to an equivalent equality on the disc. Finally, we are now trying to solve the following problem on the disc: given $F\in H^2(\D)$, find all $G\in H^2(\D)$ such that holds for all $w\in\D$. Note that is equivalent to $|F(w)|^2=|G(w)|^2$ for $w\in(-1,1)$.
The Phase Retrieval Problem on the Disc
---------------------------------------
In this section, we look at the equivalent phase retrieval problem on the disc.
Let $F\in H^2(\D)$ and write $F=B_FS_FO_F$ with $B_F,S_F,O_F$ given in equations , and , respectively. The factorization of $F^*$ is given by
$$F^*=e^{i\lambda}B_{F^*}S_{F^*}O_{F^*}=e^{i\lambda}B_F^*S_F^*O_F^*.$$
Since the factorization in $H^2(\D)$ is unique, we have $B_{F^*}=B_F^*,~S_{F^*}=S_F^*$, and $O_{F^*}=O_F^*$. Hence, for all $w\in\D$, the Blaschke product formed from the zeros of $F^*$ is given by $$\label{eq:B-Fstar}
B_{F^*}(w)=B_F^*(w)=\prod_{\alpha\in Z(F)}b_{\bar{\alpha}}(w)=\prod_{\alpha\in \overline{Z(F)}}b_{\alpha}(w).$$ The singular part of $F^*$ is given by $$\label{eq:S-Fstar}
S_{F^*}(w)=S_F^*(w)=\exp\left(\int_{\mathbb T}\dfrac{w+e^{i\theta}}{w-e^{i\theta}}~\d(C_*\nu_F)\left(e^{i\theta}\right)\right),$$ for all $w\in\D$, where $C_*\nu_F$ is the pullback measure of $\nu_F$ by the conjugation function $C$. Finally, for all $w\in\D$, the outer part of $F^*$ is given by $$\label{eq:O-Fstar}
O_{F^*}(w)=O_F^*(w)=\exp\left(\dfrac{1}{2\pi}\int_{-\pi}^{\pi}\dfrac{w+e^{i\theta}}{w-e^{i\theta}}\log|F\left(e^{-i\theta}\right)|~\d\theta\right).$$ We use all of the facts above to prove the following lemma.
\[lem:disc\] Let $F, G\in H^2(\mathbb D)$. Then $$|F(w)|^2=|G(w)|^2~~\text{for all } w\in (-1,1)$$ if and only if
1. the zero sets of $F$ and $G$ satisfy $$Z(F)\cup \overline{Z(F)}=Z(G)\cup \overline{Z(G)};$$
2. the singular measures $\nu_F$ and $\nu_G$, associated with $F$ and $G$ respectively, satisfy $$\nu_F+C_*\nu_F=\nu_G+C_*\nu_G$$ on $\T$; and
3. the radial limits satisfy $$|F\left(e^{i\theta}\right)F\left(e^{-i\theta}\right)|=|G\left(e^{i\theta}\right)G\left(e^{-i\theta}\right) |$$ almost everywhere on $\mathbb T$.
Let $F, G\in H^2(\mathbb D)$. Note that $FF^*$ and $GG^*$ have decompositions given by
$$FF^*=B_FB_{F^*}S_FS_{F^*}O_FO_{F^*}\text{ and }\,GG^*=B_GB_{G^*}S_GS_{G^*}O_GO_{G^*}.$$
Notice that $B_FB_{F^*}$ is again a Blaschke product, $S_FS_{F^*}$ is again a singular inner function, and $O_FO_{F^*}$ is again an outer function. Indeed, for all $w\in\mathbb D$, implies that
$$B_F(w)B_{F^*}(w)=\prod_{ \alpha\in Z(F)\cup \overline{Z(F)}}b_\alpha(w),$$ while implies that $$S_F(w)S_{F^*}(w)=\exp\left(\dfrac{1}{2\pi}\int_{\mathbb T}\dfrac{w+e^{i\theta}}{w-e^{i\theta}}~\d\left(\nu_F+C_*\nu_F\right)(e^{i\theta})\right),$$ and finally, implies that $$O_F(w)O_{F^*}(w)=\exp\left(\dfrac{1}{2\pi}\int_{-\pi}^{\pi}\dfrac{w+e^{i\theta}}{w-e^{i\theta}}\log|F(e^{i\theta})F(e^{-i\theta})|\,\d\theta\right).$$
Thus, writing the same for $GG^*$ and using the uniqueness of the decomposition, $FF^*=GG^*$ implies that $B_FB_{F^*}=B_GB_{G^*}$, which in turn implies that $$Z(F)\cup\overline{Z(F)}=Z(G)\cup\overline{Z(G)}.$$ Furthermore, $FF^*=GG^*$ also implies that $$S_FS_{F^*}=S_GS_{G^*}\text{ and }\,
O_FO_{F^*}=O_GO_{G^*}.$$ Thus, $$\nu_F+C_*\nu_F=\nu_G+C_*\nu_G$$ on $\T$, and by Fatou’s theorem [@Ma Lemma 3.10], we have for almost every $\theta\in\R$ $$\begin{aligned}
\lim_{r\rightarrow 1}(O_F(re^{i\theta})O_{F^*}(re^{i\theta}))&=\lim_{r\rightarrow 1}(O_G(re^{i\theta})O_{G^*}(re^{i\theta})),\\
\intertext{which in turn implies that}
|F\left(e^{i\theta}\right)F\left(e^{-i\theta}\right)|&=|G\left(e^{i\theta}\right)G\left(e^{-i\theta}\right)|
\end{aligned}$$ almost everywhere on $\mathbb T$.
We can now construct such $G$’s to solve the equivalent phase retrieval problem on the disc. Let $\mathcal{N}^+$ denote the Smirnov class, namely those functions holomorphic on $\D$ of the form $f=g/h$, where $g$ and $h$ are bounded and holomorphic on $\D$ and $h$ is an outer function. If $g$ is outer, then $f$ is called an outer function. Note that if $f\in N^+$ then by Fatou’s Theorem [@Du Theorem 1.3], the radial limit $f_*$ exists almost everywhere on $\mathbb T$ and $\log |f_*|\in L^1(\T)$. The following corollary immediately follows from Lemma \[lem:disc\].
\[cor:discsoln\] Let $F,G\in H^2(\D)$. Then $|F|=|G|$ on $(-1,1)$ if and only if the inner-outer decompositions of $F$ and $G$ are given by $$F=e^{i\gamma}B_FS_FO_F\text{ and }\,G=e^{i\gamma'}B_GS_GO_G$$ where
- $\gamma,\gamma'\in\R$;
- $B_F, S_F, O_F$ are given by , , respectively;
- $B_G$ is the Blaschke product associated with the set $A\cup(\overline{Z(F)\backslash A})$ for some $A\subset Z(F)$;
- $S_G$ is the singular inner function associated with the positive singular measure $\nu_G=\nu_F+\rho$, where $\rho$ is an odd real singular measure; and
- $O_G=UO_F$ where $U\in N^+$ is an outer function and $U=1/U^*$ on $\D$.
We write $\rho=\rho_+-\rho_-$, where $\rho_+$ is the positive part while $\rho_-$ is the negative part. Note that the positive part and the negative part have disjoint supports. Since $\rho$ is an odd measure, $C_*\rho=-\rho$ and given $E\subset\T$ such that $E\cap \overline{E}=\emptyset$, we have $\textnormal{supp}\rho_+\subset E$. Thus, we take $\rho_-=C_*\rho_+$. Furthermore, for $\nu_G$ to be positive, we need the condition $C_*\rho_+\leq \nu_F$, or equivalently, $\rho_+\leq C_*\nu_F$.
Let $F,G\in H^2(\D)$ with inner-outer decompositions as defined on Corollary \[cor:discsoln\]. Observe that the properties of the Blaschke product $B_G$ and the singular inner function $S_G$ immediately follow from Lemma \[lem:disc\]. For the outer function, by Lemma \[lem:disc\], we have $$\label{eq:disc-fac}
|O_F(e^{i\theta})O_F(e^{-i\theta})|=|O_G(e^{i\theta})O_G(e^{-i\theta})|$$ almost everywhere on $\T$. Hence, $$\log|O_G(e^{i\theta})|=\log|O_F(e^{i\theta})|+\log|U(e^{i\theta})|$$ almost everywhere on $\T$, where $\log|U(e^{i\theta})|$ is an odd real-valued function of $\theta$ and $\log|U|\in L^1(\T)$. Since $|O_G(e^{i\theta})|=|O_F(e^{i\theta})U(e^{i\theta})|$ almost everywhere on $\T$ and $O_G$ and $O_F$ are outer functions, we get $$O_G(z)=O_F(z)O_U(z),\qquad z\in \D.$$ Hence $ O_U=O_G/O_F\in N^+$. Moreover, (\[eq:disc-fac\]) implies that $|O_U(e^{i\theta})O_U(e^{-i\theta})|=1$ almost everywhere on $\T$, and so $O_U(z)O_U^*(z)=1$ on $\D$.
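A simple concrete example of such a factor (this particular choice is ours, for illustration) is $U(z)=e^{icz}$ with $c\in\R$: it is bounded and zero-free with bounded inverse on $\D$, hence outer and in $N^+$, and $$U^*(z)=\overline{e^{ic\bar z}}=e^{-icz}=\frac{1}{U(z)},\qquad \log|U(e^{i\theta})|=-c\sin\theta,$$ so that $U=1/U^*$ on $\D$ and $\log|U(e^{i\theta})|$ is indeed an odd function of $\theta$. This is the disc analogue of the trivial factor $e^{i\eta z}$ that appears on the strip below.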
We can actually identify the solutions of the phase retrieval problem on the disc in terms of a factorization. Let us consider an analog of the result of McDonald [@Mc Proposition 1].
\[cor:uv-disc\] Let $F,G\in H^2(\D)$. Then $|F|=|G|$ on $(-1,1)$ if and only if there exist $u,v\in \hol(\D)$ such that $F=uv$ and $G=uv^*$.
Let $F,G\in H^2(\D)$. By Corollary \[cor:discsoln\], we have the factorizations $$F=B_FS_FO_F\text{ and }\,G=B_GS_GO_G.$$ First, observe that we can write the Blaschke products $B_F$ and $B_G$ as $$B_F=B_1B_2\text{ and }\,B_G=B_1B_2^*$$ where $B_1$ is the Blaschke product associated with $A\subset Z(F)$ and $B_2$ is the Blaschke product associated with $Z(F)\backslash A$. On the other hand, we can write the singular measures $\nu_F$ and $\nu_G$ as $$\nu_F=\nu_1+\nu_2\text{ and }\,\nu_G=\nu_1+C_*\nu_2$$ where $$\nu_1=\nu_F+\dfrac{\rho_+-C_*\rho_+}{2}\text{ and }\,\nu_2=\dfrac{C_*\rho_+-\rho_+}{2},$$ so that $S_F=S_{\nu_1}S_{\nu_2}$ and $S_G=S_{\nu_1}S_{\nu_2}^*$.
Since $O_G=UO_F$, where $U$ is an outer function with $U\in N^+$ and $UU^*=1$ on $\D$, and since $U$ is zero-free on $\D$, a holomorphic square root $U^{1/2}$ exists. We write $$O_F=O_FU^{1/2}U^{-1/2}$$ and $$O_G=UO_F=O_FU^{1/2}U^{1/2}=O_FU^{1/2}(U^{-1/2})^*.$$ Therefore, we take $$u=B_1S_{\nu_1}O_FU^{1/2}\text{ and }v=B_2S_{\nu_2}U^{-1/2}.$$
Back to the Strip
-----------------
In this section, we go back to the phase retrieval problem on the strip. Using Corollary \[lem:fact-strip\], we see that Lemma \[lem:disc\] translates to functions on $H^2_{\tau}(\mathcal S)$. By a change of variable and by applying the inner-outer factorization on $H^2_\tau(\ss)$, we have:
\[lem:strip\] Let $f,g\in H^2_{\tau}(\mathcal S)$. Then $$|f(z)|^2=|g(z)|^2~\text{for all } z\in\R$$ if and only if
1. the zero sets of $f$ and $g$ satisfy $$Z(f)\cup \overline{Z(f)}=Z(g)\cup \overline{Z(g)};$$
2. the singular measures $\mu_f$ and $\mu_g$, associated with $f$ and $g$ respectively, satisfy $$\mu_f+C_*\mu_f=\mu_g+C_*\mu_g$$ on $\partial\mathcal S$; and
3. the boundary values satisfy $$|f(x+i)f(x-i)|=|g(x+i)g(x-i)|$$ almost everywhere on $\R$.
We now construct the solutions of the problem on the strip. Let $N^+_\tau(\mathcal{S})$ denote the Smirnov class of holomorphic functions on $\ss$ of the form $f(z)=F(\phi(z))/W^{1/2}(z)$, where $F\in N^+$. The following result immediately follows from Lemma \[lem:strip\].
\[thm:stripsoln\] Let $f,g\in H^2_\tau(\ss)$. Then $|f|=|g|$ on $\R$ if and only if the inner-outer decompositions of $f$ and $g$ are given by $$f=e^{i\gamma}W^{-1/2}B_fS_fO_f\text{ and }\,g=e^{i\gamma'}W^{-1/2}B_gS_gO_g$$ where
- $\gamma,\gamma'\in\R$;
- $B_f,S_f,O_f$ are the Blaschke product, singular inner function, and outer function of $f$, respectively;
- $B_g$ is the Blaschke product associated with the set $A\cup(\overline{Z(f)\backslash A})$ with $A\subset Z(f)$;
- $S_g$ is the singular inner function associated with the positive singular measure $\mu_g=\mu_f+\sigma$, where $\sigma$ is an odd real singular measure, given by $\sigma=\sigma_+-C_*\sigma_+$, satisfying $C_*\sigma=-\sigma$ and $\sigma_+\leq C_*\mu_f$; and
- $O_g$ is the outer part of $uO_f$ where $u\in N^+_\tau(\mathcal{S})$ is an outer function and $u=1/u^*$ on $\ss$.
Observe that possible trivial solutions to the problem on the strip are given by: $$(1)\quad g(z)=ce^{i\eta z}f(z)\qquad\mbox{and}\qquad
(2)\quad g(z)=ce^{i\eta z}f^*(z)$$ with $|c|=1$ and $\eta\in\R$. These trivial solutions are recovered from Theorem \[thm:stripsoln\] as follows (a short verification is given after the list):
– the factor $c$ is $c=e^{i(\gamma-\gamma')}$
– the factor $e^{i\eta z}$ is the factor $u$ of the outer part, since $e^{i\eta z}(e^{i\eta z})^*=1$;
– the replacement of $f$ by $f^*$ is obtained by taking $A=\emptyset$ for the Blaschke part, $\sigma=C_*\mu_f-\mu_f$ so that $\mu_g=C_*\mu_f$ for the inner part and, finally, $u=O_{f^*}/O_f$ so that the outer part of $g$ is $O_g=uO_f=O_{f^*}$.
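As a quick verification of (1) and (2) (our own check, spelling out the claims above): for $x\in\R$ we have $|ce^{i\eta x}|=1$, so $|g(x)|=|f(x)|$ in case (1), while in case (2) $|f^*(x)|=|\overline{f(\bar x)}|=|f(x)|$ since $x$ is real; moreover $$\big(e^{i\eta z}\big)^*=\overline{e^{i\eta\bar z}}=e^{-i\eta z},\qquad\text{so}\qquad e^{i\eta z}\big(e^{i\eta z}\big)^*=1\ \text{ on }\ \ss,$$ which is exactly the property required of the outer factor $u$ in Theorem \[thm:stripsoln\].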
Corollary \[cor:uv-disc\] also translates to a result on the strip with a simple change of variable.
\[cor:uv-strip\] Let $f,g\in H^2_\tau(\ss)$. Then $|f|=|g|$ on $\R$ if and only if there exist $u,v\in \hol(\ss)$ such that $f=uv$ and $g=uv^*$.
Finally, we go back to our initial phase retrieval problem. The following result directly follows from Theorem \[thm:stripsoln\].
Let $f\in L^2(\mathbb R)$ and $\widehat{f}\in L^2(\mathbb R, e^{2c|\xi|}d\xi)$. Then there exists $g\in L^2(\mathbb R)$ such that $\widehat{g}\in L^2(\mathbb R, e^{2c|\xi|}d\xi)$, $|f(x)|=|g(x)|$ for all $x\in\R$, and $$g=e^{i\kappa}W^{-1/2}B_gS_gO_g$$ where
- $e^{i\kappa}\in\T$;
- $B_g$ is the Blaschke product associated with the set $A\cup(\overline{Z(f)\backslash A})$ with $A\subset Z(f)$;
- $S_g$ is the singular inner function associated with the positive singular measure $\mu_g=\mu_f+\sigma$, where $\sigma$ is an odd real singular measure, given by $\sigma=\sigma_+-C_*\sigma_+$, satisfying $C_*\sigma=-\sigma$ and $\sigma_+\leq C_*\mu_f$; and
- $O_g$ is the outer part of $uO_f$ where $u\in N^+_\tau(\mathcal{S})$ is an outer function and $u=1/u^*$ on $\ss$.
Coupled Phase Retrieval Problems
================================
In this section, we investigate coupled phase retrieval problems, i.e., problems of the form $|u|=|v|$, $|Tu|=|Tv|$, where $T$ is some transform. This additional assumption involving $T$ may either lead to uniqueness or at least to a reduction of the set of solutions.
Adding a Fixed Reference
------------------------
Klibanov [*et al.*]{} [@Kli] considered the following constrained problem: $$\label{eq:klibanov}
|g|=|f|~~~\text{and}~~~|g-h|=|f-h|$$ where $h$ is a fixed reference signal. They were able to show that there are at most two solutions of this problem. In the following result, we look at a similar problem. It turns out that in the wide-band case we also obtain two solutions.
Let $f,g\in H^2_{\tau}(\ss)$ and let $h$ be a nonzero complex-valued function such that $\Phi=e^{i\arg h}$ is bounded and analytic on $\R$. Suppose that $|g(x)|=|f(x)|$ and $|g(x)-h(x)|=|f(x)-h(x)|$ for (a.e.) $x\in\R$. Then there are only two possible solutions of this problem, namely $g(x)=f(x)$ or $g(x)=\overline{f(x)}\Phi(x)^2$, for $x\in\R$.
Consider the two circles in $\C$: $\mathcal C(0,|f(x)|)$ and $\mathcal C(h(x),|f(x)-h(x)|)$. These two circles have two intersection points, one being $f(x)$, the other being $\overline{f(x)}\Phi(x)^2$ (possibly coinciding with the first one).
*(Figure: the circles $\mathcal C(0,|f(x)|)$ and $\mathcal C(h(x),|f(x)-h(x)|)$, centered at $0$ and $h(x)$, intersecting at $f(x)$ and $\overline{f(x)}\Phi(x)^2$.)*
Therefore, for each $x\in\R$, either $g(x)=f(x)$ or $g(x)=\overline{f(x)}\Phi(x)^2$. By the pigeonhole principle, one of these two alternatives holds on a set of positive measure. But $f$, $g$ and $\overline{f}\Phi^2$ are all analytic, so if $g=f$ on a set of positive measure, then $g=f$ everywhere; otherwise, if $g=\overline{f}\Phi^2$ on a set of positive measure, then $g=\overline{f}\Phi^2$ everywhere as well.
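For completeness, here is the algebraic check (ours, not part of the original argument) that $\overline{f(x)}\Phi(x)^2$ lies on both circles. Writing $h=|h|\Phi$ with $|\Phi|=1$, we have $|\overline{f}\Phi^2|=|f|$ and $$\big|\overline{f}\Phi^2-h\big|=|\Phi|\,\big|\overline{f}\Phi-|h|\big|=\big|\overline{f\overline{\Phi}-|h|}\big|=\big|f-|h|\Phi\big|=|f-h|,$$ where the middle equality uses that $|h|$ is real.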
If we do not assume $\Phi$ to be analytic, then $\overline{f}\Phi^2$ may not be analytic and would therefore not be a solution.
Pauli’s Problem
---------------
For our next result, we add a constraint involving the Fourier transforms: $$\label{eq:pauli}
|g|=|f|\text{ and }|\widehat{g}|=|\widehat{f}|$$ This problem is due to Pauli, who speculated that these two conditions would imply $g=cf$ for some $c\in\T$. However, one may construct many pairs $(f,g)$ satisfying both conditions for which this is not the case ([*see e.g.*]{} Vogt [@Vo], Corbett and Hurst [@CH; @CH1]). Such pairs are now called *Pauli partners*. In the band-limited case, Ismagilov [@Is] and the first author [@Ja] have independently shown that the set of the Pauli partners may be arbitrarily large. However, although this is not explicitly stated in [@Is; @Ja], for a given band-limited $f$ only finitely many band-limited partners (up to trivial solutions) are constructed. The following result shows that the solution set of the Pauli problem in the wide-band case may be arbitrarily large as well, and even uncountable.
There exists $f\in H^2_{\tau}(\ss)$ which has a nondenumerable infinity of Pauli partners which are not constant multiples of one another.
The proof is a direct adaptation of [@Is; @Ja].
Let $\{\alpha_n\}_{n=0}^{\infty}$ be a sequence of non-zero real numbers such that $\sum_{n=1}^{+\infty}|\alpha_n|^2<\infty$ and consider the associated Riesz product $$R_\alpha(x)=\prod_{n=1}^{\infty}\big(1+2i\alpha_n\sin(2\pi3^nx)\big).$$ For properties of Riesz products, we refer the reader to the book of Katznelson [@Ka]. We may write this Riesz product as a Fourier series $$\label{eq:riesz}
R_\alpha(x)=\displaystyle\sum_{k\in\mathbb Z}a_ke^{2\pi i kx}.$$
Next, let $\varphi\in L^2(\R)$ be such that $\widehat{\varphi}$ is supported on $[0,1]$ and note that $\widehat{\varphi}$ is bounded. For all $x\in\R$, take $f=R_\alpha\varphi$. As $$f(x)=
\left(\sum_{k\in\mathbb Z}a_ke^{2\pi i kx}\right)\varphi(x),$$ we get $$\widehat{f}(\xi)=\sum_{k\in\mathbb Z}a_k\widehat{\varphi}(\xi-k).$$
Now, observe that $a_k=0$ unless there exists an integer $N$ and $\eta_1,\ldots,\eta_N\in\{-1,0,1\}$ with $\eta_N\not=0$ such that $\displaystyle k=\sum_{j=1}^N\eta_j3^j$. Further, $N$ and the $\eta_j$’s are uniquely determined by $k$. In this case, a simple computation shows that $3^{N-1}\leq|k|\leq 3^{N+1}$ and that $$\label{eq:rieszFourier}
|a_k|=\prod_{j=1}^N|\alpha_j|.$$ Therefore, if we choose $0<|\alpha_j|\leq e^{-2\cdot3^{j+1}}$, we get $$|a_k|\leq |\alpha_N|\leq e^{-2\cdot 3^{N+1}}\leq e^{-2 |k|}.$$ As a consequence, for $k\leq |\xi|\leq k+1$, $$|\widehat{f}(\xi)|=|a_k||\widehat{\varphi}(\xi-k)|\leq e^{-2|k|}\|\widehat{\varphi}\|_\infty\leq Ce^{-2|\xi|}.$$ It follows that $f\in H^2_\tau(\ss)$.
Next, let $\varepsilon=\{\varepsilon_n\}_{n=1}^{\infty}\in\{-1,1\}^{\mathbb N}$ and $\alpha(\varepsilon)=\{\alpha_n\varepsilon_n\}_{n=1}^{\infty}$. In particular, for $\varepsilon=\mathbf{1}=(1,1,\ldots)$, $\alpha(\mathbf{1})=\alpha$. Observe that the associated Riesz product $$R_{\alpha(\varepsilon)}(x)=\prod_{n=1}^{\infty}\big(1+2i\alpha_n\varepsilon_n\sin(2\pi3^nx)\big)
=\sum_{k\in\mathbb Z}a_k(\varepsilon)e^{2\pi i kx}$$ has the following properties:
- for every $x\in\R$, $|R_{\alpha(\varepsilon)}(x)|=|R_\alpha(x)|$;
- for every $k\in\mathbb Z$, $|a_k(\varepsilon)|=|a_k|$.
This last property follows directly from . Note also that $R_{\alpha(\varepsilon)}$ is not a constant multiple of $R_{\alpha(\varepsilon')}$ if $\varepsilon\neq\varepsilon'$.
It remains to define $f_\varepsilon=R_{\alpha(\varepsilon)}\varphi$. Then $f_\varepsilon$ has the following properties:
- $f_\varepsilon\in H^2_\tau(\ss)$ and $f_\varepsilon$ is not a constant multiple of $f_{\varepsilon'}$ if $\varepsilon\neq\varepsilon'$;
- $|f_\varepsilon(x)|=|f_{\varepsilon'}(x)|$ for all $x\in\R$;
- $|\widehat{f_\varepsilon}(\xi)|=|\widehat{f_{\varepsilon'}}(\xi)|$ since for $k\leq|\xi|\leq k+1$, $k\in\mathbb Z$, $$|\widehat{f_\varepsilon}(\xi)|=|a_k(\varepsilon)||\widehat{\varphi}(\xi-k)|=|a_k(\varepsilon')||\widehat{\varphi}(\xi-k)|=|\widehat{f_{\varepsilon'}}(\xi)|.$$
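The two invariances used in this construction are easy to check numerically. The short sketch below (our own illustration; the truncation level and the values of $\alpha_n$ are chosen for numerical convenience and are not those of the proof) truncates the Riesz product after $N$ factors, flips a random choice of signs $\varepsilon_n$, and verifies that neither $|R(x)|$ nor the moduli of the Fourier coefficients $|a_k|$ change.

```python
import numpy as np

# Truncated Riesz product R(x) = prod_n (1 + 2i a_n sin(2 pi 3^n x)).
# The decay a_j <= exp(-2*3^(j+1)) used in the proof underflows numerically,
# so larger coefficients are used here; the two invariances below hold for
# any nonzero real a_n.
N = 4
alpha = 0.3 / 3.0 ** np.arange(1, N + 1)
eps = np.random.default_rng(0).choice([-1.0, 1.0], size=N)

M = 256                              # max frequency is 3+9+27+81 = 120 < M/2, so no aliasing
x = np.arange(M) / M                 # one period of the product

def riesz(coeffs):
    R = np.ones(M, dtype=complex)
    for n, a in enumerate(coeffs, start=1):
        R *= 1.0 + 2.0j * a * np.sin(2.0 * np.pi * 3.0**n * x)
    return R

R_plain, R_flipped = riesz(alpha), riesz(alpha * eps)
a_plain, a_flipped = np.fft.fft(R_plain) / M, np.fft.fft(R_flipped) / M

print(np.max(np.abs(np.abs(R_plain) - np.abs(R_flipped))))   # ~1e-16: |R(x)| unchanged
print(np.max(np.abs(np.abs(a_plain) - np.abs(a_flipped))))   # ~1e-16: |a_k| unchanged
```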
Derivation Operator
-------------------
We first look at a direct consequence of Corollary \[cor:uv-strip\]. Let $b,q\in\R$ with $|q|<1$. For all $z\in \ss$ and $f\in H^2_{\tau}(\ss)$, consider the operator $\dfrac{\partial}{\partial z}$ which gives the derivative of $f$, the operator $\delta$ given by $$\delta(f)(z)=f(z+b)-f(z),$$ and the operator $\gamma$ given by $$\gamma(f)(z)=f(qz)-f(z).$$ The key property is that if $D$ is one of $\dfrac{\partial}{\partial z},\delta$ or $\gamma$ and $\varphi,\psi\in H^2_\tau(\ss)$, then $$D(\varphi\cdot\psi)=D\varphi\cdot\psi+\varphi\cdot D\psi.$$ McDonald [@Mc Theorem 1] considered the coupled phase retrieval problem: $f,g$ entire, $|g(x)|=|f(x)|$ with the additional constraint $|Dg(x)|=|Df(x)|$ for $x\in\R$. McDonald showed that if $f=uv$ and $g=uv^*$, then $|Dg|=|Df|$ is equivalent to $$\left(\dfrac{Dv}{v}-\dfrac{Dv^*}{v^*}\right)\left(\dfrac{Du^*}{u^*}-\dfrac{Du}{u}\right)=\dfrac{DfDf^*-DgDg^*}{ff^*}=0,$$ which imposes strong restrictions on either $u$ or $v$. With these, McDonald was able to significantly reduce the solution set to two solutions. As a consequence of Corollary \[cor:uv-strip\], McDonald’s result directly extends to the wide-band case. We omit the proof as it is [*mutatis mutandis*]{} the one provided by McDonald.
Let $f,g\in H^2_{\tau}(\ss)$ and $D$ be one of the operators $\dfrac{\d}{\d x},\delta$ or $\gamma$. Suppose that $|g(x)|=|f(x)|$ and $|Dg(x)|=|Df(x)|$ for (a.e.) $x\in\R$. Then:
1. For the cases $D=\dfrac{\d}{\d x}$ and $D=\gamma$, either $g=\beta f$ or $g=\beta f^*$ for some constant $\beta\in \R$.
2. For the case $D=\delta$, either $g=Vf$ or $g=Vf^*$ where $V$ is a meromorphic function that has period $b$ and is continuous and unimodular on $\R$.
Modulus on a Segment on $\ss$
-----------------------------
In the spirit of what was done by Boche [*et al.*]{} [@Bo], we now consider the case where $|g(z)|=|f(z)|$ for $z$ on a curve in $\ss$. A similar idea can also be found in [@Ja2]. For this part, we add the constraint that $|g(z)|=|f(z)|$ for every $z$ on a segment lying in the strip $\ss$. We first look at this additional constraint for the phase retrieval problem on the disc.
\[lem:angledisc\] Let $f,g\in H^2(\D)$ such that $|g(x)|=|f(x)|$ for $x\in(-1,1)$ and $$\label{eq:angle}
|g(z)|=|f(z)|~~~\text{for }z\in e^{i\theta}(-1,1)$$ where $\theta\notin\pi\mathbb Q$. Then $g=cf$ for some $c\in\T$.
Let $f,g\in H^2(\D)$ and $\mathscr Z=Z(f)\triangle Z(g).$ Since $|g(x)|=|f(x)|$ for all $x\in (-1,1)$, we have $\mathscr Z=\overline{\mathscr Z}$. It clearly follows that $\mathscr Z\cap\R=\emptyset$.
*(Figure: the disc $\D$ and the segment $e^{i\theta}(-1,1)$, making an angle $\theta$ with the real axis.)*
Since $|g(x)|=|f(x)|$ for all $x\in e^{i\theta}(-1,1)$, we have $\mathscr Z=\text{Ref}_\theta\mathscr Z$ where $\text{Ref}_\theta$ refers to a reflection with respect to the segment $e^{i\theta}(-1,1)$. Hence, by composing $\mathscr Z=\overline{\mathscr Z}$ and $\mathscr Z=\text{Ref}_\theta\mathscr Z$, we get that $\mathscr Z=\text{Rot}_{2\theta}\mathscr Z$, where $\text{Rot}_{2\theta}$ refers to a counterclockwise $2\theta$-rotation with respect to 0. Now, since $\theta\notin \pi\mathbb Q$, either $\mathscr Z=\emptyset$ or $\mathscr Z$ contains the full orbit of some point under the irrational rotation $\text{Rot}_{2\theta}$; such an orbit is infinite and accumulates inside $\D$. Since the zero set of a nonzero $H^2$ function is discrete in $\D$, $\mathscr Z$ cannot have accumulation points in $\D$, and so $\mathscr Z=\emptyset$. Hence, $Z(f)=Z(g)$, which implies that the Blaschke products formed by the zeros of $f$ and $g$, given by $B_f$ and $B_g$ respectively, are equal.
Now, observe that since $|g(x)|=|f(x)|$ for all $x\in (-1,1)$, Lemma \[lem:disc\] implies that for $e^{i\zeta}\in\T$, $$\nu_f(e^{i\zeta})+\nu_f(e^{-i\zeta})= \nu_g(e^{i\zeta})+\nu_g(e^{-i\zeta}).$$ Using this equation, the Fourier coefficients of $\nu_f$ and $\nu_g$ satisfy $$\label{eq:mfourier}
\widehat{\nu_f} (n)+\widehat{\nu_f}(-n)= \widehat{\nu_g}(n)+\widehat{\nu_g}(-n),~~~\text{for all }n\in\mathbb N.$$ On the other hand, $|g(x)|=|f(x)|$ for all $x\in e^{i\theta}(-1,1)$ implies that $|f(e^{i\theta}x)|=|g(e^{i\theta}x)|$ for all $x\in (-1,1)$. For $z\in\D$, we now write $F(z)=f(e^{i\theta}z)$ and $G(z)=g(e^{i\theta}z)$ so that $F,G\in H^2(\D)$ and $|F(w)|=|G(w)|$ for all $w\in(-1,1)$. Note that for $z\in \D$, since $F(z)=f(e^{i\theta}z)$, the singular inner factor of $F$ satisfies $$\begin{aligned}
S_F(z)=S_f(e^{i\theta}z)&=\exp\left(\int_{\mathbb T}\dfrac{ze^{i\theta}+e^{i\zeta}}{ze^{i\theta}-e^{i\zeta}}d\nu_f(e^{i\zeta})\right),\\
\intertext{and so, letting $u=\zeta-\theta$, we get}
S_F(z)&=\exp\left(\int_{\mathbb T}\dfrac{z+e^{iu}}{z-e^{iu}}d\nu_f(e^{i(u+\theta)})\right).
\end{aligned}$$ Thus by Lemma \[lem:disc\], we have for $e^{i\zeta}\in\T$, $$\label{eq:measf-t}
\nu_f(e^{i(\theta+\zeta)})+\nu_f(e^{i(\theta-\zeta)})= \nu_g(e^{i(\theta+\zeta)})+\nu_g(e^{i(\theta-\zeta)}).$$ Next, define the measure $\mu$ on $\T$ by $\mu(e^{i\zeta})=\nu_f(e^{i(\zeta+\theta)})$ for $e^{i\zeta}\in\T$, with Fourier coefficients given by $$\widehat{\mu}(n)=\int_{\mathbb T}e^{-in\zeta}d\nu_f(e^{i(\zeta+\theta)})=e^{in\theta}\widehat{\nu_f}(n)$$ for $n\in\mathbb N$. Hence, the previous equation and (\[eq:measf-t\]) imply that for $n\in\mathbb{N}$, $$e^{in\theta}\widehat{\nu_f} (n)+e^{-in\theta}\widehat{\nu_f}(-n)= e^{in\theta}\widehat{\nu_g}(n)+e^{-in\theta}\widehat{\nu_g}(-n).$$ Now this equation together with (\[eq:mfourier\]) implies that $$\widehat{\nu_g}(n)=\dfrac{e^{-in\theta}\widehat{\nu_f}(n)-e^{in\theta}
\widehat{\nu_f}(n)}{e^{-in\theta}-e^{in\theta}}=\widehat{\nu_f}(n)$$ and $\widehat{\nu_g}(-n)=\widehat{\nu_f}(-n)$, for all $n\in\mathbb N$. It follows that $\nu_f=\nu_g$ and so $S_f=S_g$.
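Spelled out (our own rewording of this step), the two relations form a $2\times2$ linear system: setting $x=\widehat{\nu_f}(n)-\widehat{\nu_g}(n)$ and $y=\widehat{\nu_f}(-n)-\widehat{\nu_g}(-n)$, they read $$x+y=0\qquad\text{and}\qquad e^{in\theta}x+e^{-in\theta}y=0,$$ so that $(e^{in\theta}-e^{-in\theta})x=2i\sin(n\theta)\,x=0$. Since $\theta\notin\pi\mathbb Q$, $\sin(n\theta)\neq0$ for every $n\geq1$, whence $x=y=0$.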
We now prove the same for the outer part. Since $|g(x)|=|f(x)|$ for all $x\in (-1,1)$, Lemma \[lem:disc\] again implies that for a.e. $e^{i\zeta}\in \T$, $$\log|f(e^{i\zeta})|+\log|f(e^{-i\zeta})|=\log|g(e^{i\zeta})|+\log|g(e^{-i\zeta})|.$$ For $e^{i\zeta}\in\T$, letting $h_f(e^{i\zeta})=\log|f(e^{i\zeta})|$ implies that the Fourier coefficients of $h_f$ and $h_g$ satisfy $$\label{eq:hf}
\widehat{h_f} (n)+\widehat{h_f}(-n)= \widehat{h_g}(n)+\widehat{h_g}(-n),~~~\text{for all }n\in\mathbb N.$$ On the other hand, by definition of $F$ and $G$, we have for a.e. $e^{i\zeta}\in\T$, $$\log|f(e^{i(\theta+\zeta)})|+\log|f(e^{i(\theta-\zeta)})|=\log|g(e^{i(\theta+\zeta)})|+\log|g(e^{i(\theta-\zeta)})|.$$ Using this equation and a similar argument to the one for the Fourier coefficients of the singular measures, we get that for $n\in\mathbb{N}$, $$e^{in\theta}\widehat{h_f} (n)+e^{-in\theta}\widehat{h_f}(-n)= e^{in\theta}\widehat{h_g}(n)+e^{-in\theta}\widehat{h_g}(-n).$$ Hence, by this equation and (\[eq:hf\]), we get that $\widehat{h_g}(n)=\widehat{h_f}(n)$ for all $n\in\mathbb Z$. Therefore $h_f=h_g$, and so $O_f=O_g$.
Finally, since $B_f=B_g$, $S_f=S_g$ and $O_f=O_g$, we have $g=cf$ for some $c\in\T$.
We now consider the coupled phase retrieval problem on the strip that includes a more general form of the constraint given in (\[eq:angle\]). Using the previous lemma, we establish the uniqueness of the solution of the following problem.
\[th:anglestrip\] Let $f,g\in H^2_\tau(\ss)$ such that $|g(x)|=|f(x)|$ for $x\in\R$ and $$|g(z)|=|f(z)|~~~\text{for }z\in \left(-e^{i\theta}+a, e^{i\theta}+a\right)$$ where $a\in\R$ and $\theta\notin\pi\mathbb Q$. Then $g=cf$ for some $c\in\T$.
Without loss of generality, we let $a=0$ so that the segment intersects the real line at the origin. Consider $f_{1/2}(z)=f(\frac{1}{2} z),\, g_{1/2}(z)=g(\frac{1}{2} z)$ for all $z\in\D$. Observe that $f_{1/2},g_{1/2}\in H^2(\D)$, and $|g_{1/2}|=|f_{1/2}|$ on $(-1,1)$ and on $e^{i\theta}(-1,1)$. Hence, $g_{1/2}=cf_{1/2}$ on $\D$ for some $c\in \T$ by Lemma \[lem:angledisc\], and so $g=cf$ on $\frac{1}{2}\D$. Therefore, since $f,g\in \hol(\ss)$ and $g=cf$ on $\frac{1}{2}\D$, analytic continuation gives $g=cf$ on $\ss$.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The third author is supported by a scholarship from Campus France and CHED, Philippines.
This study has been carried out with financial support from the French State, managed by the French National Research Agency (ANR) in the frame of the ”Investments for the future” Programme IdEx Bordeaux - CPU (ANR-10-IDEX-03-02).
This paper was completed during the first author’s visit at the Schrödinger Institute, Vienna, during the workshop “Operator Related Function Theory”. We kindly acknowledge ESI’s hospitality.
Some results in this paper have been announced in [@JKP]. We thank the referees of that announcement for their helpful comments that also led to improvements here.
[10]{}
, [*On the determination of the phase of Fourier integral I*]{}, Trans. Amer. Math. Soc., **83** (1956), 179–192.
, [*On the determination of the phase of Fourier integral II*]{}, Proc. Amer. Math. Soc., **8** (1957), 234–238.
, [*Stable Phase Retrieval in Infinite Dimensions*]{}, Found. Comput. Math., (2018). https://doi.org/10.1007/s10208-018-9399-7
, [*Hardy spaces for the strip*]{}, J. Math. Anal. Appl., **333** (2007), 347–364.
, ‘Phase retrieval in spaces of analytic functions on the unit disk’ in [*2017 International Conference on Sampling Theory and Applications (SampTA)*]{}, Tallinn, Estonia, 2017.
, [*Stable phase retrieval with low-redundancy frames*]{}, Adv. Comput. Math. [**41**]{} (2015) 317–331.
, [*Phase retrieval via Wirtinger flow: theory and algorithms*]{}, IEEE Trans. Inform. Theory, **61** (2014), 1985–2007.
, [*What is needed to determine a state*]{}, manuscript.
, [*Are wave functions uniquely determined by their position and momentum distributions?*]{}, J. Austral. Math. Soc., **20** (1978), 182–201.
, [*The Theory of $H^p$ spaces*]{}, Academic Press, New York, 1970.
, [*Phase retrieval and image reconstruction for astronomy, in: H. Stark (Ed.), Image Recovery: Theory and Application*]{}, Academic Press, 1987, 231–275.
, [*Phase reconstruction via nonlinear least squares*]{}, Inverse Problems, **8** (1992), 541–548.
, [*Phase retrieval of time-limited signals*]{}, Acta Math. Sci. Ser. B (Engl. Ed.), **30** (2010), 39–46.
, [*The mathematics of phase retrieval*]{}, arXiv:1901.07911.
, [*Phase retrieval of real-valued functions in Sobolev space*]{}, Acta Math. Sin. (Engl. Ser.), **34** (2018), 1778–1794.
, [*Phase Retrieval and Zero Crossing (Mathematical Methods in Image Reconstruction)*]{}, Math. And Its Appl., Kluwer Academic Publisher, 1989.
, [*On the Pauli problem*]{}, Funksional Anal. i Prilozhen, **30** (1986), 82–84. In Russian, translation in Funct. Anal. Appl., **30** (1996), 138–140.
, [*Phase retrieval techniques for radar ambiguity problems*]{}, J. Fourier Anal. Appl., **5** (1999), 309–329.
, [*Uniqueness results in an extension of Pauli’s phase retrieval*]{}, Appl. Comp. Harm. Anal., [**37**]{} (2014) 413-441.
, [*Phase Retrieval for Wide Band Signals*]{}, SAMPTA 2019, Bordeaux, France.
, [*An Introduction to Harmonic Analysis*]{}, Dover, 1976.
, [*The phase retrieval problem*]{}, Inverse Prob., **11** (1995), 1–28.
, [*Representation Theorems in Hardy Spaces*]{}, Cambridge University Press, 2009.
, [*Phase retrieval and magnitude retrieval of entire functions*]{}, J. Fourier Anal. Appl., **10** (2004), 259–267.
, [*Phase retrieval of $H^2$-functions*]{}, J. Math. Anal. Appl., **314** (2006), 162–173.
, [*Phase retrieval in crystallography and optics*]{}, J. Opt. Soc. Amer. A, **7** (1990), 394–411.
, [*Reconstruction of steplike potentials*]{}, Wave Motion, **18** (1993), 21–30.
, [*Nontrivial ambiguities for blind frequency-resolved optical gating and the problem of uniqueness*]{}, J. Opt. Soc. Amer. B, **21** (2004), 1089–1097.
, [*Reconstruction of bandlimited functions from unsigned samples*]{}, J. Fourier Anal. Appl., [**17**]{} (2011), 720–732.
, ‘Position and momentum distributions do not determine the quantum mechanical state’ in [*Mathematical Foundations of Quantum Theory*]{} (A. Marlow, editor), Academic Press (1978), 365–372.
, [*Phase recovery, MaxCut and complex semidefinite programming*]{}, Math. Prog., [**149**]{} (2015), 47–81.
, [*The question of phase retrieval in optics*]{}, Opt. Acta, **10** (1963), 41–49.
---
abstract: 'Molecular dynamics simulations are used to simulate the thermal properties of a model fluid containing nanoparticles (nanofluid). By modelling transient absorption experiments, we show that they provide a reliable determination of interfacial resistance between the particle and the fluid. The flexibility of molecular simulation allows us to consider separately the effect of confinement, particle mass and Brownian motion on the thermal transfer between fluid and particle. Finally, we show that in the absence of collective effects, the heat conductivity of the nanofluid is well described by the classical Maxwell Garnet equation model.'
author:
- 'Mihail Vladkov [^1], Jean-Louis Barrat [^2]'
title: Modelling transient absorption and thermal conductivity in a simple nanofluid
---
Many experimental studies have suggested that the thermal conductivity of colloidal suspensions referred to as “nanofluids” is unusually high [@Eastman; @Patel]. Predictions of effective medium theories are accurate in some cases [@Putnam-poly] but generally fail to account for the large enhancement in conductivity. In spite of a large number of - sometimes conflicting or controversial - suggestions and experimental findings [@Keblinski-Cahill], the microscopic mechanisms for such an increase remain unclear. Among the possibilities that were suggested, the Brownian motion [@prasher] of a single sphere in a liquid leads to an increase in thermal conductivity of the order of $4-5\%$, and appears to be an attractive and generic explanation. The essential idea is that the Brownian velocity of the suspended particle induces a fluctuating hydrodynamic flow [@Alder; @Keblinski-flow], which on average influences (increases) thermal transport. This mechanism is different from transport of heat through center of mass diffusion, which was previously shown to be negligible [@Keblinski]. However, some recent experimental high precision studies reported a normal conductivity in nanoparticle suspensions at very small volume fractions below 1% [@putnam], questioning the validity of this assumption.
In this work we use nonequilibrium molecular dynamics “experiments” to explore further the transfer of heat in a model fluid containing nanoparticles. Our approach is closely related to experimental techniques, but we also make use of the flexibility allowed by molecular simulations to explore extreme cases in terms e.g. particle/fluid mass density mismatch. We concentrate on model systems that are expected to be representative of generic properties.
We start our study by mimicking the “pump-probe” experiments that are used in nanofluids to estimate interfacial resistance, which is an essential ingredient in modelling the thermal properties of highly dispersed system [@pump-probe-Cahill]. All atoms in our system interact through Lennard-Jones interactions $$\label{eq:ljpotcut} U_{lj}(r) = \bigg\{ \begin{array}{lll}
4\varepsilon((\sigma/r)^{12} - c(\sigma/r)^{6}), &r\le r_c \\
0, &r>r_c
\end{array}$$ where $r_c=2.5\sigma$. The coefficient $c$ is equal to 1 for atoms belonging to the same phase, but can be adjusted to modify the wetting properties of the liquid on the solid particle [@barratbocquet; @barratchiaruttini]. Within the solid particle, atoms are linked with their neighbors through a FENE (Finite extension non-linear elastic) bonding potential: $$\label{eq:FENE} U_{FENE}(r) = -\frac{k}{2}R_0^{2}
\ln(1-(\frac{r}{R_0})^2), \qquad r<R_0$$ where $R_0=1.5\sigma$ and $k=30.0 \varepsilon/\sigma^{2}$. The solid particle in the fluid was prepared as follows: starting from a FCC bulk arrangement of atoms at zero temperature, the atoms within a sphere were linked to their first neighbors by the FENE bond. Then the system was equilibrated in a constant NVE ensemble with energy value corresponding to a temperature $T=1$. The particle contains 555 atoms, surrounded by 30000 atoms of liquid. The number density in the system is $\rho = 0.85\sigma^{-3}$. Taking $\sigma=0.3nm$ this corresponds to a particle radius of order $R_{part} \sim 1.5nm$ and to a system size $L \sim 10nm$.
The transient absorption simulation starts with an equilibrium configuration at temperature $T=1$, by “heating” uniformly the nanoparticle. This heating is achieved by rescaling the velocities of all atoms within the solid particle, so that the kinetic energy per atom is equal to $3\epsilon$. We then monitor the kinetic energy per atom of the particle as a function of time, which we take as a measure of the particle temperature. The system evolves at constant energy, but the average temperature of the liquid, which acts essentially as a reservoir, is only very weakly affected by the cooling process. Within a few time steps the kinetic temperature of the particle drops to a value of $T_p \approx 2$. This evolution corresponds to the standard one for an isolated, harmonic system. As the particle was equilibrated at $T_p = 1$, we have due to kinetic and potential energy equipartition $\langle
E_{pot}(t=0) \rangle = 1/2$. As we start our simulation with $\langle E_k(t=0) \rangle = 3$, within a very short time the kinetic energy drops to a value of $1.5$, then the potential energy stored in the particle atoms positions yields its contribution of $1/2$ to the temperature, equilibrating it to a value of $2$. This first step does not involve any heat exchange with the liquid surroundings.
The subsequent decrease of the particle temperature, on the other hand, directly probes such exchanges. A quantitative understanding of this decay is particularly important, as it remains an essential experimental tool to quantify heat transfer across the particle-liquid interface. In figure \[fig:fit\], we compare the molecular dynamics simulation result for the temperature as a function of time, to the result of a continuum calculation involving the interfacial (Kapitza) thermal resistance as an adjustable parameter. The continuum calculation makes use of the standard heat transfer equations $$\begin{aligned}
C\frac{dT_p}{dt} &=& -4\pi R_{p}^{2} j(R_p, t) \\
\frac{\partial T_l}{\partial t} &=&
D_{th}\frac{1}{r^2}\frac{\partial}{\partial r}\bigg(
r^2\frac{\partial T_l(r,t)}{\partial r}\bigg)\end{aligned}$$ where $T_l(r,t)$ and $T_p$ are the liquid and particle temperatures, respectively. $C$ is the thermal capacity of the particle, $R_p$ its radius, $D_{th}$ is the thermal diffusion coefficient of the liquid. The above equations are solved with the following boundary conditions: $$\begin{aligned}
j(R_p,t) &=& \frac{1}{R_{K}}\big(T_p(t) - T_l(R_p^+,t)\big) \\
j(R_{\infty},t) &=& 0 \label{eq:boundary}\end{aligned}$$ where $R_{\infty}$ is chosen so that $\frac{4}{3} \pi
R_{\infty}^3$ is equal to the volume of the simulation box from the previous section. The initial condition is $$\begin{aligned}
T_p(0) &=& 2 \\
T_l(r,0) &=& 1\end{aligned}$$ The temperature was assumed to be uniform inside the nanoparticle. This assumption is based on the simulation results, where the observed temperature profile inside the particle was found independent of position within statistical accuracy. We used data found in the literature [@barratchiaruttini; @palmer] for the values of the fluid thermal diffusivity and conductivity. As the simulated particle is not exactly spherical, but presents some FCC facets, its radius for use in the calculation was estimated from the radius of gyration: $$\langle R_g^2 \rangle = \frac{1}{N} \sum_1^N (r_i - r_{CM})^2 =
\frac{3}{5}R_p^2$$ where $R_g^2$ is the measured radius of gyration of the particle atoms, and the second equality applies to an ideal sphere. In the range $T=1$ to $T=3.5$, we checked through equilibrium simulations that the heat capacity of the particle is very close to $3k_B T
N$, as for an harmonic ideal solid.
The value of the interface thermal resistance (Kapitza resistance) appearing in equation \[eq:boundary\] was adjusted to fit the simulation data. The value that fits the simulation results for the wetting system $m=1$ and $c=1$ was found to be $R_K \approx 0.8$. This number is in agreement with the thermal resistance for a wetting flat wall calculated in [@barratchiaruttini] for a similar system with a different potential in the solid phase, and a completely different simulation method.
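To make the fitting procedure concrete, the following minimal sketch (ours; the parameter values are rough estimates or representative literature values, not the exact inputs used for the figure) integrates the continuum model above for a trial value of $R_K$ and produces a cooling curve $T_p(t)$ that can then be compared with the simulated particle temperature.

```python
import numpy as np

# Uniform-temperature particle coupled through a Kapitza resistance to radial
# heat diffusion in the surrounding liquid; explicit finite differences.
# All quantities are in Lennard-Jones units (assumed representative values).
R_p, R_inf = 5.0, 20.0      # particle radius and outer radius (approximate)
lam_l, D_th = 6.9, 2.3      # liquid conductivity and thermal diffusivity (assumed)
R_K = 0.8                   # Kapitza resistance: the parameter being fitted
C = 3.0 * 555               # particle heat capacity: 3 k_B per atom, 555 atoms

N = 150
r = np.linspace(R_p, R_inf, N)
dr = r[1] - r[0]
dt = 0.2 * dr**2 / D_th     # explicit-scheme stability

T_l = np.ones(N)            # liquid initially at T = 1
T_p = 2.0                   # particle temperature after the equipartition step

times, temps = [], []
for step in range(int(40.0 / dt)):
    q = (T_p - T_l[0]) / R_K                     # interfacial flux, particle -> liquid
    rp, rm = 0.5 * (r[2:] + r[1:-1]), 0.5 * (r[1:-1] + r[:-2])
    lap = np.zeros(N)
    lap[1:-1] = (rp**2 * (T_l[2:] - T_l[1:-1])
                 - rm**2 * (T_l[1:-1] - T_l[:-2])) / (r[1:-1]**2 * dr**2)
    T_l = T_l + dt * D_th * lap
    T_l[0] = T_l[1] + dr * q / lam_l             # flux continuity at r = R_p
    T_l[-1] = T_l[-2]                            # zero flux at r = R_inf
    T_p += -dt * 4.0 * np.pi * R_p**2 * q / C    # C dT_p/dt = -4 pi R_p^2 q
    times.append(step * dt); temps.append(T_p)
# `temps` gives the model cooling curve; R_K is adjusted until it matches the data.
```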
The same cooling simulation was performed using a non wetting particle ($c=0.5$). A substantial slowing down of the cooling rate was also observed, which can be attributed to an increased Kapitza resistance. The resulting value of $R_K$ is 3.2 (Lennard-Jones unit), again in agreement with previous determinations for flat surfaces [@barratchiaruttini]. In real units, a value $R_K=1$ corresponds typically to an interfacial *conductance* $G=1/R_K$, of the order of $100\mathrm{ MW/K m}^2$ [^3]. The method is therefore a sensitive probe of interfacial resistance, as usually assumed in experiments.
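The order of magnitude quoted here is easy to reproduce (our own arithmetic check of the unit conversion described in the footnote):

```python
# A Kapitza resistance R_K = 1 in Lennard-Jones units corresponds to a
# conductance G = k_B / (sigma^2 * tau_LJ), with sigma = 0.3 nm, tau_LJ = 1 ps.
k_B, sigma, tau = 1.380649e-23, 0.3e-9, 1.0e-12   # J/K, m, s
G = k_B / (sigma**2 * tau)
print(G)   # ~1.5e8 W/(m^2 K), i.e. of the order of 100 MW/(K m^2)
```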
![Comparison between the temperature evolution from simulations and the solution of the continuum heat equation. The value of the Kapitza resistance taken for the calculation is $R_K = 0.8$.[]{data-label="fig:fit"}](./fit_eqn.eps){width="7cm"}
In a second step, we explore the influence of thermal Brownian motion of the particle on the cooling process. First, let us recall that the naive idea, that diffusion could speed up cooling by displacing the particles towards cooler fluid regions is easily excluded. Quantitatively, diffusion of the particle and heat diffusion take place on very different time scales. The diffusion coefficient of the particle was measured, in our case, to be three orders of magnitude smaller than the heat diffusion coefficient in the fluid. We also suppressed diffusion by tethering the particle to its initial position with a harmonic spring of stiffness $k=30\epsilon/\sigma^2$. As expected, no effect is observable on the cooling rate. This measurement cannot probe for another possible effect - the influence of fluid flow on the cooling. As discussed in [@Acrivos] the heat transfer from a sphere in a low Reynolds number velocity field is enhanced by the latter. Because of the diffusion velocity of the Brownian particle $v = \big( \frac{3k_B T}{m}\big)^{1/2}$, it can be viewed, at any given moment, as a particle in a velocity field [@prasher]. To probe the influence of this phenomenon, we tether every single atom in the particle to its initial position with a harmonic spring ($k=30$) and compare the measured temperature evolution with the previous results. The cooling rate is still not influenced by this manipulation even if the center of mass is now “frozen” (fig. \[fig:com\]).
![Evolution of the $Z$ coordinate of the particle center of mass for the different systems: free particle, particle confined by a spring attached to its center of mass and particle where all atoms are tethered to their initial position. The spring constant for all springs is $k=30$.[]{data-label="fig:com"}](./com.eps){width="7cm"}
![The evolution of the particle temperature for the different systems: free particle, particle confined by a spring attached to its center of mass and particle where all atoms are tethered to their initial position. Every curve is the mean value from 20 simulation runs. No effect is observed.[]{data-label="fig:spring"}](./tempTp3l1_spring.eps){width="7cm"}
A final check on the influence of such velocity effects was attempted by modifying the mass of the atoms that constitute the nanoparticle. This artificial procedure reduces the thermal Brownian velocity, and when it is carried out we indeed observe a strong slowing down of the cooling process. However, this slowing down is again completely independent of the center of mass motion of the particle, which is controlled by the presence of the tethering springs. On the other hand, the effect of this mass density increase is easily understood in terms of an increase of the interfacial resistance. A higher mass of the particle atoms decreases the speed of sound in the solid and thus leads to a larger acoustic mismatch between the two media, which slows down the cooling. Numerically, we find that for a mass of $100$ times the mass of a liquid atom, the Kapitza resistance increases to $R_K=7.4$.
In summary, we have shown that the Brownian motion of the particle does not affect the cooling process. As a byproduct, we have shown that the mass density parameter provides a flexible numerical way of tuning the interfacial resistance, which will be used in the next section.
![The evolution of the particle temperature as a function of the mass of the particle atoms. Every curve is the mean value from 20 simulation runs. The increased mass slows down heat exchange.[]{data-label="fig:mass"}](./tempTp3l1_mass.eps){width="7cm"}
In this last section, we attempt direct measurements of the nanofluid heat conductivity using a nonequilibrium molecular dynamics simulation of heat transfer across a fluid slab containing one nanoparticle, with periodic boundary conditions in the $x$ and $y$ direction and confined by a flat repulsive potential in the $z$ direction.
![Snapshot of the system used to evaluate the thermal conductivity with a particle of 13% volume fraction.[]{data-label="fig:snap"}](./nanolett.eps){width="6cm"}
Two slices of fluid are thermostated, using velocity rescaling, at different temperatures. To avoid any effect of thermophoresis or coupling of the thermostat to the particle, the particle is constrained to stay at equal distance between the two thermostats using two different schemes. First, a confinement between two repulsive, parallel walls that couple only to the particle atoms. The particle is then constrained to the mid-plane of the simulation cell, but free to diffuse within this plane. The second possibility is to tether the center of mass to a fixed point, as described above, so that any possible effect due to flow or diffusion is eliminated. The energy flux is measured by calculating the energy absorbed by the thermostats. The effective conductivity of the system was defined as: $$\lambda_{eff} = \frac{J}{(T_1 - T_2)/L}$$ The temperatures of the two thermostats were $T_1 = 2$ and $T_2 =
1$. In order to compare the conductivity results for the different systems they were first equilibrated to the same pressure at a temperature $T=1.5$, then a non-equilibrium run was performed for about 1500-2000 $\tau_{LJ}$ to make sure the pressure stays the same for the systems of different nature and finally a production run of about 30000 $\tau_{LJ}$ during which the thermostat energies, particle diffusion and temperature profiles are monitored. Simulations were performed with two different volume fractions for the particle, 2% or 13%. The volume fraction is defined as the volume of the particle divided by the volume of the fluid outside the thermostats. As expected from the study above, no effect of the particle diffusion on the fluid conductivity was observed. The effective conductivity measured for the particle diffusing in 2D, the particle attached with a single spring or the particle where all atoms were attached to their initial positions has the same value within 1% which is below the error bar of the measurement (around 4-5%). Finally, we investigated the effect of the presence of the nanoparticle on the thermal conductivity of the fluid. For the smallest volume fraction ($\Phi \sim 2\%$), we were not able to detect any change in thermal conductivity compared to the bulk fluid. At the higher volume fraction ($\Phi \sim 13\%$), on the other hand, we observe a clear [*decrease*]{} in the heat conductivity associated with the presence of the nanoparticle (fig. \[fig:mg\]). Clearly, this decrease must be interpreted in terms of interfacial effects. To quantify these effects, we use the Maxwell-Garnett approximation for spherical particles, modified to account for the Kapitza resistance at the boundary between the two media. The resulting expression for the effective conductivity [@MG] is $$\frac{\lambda_{eff}}{\lambda_l}= \frac{\big(
\frac{\lambda_p}{\lambda_l}(1+2\alpha)+2 \big) + 2\Phi\big(
\frac{\lambda_p}{\lambda_l}(1-\alpha)-1 \big)} {\big(
\frac{\lambda_p}{\lambda_l}(1+2\alpha)+2 \big) - \Phi\big(
\frac{\lambda_p}{\lambda_l}(1-\alpha)-1 \big)}$$ where $\lambda_l$ and $\lambda_p$ are the liquid and particle conductivities, $\Phi$ is the particle volume fraction and $\alpha
= \frac{R_K \lambda_l}{R_p}$ is the ratio between the Kapitza length (equivalent thermal thickness of the interface) and the particle radius. This model predicts an increase in the effective conductivity for $\alpha < 1$ and a decrease for $\alpha > 1$, regardless of the value of the conductivity of the particles or of the volume fraction. The prediction depends very weakly on the ratio $\lambda_p/\lambda_l$, by less than 1% for $10<\lambda_p/\lambda_l<100$. The minimum value of $\lambda_{eff}/\lambda_l$, obtained when $\alpha \to \infty$, is $\frac{1-\Phi}{1+\Phi/2}$, while the maximum possible enhancement (for $\lambda_p \to \infty$ and $R_K \to 0$) is $\frac{1+2\Phi}{1-\Phi}$.
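For reference, the expression above is straightforward to evaluate numerically; the short sketch below (ours, with an assumed conductivity ratio $\lambda_p/\lambda_l=10$) illustrates the crossover from enhancement at small $\alpha$ to a decrease at large $\alpha$ for the 13% volume fraction studied here.

```python
# Modified Maxwell-Garnett formula: lambda_eff/lambda_l as a function of
# alpha = R_K*lambda_l/R_p, volume fraction phi, and ratio = lambda_p/lambda_l.
def maxwell_garnett(alpha, phi, ratio):
    a = ratio * (1.0 + 2.0 * alpha) + 2.0
    b = ratio * (1.0 - alpha) - 1.0
    return (a + 2.0 * phi * b) / (a - phi * b)

phi, ratio = 0.13, 10.0      # 13% volume fraction; lambda_p/lambda_l = 10 (assumed)
for alpha in (0.0, 0.5, 1.0, 2.0, 5.0, 1e6):
    print(alpha, round(maxwell_garnett(alpha, phi, ratio), 3))
# small alpha: enhancement, bounded by (1+2*phi)/(1-phi) as ratio -> infinity;
# large alpha: decrease, approaching (1-phi)/(1+phi/2).
```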
As explained above, the Kapitza resistance can be modified by tuning either the liquid solid interaction coefficient $c$, or the mass density of the solid, or a combination thereof. Figure \[fig:mg\] illustrates the variation of the measured effective conductivity for several values of the Kapitza resistance, determined independently for various values of these parameters. It is seen that the observed variation (decrease in our case) in the effective conductivity is very well described by the Maxwell-Garnett expression. This expression also allows us to understand why the heat conductivity does not vary in a perceptible manner for the smaller volume fraction, for which the predicted change would be less than $3\%$, within our statistical accuracy.
![Comparison between the ratio of the effective conductivity to the conductivity of the pure liquid of the simulated systems and the values obtained from the Maxwell Garnett equation. The values of the Kapitza resistance for the simulations were obtained from the cooling rate of the particle.[]{data-label="fig:mg"}](./mg-simu.eps){width="7cm"}
Conclusion {#conclusion .unnumbered}
==========
We have explored some important aspects of the thermal properties of “nanofluids”, at the level of model system and individual solid particles. The molecular modeling of transient heating experiments confirms that they are a sensitive tool for the determination of thermal boundary resistances. The effect of Brownian motion on the cooling process, on the other hand, was found to be negligible. By varying interaction parameters or mass density, we are able to vary the interfacial resistance between the particle and the fluid in a large range. This allowed us to estimate, over a large range of parameters, the effective heat conductivity of a model nanofluid in which the particles would be perfectly dispersed. Our results can be simply explained in terms of the classical Maxwell-Garnett model, provided the interfacial resistance is taken into account. The essential parameter that influences the effective conductivity turns out to be the ratio between the Kapitza length and the particle radius, and for very small particles a decrease in conductivity compared to bulk fluids is found. We conclude that large heat transfer enhancements observed in nanofluids must originate from collective effects, possibly involving particle clustering and percolation or cooperative heat transfer modes.
[99]{}
Eastman, J. A.; Choi, S. U. S.; Li, S.; Yu, W.; Thompson, L. J.; [*Appl. Phys. Lett.*]{} [**2001**]{}, [*78*]{}, 718.
Patel, H. E.; Das, S. K.; Sundararajan T.; Nair, A. S.; George, B.; Pradeep, T.; [*Appl. Phys. Lett.*]{} [**2003**]{}, [*83*]{}, 2931.
Putnam, S. A.; Cahill, D. A.; Ash, B. J.; Schadler, L. S.; [*J. Appl. Phys.*]{}, [**2003**]{}, [*94*]{}, 6785.
Keblinski, P.; Eastman, J. A.; Cahill, D. A.; [*Materials Today*]{}, [**2005**]{}, [*8*]{}, 36.
Prasher, R.; Bhattacharya, P.; Phelan, P. E.; [*Phys. Rev. Lett.*]{} [**2005**]{}, [*94*]{}, 025901.
Alder, B.J.; Wainwright, T.E.; [*Phys. Rev. A*]{}, [**1970**]{}, [*1*]{}, 18.
Keblinski, P.; Thomin, J.; [*Phys. Rev. E*]{} 73, 010502 (2006)
Keblinski, P.; Phillot, S. R.; Choi, S. U.-S.; Eastman, J. A.; [*Int. J. of Heat and Mass Transf.*]{} [**2002**]{}, [*45*]{}, 855.
Putnam, S. A.; Cahill, D. G.; P. V. Braun, P. V.; Ge, Z.; Shimmin R. G.; [*J. Appl. Phys.*]{}, in press
Wilson, O. M.; Hu, X.; Cahill, D. G.; Braun, P. V.; [*Phys. Rev. B*]{} [**2002**]{}, [*66*]{}, 224301.
Barrat, J.-L.; Bocquet, L.; [*Faraday Discussions*]{} [**1999**]{}, [*112*]{}, 121.
Barrat, J.-L.; Chiaruttini, F.; [*Molecular Physics*]{} [**2003**]{}, [*101*]{}, 1605.
Acrivos, A.; Taylor, T. D.; [*Physics of Fluids*]{} [**1962**]{}, [*5*]{}, 387.
Palmer, B. J.; [*Phys. Rev. E*]{} [**1994**]{}, [*49*]{}, 2049.
Nan, C. W.; Birringer, R.; Clarke, D. R.; Gleiter, H.; [*J. Appl. Phys.*]{}, [**1997**]{}, [*81*]{}, 6692.
[^1]: E-mail: [email protected]
[^2]: E-mail: [email protected]
[^3]: The conversion to physical units is made by taking a Lennard-Jones time unit $\tau_{LJ}=10^{-12}s$, and a length unit $\sigma=0.3nm$. The unit for $G$ is $energy/temperature/(length)^2/time$. As the energy/temperature ratio is given by the Boltzmann constant $k_B$, we end up with a unit for $G$ equal to $k_B /\sigma^2 /\tau_{LJ} \simeq 10^8~\mathrm{W/(m^2\,K)}$
---
abstract: 'We study magnetic fluctuations in a system of interacting spins on a lattice at high temperatures and in the presence of a spatially varying magnetic field. Starting from a microscopic Hamiltonian we derive effective equations of motion for the spins and solve these equations self-consistently. We find that the spin fluctuations can be described by an effective diffusion equation with a diffusion coefficient which strongly depends on the ratio of the magnetic field gradient to the strength of spin-spin interactions. We also extend our studies to account for external noise and find that the relaxation times and the diffusion coefficient are mutually dependent.'
author:
- Andrew Sykes
- Dmitry Solenov
- Dmitry Mozyrsky
title: 'Bloch-Redfield theory of high-temperature magnetic fluctuations in interacting spin systems'
---
Introduction
============
Recent advances in magnetic imaging techniques, as well as the development of novel types of electronic devices that utilize electronic spin (rather than charge) as an information carrier, have renewed interest in understanding mechanisms of spin noise and spin relaxation. While conventional experimental methods, such as nuclear or electron spin resonance and related techniques [@amb; @Schlihter], probe the temporal evolution of spin correlations, they typically do not provide much information on spatial correlations between neighboring spins. On the contrary, the new approaches to spin resonance, such as magnetic resonance force microscopy (MRFM), combine capabilities of the usual magnetic resonance techniques with the sensitivity of atomic force microscopy. That is, one can now observe not only the time (frequency) dependence of spin correlations, but also their spatial dispersion with an atomic-scale resolution. Hence, there is a clear need to develop theoretical tools for the description of such correlations in systems of interest, that is, in systems of [*interacting*]{} spins.
The spatial correlations in interacting spin systems are believed to be controlled by the so-called [*flip-flop*]{} processes. That is, two neighboring interacting spins can exchange magnetization, i.e., the values of their spin components can change by $\pm 1/2$, so that the total spin of the pair is conserved. Such exchange gives rise to the diffusion of spin magnetization, provided the dynamics of the flip-flops is Poissonian [@anderson]. Typical calculations of the effective diffusion constant utilize the method of moments, where the line-width is approximated by a Gaussian or Lorentzian shape [@Schlihter]. Such approximations are not very well controlled. More recently several types of cluster/cumulant expansions have been proposed in connection with the problem of decoherence of localized electronic spins caused by the fluctuations of nuclear spins [@souza; @vitzel; @saikin]. In that problem though, the decoherence of electronic spins occurs on a timescale small compared to the typical nuclear timescale, which justifies the use of cluster expansions in the description of fluctuations in the nuclear subsystem.
In this paper we study correlations between spatially separated spins in the opposite, long time regime. Such a regime is specifically relevant to the MRFM technique, which utilizes (micro)mechanical cantilevers with ferromagnetic tips to probe magnetic fluctuations in the underlying samples. We propose an approach based on the Markov approximation, similar to the frequently used Bloch-Redfield approximation [@Schlihter; @blum] in the theory of open quantum systems. That is, we consider all possible pairs $(i,j)$ of interacting spins, while other spins $\neq(i,j)$ are treated as an environment, providing finite line-width for the flip-flop transitions through fluctuating magnetic fields (see Fig. \[fig3\] for a cartoon visualisation of these approximations). A self-consistency is then established between the flip-flop rates and the line-width so that our approach can be viewed as a sort of dynamical mean field approximation. We argue that our method is well justified, in particular, in the presence of an external strongly non-uniform magnetic field, which introduces separation between the timescales of the flip-flop rates and the correlation time for the fluctuations of the effective magnetic fields. Note that such non-uniform magnetic fields are intrinsic to the MRFM setups, where field gradients are used to address specific spins located within the so-called resonance layer.
![(a) shows a collection of spin-half particles on a rigid lattice in a nonuniform external magnetic field. The quantity of interest in this work is the rate at which spin flip-flops occur. Our model is displayed pictorially in (b), where the neighbouring sites of $i$ and $j$ are replaced by a fluctuating bath. The flip-flop rate is then calculated such that it is consistent with the fluctuations of the bath.[]{data-label="fig3"}](cartoon2.eps){width="8cm"}
Our paper is organized as follows. In Section \[sec:model\] we describe a general formalism that can be utilized to study spin-spin correlations for a broad class of spin Hamiltonians, e.g. Eq. (\[ham\]). We derive effective equations of motion for the magnetization, e.g. Eq. (\[sigz3\]), which has the form of a stochastic master equation. In doing so we use methods developed in connection with studies of diffusion in classical lattice gas models [@sasha; @kogan] as well as in the theory of open quantum systems [@blum]. The equation of motion is supplemented by a self-consistency equation, Eq. (\[Gamma\]), which relates the rates in the master equation to the correlation function evaluated from the master equation in terms of the rates. In Section \[heisenberg\] we look specifically at the Heisenberg model on a cubic lattice in the presence of a spatially non-uniform external magnetic field. We find that the flip-flop rates are strongly suppressed by the field gradient in the limit when the field gradient significantly exceeds the spin-spin interaction constant. In Section \[relaxation\] we study the influence of spin-relaxation processes on spin flip-flops and derive the effective master equation for the magnetization in the presence of external noise sources acting on the spins. Our main result of that section is that, while the field gradient suppresses the flip-flops, the noise may actually enhance these rates; see Eq. (\[lorentzian\]) and corresponding discussion. Finally, in Section \[sec:discussion\] we discuss the validity of our approximations and summarize the results.
Model and general solution {#sec:model}
==========================
We consider a system of spin-half particles on a lattice, interacting with each other according to the following Hamiltonian $$\label{ham}
H\!=\!\sum_i B_i \sigma^z_i+\sum_{\langle i,j\rangle}\left[J_{ij}^\parallel\sigma_i^z\sigma_j^z+
2J_{ij}^\perp\left(\sigma_i^+\sigma_j^-+\sigma_i^-\sigma_j^+\right)\right]$$ where $\sigma^{\pm}_k=(\sigma^x_k\pm i\sigma^y_k)/2$, $k=(k_x,k_y,k_z)$, and $\sigma^\alpha_k$ are Pauli matrices, $\alpha=x, y, z$. The index $i$ in the first sum runs over all lattice sites, while the notation $\langle i,j\rangle$ in the second sum indicates the summation over all pairs of lattice sites. The external magnetic field $B_i$ is assumed to be non-uniform in space. The spin-spin interaction is isotropic when $J^\parallel_{ik}=J^\perp_{ik}$. The equation of motion for $\sigma^z_i$ is $$\label{eq:sigmaz}
i\partial_t\sigma^z_i= [\sigma^z_i, H]=4\sum_{{k\neq i}} J^\perp_{ik}
(\sigma^+_i\sigma^-_k - \sigma^-_i\sigma^+_k).$$ In Eq. (\[eq:sigmaz\]) and in the following we set $\hbar = 1$. Next we consider the equation of motion for $\sigma^+_i\sigma^-_k$. After a straightforward calculation we obtain $$\begin{aligned}
\label{eq:sigmapm}
i\partial_t(\sigma^+_i\sigma^-_k)=& 2\,\delta\!B_{ki}^{\rm eff}\sigma^+_i\sigma^-_k + J^\perp_{ik}(\sigma^z_i-\sigma^z_k)
\\\nonumber
&+2\!\!\sum_{ n\neq \{i,k\}}\left[ J^\perp_{ni}\sigma^z_i \sigma^+_n\sigma^-_k - J^\perp_{nk}\sigma^z_k \sigma^+_i\sigma^-_n\right],\end{aligned}$$ where $\delta\!B_{ki}^{\rm eff}$ is the difference between effective magnetic fields at sites $k$ and $i$, $$\label{Beff}
\delta\!B_{ki}^{\rm eff}=B_k-B_i+\sum_{n\neq \{k,i\}}\left[J^\parallel_{nk}\sigma_n^z-J^\parallel_{ni}\sigma_n^z\right].$$ This difference consists of a constant part; $$\label{Bconst}
\delta\!B_{ki}=B_k-B_i,$$ and a part which fluctuates (due to spin flips at nearby lattice sites); $$\label{Bfluct}
\delta\!B_{ki}^{\rm fluct}(t)=\sum_{n\neq \{k,i\}}\left[J^\parallel_{nk}\sigma_n^z-J^\parallel_{ni}\sigma_n^z\right].$$ The larger the number of individual spins contributing to $\delta\!B_{ki}^{\rm fluct}$, the more rapidly fluctuating this quantity becomes. Hence, for systems with sufficiently long-range interactions or high dimensionality $\delta\!B_{ki}^{\rm fluct}$ fluctuates very rapidly.
From Eq. (\[eq:sigmapm\]) we see that the expectation value of $\sigma^+_i\sigma^-_k$ contains a prefactor $$\label{Delta}
\Delta_{ik}(t,s)=e^{2i\int_s^t dt^\prime \,\delta\!B_{ik}^{\rm eff}(t^\prime)},$$ related to the Larmor precession of spins around the effective magnetic field at sites $i$ and $k$. The fluctuating component of the effective magnetic field \[see Eq. (\[Bfluct\])\] causes the precession frequencies at each site to vary. Moreover, if the effective magnetic fields at sites $i$ and $k$ are large, and the number of spins contributing to the fluctuating component of the field \[see Eq. (\[Bfluct\])\] is much greater than one, then from Eq. (\[eq:sigmapm\]) \[or more specifically, the prefactor shown in Eq. (\[Delta\])\], we would expect the Larmor precession frequency of $\sigma^+_i\sigma^-_k$ to be very fast (compared to the dynamics of the individual $\sigma^z_n$ operators) and fluctuate rapidly. Following this logic, we see that the summation of terms $\sigma_n^+\sigma_k^-$ and $\sigma_i^+\sigma_n^-$ in Eq. (\[eq:sigmapm\]) is essentially a summation over a rapidly fluctuating object, and will statistically self-average to zero (provided a sufficiently large number of spins contribute to $\delta\!B_{ki}^{\rm fluct}$).
A similar approximation is very common in the theory of open quantum systems, where it is known as the secular or Bloch-Redfield approximation [@blum]. As in the case of open quantum systems it relies on the assumption that the off-diagonal elements of a system’s density matrix $\rho$ are small either due to large splittings between the adjacent energy levels or due to rapid fluctuations from the heat bath. In the present case the fields $\delta\!B_{ki}^{\rm fluct}(t)$ play the role of the heat bath operators and must be treated self-consistently, a task to which we now turn our attention.
By integrating Eq. (\[eq:sigmapm\]) (with the summation on the right-hand side neglected) we obtain $$\begin{aligned}
\label{eq:sigmasol}
\sigma^+_i\sigma^-_k(t)\simeq -i J^\perp_{ik}\int_0^t ds \Delta_{ik}(t,s)[\sigma^z_i(s)-\sigma^z_k(s)]
\\\nonumber
+\Delta_{ik}(t,0){c_{ik}},\end{aligned}$$ where the last term is due to the initial condition of the operator [$c_{ik}=\sigma^+_i\sigma^-_k(t=0)$]{}. In the high temperature limit the system is disordered and therefore it is natural to assume that the expectation value of $\sigma^+_i\sigma^-_k$ is random, with $\langle\langle \sigma^+_i\sigma^-_k \rangle\rangle=0$ and $\langle\langle \sigma^+_i\sigma^-_k \sigma^+_{i^\prime}\sigma^-_{k^\prime}\rangle\rangle=(1/4)\delta_{ik^\prime}\delta_{ki^\prime}$, provided $i\neq k$ and $i'\neq k'$. Here the double bracket stands for averaging over the ensemble of density matrices of the system as well as over a particular realization of the density matrix [(set by a particular choice of the initial condition)]{}, i.e., $\langle \sigma^+_i\sigma^-_k \rangle= {\rm Tr}(\sigma^+_i\sigma^-_k\rho)$ and $\langle\langle \sigma^+_i\sigma^-_k \rangle\rangle = \langle{\rm Tr}(\sigma^+_i\sigma^-_k\rho)\rangle_\rho$, etc.
We wish to substitute Eq. (\[eq:sigmasol\]) into Eq. (\[eq:sigmaz\]) to obtain a closed-form equation for $\sigma^z_i(t)$. This can be significantly simplified if we replace the rapidly fluctuating quantity $\Delta_{ik}(t,s)$ in the integrand of Eq. (\[eq:sigmasol\]) by its average value. This approximation is in perfect agreement with our assumption regarding the separation between the time scales for the dynamics of the local fluctuating magnetic field at site $i$ and the components of the individual spin at site $i$. We make the assumption that, by virtue of the central limit theorem, the random variable $\delta\!B_{ik}^{\rm eff}$ is Gaussian, so that $$\begin{aligned}
\langle\Delta_{ik}(t,s)\rangle&=e^{2i(B_i-B_k)(t-s)}e^{-2\int_s^t\int_s^tK_{ik}(\tau_1-\tau_2)d\tau_1d\tau_2}\nonumber\\
&=e^{2i(B_i-B_k)(t-s)}e^{-4\int_0^{|t-s|}K_{ik}(\mu)\left(|t-s|-\mu\right)d\mu}\label{AvDelta}\end{aligned}$$ where $$\label{K}
K_{ik}(\tau_1-\tau_2)=\langle\delta\!B_{ik}^{\rm fluct}(\tau_1)\delta\!B_{ik}^{\rm fluct}(\tau_2)\rangle$$ is the autocorrelation function of the fluctuating component of the magnetic field gradient between sites $i$ and $k$. Moreover, since Eq. (as a function of $|t-s|$) decays much faster than the evolution of $[\sigma^z_i(s)-\sigma^z_k(s)]$, we can employ the Markov approximation, and set $s=t$ which removes the latter term from the integral in Eq. to give $$\begin{aligned}
\label{eq:sigsol2}
\sigma^+_i\sigma^-_k(t)\simeq -i J^\perp_{ik}\int_0^t ds \langle\Delta_{ik}(t,s)
\rangle[\sigma^z_i(t)-\sigma^z_k(t)]
\\\nonumber
+\Delta_{ik}(t,0){c_{ik}}.\end{aligned}$$ Now that we have a formal solution for $\sigma^+_i\sigma^-_k(t)$ it is prudent to substitute the expression back into the sum in Eq. which was originally ignored in deriving Eq. . In doing so we wish to find an inequality which quantitatively ensures the summation term is small compared to all other terms in Eq. . The details of this calculation are straightforward (see Section \[sec:discussion\] for further discussion) and one finds that $J_{ik}\gg\Gamma_{ik}$ (where $\Gamma_{ik}$ is the rate at which flip-flops occur and is calculated below) is a sufficient condition to ensure that the summation in Eq. remains small.
We now substitute Eq. into Eq. , to give $$\begin{aligned}
\label{sigz3}
\partial_t\langle\sigma_k^z(t)\rangle=&\sum_{j\neq k}\Gamma_{jk}\left[\langle\sigma_j^z(t)\rangle-\langle\sigma_k^z(t)\rangle\right]+\xi_k(t).\end{aligned}$$ The averages in Eq. (\[sigz3\]) are taken with respect to a particular realization of the system’s density matrix, but not over the ensemble of density matrices. The coefficient $\Gamma_{jk}$ represents the rate at which spin flip-flops occur between sites $j$ and $k$ (these can only occur when sites $j$ and $k$ have opposite spin). The expression for this rate is given by $$\begin{aligned}
\Gamma_{jk}=&4(J_{jk}^\perp)^2\int_{-\infty}^{\infty}\exp\!\!\Bigg[2i\delta\! B_{jk}s-\nonumber\\
&\qquad
\left.4\int_0^{|s|}K_{kj}(\mu)\left(|s|-\mu\right)d\mu\right]ds,\label{Gamma}\end{aligned}$$ where we have used the quickly-decaying property of $\langle\Delta_{ik}(t,s)\rangle$ to extend the upper and lower limits of the integral to $\pm\infty$. The final term in Eq. represents the uncertainty with respect to the choice of the initial conditions of the system, and is given by $$\label{xi}
\xi_k(t)=4i\sum_{j\neq k}J_{jk}^\perp\left[c_{jk}\Delta_{kj}(0,t)-c_{kj}\Delta_{jk}(0,t)\right].$$ Averaging over $\xi_k(t)$ corresponds to averaging over an ensemble of different density matrices (each density matrix being distinguished by a unique initial condition). Noting that $\langle\Delta_{ik}(0,t)\Delta_{ki}(0,t')\rangle=\langle\Delta_{ki}(t',t)\rangle$, and since $\Delta_{ik}(t',t)$ is a rapidly fluctuating function of $t-t'$, we can make the approximation $$\label{fluct}
\langle\xi_j(t)\xi_{k}(t^\prime)\rangle = 2\delta(t-t')\left(-\Gamma_{jk}+\delta_{jk}\sum_{m\neq k}\Gamma_{mk}\right).$$
Together, Eqs. (\[sigz3\]) and (\[fluct\]) describe Poissonian dynamics of coupled two-state systems. Indeed, we could have obtained the same result if we had postulated that the dynamics of a given spin (say, at site $i$) is controlled by its flipping rates $-\sum_k \tilde{\Gamma}_{ik} \sigma_i(1-\sigma_k)$ and $\sum_k \tilde\Gamma_{ki} \sigma_k(1-\sigma_i)$, where $\tilde{\Gamma}_{ik}=\Gamma_{ik}+\eta_{ik}$, with $\Gamma_{ik}$ and $\eta_{ik}$ being the constant and fluctuating parts of the rate respectively. In this case $\xi_i=\sum_k (\eta_{ik}-\eta_{ki})$, cf. Eq. (\[xi\]). Note that one can derive Eq. (\[Gamma\]) for the rates $\Gamma_{ik}$ within a straightforward perturbative calculation, as shown in Appendix \[appA\]. There, we calculate the probability of a flip-flop for a pair of spins in the presence of an external fluctuating field (along the $z$-direction); in the current section, we have simply assumed that this fluctuating field is created by the neighbouring spins coupled to this pair (see Appendix \[appA\] for details).
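As a purely illustrative cross-check of this classical picture, the following minimal Python sketch (not part of the original analysis) evolves such flip-flop dynamics stochastically on a one-dimensional ring with a single, uniform rate $\Gamma$; the lattice size, rate, time step and number of runs are arbitrary choices, and a simple fixed-time-step update is used in place of an exact continuous-time algorithm.

```python
# Minimal sketch (illustration only): stochastic flip-flop dynamics on a 1D ring
# with a single, uniform rate Gamma, in the spirit of Eqs. (sigz3) and (fluct).
# Lattice size, rate, time step and number of runs are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
N, Gamma, dt, n_steps, n_runs = 64, 1.0, 0.02, 200, 200

corr = np.zeros(n_steps)
for _ in range(n_runs):
    spins = rng.choice([-1, 1], size=N)            # random initial configuration
    s0 = spins.copy()
    for step in range(n_steps):
        corr[step] += np.mean(spins * s0) / n_runs
        for i in range(N):                         # attempt flip-flops bond by bond
            j = (i + 1) % N
            if spins[i] != spins[j] and rng.random() < Gamma * dt:
                spins[i], spins[j] = spins[j], spins[i]

# corr[k] approximates <sigma^z_i(k*dt) sigma^z_i(0)>, averaged over sites and runs
print(corr[:5], corr[-1])
```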
Equations (\[sigz3\]) and (\[fluct\]) constitute a closed system of equations, which allows one to evaluate the correlation functions $\langle\sigma^z_i(t)\sigma^z_k(t^\prime)\rangle$. For an arbitrary choice of spin-spin interaction constants $J^\parallel_{ik}$ and $J^\perp_{ik}$ and external fields $B_i$, the rates $\Gamma_{ik}$ in Eqs. (\[sigz3\]) and (\[fluct\]), though formally unknown, are expressed in terms of these correlation functions $\langle\sigma^z_i(t)\sigma^z_k(t^\prime)\rangle$ \[see Eqs. , , and \]. By evaluating these correlation functions in terms of $\Gamma_{ik}$, one obtains a closed set of equations which one must solve self-consistently for $\Gamma_{ik}$. This provides a way of solving for both the rates $\Gamma_{ik}$ and the correlation functions $\langle\sigma^z_i(t)\sigma^z_k(t^\prime)\rangle$ for an arbitrary choice of interaction constants $J^\parallel_{ik}$, $J^\perp_{ik}$ and external fields $B_i$.
In the next section we will evaluate the $\Gamma_{ik}$ and $\langle\sigma^z_i(t)\sigma^z_k(t^\prime)\rangle$ for a simple choice of coupling constants given by the three dimensional, cubic, Heisenberg model with nearest-neighbor interactions.
Before proceeding to this task we note that in the limit of large field gradient $|B_i-B_k|\gg J^\parallel_{ik}$, the integrand of Eq. (\[Gamma\]) rapidly oscillates and therefore the value of the integral decreases with the growth of $|B_i-B_k|$. In the limit of vanishing rate $\langle\sigma^z_i(t)\sigma^z_k(t^\prime)\rangle\simeq \delta_{ik}$, we find $$\nonumber
K_{ik}(t-t^\prime)\simeq \kappa_{ik}= \sum_{m\neq\{i,k\}}(J_{mk}^\parallel-J_{mi}^\parallel)^2.$$ Evaluating then the Gaussian integral in Eq. (\[Gamma\]), we obtain $$\label{Gamma1}
\Gamma_{ik}\simeq \frac{4\pi^{1/2}( J^\perp_{ik})^2}{\sqrt{2 \kappa_{ik}}}\exp{\!\left[-\frac{\delta\! B_{ik}^2}{2 \kappa_{ik}}\right]}.$$ Thus we predict that the rate at which flip-flops occur, and therefore the rate at which spin diffusion occurs, is very small for $|B_i-B_k|\gg J^\parallel_{ik}$.
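For orientation, the Gaussian suppression predicted by Eq. (\[Gamma1\]) can be evaluated directly; the short Python sketch below is a plain transcription of that formula, with arbitrary illustrative values for $J^\perp_{ik}$, $\kappa_{ik}$ and the field difference (the parameter values are not taken from any result in the text).

```python
# Evaluate the large-gradient flip-flop rate of Eq. (Gamma1):
#   Gamma_ik ~ 4*sqrt(pi)*Jperp**2/sqrt(2*kappa) * exp(-dB**2/(2*kappa))
import numpy as np

def gamma_large_gradient(j_perp, kappa, delta_b):
    return (4.0 * np.sqrt(np.pi) * j_perp**2 / np.sqrt(2.0 * kappa)
            * np.exp(-delta_b**2 / (2.0 * kappa)))

j_perp, kappa = 1.0, 10.0           # illustrative values (units of J)
for delta_b in (5.0, 10.0, 20.0):   # field difference between the two sites
    print(delta_b, gamma_large_gradient(j_perp, kappa, delta_b))
```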
Example: Heisenberg model {#heisenberg}
=========================
We now consider a particular example; [the]{} Heisenberg model on a cubic lattice with an external spatially varying magnetic field. The Hamiltonian of the system can be cast in the form $$\label{ham1}
H=\sum_i B({\bf r}_i)\sigma^z_{{\bf r}_i}+ J \sum_{i, \nu, \alpha}\sigma^\alpha_{{\bf r}_i}\,\sigma^\alpha_{{\bf r}_i+{\bf e}_\nu}.$$ where $i=(i_x, i_y, i_z)$, ${\bf r}_i=i_x a {\hat {\bf x}}+i_y a {\hat {\bf y}}+i_z a {\hat {\bf z}}$ ($a$ being the lattice spacing), $\nu = 1, ..., 6$ enumerates the unit vectors which point to the nearest neighbors: ${\bf e}_{1(2)}=\pm\hat {{\bf x}}$, ${\bf e}_{3(4)}=\pm\hat {{\bf y}}$ and ${\bf e}_{5(6)}=\pm\hat {{\bf z}}$, and finally $\alpha=x,y,z$. We also assume that the external field varies linearly in space, $B({\bf r})= b_0 {\bf r}\cdot {\bf g}$ where ${\bf g}$ is a unit vector which points in the direction of variation. The Hamiltonian (\[ham1\]) obviously belongs to the class of Hamiltonians defined in Eq. (\[ham\]).
The equation of motion for $\sigma^z_{{\bf r}_i}$ is given by Eq. (\[sigz3\]), which, for the Hamiltonian in Eq. (\[ham1\]) reads $$\label{ham2}
\partial_t\langle\sigma^z_{{\bf r}_i}\rangle =\sum_\nu\Gamma_{\bf{e}_\nu}\left[\langle\sigma^z_{{\bf r}_i+{\bf e}_\nu}\rangle-\langle\sigma^z_{{\bf r}_i}\rangle\right] +\xi_{{\bf r}_i}(t)$$ and the noise $\xi_{{\bf r}_i}(t)$ is correlated according to Eq. (\[fluct\]), which becomes $$\label{noise2}
\langle\xi_{{\bf r}_i}(t)\xi_{{\bf r}_j}(t^\prime)\rangle = 2\delta(t-t^\prime)\sum_\nu
\Gamma_{\bf{e}_\nu} (\delta_{{\bf r}_i\,{\bf r}_j}-\delta_{{\bf r}_i+{\bf e}_\nu \,{\bf r}_j}).$$
Eqs. (\[ham2\]) and (\[noise2\]) can be readily diagonalized by a Fourier transform method. Writing $$\label{fourier}
\sigma^z_{{\bf r}_i}(t) = \int_{-\infty}^{\infty}\frac{d\omega}{2\pi} \int_{-\pi/a}^{\pi/a}\frac{d^3{\bf k}}{(2\pi)^3}
{\tilde\sigma}^z({\bf k}, \omega) e^{i{\bf r}_i {\bf k}+i\omega t},$$ where the ${\bf k}$-integral is taken over the first Brillouin zone, (a cube with an edge $2\pi/a$), we obtain from Eq. (\[ham2\]) that $$\label{fourier2}
\langle |{\tilde\sigma}^z({\bf k},\omega)|^2\rangle =
\frac{\langle |{\tilde\xi}({\bf k},\omega)|^2\rangle}{ \omega^2 +
\{\sum_\nu \Gamma_{\bf{e}_\nu}\left[1-\cos{(a\,{\bf e}_\nu {\bf k})}\right]\}^2},$$ with $i=x, y, z$ and ${\tilde\xi}({\bf k}, \omega)$ being the Fourier transform of $\xi_{{\bf r}_i}(t)$, defined similarly to Eq. (\[fourier\]). From Eq. (\[noise2\]) $$\label{noise2f}
\langle |{\tilde\xi}({\bf k},\omega)|^2\rangle = 2\sum_\nu \Gamma_{\bf{e}_\nu}\left[1-\cos{(a\,{\bf e}_\nu {\bf k})}\right],$$ and, substituting this into Eq. (\[fourier2\]) and taking the inverse Fourier transform, we obtain $$\begin{aligned}
\langle \sigma^z_{{\bf r}_i}(t) \sigma^z_{{\bf r}_{i^\prime}}(0) \rangle = \,&e^{-t\sum_\nu \Gamma_{\bf{e}_\nu}}\times\nonumber
\\
&\qquad I_{n_x}(2\Gamma_{{\bf e}_1} t)I_{n_y}(2\Gamma_{{\bf e}_3} t)I_{n_z}(2\Gamma_{{\bf e}_5} t),\label{corr}\end{aligned}$$ where $I_n(z)$ is the modified Bessel function of complex argument [@grad] and $n_x = |i_x-i_x^\prime|$, etc. At sufficiently large distances (and times) Eq. (\[corr\]) describes (anisotropic) diffusion with diffusion constants $D_{\nu\nu}\sim \Gamma_{\bf{e}_\nu} a^2$.
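As a numerical illustration, the lattice autocorrelation of Eq. (\[corr\]) can be evaluated with standard Bessel-function routines; the Python sketch below assumes the isotropic case $\Gamma_{{\bf e}_1}=\Gamma_{{\bf e}_3}=\Gamma_{{\bf e}_5}=\Gamma$ (realised, as discussed below, for a gradient along the cube diagonal), and the value of $\Gamma$ used is an arbitrary illustrative choice.

```python
# Evaluate the on-lattice autocorrelation of Eq. (corr) for the isotropic case
# Gamma_e1 = Gamma_e3 = Gamma_e5 = Gamma (illustrative value below).
import numpy as np
from scipy.special import iv            # modified Bessel function I_n

def corr(n, t, gamma):
    """<sigma^z_r(t) sigma^z_r'(0)> for lattice offset n = (nx, ny, nz)."""
    pref = np.exp(-6.0 * gamma * t)     # exponent sums over the six directions
    return pref * iv(n[0], 2*gamma*t) * iv(n[1], 2*gamma*t) * iv(n[2], 2*gamma*t)

gamma = 0.5
for t in (0.0, 1.0, 5.0, 20.0):
    print(t, corr((0, 0, 0), t, gamma), corr((1, 0, 0), t, gamma))
```

At long times the on-site value falls off as $(4\pi\Gamma t)^{-3/2}$, the expected diffusive behaviour.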
![Numerical solution of the integral Eq. , showing the rate $\Gamma$ as a function of magnetic field gradient $b_0$.[]{data-label="fig1"}](GammaVsGradientMathematica.eps){width="8cm"}
The rates $\Gamma_{\bf{e}_\nu}$ are yet to be determined. They can be found from Eq. (\[Gamma\]). Note that while for arbitrary direction of the field gradient ${\bf g}$ the rates $\Gamma_{\bf{e}_\nu}$, $\Gamma_{\bf{e}_{\nu'}}$ differ from each other, they are equal ($\Gamma_{\bf{e}_\nu}\equiv\Gamma$) for ${\bf g}={\bf g}_0=(1/\sqrt{3})({\hat {\bf x}}+{\hat {\bf y}}+{\hat {\bf z}})$, i.e., when the field gradient points along the main diagonal of the cube formed by the unit vectors $\hat {{\bf x}}$, $\hat {{\bf y}}$ and $\hat {{\bf z}}$. In this case Eq. (\[Gamma\]) reduces to $$\label{Gamma2}
\Gamma = 4J^2\int_{-\infty}^\infty ds \,e^{2ib_0 s/\sqrt{3}}e^{-4\int_0^{|s|}d\mu K(\mu)\left(|s|-\mu\right)},$$ where $K(\mu)$ is the correlation function of Eq. , which is now independent of the indices $i$ and $k$, due to our convenient choice of magnetic field gradient direction ${\bf g}$, which makes the diffusion process isotropic. $K(\mu)$ can be easily expressed in terms of $\langle \sigma^z_{{\bf r}_i}({\mu}) \sigma^z_{{\bf r}_j}(0)\rangle$: $$\begin{aligned}
\nonumber
K({\mu}) = \,&\frac{1}{2}J^2 \left(\sum_{\nu\neq1}\sum_{\nu^\prime\neq1}\langle\sigma^z_{{\bf e}_\nu}({\mu})
\sigma^z_{{\bf e}_{\nu^\prime}}(0)\rangle\right.
\\&\left.\label{corr2}
-\sum_{\nu\neq1}\sum_{\nu^\prime\neq2}\langle\sigma^z_{{\bf e}_\nu}({\mu}) \sigma^z_{{\bf e}_1+{\bf e}_{\nu^\prime}}(0)\rangle\right),\end{aligned}$$ where we have chosen to calculate $K$ between sites ${\bf r}_i=(0,0,0)$ and ${\bf r}_j={\bf e}_1$ (and then relied on the isotropy of all directions in the lattice). Using Eq. (\[corr\]) we obtain $K(\mu) = \frac{1}{2}J^2 e^{-6\Gamma \mu}f(2\Gamma \mu)$, where $$\begin{aligned}
f(x) = &5I_0^3(x)+16 I_0(x) I_1^2(x)+ 4 I_0^2(x) I_2(x) \nonumber
\\\nonumber
&- 4I_0^2(x) I_1(x)-8I_1^3(x)- 12 I_0(x)I_1(x)I_2(x) \nonumber
\\
&- I_0^2(x)I_3(x).\label{f}\end{aligned}$$ Substituting this newly found expression for $K(\mu)$ into Eq. (\[Gamma2\]) we obtain an integral equation for $\Gamma$. One can solve this integral equation numerically to find $\Gamma/J$ as a function of $b_0/J$ (see Appendix \[appB\]); the results are shown in Fig. \[fig1\]. For $b_0/J\gg 1$ the value of $\Gamma$ is consistent with Eq. (\[Gamma1\]), which for the present case reduces to $$\label{Gamma3}
{\Gamma \simeq 4J\sqrt{\frac{\pi}{20}}e^{-b_0^2/(60J^2)}.}$$ The analytic solution is also shown in Fig. \[fig1\] for comparison. We find that the analytic and numerical solutions agree for $b_0\gtrsim 10J$.
Influence of relaxation processes {#relaxation}
=================================
In this section we consider the influence of external noise on the spin-spin correlation function. We consider a model described by the Hamiltonian $$\label{hamnoise}
{\tilde H} = H + \sum_{i,\alpha} \eta^\alpha_i(t)\,\sigma^\alpha_i,$$ where $H$ is given by Eq. (\[ham\]) and $\eta^\alpha_i(t)$ is a fluctuating magnetic field. The index $i$ runs over lattice sites, and $\alpha=x,y,z$. In reality such a field may arise due to phonons (for instance in semiconductors) or conduction electrons (for instance in metals). We will assume that $\langle\eta^\alpha_i(t) \eta^{\beta}_{j}(t^\prime)\rangle = \delta_{\alpha\beta}\,\delta_{ij}\,\Lambda(t-t^\prime)$, where $\Lambda(t)$ is some even function which decays to zero over some time scale.
We follow a procedure similar to that of Section \[sec:model\]. By calculating commutation relations, we find $$i\partial_t\sigma_k^z=4\sum_{j\neq k}J_{jk}^\perp\left(\sigma_k^+\sigma_j^--\sigma_j^+\sigma_k^-\right)+
4\left(\eta_k^-\sigma_k^+-\eta_k^+\sigma_k^-\right)\label{sigz}$$ where $\eta_k^\pm=\frac{1}{2}\left(\eta_k^x\pm i\eta_k^y\right)$. $$i\partial_t\sigma_k^+=-2B_k^{\rm eff}\sigma_k^++2\eta_k^+\sigma_k^z+2\sum_{j\neq k}J_{jk}^\perp\sigma_k^z\sigma_j^+
\label{sig+}$$ where $B_k^{\rm eff}=B_k+\sum_{j\neq k}J^\parallel_{jk}\sigma_j^z+\eta_k^z$ is the effective magnetic field at site $k$. Also $$\label{sig-}
i\partial_t\sigma_k^-=2B_k^{\rm eff}\sigma_k^--2\eta_k^-\sigma_k^z-2\sum_{j\neq k}J_{jk}^\perp\sigma_k^z\sigma_j^-.$$ Finally, $$\begin{aligned}
\nonumber
i\partial_t\left(\sigma_j^+\sigma_k^-\right)=\,&2\Delta\!B_{kj}^{\rm eff}\sigma_j^+\sigma_k^-+J_{jk}^\perp\left(\sigma_j^z-\sigma_k^z\right)
+\\&\nonumber
2\left[\eta_j^+\sigma_j^z\sigma_k^--\eta_k^-\sigma_j^+\sigma_k^z\right]
+\\&\label{sig+sig-}
2\sum_{i\neq\{j,k\}}\left[J_{ij}^\perp\sigma_j^z\sigma_i^+\sigma_k^--J_{ik}^\perp\sigma_k^z\sigma_j^+\sigma_i^-\right]\end{aligned}$$ where $\Delta\!B_{kj}^{\rm eff}=\delta\!B_{kj}^{\rm eff}+\eta_k^z-\eta_j^z$, and $j\neq k$. Analogously to Eqs. and of Section \[sec:model\], $\Delta\!B_{kj}^{\rm eff}$ consists of a constant part, given by ${\delta\!B_{kj}}$ \[see Eq. \], and a fluctuating part, which is now given by $$\Delta\!B_{kj}^{\rm fluct}(t)=\delta\!B_{kj}^{\rm fluct}(t)+\eta_k^z(t)-\eta_j^z(t),$$ compared with Eq. . We wish to integrate Eqs. , , and , and thereby find a closed form for the time evolution of $\sigma_k^z$ from Eq. .
We start with Eqs. , and and apply the same logic as in Section \[sec:model\] regarding the self-averaging nature of the summations (due to a fluctuating Larmor precession frequency). What is left can easily be integrated to give $$\begin{aligned}
\sigma_k^\pm(t)=\,&\mp2i\int_0^t \left[e^{\pm2i\int_s^t B_k^{\rm eff}(\tau)d\tau}\eta_k^\pm(s)\sigma_k^z(s)\right]ds+\nonumber\\
&\quad c_k^\pm e^{\pm2i\int_0^t B_k^{\rm eff}(\tau)d\tau},\end{aligned}$$ where $c_k^\pm=\sigma_k^\pm({t=}0)$ gives the contribution from the initial conditions. Looking now at Eq. and ignoring the summation term[,]{} we find $$\begin{aligned}
\sigma_j^+\sigma_k^-(t)\!=\!-i\int_0^t\Delta_{jk}'(s,t)
\Bigg\{\left[J_{jk}^\perp+2\eta_j^+(s)\sigma_k^-(s)\right]\sigma_j^z(s)-\nonumber\\
\!\!\left[J_{jk}^\perp+2\eta_k^-(s)\sigma_j^+(s)\right]\sigma_k^z(s)\Bigg\}ds+c_{jk}\Delta_{kj}(0,t)
\label{sig+sig-soln}\end{aligned}$$ where $\Delta_{jk}'(s,t)=e^{2i\int_s^t\!\Delta\! B_{jk}^{\rm eff}(\tau)d\tau}$, and $c_{jk}=\sigma_j^+\sigma_k^-(0)$ is the initial condition. Substituting Eq. into Eq. , we find that the terms $2\eta_j^+(s)\sigma_k^-(s)$ and $2\eta_k^-(s)\sigma_j^+(s)$ within the square parentheses of Eq. are summed over, and hence can be ignored, due to our self-averaging approximation. We then proceed with the same mean-field approximation as in Section \[sec:model\], this time replacing $\Delta_{jk}'(s,t)\rightarrow\langle\Delta_{jk}'(s,t)\rangle$, which is again assumed to be a Gaussian random variable, such that $$\langle\Delta_{kj}'(s,t)\rangle=e^{-2i\delta\!B_{kj}(t-s)}e^{-4\int_0^{|t-s|}K_{kj}'(\tau)\left(|t-s|-\tau\right)d\tau}$$ where $$\begin{aligned}
K_{kj}'(t-t')&=\langle\Delta\!B_{kj}^{\rm fluct}(t)\Delta\!B_{kj}^{\rm fluct}(t')\rangle\nonumber\\
&=K_{kj}(t-t')+2\Lambda(t-t').\end{aligned}$$ Proceeding in this way, Eq. for the time evolution of $\sigma_k^z$ becomes, $$\begin{aligned}
\partial_t\sigma_k^z(t)=&\sum_{j\neq k}\Gamma_{jk}'\left[\sigma_j^z(t)-\sigma_k^z(t)\right]+\xi_k(t)+\eta_k(t)-\nonumber\\
&8\int_0^t\Big\{\eta_k^-(t)e^{2i\int_s^tB_k^{\rm eff}(\tau)d\tau}\eta_k^+(s)+\nonumber\\
&\qquad\eta_k^+(t)e^{-2i\int_s^tB_k^{\rm eff}(\tau)d\tau}\eta_k^-(s)\Big\}\sigma_k^z(s)ds\label{sigzagain}\end{aligned}$$ where $$\xi_k(t)=4i\sum_{j\neq k}J_{jk}^\perp\left[c_{jk}\Delta_{kj}'(0,t)-c_{kj}\Delta_{jk}'(0,t)\right]$$ and $$\eta_k(t)=4i\left[\eta_k^+e^{-2i\int_0^tB_k^{\rm eff}(\tau)d\tau}c^--\eta_k^-e^{2i\int_0^tB_k^{\rm eff}(\tau)d\tau}c^+\right]$$ are both noise terms, arising from the initial conditions of $\sigma_j^+\sigma_k^-$ and $\sigma_j^\pm$ respectively. The new rate, $\Gamma'_{jk}$ is now given by $$\begin{aligned}
\Gamma_{jk}'=&4(J_{jk}^\perp)^2\int_{-\infty}^{\infty}\exp\!\!\Bigg[2i\delta\! B_{jk}s-\nonumber\\
&\qquad
\left.4\int_0^{|s|}K_{kj}'(\mu)\left(|s|-\mu\right)d\mu\right]ds,\label{Gammaprime}\end{aligned}$$ where we have employed the Markov approximation, to remove $\left[\sigma_j^z(t)-\sigma_k^z(t)\right]$ from the integral, and used the quickly decaying property of $\langle\Delta_{jk}'(s,t)\rangle$ to extend the upper and lower limits of the integral to $\pm\infty$.
The integral term in Eq. can be greatly simplified by replacing the terms in the curly brackets by their average value. This approximation is consistent with our assumption that the fluctuating local magnetic field at site $k$ and the individual dynamics of a single spin at site $k$ evolve on very different time scales. When a large number of individual spins contribute to the local effective field $B_k^{\rm eff}$ at site $k$ (as is the case for systems with long-range interactions or high dimensionality) the fluctuations will appear Gaussian, and the term in Eq. involving the integral becomes $$\begin{aligned}
-8\int_0^t\Lambda(t-s)\cos&\left[2B_k(t-s)\right]\times\nonumber\\
&e^{-4\int_0^{|t-s|}G_k(\tau)\left[|t-s|-\tau\right]d\tau}\sigma_k^z(s)ds\label{integralterm}\end{aligned}$$ where $$\begin{aligned}
G_k(\tau)=&\sum_{m\neq k}\sum_{n\neq k}J_{mk}^\parallel J_{nk}^\parallel\langle\sigma_m^z(0)\sigma_n^z(\tau)\rangle+
\Lambda(\tau).\end{aligned}$$ The term preceding $\sigma_k^z(s)$ in Eq. decays much faster than the evolution of $\sigma_k^z(s)$, so we can apply the Markov approximation $\sigma_k^z(s)\rightarrow\sigma_k^z(t)$, and extending the upper and lower limits of integration to $\pm\infty$ we find $$\label{sigz4}
\partial_t\langle\sigma_k^z\rangle=\sum_{j\neq k}\Gamma_{jk}'\langle\sigma_j^z-\sigma_k^z\rangle\!
-\! \Upsilon_k\langle\sigma_k^z\rangle
+\xi_k+\eta_k$$ where $$\Upsilon_k=4\int_{-\infty}^\infty\Lambda(s)\cos\left(2B_ks\right)e^{-4\int_0^{|s|}G_k(\mu)\left[|s|-\mu\right]d\mu}ds$$ gives a new rate at which the spin direction at site $k$ relaxes towards a completely random orientation of either $\pm1$. This relaxation mechanism is entirely due to the fluctuating external magnetic field terms $\eta^\alpha_i(t)$ in the Hamiltonian of Eq. .
Example: white noise
--------------------
If we consider the following simple example $$\Lambda(t)=\lambda\,\delta(t)$$ then we find $\langle\eta_j(t)\eta_k(t')\rangle=8\delta_{jk}\lambda\,\delta(t-t')$ and $\Upsilon_k=4\lambda$. We wish to examine two different limiting cases:
1. $\sqrt{\lambda}\ll J_{ik}$ and $J_{ik}\ll B_i-B_k$
2. $J_{ik}\ll \sqrt{\lambda}$ and $J_{ik}\ll B_i-B_k$
In **case 1.** the external noise is sufficiently weak that the relaxation time-scale is essentially infinite, in which case we can set $\langle\sigma_m^z(t)\sigma_n^z(t')\rangle\simeq\delta_{mn}$. In this way we find $K'_{jk}(t)=\kappa_{jk}+2\lambda\delta(t)$, where $$\kappa_{jk}=\sum_{m\neq\{j,k\}}\left(J_{mk}^\parallel-J_{mj}^\parallel\right)^2.$$ Continuing with the calculation, we find the following expression for the rate: $$\Gamma_{jk}'=8(J_{jk}^\perp)^2\int_0^\infty ds\cos\left[2(B_k-B_j)s\right]e^{-2[\kappa_{jk}s^2+4\lambda s]}.$$ This integral can be expanded to first order in the small parameter $\lambda$, to give $$\begin{aligned}
\nonumber
\Gamma_{jk}'&\simeq8(J_{jk}^\perp)^2\!\int_0^\infty \!\!ds\cos\left[2\delta\!B_{jk}s\right]e^{-2\kappa_{jk}s^2}\left(1-8\lambda s\right)
\\&=4(J_{jk}^\perp)^2\Biggl[\frac{\sqrt{\pi}\exp\left({-\frac{(\delta\! B_{jk})^2}{2\kappa_{jk}}}\right)}{\sqrt{2\kappa_{jk}}}-
\frac{4\lambda}{\kappa_{jk}}+\nonumber\\
&\qquad\qquad\qquad\qquad
\frac{4\sqrt{2}\delta\! B_{jk}\lambda F_{\rm D}\left(
\frac{\delta\! B_{jk}}{\sqrt{2\kappa_{jk}}}\right)}{\kappa_{jk}^{3/2}}\Biggr]\label{case1}\end{aligned}$$ where $F_{\rm D}(x)=e^{-x^2}\int_0^x e^{y^2}dy$ is Dawson’s integral [@abramowitz]. This result is shown as the solid lines of Fig. \[fig2\] for the case of the Heisenberg model on a cubic lattice (as discussed in Section \[heisenberg\]). We can further approximate Dawson’s integral, in the case of a large gradient $\delta\! B_{jk}\gg\sqrt{2\kappa_{jk}}$, by $F_{\rm D}(x)\simeq\frac{1}{2x}+\frac{1}{4x^3}+{ {\cal{O}} }(x^{-5})$ for large $|x|$. From this we find the asymptotic behaviour of the rate $$\Gamma_{jk}'\rightarrow\frac{16(J_{jk}^\perp)^2\lambda}{\delta\!B_{jk}^2},$$ valid when $\delta\!B_{jk}\gg J_{jk}^\perp$.
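For a quick numerical check, Eq. (\[case1\]) can be evaluated with the Dawson function available in `scipy.special`; the Python sketch below is an illustration only, the parameter values are arbitrary choices consistent with the case-1 conditions, and the last column shows the asymptotic form $16(J^\perp_{jk})^2\lambda/\delta\!B_{jk}^2$ for comparison.

```python
# Weak-noise rate of Eq. (case1), using scipy's Dawson function F_D,
# compared with its large-gradient limit 16*Jperp**2*lam/dB**2.
import numpy as np
from scipy.special import dawsn

def gamma_case1(j_perp, kappa, lam, delta_b):
    x = delta_b / np.sqrt(2.0 * kappa)
    return 4.0 * j_perp**2 * (np.sqrt(np.pi) * np.exp(-x**2) / np.sqrt(2.0 * kappa)
                              - 4.0 * lam / kappa
                              + 4.0 * np.sqrt(2.0) * delta_b * lam * dawsn(x) / kappa**1.5)

j_perp, kappa, lam = 1.0, 10.0, 0.01      # illustrative values, sqrt(lam) << J
for delta_b in (20.0, 50.0, 100.0):       # gradients with J << delta_b
    print(delta_b, gamma_case1(j_perp, kappa, lam, delta_b),
          16.0 * j_perp**2 * lam / delta_b**2)
```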
In **case 2.** the external noise is sufficiently strong that it dominates over the interaction-induced spin-diffusion process. We can then approximate Eq. as $$\partial_t\langle\sigma_k^z\rangle\simeq- \Upsilon_k\langle\sigma_k^z\rangle
+\eta_k.$$ In this case one would observe exponential decay in the autocorrelation function (due to the noise term $\eta_k$) given by $$\langle\sigma_k^z(t)\sigma_j^z(t')\rangle=\delta_{jk}e^{-4\lambda |t-t'|}.$$ This leads to $K_{jk}'(t)=2\lambda\delta(t)+\kappa_{jk}e^{-4\lambda|t|}\simeq2\lambda\delta(t)$, which gives us the following expression for the rate: $$\label{lorentzian}
\Gamma_{jk}'=16(J_{jk}^\perp)^2\frac{\lambda}{\delta\!B_{jk}^2+16\lambda^2}.$$ This result is shown as the dashed lines of Fig. \[fig2\] for the case of the Heisenberg model on a cubic lattice (as discussed in Section \[heisenberg\]).
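The Lorentzian dependence of Eq. (\[lorentzian\]) on the gradient is easily tabulated; the short sketch below is a direct transcription of the formula, with arbitrary illustrative values of $J^\perp_{jk}$ and $\lambda$ chosen to satisfy the case-2 conditions.

```python
# Strong-noise (case 2) rate of Eq. (lorentzian): a Lorentzian in the field
# gradient, evaluated for illustrative parameter values (J << sqrt(lam)).
def gamma_case2(j_perp, lam, delta_b):
    return 16.0 * j_perp**2 * lam / (delta_b**2 + 16.0 * lam**2)

for delta_b in (0.0, 20.0, 50.0, 100.0):
    print(delta_b, gamma_case2(1.0, 10.0, delta_b))
```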
Thus, in both cases 1. and 2. we find that the rate now decays as the inverse of the gradient squared, $\sim\left(J_{jk}^\perp/\delta\!B_{jk}\right)^2$. This provides a huge contrast with the noiseless situation of Section \[sec:model\], where the rate decays as a Gaussian in the gradient, $\sim\exp\left[-\delta\!B_{jk}^2/(2\kappa_{jk})\right]$ \[see Eq. (\[Gamma1\])\]. The presence of the noise provides a means for spin diffusion to occur over a much faster time-scale (in the presence of a strong external magnetic field gradient).
![The diffusion rate $\Gamma_{jk}'/J$ as a function of magnetic field gradient $(B_j-B_k)/J$ for a variety of different values of $\lambda/J$. The solid lines show the result of Eq. for case 1. The dashed lines show the result of Eq. for case 2. The actual model is taken to be the same as the Heisenberg model discussed in Section \[heisenberg\]. []{data-label="fig2"}](WhiteNoiseFig3.eps){width="8cm"}
Discussion and summary {#sec:discussion}
======================
In Section \[sec:model\] of this article we have derived a dynamical mean-field theory for systems of spin-half particles on a lattice, in the presence of a nonuniform, external magnetic field. The theory is applicable in the case where the magnetic field gradient between two lattice sites is large compared to the interactions. Additionally, the number of interacting pairs should be large (as is the case for systems of high dimensionality or long range interactions). This condition is necessary to ensure the fluctuations of the [*effective*]{} field at each lattice site are Gaussian (the central limit theorem). One of the most notable approximations we made in deriving this theory of spin diffusion was the exclusion of the summation in Eq. . With this sum excluded, we were able to derive a solution to Eq. , shown in Eq. . We can use this expression for $\sigma_j^+\sigma_k^-(t)$ to estimate the size of the summation term in Eq. , and thus estimate the error in this approximation.
First, we note from Eqs. and that the size of $\sigma_j^+\sigma_k^-(t)$ is roughly $\Gamma_{jk}/(2J_{jk}^\perp)$. Thus, if we substitute our expression for $\sigma_i^+\sigma_k^-$ back into Eq. , we see that the size of the summation term is approximately $\max\left\{\Gamma_{ni},\Gamma_{nk}\right\}$, where $n$ runs over lattice sites which are [*mutual*]{} neighbors of sites $i$ and $k$. Assuming a certain level of isotropy exists within the system, we conclude that, provided $J_{ik}^\perp\gg\Gamma_{ik}$ for all interacting pairs $i$ and $k$, the exclusion of the summation in Eq. is justified. With all conditions satisfied, the equation of motion for the $z$-component of the individual spins is a Langevin equation with additive noise, see Eqs. .
If the condition $J_{ik}^\perp\gg\Gamma_{ik}$ were not satisfied, and the summation in Eq. could not be justifiably ignored, we would expect a similar analysis to be possible. The summation term would manifest as multiplicative noise in the coefficients $\Gamma_{ik}$ of the Langevin equation , in addition to the additive noise which we have derived. Further work on this issue, however, is still in progress, and the details are deferred to a future publication.
The model can be described in terms of simple physical principles, as illustrated in Fig. \[fig3\]. Interactions between sites $i$ and $j$ can cause spin *flip-flopping*, i.e. $|i_{\uparrow}j_{\downarrow}\rangle\rightleftarrows |i_{\downarrow}j_{\uparrow}\rangle$. This process occurs when sites $i$ and $j$ have opposite spin, and does not conserve energy when the external field gradient is nonzero (due to the different Zeeman energies). These sites $i$ and $j$, however, also interact with all other neighboring lattice sites (the number of which is assumed to be large). A crucial approximation in our model is to treat all remaining sites as composing an effective bath, or rapidly fluctuating environment, which sites $i$ and $j$ inhabit \[see Fig. \[fig3\] (b)\]. In this way, one can derive the rate at which the spin *flip-flopping* occurs (we have labelled this quantity $\Gamma_{ij}$), and naturally it will depend on the bath parameters. To be more specific, it depends on the correlation functions between neighboring sites within the bath. The final step then is to determine the rate $\Gamma_{ij}$ that is self-consistent with the bath, i.e. the value of $\Gamma_{ij}$ which yields the [*same correlation function*]{} between neighboring sites as that from which it was derived.
We find the rate $\Gamma_{ij}$ decays very quickly with increasing field gradient. Equation predicts the rate decays in the same way as a Gaussian distribution. From a numerical study of the cubic Heisenberg lattice (presented in Section \[heisenberg\]), we expect this prediction to be accurate for $B_i-B_j\gtrsim10J_{ij}$ (see Fig. \[fig1\]). This result implies that the observation of spin diffusion in systems with a very strong magnetic field gradient is likely to be difficult as the diffusion time-scales would be very large.
However, in Section \[relaxation\] we studied the influence of external noise on this rate. The presence of the external noise turns out to be favourable for increasing the diffusion rates. We made use of the same set of assumptions in deriving a second Langevin equation \[see Eq. \]. In contrast to Section \[sec:model\], the Langevin equation now includes a decay constant, denoted $\Upsilon_k$, which relaxes the system down into a state where the orientation of the magnetic moment is completely random, i.e. $\langle\sigma_k^z\rangle=0$. Spin flip-flops still occur in the system, and the rate at which they occur, $\Gamma_{ij}'$, is affected by the noise. As a general rule, the rate $\Gamma_{ij}'$ increases with increasing noise, as is illustrated in Fig. \[fig2\]. In the limiting case where the external noise is far greater than both the interaction coupling and the external field gradient, we find that the rate $\Gamma_{ij}'$ decays in the same way as a Cauchy-Lorentz distribution, see Eq. . This predicted increase in the rate may help to explain experiments where diffusion has purportedly been observed in systems with very large magnetic field gradients.
Acknowledgements
================
We thank Olexander Chumak, Chris Hammel and Semion Saykin for valuable discussions. The work is supported by the US DOE, and, in part, by ONR and NAS/LPS. Andrew Sykes gratefully acknowledges the support of the U.S. Department of Energy through the LANL/LDRD Program for this work.
Perturbation theory for the two-body problem in a fluctuating external field {#appA}
============================================================================
Consider the following time dependent Hamiltonian describing two spin-half particles located at sites 1 and 2, interacting via an exchange interaction, $$\hat{H}=B_1(t)\sigma_1^z+B_2(t)\sigma_2^z+J\left(\sigma_1^x\sigma_2^x+\sigma_1^y\sigma_2^y+\sigma_1^z\sigma_2^z\right)$$ where the external magnetic field $B_i(t)=B_i+b_i(t)$ consists of a constant part and a fluctuating part. We wish to calculate the probability of the spins [*flip-flopping*]{} in time $t$, that is $$\label{p}
p(t)=\Big|\langle 1_{\downarrow}2_{\uparrow}|\hat{U}(t,0)|1_{\uparrow}2_{\downarrow}\rangle\Big|^2$$ where $\hat{U}(t,0)$ is the time evolution operator for $\hat{H}$. Splitting the full Hamiltonian up into a noninteracting and an interacting part, $$\begin{aligned}
\hat{H}_0&=B_1(t)\sigma_1^z+B_2(t)\sigma_2^z\\
\hat{V}&=J\left(\sigma_1^x\sigma_2^x+\sigma_1^y\sigma_2^y+\sigma_1^z\sigma_2^z\right)\end{aligned}$$ and moving to the interaction picture, $|\Psi_{\rm S}(t)\rangle=\hat{U}_0(t,0)|\Psi_{\rm I}(t)\rangle$, where $\hat{U}_0(t,0)=\exp\left[-i\int_{0}^t\hat{H}_0(\tau)d\tau\right]$ is the time evolution operator of the noninteracting Hamiltonian, we define $\hat{V}_I(t)=\hat{U}_0(t,t_0)\hat{V}\hat{U}_0(t_0,t)$ and $\hat{U}_I(t,0)=\exp_+\left[-i\int_{0}^t\hat{V}_I(\tau)d\tau\right]$, where $\exp_+$ denotes the usual time-ordered Dyson series [@sakurai] (appropriate for noncommuting $[\hat{V}_I(t_1),\hat{V}_I(t_2)]\neq0$ when $t_1\neq t_2$).
Using this standard formalism, we approximate the full time evolution operator as, $$\begin{aligned}
\hat{U}(t,0)&=\hat{U}_0(t,0)\hat{U}_I(t,0)\nonumber\\
&\simeq\hat{U}_0(t,0)\left[1-i\int_{0}^t\hat{V}_I(\tau)d\tau\right]\end{aligned}$$ by truncating the Dyson series for $\hat{U}_I$. Substituting this approximation into Eq. and working through the calculation in a straightforward manner, we arrive at $$\begin{aligned}
\nonumber
p(t)=4 J^2\int_0^t d\tau_1\int_0^t d\tau_2\,e^{-2i(B_1-B_2)(\tau_1-\tau_2)+2i\int_{\tau_1}^{\tau_2}\delta b(s)ds}\end{aligned}$$ where $\delta b(s)=b_1(s)-b_2(s)$. At this point it is convenient to take an average over the fluctuating component of the external field, $e^{2i\int_{\tau_1}^{\tau_2}\delta b(s)ds}\rightarrow\langle e^{2i\int_{\tau_1}^{\tau_2}\delta b(s)ds}\rangle$. Assuming these fluctuations are Gaussian, and time-translationally invariant, we find $$\begin{aligned}
\nonumber
p(t)=\,&4 J^2\int_0^t d\tau_1\int_0^t d\tau_2\,e^{-2i(B_1-B_2)(\tau_1-\tau_2)}\times\\
&\quad e^{-4\int_{0}^{|\tau_1-\tau_2|}ds\langle\delta b(0)\delta b(s)\rangle\left(|\tau_1-\tau_2|-s\right)}\nonumber\\
=\,& 4J^2\int_{-t}^t d\tau\,\left(t-|\tau|\right) e^{-2i(B_1-B_2)\tau}\times\nonumber\\
&\quad e^{-4\int_{0}^{|\tau|}ds\langle\delta b(0)\delta b(s)\rangle\left(|\tau|-s\right)}.\end{aligned}$$ In the limit then, where the time $t$ is much larger than the time scale over which the final term $e^{-4\int_{0}^{|\tau|}ds\langle\delta b(0)\delta b(s)\rangle\left(|\tau|-s\right)}$ decays, the probability becomes $$\begin{aligned}
p(t)=\,&4tJ^2\int_{-\infty}^\infty \exp\!\!\Bigg[2i(B_1-B_2)s-\nonumber\\
&\qquad
\left.4\int_0^{|s|}\langle\delta b(0)\delta b(\mu)\rangle \left(|s|-\mu\right)d\mu\right]ds.\label{probability}\end{aligned}$$ This probability in Eq. should be compared to the rate at which spin flip-flops are predicted to occur from Eq. in Section \[sec:model\]. In making this comparison, we see that the approximations we have applied in deriving the equation of motion for $\sigma_k^z$ amount to treating all sites other than $j$ and $k$ as composing an effective bath (equivalent to a fluctuating external field).
Numerical algorithm for solving the integral equation {#appB}
=====================================================
For our particularly simple choice of $B({\bf r})=\frac{b_0}{\sqrt{3}}(r_x+r_y+r_z)$, the integral equation we must solve is simply Eq. with $K(\mu)$ given by Eq. . From this equation we wish to determine $\Gamma$ as a function of $b_0$; the dependence on $J$ can be removed by switching to the variables $\tilde{\Gamma}=\Gamma/J$ and $\tilde{b}_0=b_0/J$, such that we have $$\nonumber
\tilde{\Gamma} = 4\int_{-\infty}^\infty ds \,e^{\frac{2i\tilde{b}_0 s}{\sqrt{3}}}
e^{-8\int_0^{|s|}d\mu\, e^{-6\tilde{\Gamma}\mu}f(2\tilde{\Gamma}\mu)\left(|s|-\mu\right)}.$$ We then search for a root of this equation, by iterating $$\label{GammaNumerics1}
\tilde{\Gamma}^{(n+1)} = 4\int_{-\infty}^\infty \!\!\! ds \,e^{\frac{2i\tilde{b}_0 s}{\sqrt{3}}}
e^{-8\int_0^{|s|}d\mu\, e^{-6\tilde{\Gamma}^{(n)}\mu}f(2\tilde{\Gamma}^{(n)}\mu)\left(|s|-\mu\right)}.$$ for $n=0,1,2,\ldots$ up to convergence, which in our case was chosen to be $|\tilde{\Gamma}^{(n+1)}-\tilde{\Gamma}^{(n)}|<10^{-4}$. In order to choose a reasonable initial prediction for $\tilde{\Gamma}^{(0)}$ we begin the algorithm at $\tilde{b}_0=20$, and define $$\label{Gamma0}
\tilde{\Gamma}^{(0)} \simeq 4\sqrt{\frac{\pi}{20}}e^{-\tilde{b}_0^2/(60)}.$$ Once the algorithm has converged, we decrease $b_0$ by a small amount and use our previous prediction for $\tilde{\Gamma}$ as our new $\tilde{\Gamma}^{(0)}$.
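A minimal numerical sketch of this procedure is given below in Python; it is an illustration only, not the code used to produce Fig. \[fig1\], and the integration cut-off, grid size and convergence tolerance are arbitrary choices. The inner integral $\int_0^{s}g(\mu)(s-\mu)d\mu$ is built from two cumulative trapezoidal integrals, and the outer integral uses the evenness of the integrand in $s$.

```python
# Sketch of the fixed-point iteration of Eq. (GammaNumerics1); grid sizes,
# cut-offs and the tolerance are illustrative choices, not the original ones.
import numpy as np
from scipy.special import iv

def f(x):
    i0, i1, i2, i3 = iv(0, x), iv(1, x), iv(2, x), iv(3, x)
    return (5*i0**3 + 16*i0*i1**2 + 4*i0**2*i2
            - 4*i0**2*i1 - 8*i1**3 - 12*i0*i1*i2 - i0**2*i3)

def rhs(gamma, b0, s_max=6.0, n=4000):
    s = np.linspace(0.0, s_max, n)
    g = np.exp(-6.0 * gamma * s) * f(2.0 * gamma * s)
    ds = np.diff(s)
    cg  = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * ds)))
    cgs = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:]*s[1:] + g[:-1]*s[:-1]) * ds)))
    w = s * cg - cgs                                 # int_0^s g(mu)*(s - mu) dmu
    integrand = np.cos(2.0 * b0 * s / np.sqrt(3.0)) * np.exp(-8.0 * w)
    return 8.0 * np.trapz(integrand, s)              # factor 2 from evenness in s

def solve_gamma(b0, gamma0, tol=1e-4, max_iter=500):
    gamma = gamma0
    for _ in range(max_iter):
        new = rhs(gamma, b0)
        if abs(new - gamma) < tol:
            return new
        gamma = new
    return gamma

gamma = 4.0 * np.sqrt(np.pi / 20.0) * np.exp(-20.0**2 / 60.0)   # Eq. (Gamma0) seed
for b0 in np.arange(20.0, 0.0, -1.0):                           # step b0 downwards
    gamma = solve_gamma(b0, gamma)
    print(f"b0/J = {b0:5.1f}   Gamma/J = {gamma:.4f}")
```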
The issue regarding convergence/divergence of $\Gamma$ as $b_0\rightarrow0$ {#appc}
===========================================================================
As the field gradient decreases in a particular direction, the rate at which spin [*flip-flops*]{} occur in that particular direction increases, see Figure \[fig1\]. It is not clear, a priori, that the rate will remain finite in the limit of vanishing gradient. Consider, for example, the RHS of Eq. (and set $J=1$). We can rewrite this in terms of the Fourier transform of $K(\mu)=\frac{1}{2\pi}\int e^{-i\omega\mu}\tilde{K}(\omega)d\omega$, and we are only interested in the case where $b_0=0$, so we find, $${\rm RHS}(\Gamma)=4\int_{\mathbb{R}} \exp\left[-\frac{2}{\pi}\int_{\mathbb{R}}
\tilde{K}(\omega)\frac{1-\cos(\omega|s|)}{\omega^2}d\omega\right]ds.$$ Next we define $\gamma=\mu\Gamma$, in which case $$\begin{aligned}
\tilde{K}(\omega)&=2\int_{\mathbb{R}}\frac{d\gamma}{\Gamma} e^{i\frac{\omega\gamma}{\Gamma}}e^{-6\gamma}f(2\gamma)\nonumber\\
&=\frac{2}{\Gamma}H\left(\frac{\omega}{\Gamma}\right)\end{aligned}$$ where $H(x)=\int_{\mathbb{R}}d\gamma\,e^{ix\gamma}e^{-6\gamma}f(2\gamma)$. The RHS therefore becomes, $${\rm RHS}(\Gamma)=4\int_{\mathbb{R}} \exp\left[-\frac{4}{\pi}\int_{\mathbb{R}}d\eta\,
H(\eta)\frac{1-\cos(\Gamma\eta|s|)}{\Gamma^2\eta^2}\right]ds,$$ where we defined $\eta=\omega/\Gamma$. Now we make the assumption that $\Gamma$ does become very large; in this limit we find $$\frac{1-\cos(\Gamma\eta|s|)}{\Gamma^2\eta^2}\rightarrow\frac{\pi|s|}{\Gamma}\delta(\eta)$$ and therefore $${\rm RHS}(\Gamma)\rightarrow \frac{2\Gamma}{H(0)}$$ for large $\Gamma$. The quantity $H(0)$ can be calculated numerically to be $H(0)\approx2.33$, thereby indicating that the slope of the RHS is $<1$ for large $\Gamma$. From this we conclude that a finite value of $\Gamma$ will exist at $b_0=0$ which satisfies Eq. .
A. Abragam, [*Principles of Nuclear Magnetism*]{} (Oxford University Press, 1961).
C. P. Slichter, [*Principles of Magnetic Resonance*]{} (Springer-Verlag, 1978).
J. R. Klauder and P. W. Anderson, Phys. Rev. [**125**]{}, 912 (1962).
R. de Sousa and S. Das Sarma, Phys. Rev. B [**68**]{}, 115322 (2003).
W. M. Witzel and S. Das Sarma, Phys. Rev. B [**74**]{}, 033322 (2006).
S. K. Saikin, W. Yao and L. J. Sham, Phys. Rev. B [**75**]{}, 125314 (2007).
A. A. Tarasenko, P. M. Tomchuk and A. A. Chumak, [*Fluctuatsii v ob’eme i na poverhnosti tverdyh tel*]{} (in Russian, Kiev, Naukova Dumka, 1992).
Sh. Kogan, [*Electronic noise and fluctuations in solids*]{} (Cambridge University Press, 1996).
K. Blum, [*Density matrix theory and its applications*]{} (Springer Series on Atomic, Optical and Plasma Physics, 1996).
I. S. Gradshteyn and I. M. Ryzhik, [*Table of integrals, series and products*]{} (Academic Press, 2000).
J. J. Sakurai, [*Modern quantum mechanics*]{} (Addison-Wesley Publishing, 1994).
M. Abramowitz and I. A. Stegun, [*Handbook of mathematical functions*]{} (US Govt Printing Office, 1972).
---
abstract: 'In modern particle physics experiments wavelength-shifting and scintillating fibres based on plastic polymers are used for tracking and calorimetry. In this review the role of photon trapping efficiencies, transmission functions and signal response times for common multimode active fibres is discussed. Numerical simulations involving three dimensional tracking of skew rays through curved fibres demonstrate the characteristics of trapped light. Of practical interest are the parametrisations of transmission functions and the minimum permissible radius of curvature. These are of great importance in today’s experiments where high count rates and small numbers of photoelectrons are encountered. Special emphasis has been placed on the timing resolution of fibre detectors and its limitation due to variations in the path length of generated photons.'
address: 'University of Oxford, Sub-department of Particle Physics, Denys Wilkinson Bld., Keble Rd., Oxford, OX1 3RH, UK'
author:
- 'Carsten Patrick Achenbach[^1]'
date: 27 July 2003
title: Active optical fibres in modern particle physics experiments
---
Introduction
============
Optical fibres with large core diameters, i.e. where the wavelength of the light being transmitted is much smaller than the fibre diameter, are commercially available and readily fabricated, have good timing properties and allow a multitude of different designs. Multimode fibres are useful for short data-bus connections, local area networks and for multiplexing and sensor technologies. Since their first appearance in charged particle detectors of the early 1980s active optical fibres have also played an important part in the field of nuclear and particle physics. Optical fibres are commonly produced from glass, plastic and synthetic fused silica, often called silica or quartz fibre. Each type has its own advantages and drawbacks. Early glass materials were based on Cerium (Ce$^{3+}$ oxide) and have attracted some attention. For charged particle detection, plastic scintillator compositions have been emerged as the far superior material. In contrast, for data communications applications, silica fibre is the overwhelming choice. Nowadays, the low costs of plastic base materials make it possible for many particle physics experiments to use plastic fibres in large quantities.
Light is generated inside an active fibre either through interaction with the incident radiation (scintillating fibres) or through absorption and re-emission of primary light (wavelength-shifting fibres). A small fraction of the emitted light is guided via total internal reflection to the fibre end where it is detected by visible-light photon sensors. The great interest in fibres is based on the fast signal response of organic scintillators and the high spatial precision and easy handling of fibres. As the output signals of many photon sensors are short on the time-scale of electronic circuits and data acquisition systems, these detectors can be operated at high count rates. In case the active fibres are located inside a strong magnetic field region, the fibres can be spliced on clear fibres that have a higher transmission, so that the photon sensors can be placed outside the field region. This is also done if the active area of a scintillating fibre detector has to be restricted to minimise background count rates.
In response to the need for precise and fast detectors Borenstein reported on the properties of plastic scintillating fibres in the year 1981. His group had measured decay constants of a few nanoseconds in fibres. In these years, development work was focused on vertex detectors for fixed target experiments. In addition, some experiments used scintillating fibres as active targets for the study of rare processes. Active targets were formed from coherent arrays of scintillating glass fibres, plastic fibres, or capillaries filled with liquid scintillators and the targets were viewed by image intensifiers. Throughout the 1980s and 1990s substantial efforts have been devoted to the development of better glass and plastic scintillation materials and the detection of ionising radiation with scintillating fibres has been generally practised in many different ways. The main applications of active fibres in the present generation of particle physics experiments are large-area tracking detectors and fine-sampled calorimeters.
Trackers are sub-detectors which surround the interaction point to reconstruct charged particle tracks and their vertex positions at lepton or hadron accelerators. A modern system can be found at the Fermi National Accelerator Laboratory where the D$\emptyset$ Central Fiber Tracker comprises 71,680 multi-cladding fibre channels [@DZERO1995]. One of the most extensive of all fibre systems is the CHORUS tracker [@CHORUS1998]. It is based on a total of about 1.2 million plastic scintillating fibres of 0.5mm diameter which are read out via an opto-electronic system comprising image intensifier tubes and CCD (charge-coupled device) cameras in series. The long baseline neutrino oscillation experiment K2K also employs, as a component of its near detector, a scintillating fibre tracker with 274,080 fibres in total [@K2K2000]. For its signal processing, groups of 11,420 fibres are glued together to make single bundles of 12cm diameter each, with an opto-electronic read-out system similar to the CHORUS one. For the construction of the highly segmented tracker of the ATLAS experiment more than 600,000 wavelength-shifting fibres have been used [@ATLAS1994]. For fibre trackers the basic element is a fibre doublet ribbon, which is formed from two single layers of fibres, with one of the fibre layers set off relative to the other by half a fibre spacing. The stacking of fibres provides a higher light yield per channel and enough spatial overlap to avoid relying on the detection of events with only a grazing contact with the charged particle. A charged particle which crosses the gap between two fibres in one of the layers is likely to traverse the full fibre diameter in the other layer. It has been shown that a high detection efficiency and a good spatial resolution can be achieved with multiple doublets. The good timing resolution of such detectors enables a higher-level event selection in the early stage of the trigger of large detector systems. Used in conjunction with other detectors like calorimeters or muon spectrometers, such a trigger can provide a powerful signature for identification of electrons, muons, photons and event vertices. Accordingly, multi-layered structures of stacked scintillating fibres coupled to multi-anode photomultipliers became the preferred choice for some fast trigger detectors, the COMPASS trigger [@Horikawa1999] being one recent example.
Fibre calorimeters consist of dense absorber materials sampled with scintillating fibre planes to achieve a very compact geometry. Fibre calorimeters are found, for example, in the muon (g-2) experiment [@Sedykh2000] at Brookhaven and in the KLOE detector [@KLOE1996] at the DA$\Phi$NE accelerator of the INFN LNF, where lead foils are interleaved with layers of scintillating fibres. In some cases, bulk scintillators are read out via wavelength-shifting fibres, most recently in the MINOS experiment [@MINOS1998] where fibres are embedded in 8m long scintillating bars. The approved experiments for the LHC collider at CERN are relying heavily on active fibres, too. The LHCb experiment uses 6000 detector cells with wavelength-shifting fibre read-out [@LHCb1998] and the CMS experiment uses wavelength-shifting fibres embedded in scintillator plates for its sampling hadronic calorimeter [@CMS1997]. Fibre calorimeters can achieve energy resolutions comparable to full absorption calorimeters.
At high luminosity accelerators a new generation of magnetic spectrometers is being developed for fixed target experiments to reveal the nucleon structure with electromagnetic probes. Up to now large magnetic spectrometers have mainly used various types of wire chambers as coordinate detectors for momentum determination in the dispersive plane. But these detectors are slow and in the high particle flux environments of the high luminosity accelerators very fast detectors are indispensable. Furthermore, active elements of low density with long radiation and interaction lengths are necessary to avoid multiple scattering and energy loss of particles. For the same reason, mounting structures and other inactive materials have to be minimised within the detector system’s volume. These requirements can be met by stacks of scintillating fibres which are a good compromise between signal to noise ratio and trigger or tracking efficiency on the one side and the amount of material traversed by the particles on the other side. Alternative detector concepts include the newly developed carbon plated kapton straw tubes with minimised drift times. In addition, there has been an impressive progress during the last couple of years on gaseous micro-pattern detectors. These detectors have much higher rate capabilities than wire chambers. In a detector called Micromegas the gas amplification happens between a metal mesh several $\mu$m thick and a printed circuit board with metal strips about 0.2mm apart [@Giomataris1996]. Another detector called Gas Electron Multiplier (GEM), first introduced in November 1996 by Sauli [@Sauli1997], is very promising. Its key element is the GEM electrode, consisting of a polymer film about 50$\mu$m thick, metallised on both sides, with a regular array of small holes. For a spacing of 140$\mu$m the holes have diameters of about 70$\mu$m. In stacks of several GEMs amplification and induction gaps are separated and very high gas amplifications can be reached. Another type of detector to be considered as competitive to conventional detector concepts in future fixed target experiments are silicon micro-strip or micro-pad detectors. These are the detectors of choice for the LHC experiments ATLAS and CMS. They provide an extremely good spatial resolution but their timing resolution is limited to $\sigma
\le 100\,$ns by the signal to noise ratio of the solid state element. Another disadvantage of such detectors is their relatively large thickness. From the above discussion one may conclude that the fastest coordinate detectors presently available—scintillating fibres—are playing a decisive role for the latest experiments in hadron physics where luminosities and particle fluxes are challenging.
Thus, the interest in active fibres by experimental particle physicists is large. In recent years, additional applications have emerged in medical and biological dosimetry, in electron or ion beam monitoring, in activity studies of radioactive waste, and in X-ray and synchrotron radiation detection. In contrast, the basic theory describing the propagation of photons in fibres is not commonly known to particle physicists. The strict treatment of small diameter optical fibres involves electromagnetic theory applied to dielectric waveguides, which was first achieved by Snitzer [@Snitzer1961] and Kapany [@Kapany1963]. This is not a simple undertaking, especially for bent fibres where an eigenvalue equation is not available. To solve this problem many approximation techniques have been developed which are reasonably accurate for single-mode fibres. In multimode fibres, however, the electromagnetic fields get substantially modified by any curvature [@Winkler1979]. Light losses in small diameter fibres with uniform curvature have been calculated numerically but the method becomes extremely difficult for fibres of large diameters. Although these approaches provide insight into the phenomenon of total internal reflection and eventually lead to results for the field distributions and the radiation from curved fibres, it is advantageous to use ray optics for large diameter fibres where the waveguide analysis is an unnecessary complication. The optics of meridional rays in fibres was developed in the 1950s [@Kapany1957] and can be found in numerous textbooks, e.g. [@Kapany1967; @Allan1973; @Ghatak1998]. Since then, the scientific and technological progress in the field of fibre optics has been enormous.
This paper is organised as follows: section 2 starts with a review of active optical fibres in the field of detection and measurement of ionising radiation and charged particles. In section 3 a simulation code is outlined that performs a three-dimensional tracking of light rays in cylindrical fibres. The first part of section 4 discusses the trapping of light rays and the quantitative distinction between skew and meridional rays. Then, it continues with a comprehensive overview on the light yield of active fibres where special emphasis is placed on light losses in sharply curved fibres. The chapter ends with a paragraph on the radiation resistance of plastic fibres. The timing resolution of active fibres is an important and prevailing issue that is presented in section 5. Finally, the conclusions review the current research on active fibres for modern particle physics experiments. References on latest developments in scintillating fibre technology are given throughout the text as they apply to charged-particle detection.
General discussion
==================
For charged particle detection mostly fibres produced from plastic polymers are used. For several reasons they are better suited than alternative candidates. For example, they have intrinsically higher efficiency, faster signal response times, and lower material densities. In general, a typical fibre consists of a core coated with a thin ($2-5\,\mu$m) transparent cladding with a smaller index of refraction than the core has. The plastic cores of active fibres include several components. The base material ($x > 98\%$ by weight) is an organic polymer, such as polystyrene (PS) or polyvinyltoluene (PVT). It is doped with organic molecules, mostly aromatic compounds, which emit scintillation[^2] light. Total internal reflection at the core-cladding interface allows for an efficient transport of light over many metres. The cladding itself provides a protective layer around the fibre core. A fibre with the refractive index constant over the fibre core cross-section is called a [*step index*]{} fibre. The most common type of fibre used in particle physics consists of a polystyrene-based core of refractive index $n_{\it core}=$ 1.6 and a thin polymethylmethacrylate (PMMA, C$_5$H$_8$O$_2$) cladding of refractive index $n_{\it clad}=$ 1.49 (indices are given at a wavelength of 590nm). Throughout this paper I will refer to this formulation as “standard”. A more recent cladding material is fluorinated methylmethacrylate (MMA) with $n_{\it clad'}=$ 1.42. Single MMA cladding fibres have shown a poor performance in terms of absorption and mechanical stability, so that MMA now is only used in double cladding fibres. These have an inner PMMA cladding and an outer MMA cladding, which leads to a significantly increased trapping efficiency and light yield. In figure \[fig:sketch\] a cross section through a double cladding fibre is shown to illustrate the cone of trapped light in the meridional approximation. For a double cladding fibre the critical axial angle is given by $\theta_{\it crit} = \arccos n_{\it clad'}/n_{\it core} = 26.7^\circ$. In principle an active cladding is possible, but uncertain mechanical compatibility issues between the core and a chemically modified cladding have prevented such a development up to now. In this paper I will consider round fibres only, since edged cross-sections tend to entail in the manufacturing process serious problems of homogeneity at the edges. Typical diameters of fibres used in calorimetry are in the range of 0.5 – 1.5mm, for tracking detectors the diameter is of the order of 10 – 100$\mu$m. Obviously, the fibre dimensions together with their geometrical overlap define the spatial precision of a tracker. An early work on active plastic fibres can be found in [@White1988]. In the year 1995, Leitz [@Leutz1995] wrote an excellent review of scintillating fibres in particle physics, including working conditions of fibres, the scintillation process and photoelectron counting.
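The numbers quoted above follow from elementary geometry; the short Python sketch below (an illustration, not taken from the simulation code described later) uses the refractive indices given in the text and the usual solid-angle estimate for meridionally trapped light, $(1-\cos\theta_{\it crit})/2$ per fibre end, which ignores skew rays. Small differences from the $26.7^\circ$ quoted above simply reflect rounding of the refractive indices.

```python
# Critical axial angle and meridional trapping fraction (per fibre end) for the
# "standard" single-cladding and the double-cladding fibre described in the text.
# The fraction (1 - cos(theta_crit))/2 is the solid-angle estimate for isotropic
# emission within the trapping cone and neglects skew rays.
import math

n_core, n_pmma, n_mma = 1.60, 1.49, 1.42

for label, n_clad in (("single cladding (PMMA)", n_pmma),
                      ("double cladding (MMA)", n_mma)):
    theta_crit = math.degrees(math.acos(n_clad / n_core))
    trapped = 0.5 * (1.0 - n_clad / n_core)
    print(f"{label}: theta_crit = {theta_crit:.1f} deg, "
          f"trapped fraction = {100.0 * trapped:.1f} %")
```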
After the passage of a charged particle many of the molecules in the medium are excited into higher energy levels. The electronic levels have a typical energy spacing of $\Delta E \simeq 4\,$eV. Transitions from the ground state to the excited state occur with no change in interatomic distance. The excited molecule quickly loses vibrational energy and arrives in an electronic state S$^*$. In non-scintillating materials the excitation energy is given up in the form of heat or lattice vibrations. In a scintillator, however, some of the excitation energy is released as electromagnetic radiation. Scintillating compounds contain bound benzene rings, e.g. p-terphenyl. The electronic state S$^*$ of a scintillator decays to some vibrational level of the ground state within a characteristic decay time, $\tau \le 2\,$ns. Because the excited-state and ground-state energy levels differ in shape as a function of interatomic spacing, the emitted light is not self-absorbed by a further S$_0 \rightarrow$ S$^*$ transition as it propagates along the fibre. The net result is a disparity between the absorption and the emission spectrum of the molecule, where the details of both spectra are governed by the electronic structure of the molecule. The difference between the wavelengths of the two peak positions is called the Stokes’ shift. It is the excitation of the electrons within the $\pi$-orbitals that results in the scintillation.
In polystyrene the absorption peak lies far from the p-terphenyl emission, but the base is nevertheless opaque to this light because of Rayleigh scattering. In general, the base scintillators are not good intrinsic light transmitters. The fluorescence yield, that is the ratio of the number of excited molecules which emit a photon to the total number of excited molecules, of pure polystyrene is rather poor, $Q\approx 0.03$. A shift of the emission light into the base material’s transparent region ($\lambda > 400\,$nm for polystyrene) can be achieved by adding a fluorescent dopant to the monomers of the core base material, which is then incorporated during polymerisation. Most modern fibres work with a one-component system (3HF or PBBD), where a sufficiently high concentration (molar fraction $x \ge 1\%$) of the dopant allows local, non-radiative Förster transitions from the excitation site to the dopant. Förster transitions are fast ($\Delta t < 1\,$ns), resonant dipole-dipole transitions. Thereafter, the excitation energy is stored in the excited dopant molecule. Usually, the dopant is chosen to have a large Stokes’ shift, which leads to a small self-absorption of the secondary light. The fluorescence yield can be enhanced to values of $Q= 0.8-0.9$. Several families of compounds with these properties (fast, efficient, and low self-absorption) have been found. Among these are the hydroxyflavones, hydroxybenzothiazoles and hydroxybenzoxazoles, from which 3HF (3-hydroxyflavone) has been produced.
Some dopants like p-terphenyl or PBD need additional wavelength-shifting components in very low concentrations, for example 1,4-di-(2-(5-phenyloxazolyl))-benzene (C$_{24}$H$_{16}$N$_2$O$_2$), called POPOP, or bPBD (C$_{18}$H$_{14}$). Their absorption bands overlap the emission band of the primary dopant, and the primary light is absorbed by the wavelength shifter before it can get attenuated by other processes. The self-absorption of the secondary light is smaller than the self-absorption of the primary light because of the low concentration (molar fraction $x < 1\%$) of the wavelength-shifting molecules. The minimal self-absorption also helps to eliminate cross-talk between neighbouring fibres, and the long emission wavelengths provide improved immunity to the effects of radiation damage. More details of the scintillation process are discussed in depth by Birks [@Birks1964].
Wavelength-shifting fibres are very similar in structure to the scintillating fibres discussed earlier. A wavelength-shifting fibre uses a non-scintillating base material doped with a fluorescent dye to shift external light from a scintillator to longer wavelengths. The most common formulation has an absorption peak in the blue region of the spectrum and an emission peak in the green or yellow region; a wide spectral overlap between the emission spectrum of the scintillator and the absorption spectrum of the wavelength-shifting fibre is necessary. The fibres may be positioned in grooves machined into the surface of plastic scintillator bars. The quantum efficiency of the wavelength shifter is usually high, i.e. in the range of $70-80$%.
In the fibre core a certain fraction of the scintillation light, called [*core*]{} light, is trapped by total internal reflections at the core-cladding interface. Light not trapped in the fibre core is refracted into the cladding, and again some portion of this light is trapped by total internal reflections at the cladding-air interface; this component is called [*cladding*]{} light. Because of the larger step in refractive index at the cladding-air interface, the trapping efficiency for cladding light is much larger than that for core light. In general, the relative contribution of each light component depends on the distance of the detector from the emission point and on the attenuation lengths. Most light detected after short fibre lengths can be related to the cladding light, but the cladding light is attenuated to a greater degree than the core light. The losses of cladding light are mainly caused at the cladding-air interface, which is exposed to the environment and therefore degrades by abrasion or by the accumulation of impurities. The removal of the cladding light can be useful and is accomplished by coating the fibre with an extra-mural absorber. Internal reflections that are less than total are also known to give rise to so-called [*leaky*]{} or non-guided modes, in which part of the electromagnetic energy is radiated away at the reflection points. An extra-mural absorber eliminates those components together with most of the cladding light. Sometimes a diffuse reflecting paint, usually TiO$_2$, is used for gluing stacks of fibres together. In these applications the extra-mural absorber suppresses optical cross-talk between adjacent fibres, which occurs when untrapped photons, i.e. about 90% of all light, are trapped in neighbouring fibres after being scattered. A trapped ray may be categorised by its path along the fibre. The path of a meridional ray is confined to a single plane; all other modes of propagation are known as skew rays. The projection of a skew ray on a plane perpendicular to the fibre axis changes its orientation with every reflection. In the special case of a cylindrical fibre all meridional rays pass through the fibre axis.
The short and fast light pulses of active fibres are detected by photo-effect based light detectors. Several sensors for the detection of the fibre light have been investigated. In most cases the detection is performed by means of conventional photomultiplier tubes. Their photocathodes are mainly composed of semi-conducting photo-emissive materials (bialkali, trialkali and others) which are evaporated as thin films on the inside of optical front windows. The quantum efficiency of a typical photocathode is good ($Q.E. \approx 20-25\%$) in the blue spectral region and fair to poor ($Q.E. \approx 5-10\%$) in the yellow and green region. The combination of fast active elements with modern read-out devices provides low noise, large gain, good linearity and good timing resolution. The details of the coupling depend strongly on the transmission range and refractive index of the window material and on the quantum efficiency of the photocathode. The refractive index of front windows ranges from 1.4 (LiF) through 1.5 (crown glass, borosilicate, quartz) to 1.95 (YalO$_3$); the transmission edge ranges from 120 to 350nm. Mineral oils and optical couplants have been used successfully as optical coupling media bridging the difference in refractive indices. For most fibre to photomultiplier interfaces transmission values above 95% are readily achieved. Losses can occur because of longitudinal, lateral and angular misalignment.
Position-sensitive photomultipliers are especially suitable for fibre read-out because of the good match between photomultiplier segmentation and common fibre diameters, offering an important reduction in size and cabling with respect to conventional tubes. Since the pioneering work of Kuroda [@Kuroda1981] such tubes have been developed to meet the demands for precise and reliable tracking devices under high-rate conditions [@FAROS1995; @FAROS1996; @Agoritsas1998]. In recent years they have been continuously improved. In modern experiments, fibre bundles involving a rather large number of channels are easily read out via position-sensitive photomultipliers, while the use of single channel photomultipliers is no longer economical in terms of cost and space requirements. Three different types of position-sensitive photomultipliers are in use: (a) multi-anode photomultipliers with a grid-like dynode structure and segmented anode pixels, (b) multi-dynode photomultipliers with a separate dynode structure for each channel and (c) multi-channel photomultipliers where each channel additionally has its own photocathode and glass window. The two most important parameters to characterise position-sensitive photomultipliers are the amount of cross-talk between adjacent channels and the channel-to-channel gain variations. The first generation of position-sensitive photomultiplier tubes suffered from high cross-talk and large gain variations.
The Hamamatsu Photonics [@Hamamatsu] R5900 series is a rather new development which has been chosen by many groups, e.g. the MINOS collaboration [@MINOS1998], as their fibre read-out device. The performance of this tube was first reported in [@Yoshizawa1997] and many comprehensive tests have been conducted since then, see e.g. [@Enkelmann1998]. The drawbacks of early position-sensitive photomultipliers have been greatly reduced; the tube exhibits very little cross-talk and a high gain uniformity between channels. It comes in a very compact design and six different anode geometries suitable for fibre read-out. One particularly useful anode shape is the M-16, in which the $17.5 \times 17.5$mm$^2$ bialkali photocathode is divided into 16 pixels, each with a sensitive area of $4 \times 4$mm$^2$. The cathode is followed by individual metal channel dynodes incorporating 10 to 12 stages, and the output signal is read out from independent multiple anodes. Single fibres or bundles of up to seven fibres can be coupled to one pixel.
Photomultiplier tubes are limited in their intrinsic energy resolution by fluctuations in the number of secondary electrons produced at the first dynode. This limitation can be overcome by using hybrid photomultiplier tubes with their excellent multiple photon separation and high efficiency. A hybrid photomultiplier tube consists of a reversely biased silicon P-I-N diode, in which highly accelerated photoelectrons create a few thousand electron-hole pairs with much smaller statistical fluctuations. The quantum efficiency of silicon photodiodes can be as high as 70% for visible wavelengths. In special cases, image intensifiers with CCD cameras, solid state photomultipliers or so-called Visible Light Photon Counters (VLPC) have been employed in active fibre read-out. The VLPC has been developed by Rockwell Int. Science Center and is used e.g. in experiment E835 at the Fermi National Accelerator Laboratory [@Ambrogiani1998]. It can reach 85% quantum efficiency. Some studies on silicon based avalanche photodiodes have been performed [@Baer2000] and it was demonstrated that this type of photodiode can be used for room-temperature fibre read-out, too. The diodes consist of a compact semiconductor element operated at a reverse bias voltage near the breakdown voltage. The basic structures include an absorption region and a multiplication region with a high electric field. The multiplication region is broad enough to provide a useful gain of at least 100 by impact ionisation. Recently, there has been some progress in improving gain and stability, and first arrays of avalanche photodiodes have become available to cope with a large number of fibres closely lined up.
The signal amplitude from a fibre detector, quantified by the number of detected photoelectrons, depends on several factors: the light output of the fibre, the light collection efficiency, the coupling of the fibre to the photon sensor, and the sensor’s characteristics. In many applications the number should be as large as possible to discriminate the signal against dark pulses and electronic noise. It is instructive to work through a numerical estimate. The following example gives a parametric factorisation of the signal amplitude for a scintillator bar with a wavelength-shifting fibre of approximately 3m length, read out with a multi-anode photomultiplier. The scintillator bar is exposed to a uniform illumination of charged particles from one direction at normal incidence at the far end of the scintillator. The source of the signal is the luminescence of the scintillator, numerically equal to the number of scintillation photons produced at the site of ionising radiation. It can be estimated by: $$\begin{aligned}
{\mathcal L}_{scin} & = & \Delta E \times \frac{d N_{scin}}{d E}\\ &
= & \frac{d E}{d\rho x}\Big|_{mip} \times \rho x
\frac{d N_{scin}}{d E} \simeq 35,000 \,\end{aligned}$$ where a mean energy deposition corresponding to minimum ionising particles $-dE/d\rho x|_{mip}= 1.68\,$MeV$/$(g$/$cm$^2$), the density of plastic scintillator $\rho= 1.032$g$/$cm$^3$, the average path length of a charged particle traversing the scintillator of height $x=2$cm, and an absolute light yield, $d N_{scin}/d E$, of 10,000 photons produced per MeV of deposited energy in polystyrene are assumed. The light yield corresponds to $\sim$1 excitation per 4.8eV and a quantum efficiency $Q\sim 4-5\%$ of the scintillator.
The light collection and transfer factor, $\epsilon_{coll}$, of a scintillator bar is of the order of 50%. It depends on the geometry, the effective attenuation length of the scintillator, $\Lambda$, the reflectivity of the scintillator surface, $R$, and the number of fibres per scintillator bar, $i$. For complicated geometries the exact number may be evaluated by Monte Carlo simulations. For a simple geometry it can be estimated analytically. In infinitely long bars the probability for a photon to hit a fibre of diameter $2\rho$ can be written as $p_i= i\, 2\rho/l$ where $l$ is the distance between adjacent fibres, in this example identical to the width of the bar, $w$. The probability of a photon being absorbed in the scintillator is simply $p_\Lambda = 1 - e^{-P/\Lambda} \approx l/(\Lambda\,
\cos\theta)$, where $P$ is the path length of the photon between two fibres. The probability for the photon not to be reflected can be written as $p_R = 1 - R$. This leads to a light collection efficiency of $$\epsilon_i = \frac{p_i}{l/(\Lambda\, \cos\theta) + (1 - R) + p_i}\ .$$ In this model the increase in light yield when using two fibres instead of one amounts to a factor $\epsilon_2 / \epsilon_1 = 57\%/40\% = 1.43$. The simple model has been verified to be sufficiently accurate by using a ray tracing code to find the fraction of scintillation photons which are absorbed in a wavelength-shifting fibre located in a groove in a rectangular scintillator.
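This competing-probabilities model is straightforward to evaluate numerically. The following sketch is only an illustration with assumed input values (a 1.2 mm fibre in a 20 mm wide bar, an effective attenuation length of 40 cm, a surface reflectivity of 98% and an average photon obliquity of $\cos\theta \approx 0.7$); these numbers are not taken from the text, but with them the model returns collection efficiencies close to the 40% and 57% quoted above.

```python
def collection_efficiency(n_fibres, fibre_diam, pitch, lam_eff, refl, cos_theta=0.7):
    """Competing-probabilities estimate eps_i = p_i / (p_Lambda + p_R + p_i)
    for the light collection efficiency of a scintillator bar read out by
    wavelength-shifting fibres (all lengths in the same unit)."""
    p_i = n_fibres * fibre_diam / pitch       # probability to hit a fibre
    p_lam = pitch / (lam_eff * cos_theta)     # probability of absorption in the bar
    p_r = 1.0 - refl                          # probability of loss at a reflection
    return p_i / (p_lam + p_r + p_i)

# Illustrative (assumed) numbers, not those used in the text:
eps1 = collection_efficiency(1, 1.2, 20.0, 400.0, 0.98)
eps2 = collection_efficiency(2, 1.2, 20.0, 400.0, 0.98)
print(f"eps_1 = {eps1:.0%}, eps_2 = {eps2:.0%}, ratio = {eps2 / eps1:.2f}")
```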
Photons are emitted isotropically within the fibre. In the meridional approximation the trapping efficiency for core light emitted into one axial direction of the fibre, $\Omega_{1/2}$, is $3-5\%$ for a single cladding fibre. For this example the transmission functions of the fibre and the photomultiplier entrance window are assumed to be $T_{fibre}\simeq 70\%$ and $T_{PMT}\simeq 97\%$. The emission spectrum is a characteristic of the scintillation material, and the spectral quantum efficiency of the detector is a characteristic of the photocathode. A wavelength-averaged value of $Q.E.\simeq 16\%$ is reasonable. Finally, the number of photoelectrons in a typical fibre experiment can be estimated by combining the above values: $$N_{p.e.} = {\mathcal L}_{scin} \times \epsilon_{coll} \times \Omega_{1/2}
\times T_{fibre} \times T_{PMT} \times Q.E. \approx 25\ .$$ Hence, a typical detected photoelectron yield is of the order of $\bar{N} \approx 20 - 30$. This value is consistent with measurements.
Three-dimensional Tracking of Light Rays
========================================
To illustrate the characteristics of trapped light I use results from a programme which simulates the emission and propagation of light rays. The motivation for writing the programme was to understand the loss of light in sharply curved fibres. Since an analytical treatment of the passage of skew rays along a curved fibre is exceedingly complex, a Monte Carlo technique had to be applied. This type of numerical integration using random numbers is a standard method in the field of particle physics and is now practical given the CPU power currently available. Detailed results of this programme on the light acceptance and propagation in straight and curved multimode active fibres can be found in [@Achenbach2003].
A light ray is followed by the simulation until it is absorbed or detected. Quantities delivered by the programme are the proportion of light detected, the arrival time distribution, and the various ways in which light rays may be lost. On its path the ray is subject to attenuation, parameterised firstly by an effective absorption coefficient and secondly by a reflection coefficient. At the core-cladding interface the ray can be reflected totally or partially internally. In the latter case a random number is compared to the reflection probability to select reflected rays.
Light rays are randomly generated on the circular or rectangular cross-section of a fibre with radius $\rho$ or width $w$ and height $h$. An arbitrary ray is defined by its axial and azimuthal or skew angle. An advantage of this method is that any distribution of light rays can easily be generated. The axis of the fibre is defined by a curve $z= f(s)$ where $s$ is the arc length. For $s < 0$, it is a straight fibre along the negative $z$-axis and for $0 < s < L_F$, the fibre is curved in the $xz$-plane with a radius of curvature $R_{\it
curv}$. In particular, the curve $f(s)$ is tangential to the $z$-axis at $s = 0$.
Light rays are represented as lines and determined by two points, $\vec{r}$ and $\vec{r}^{\,\prime}$. The points of incidence of rays with the core-cladding interface are determined by solving the appropriate systems of algebraic equations. In the case of a straight fibre the geometrical representation of a straight cylinder or box is used, resulting in a quadratic equation. Its positive solution defines the point of incidence, $\vec{r}_R$, on the fibre wall. In the case of a cylindrical fibre curved in a circular path, the cylinder equation is generalised to a torus equation. The coefficients of this fourth degree polynomial are real and depend only on $R_{\it curv}$ and the vector components of $\vec{r}$ and $\vec{r}^{\,\prime}$ up to the fourth power. In most cases there are two real roots, one for the core-cladding intersection in the forward direction and one at $\vec{r}$ if the initial point already lies on the cylinder wall. The roots are found using Laguerre’s method [@Recipes1992]. It requires complex arithmetic, even while converging to real roots, and an estimate for the root to be found. The routine implements a stopping criterion in case of non-convergence because of round-off errors. The initial estimate is given by the intersection point of the light ray and a straight cylinder that has been rotated and translated to the previous reflection point. A driver routine is used to apply Laguerre’s method to all four roots and to perform the deflation of the remaining polynomial. Finally the roots are sorted by their real part. The smallest positive real solution for the ray parameter $m$ is then used to determine the reflection point, $\vec{r}_R$.
After the point of incidence has been found, the reflection length and absorption probability can be calculated. The angle of incidence, $\alpha$, is given by $\cos{\alpha} = \vec{r}_{in} \cdot \vec{n}$, where $\vec{n}$ denotes the unit vector normal to the core-cladding interface at the point of reflection and $\vec{r}_{in}=
(\vec{r}-\vec{r}_R)/|\vec{r}-\vec{r}_R|$ is the unit incident propagation vector. Then, the reflection probability corresponding to this angle $\alpha$ is determined. If the ray is partially or totally internally reflected, the total number of reflections is increased and the unit propagation vector after reflection, $\vec{r}_{\it out}$, is calculated by mirroring $\vec{r}_{in}$ with respect to the normal vector: $\vec{r}_{\it out} = \vec{r}_{in} - 2
\vec{n} \cos{\alpha}$. The programme then loops back to the calculation of the next reflection point. When the ray is absorbed on its path or not reflected at the reflection point, the next ray is generated. At any point of the ray’s path the axial, azimuthal and skew angles are obtained from scalar products of the ray vector with the coordinate axes, projected onto planes perpendicular and parallel to the fibre axis, respectively. The transmitted flux of a specific fibre, taking all losses caused by bending, absorption and reflections into account, is calculated by comparing the number of lost rays with the number of rays reaching the fibre exit end.
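A minimal sketch of this ray-stepping logic for a straight cylindrical fibre is given below. It is not the programme described here, only an illustration of the same steps (intersection, critical-angle test, mirroring); the propagation direction is used directly, so the sign conventions differ from the reversed incident vector $\vec{r}_{in}$ above, partial reflections are ignored, and for a curved fibre the quadratic intersection would have to be replaced by the quartic torus equation discussed above.

```python
import numpy as np

def cylinder_hit(r, d, rho):
    """Forward intersection of the ray r + m*d with the cylinder x^2 + y^2 = rho^2."""
    a = d[0]**2 + d[1]**2
    b = 2.0 * (r[0] * d[0] + r[1] * d[1])
    c = r[0]**2 + r[1]**2 - rho**2
    m = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # positive root
    return r + m * d

def reflect(d, n):
    """Mirror the propagation direction d at a surface with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def trace(r, d, rho, length, n_core=1.60, n_clad=1.49, max_refl=100000):
    """Follow one ray until it reaches the exit end (True) or escapes (False)."""
    sin_alpha_min = n_clad / n_core                 # total internal reflection threshold
    for _ in range(max_refl):
        if abs(d[0]) < 1e-12 and abs(d[1]) < 1e-12:
            return d[2] > 0.0                       # purely axial ray
        hit = cylinder_hit(r, d, rho)
        if hit[2] >= length:
            return True                             # reached the exit end
        if hit[2] <= 0.0:
            return False                            # left through the entrance face
        n = np.array([hit[0], hit[1], 0.0]) / rho   # outward surface normal
        cos_alpha = np.dot(d, n)
        if np.sqrt(max(0.0, 1.0 - cos_alpha**2)) < sin_alpha_min:
            return False                            # refracted out of the core
        r, d = hit, reflect(d, n)
    return False

# Example: a meridional ray at 20 degrees to the axis in a 0.5 mm radius, 10 cm long fibre
theta = np.radians(20.0)
print(trace(np.zeros(3), np.array([np.sin(theta), 0.0, np.cos(theta)]), 0.05, 10.0))
```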
This method gives rise to an efficient simulation technique for fibres with constant curvature. It is possible to extend the method for the study of arbitrarily curved fibres by using small segments of constant curvature. In the current version of the programme light rays are tracked in the fibre core only and no tracking takes place in the surrounding cladding, corresponding to infinite cladding thickness. For simplicity only monochromatic light is assumed in the simulation and highly wavelength-dependent effects are not included explicitly. The simulation code takes about 1.5ms to track a skew ray through a curved fibre.
Light Yield—Trapping of Light
=============================
In practical applications, the light yield is the most important criterion for the design of a fibre detector. Any deficiency in this respect reduces the detection efficiency, compromises the timing resolution and restricts the total fibre length. Due to the small cross-section of fibres the light yield is intrinsically low. Whether the fibres are scintillating or wavelength-shifting, one is only ever concerned with a few tens or hundreds of photons propagating in the fibre, and a single photon counting capability is often necessary. With increasing dopant concentration the light yield increases, but above a certain limit self-absorption of the emitted light reduces the attenuation length noticeably. Accordingly, fibres with various concentrations of fluorescent dyes have been systematically tested since 1992. For all commercially available fibres an optimum has been found and no further improvements from adjusting the dopant concentrations can be expected. Thus, the trapped light as a fraction of the intensity of the emitted light and the transmission function have the largest impact on the light yield of an active fibre.
Angular Phase Space of Trapped Light
------------------------------------
The geometrical path of arbitrary rays in cylindrical fibres, including skew rays, was first analysed in a series of papers by Potter [@Potter1961] and Kapany [@Kapany1961]. The treatment of angular dependencies in this paper is based on their approach. The angle $\gamma$ is defined as the angle of the projection of the light ray in a plane perpendicular to the axis of the fibre with respect to the normal at the point of reflection. One may describe $\gamma$ as a measure of the “skewness” of a particular ray, since meridional rays have this angle equal to zero. The polar angle, $\theta^\prime$, is defined as the angle of the light ray in a plane containing the fibre axis and the point of reflection with respect to the normal at the point of reflection. It can be shown that the angle of incidence at the core-cladding interface of the fibre, $\alpha$, is given by $\cos{\alpha}= \cos{\theta^\prime}\, \cos{\gamma}$. The values of the two orthogonal angles $\theta^\prime$ and $\gamma$ are preserved independently for a particular photon at every reflection along its path.
In general, for any ray to be totally internally reflected within the fibre core, the inequality $\sin{\alpha} \geq \sin{\theta^\prime_{\it crit}} = n_{\it clad}/n_{\it core}$ must be fulfilled, where the critical angle, $\theta^\prime_{\it crit}$, is given by the index of refraction of the fibre core, $n_{\it core}$, and that of the cladding, $n_{\it clad}$. For the further discussion in this paper it is convenient to use the axial angle, $\theta\ [= \pi/2 - \theta^\prime]$, the complement of $\theta^\prime$, and the skew angle, $\gamma$, to characterise any light ray in terms of its orientation. A skew ray can be totally internally reflected at larger angles $\theta$ than meridional rays, and the minimum permitted skew angle, $\overline{\gamma}$, at a given axial angle is determined by the critical angle condition: $\cos{\overline{\gamma}}= \sin{\theta_{\it crit}} / \sin{\theta}$. In the meridional approximation the above equations lead to the well-known critical angle condition for the polar angle, $\theta^\prime \ge \theta^\prime_{\it crit}$, which describes an acceptance cone of semi-angle $\theta_{\it crit}$ with respect to the fibre axis (see for example [@Potter1961] and references therein). Thus, in this approximation all light within the forward cone, which experiences multiple total internal reflections, will be considered as trapped.
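The critical-angle condition for arbitrary skew rays is compactly summarised by a few lines of code. The following snippet is a simple illustration using the “standard” indices; the angle conventions follow the text (axial angle $\theta$, skew angle $\gamma$).

```python
import math

N_CORE, N_CLAD = 1.60, 1.49
THETA_CRIT = math.acos(N_CLAD / N_CORE)          # critical axial angle, about 21.4 deg

def is_trapped(theta, gamma):
    """Total internal reflection for a ray with axial angle theta and skew angle
    gamma: cos(alpha) = sin(theta) * cos(gamma) and sin(alpha) >= n_clad/n_core."""
    cos_alpha = math.sin(theta) * math.cos(gamma)
    return math.sqrt(1.0 - cos_alpha**2) >= N_CLAD / N_CORE

print(is_trapped(0.9 * THETA_CRIT, 0.0))                   # True: meridional, inside the cone
print(is_trapped(2.0 * THETA_CRIT, 0.0))                   # False: meridional, outside the cone
print(is_trapped(2.0 * THETA_CRIT, math.radians(80.0)))    # True: sufficiently skew ray
```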
Figure \[fig:phasespace\](a) shows the total acceptance domain, i.e. the angular phase space of possible propagation modes, and the phase space density, i.e. the number of trapped photons per angular element $d\cos\gamma\ d\sin\theta$, which is represented by proportional boxes. Photons have been generated randomly on the cross-section of the fibre with an isotropic angular distribution in the forward direction. The density increases quadratically with $\cos{\gamma}$ and linearly with $\sin{\theta}$. To the left of the dividing line at $\sin{\theta_{\it crit}}$ all skew angles are accepted. To the right of the line a minimum skew angle is required by the critical angle condition. The phase space contours in this figure relate to sharply curved fibres. They show the distribution of photons which are trapped in a straight fibre section but get refracted out of sharply curved fibres with radius of curvature to fibre radius ratios, $R_{\it curv}/\rho$, of 33 and 83. The contours demonstrate that only photons from a small region close to the phase space boundary are lost. The smaller the radius of curvature, the larger the affected phase space region. Figure \[fig:phasespace\](b) shows a projection of the phase space onto the $\sin\theta$-axis. A peak around the value of $\sin{\theta_{\it crit}}$ is apparent.
Trapping Efficiency
-------------------
The trapping efficiency for forward propagating photons, $\epsilon^{\mathrm{fw}}$, may be defined as the fraction of totally internally reflected photons. Figure \[fig:phasespace\](a) gives values for the two trapping efficiencies which have been determined by integrating the phase space density over the two angular regions. The trapping efficiency can also be determined analytically by two integrals [@Potter1963] for the flux transmitted by a fibre: $$\begin{aligned}
F & = F_m + F_s
= & 4 \rho^2 \int_{\theta= 0}^{\theta_{\it crit}}
\int_{\gamma= 0}^{\pi/2} \int_{\phi= 0}^{\pi/2}
I(\theta,\phi)\, \cos^2{\gamma}\, d\gamma\, d\Omega\ + \nonumber \\
& & 4 \rho^2 \int_{\theta= \theta_{\it crit}}^{\pi/2}
\int_{\gamma= \overline{\gamma}(\theta)}^{\pi/2}
\int_{\phi= 0}^{\pi/2}
I(\theta,\phi)\, \cos^2{\gamma}\, d\gamma\, d\Omega\ ,
\label{eq:flux}\end{aligned}$$ where $d\Omega$ is the element of solid angle, $\overline{\gamma}(\theta)$ is the minimum skew angle allowed by the critical angle condition at the axial angle $\theta$, $\rho$ is the radius of the fibre and $I(\theta,\phi)$ is the angular distribution of the emitted light in the fibre core. The two integrals, $F_m$ and $F_s$, refer to the meridional and skew case, respectively. The lower limit of the integral $F_s$ is given by $\overline{\gamma}= \arccos{(\sin{\theta_{\it crit}}/\sin{\theta})}$.
For an isotropic emission of fluorescence light the total flux through the cross-section of the fibre core, $F_0$, equals $4 \pi^2 \rho^2
I_0$. Then, dividing the first term of equation (\[eq:flux\]) by the total flux gives the trapping efficiency in the meridional approximation, $$\epsilon^{\mathrm{fw}}_m = \frac{1}{2} (1 -
\cos{\theta_{\it crit}}) \approx
\frac{\theta^2_{\it crit}}{4}\ ,
\label{eq:omega_m}$$ where all photons are considered to be trapped if $\theta \le
\theta_{\it crit}$. Contributions of all skew rays to the trapping efficiency are given by $$\epsilon^{\mathrm{fw}}_s = \frac{1}{2} (1 - \cos{\theta_{\it crit}})
\cos{\theta_{\it crit}}\ .
\label{eq:omega_s}$$ The total trapping efficiency is then: $$\epsilon^{\mathrm{fw}} = \frac{1}{2} (1 - \cos^2{\theta_{\it crit}})
\approx \frac{\theta^2_{\it crit}}{2}\ ,
\label{eq:omega_tot}$$ which, for small critical angles, is approximately twice the trapping efficiency in the meridional approximation. This trapping efficiency depends crucially on the circular symmetry of the core-cladding interface. Any ellipticity or variation in the fibre diameter will lead to the refraction of some skew rays. Skew rays are also attenuated more quickly, so that the trapping efficiency of skew rays does not contribute to the light yield of a fibre in the same way as the trapping efficiency of meridional rays does. In conclusion, for long fibres the effective trapping efficiency is closer to $\epsilon^{\mathrm{fw}}_m$ than to $\epsilon^{\mathrm{fw}}$.
The trapping efficiency for cladding light is determined by replacing $n_{\it clad}$ with the refractive index of the surrounding medium, $n_{\it ext}$, in the above formulae; for light generated in the core the critical angle then changes to $\cos{\theta_{\it crit}} = n_{\it ext}/n_{\it core}$. The cladding component of the trapped light is the difference $\epsilon_{\it clad} = \epsilon - \epsilon_{\it core} = \frac{1}{2} (n_{\it clad}^2 - n_{\it ext}^2)/ n_{\it core}^2$. This cladding light is usually a factor 3–4 more intense than the core light. While this is highly desirable in terms of light yield, it leads to a larger cross-talk between fibres of a bundle. Furthermore, cladding light is heavily affected by the external surface quality of the fibre: cracks in the cladding or defects in the surface can cause significant light losses, leading to huge differences in the light yield of otherwise identical fibres.
Formula \[eq:omega\_m\] gives a meridional ray trapping efficiency of $\epsilon^{\mathrm{fw}}_m=$ 3.44% for “standard” fibres with $n_{\it core}=$ 1.6 and $n_{\it clad}=$ 1.49; the skew ray efficiency in formula \[eq:omega\_s\] evaluates to $\epsilon^{\mathrm{fw}}_s=$ 3.20% and the combined core efficiency in formula \[eq:omega\_tot\] to $\epsilon^{\mathrm{fw}}=$ 6.64%. The efficiency for cladding light, neglecting any absorption, equals 23.83%. For square fibres the core trapping efficiency is somewhat larger; the simulation yields a value of $\epsilon^{\mathrm{fw}}_{\mathrm{square}}=$ 8.13%.
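For reference, the closed-form expressions above are easily evaluated; the short check below reproduces the numbers quoted in this paragraph for the “standard” indices.

```python
n_core, n_clad, n_ext = 1.60, 1.49, 1.00
cos_tc = n_clad / n_core                      # cos(theta_crit) for core light

eps_m = 0.5 * (1.0 - cos_tc)                  # meridional approximation
eps_s = 0.5 * (1.0 - cos_tc) * cos_tc         # skew-ray contribution
eps_tot = 0.5 * (1.0 - cos_tc**2)             # total forward trapping efficiency
eps_clad = 0.5 * (n_clad**2 - n_ext**2) / n_core**2   # cladding light

print(f"eps_m = {eps_m:.2%}, eps_s = {eps_s:.2%}, "
      f"eps_tot = {eps_tot:.2%}, eps_clad = {eps_clad:.2%}")
# -> eps_m = 3.44%, eps_s = 3.20%, eps_tot = 6.64%, eps_clad = 23.83%
```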
It follows from the critical angle condition that a photon emitted close to the cladding has a higher probability of being trapped than one emitted close to the centre of the fibre. Scintillation photons are distributed uniformly in solid angle. For a given axial angle the range of azimuthal angles for which the photon is trapped increases with the radial position, $\hat{\rho}$, of the light emission point in the fibre core. Figure \[fig:trap-r\] shows the core trapping efficiency, $\epsilon^{\mathrm{fw}}$, for photons propagating in the forward direction as a function of the radial position, $\hat{\rho}$, of the emission point in the fibre core. It can be deduced from figure \[fig:trap-r\] that the meridional approximation is a good estimate for $\epsilon$ if the photons originate at radial positions $\hat{\rho} < 0.8$. The trapping of skew rays only becomes significant for photons originating at radial positions $\hat{\rho} \ge 0.9$. This fact has been discussed e.g. in [@Johnson1994].
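The radial dependence can be illustrated with a few lines of Monte Carlo. The sketch below samples isotropic forward directions at a fixed reduced emission radius $\hat{\rho}$ and applies the critical-angle condition; the relation $\sin\gamma = \hat{\rho}\,|\sin\phi|$ between the skew angle at the wall and the transverse emission azimuth $\phi$ is a short geometric identity not spelled out in the text. For on-axis emission the estimate reduces to the meridional value of about 3.4%, and averaging over a uniformly illuminated cross-section recovers the analytic 6.6%.

```python
import numpy as np

rng = np.random.default_rng(1)
sin_alpha_min = 1.49 / 1.60                   # TIR threshold for "standard" fibres

def forward_trapping(r_hat, n=200_000):
    """Forward trapping efficiency (referred to the full 4*pi of isotropic
    emission) for photons emitted at the reduced radius r_hat = r/rho."""
    cos_t = rng.uniform(0.0, 1.0, n)          # isotropic, forward hemisphere
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    cos_g = np.sqrt(1.0 - (r_hat * np.abs(np.sin(phi)))**2)   # skew angle at the wall
    trapped = np.sqrt(1.0 - (sin_t * cos_g)**2) >= sin_alpha_min
    return 0.5 * trapped.mean()               # factor 1/2: one axial direction only

for r_hat in (0.0, 0.5, 0.9, 1.0):
    print(f"r_hat = {r_hat:.1f}:  eps_fw = {forward_trapping(r_hat):.2%}")

# area-uniform average over the cross-section, analytically 6.64%
radii = np.sqrt(rng.uniform(0.0, 1.0, 2000))
print(f"cross-section average: {np.mean([forward_trapping(r, 2000) for r in radii]):.2%}")
```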
The maximum possible concentration factor of an optical system without any light loss is $1/\sin^2\theta$, where $\theta$ describes the divergence of light in the system. This angle can be approximated by the maximum axial angle of trapped light in active elements. In fibres, the comparatively small angular phase space permits the use of optical concentrators with high concentration factors, and the concentration is possible by means of total internal reflections within a light guide. Optical concentrators, mostly of the Winston type, have been built to couple scintillation light efficiently to photo-detectors [@Kuhlen1991].
Transmission of Straight Fibres
-------------------------------
A question of practical importance for the estimation of the light output of a particular fibre application is its transmission function, which quantifies the transmission probability of trapped photons. The function is dependent on the total photon path length per axial fibre length, $P$, the number of internal reflections per axial fibre length, $\eta$, and the optical path length between successive internal reflections, $l_R$. It should be noted that these three variables are not independent as $P= \eta \times l_R$.
Light attenuation in active fibres has many sources, among them absorption in the base material, at optical non-uniformities or at impurity centres, as well as reflection losses caused by a rough surface or variations in the refractive indices. Skew light is attenuated by stronger absorption and reflection losses than meridional light because of the longer path length and the higher number of reflections it suffers. Accordingly, the light attenuation at short distances differs from the attenuation at large distances. Furthermore, the absorption and emission processes in fibres are spread out over a wide band of wavelengths and the attenuation is known to be wavelength dependent. The attenuation of active fibres at wavelengths close to their emission band ($400-600$nm) is much higher than in the wavelength regions of interest for communication applications, where mainly infrared light is transmitted ($0.8-0.9\,\mu$m and $1.2-1.5\,\mu$m).
The two main sources of attenuation in the base material are self-absorption of scintillation light and Rayleigh scattering. The cumulative effect of these attenuation processes can be conveniently parameterised by an effective attenuation length, $\Lambda_{\it eff}$, over which the signal amplitude is attenuated to 1$/e$ of its original value. This parameter is often used in particle physics applications to characterise the attenuation, but is of limited significance when analysing different causes and effects of attenuation. Instead, the transmission function of a straight fibre should be written as a function of the axial angle $$T(\theta) = {\mathrm e}^{- P(\theta)\, L_F/\Lambda_{\it bulk}}
\times q^{\eta(\theta) L_F}\ ,$$ where the bulk attenuation length $\Lambda_{\it bulk}^{-1} =
\Lambda_{\it abs}^{-1} + \Lambda_{\it scat}^{-1}$ describes light losses due to bulk absorption (bulk absorption length $\Lambda_{\it
abs}$) and scattering (scattering length $\Lambda_{\it scat}$), and the second factor describes light losses due to imperfect reflections (reflection coefficient $q$ to parameterise the internal reflectivity). The self-absorption is mainly caused by an overlap of the absorption and emission bands of the fluorescent dyes. The scattering length quantifies Rayleigh scattering on small density fluctuations in the core. The cross-section for the scattering processes increases with decreasing wavelength and becomes noticeable in the region of the emission peak of the base scintillator (200 – 300nm). A comparison of available data indicates that a reasonable value of the bulk attenuation length in polystyrene is $\Lambda_{\it
bulk} \simeq 3 - 5$m for doped fibres and $\simeq 8$m for clear fibres. Fibres are drawn from a boule and great care is taken during production to ensure that the core-cladding interface has the highest possible uniformity and quality. Most published data suggest a deviation of the reflection coefficient from unity between $5 \times
10^{-5}$ and $6.5 \times 10^{-5}$ [@Ambrosio1991]. A reasonable value of $q= 0.9999$ is used in the simulation to account for all losses proportional to the number of reflections.
The light from a fibre may be read out from either one or both ends. A reflector at the open end of the fibre allows the collection of photons propagating away from the photon sensor, which would otherwise escape from the fibre. Mirroring may be applied by sputtering an aluminium coating onto the fibre end or by bringing the fibre end into direct contact with a highly specular reflector such as an aluminised mylar foil. Simple foils provide a reflectivity $R \approx 70\%$ and can lead to an increase in light yield of 20%. Covering the fibre end with a white diffuse reflector can help as well. When describing the light yield of these fibres a second term has to be added to the transmission function to account for the reflected light. The proportion of the direct, $T_d$, to the reflected, $T_r$, light intensity depends on the distance to the emission point, $L_0$, and the reflection coefficient, $R$, of the reflector. The transmission function becomes: $$T = T_{d} + R\, T_{r} = {\mathrm e}^{- P\, L_0/\Lambda_{\it bulk}}
\times q^{\eta L_0}\ + R \left( {\mathrm e}^{- P\, (2L_F - L_0)/\Lambda_{\it bulk}} \times q^{\eta (2L_F - L_0)} \right)\ .$$ Comparison of the signal arrival times from each end can be used to determine the longitudinal position of the light emission. For simplicity, in this paper the analysis and discussion are restricted to the direct light only.
Figure \[fig:pathlength\] shows the distribution of the normalised path length, $P(\theta)$, for trapped photons reaching the exit end of straight fibres of 0.6mm radius. The figure also gives results for curved fibres of two different radii of curvature. The distribution of path lengths which are shorter than the path length for meridional photons propagating at the critical angle is almost flat. It can be shown that the normalised path length along a straight fibre is given by the secant of the axial angle and is independent of other fibre dimensions: $P(\theta)= \sec\theta$. It is clearly seen that, when a fibre is curved, the normalised path length of the trapped photons is less than the secant of the axial angle, and photons on near-meridional paths are the ones most likely to be refracted out of the fibre. The average normalised path length for those photons which remain trapped is smaller than the average for the straight fibre. The overall fibre length for the curved fibres in these calculations is 0.5m and the fibres are curved over a circular arc for their entire length.
The distribution of the normalised number of reflections, $\eta(\theta)$, for photons reaching the exit end of straight and curved fibres is shown in figure \[fig:reflections\]. Again, the figure gives results for curved fibres of two different radii of curvature. The number of reflections a photon experiences scales with the reciprocal of the fibre radius. In the meridional approximation the normalised number of reflections is related by simple trigonometry to the axial angle and the fibre radius: $\eta_m(\theta) = \tan{\theta}/(2\rho)$. The distribution of $\eta_m$, based on the distribution of axial angles for the trapped photons, is represented by the dashed line. The upper limit, $\eta(\theta_{\it crit})$, is indicated in the plot by a vertical line. The number of reflections made by a skew ray, $\eta_s(\theta)$, can be calculated for a given skew angle: $\eta_s(\theta)= \eta_m(\theta) / \cos{\gamma}$. It is clear that this number increases significantly if the skew angle increases. From the distributions it can be seen that in curved fibres the trapped photons experience fewer reflections on average.
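Combining the transmission function of the previous subsection with these expressions for $P(\theta)$ and $\eta(\theta)$ gives a compact numerical estimate. The sketch below is only illustrative: it assumes a fibre radius of 0.5 mm and uses $\Lambda_{\it bulk}= 4$ m and $q= 0.9999$ from the ranges quoted above.

```python
import numpy as np

def transmission(theta, gamma, L, rho=0.05, lam_bulk=400.0, q=0.9999):
    """T = exp(-P*L/Lambda_bulk) * q**(eta*L) with P = sec(theta) and
    eta = tan(theta)/(2*rho*cos(gamma)); lengths in cm, angles in radians."""
    P = 1.0 / np.cos(theta)                              # path length per unit axial length
    eta = np.tan(theta) / (2.0 * rho * np.cos(gamma))    # reflections per unit axial length
    return np.exp(-P * L / lam_bulk) * q ** (eta * L)

theta_c = np.arccos(1.49 / 1.60)
L = 300.0                                                # 3 m of fibre
print(f"meridional ray at theta_crit : T = {transmission(theta_c, 0.0, L):.2f}")
print(f"skew ray (gamma = 60 deg)    : T = {transmission(theta_c, np.radians(60.0), L):.2f}")
```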
Internal reflections that are less than total give rise to so-called leaky or non-guided modes, where part of the electromagnetic energy is radiated away. Rays in these modes populate a region defined by axial angles above the critical angle and skew angles slightly smaller than the minimum skew angle required for total internal reflection. In the simulation these modes are taken into account by using the Fresnel equation for the reflection coefficient, $\langle R \rangle$, averaged over the parallel and orthogonal planes of polarisation $$\langle R \rangle = \frac{1}{2} \left( R_{||} + R_\perp \right) =
\frac{1}{2} \left( \frac{\tan^2(\alpha - \beta)}
{\tan^2(\alpha + \beta)} + \frac{\sin^2(\alpha - \beta)}
{\sin^2(\alpha + \beta)} \right)\ ,$$ where $\alpha$ is the angle of incidence and $\beta$ is the refraction angle. However, it is obvious that non-guided modes are lost quickly in a small fibre. This is best seen in the fraction of non-guided to guided modes, $f$, which decreases from $f = 11\%$ at the first reflection of the ray, through $f = 2.5\%$ at the second reflection, to $f < 1\%$ at further reflections. Since the average reflection length of non-guided modes is $l_R \approx 1.5$mm, those modes do not contribute to the flux transmitted by fibres longer than a few centimetres.
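For orientation, the polarisation-averaged Fresnel reflectance just below the total-reflection threshold can be evaluated directly; the sketch below uses the “standard” indices, and the chosen angular offsets are purely illustrative. Even a fraction of a degree below the critical angle the reflectance drops well below unity, which is why non-guided modes die out after a few reflections.

```python
import numpy as np

def fresnel_avg(alpha, n1=1.60, n2=1.49):
    """Polarisation-averaged Fresnel reflectance for internal incidence at angle
    alpha (radians from the normal); returns 1 above the critical angle."""
    s = n1 * np.sin(alpha) / n2
    if s >= 1.0:
        return 1.0                                   # total internal reflection
    beta = np.arcsin(s)                              # refraction angle
    r_par = np.tan(alpha - beta)**2 / np.tan(alpha + beta)**2
    r_perp = np.sin(alpha - beta)**2 / np.sin(alpha + beta)**2
    return 0.5 * (r_par + r_perp)

alpha_crit = np.arcsin(1.49 / 1.60)
for d in (0.5, 1.0, 2.0, 5.0):                       # degrees below the critical angle
    print(f"{d:3.1f} deg below TIR:  <R> = {fresnel_avg(alpha_crit - np.radians(d)):.2f}")
```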
In the meridional approximation, and substituting $\exp(-\ln{q})$ by $\exp(1-q)$, the attenuation length can be written as $$\Lambda_m = \cos{\theta_{\it crit}}\, \left[ 1/\Lambda_{\it bulk} +
(1-q)\sin{\theta_{\it crit}}/(2\rho) \right]^{-1}\ .$$ Only for small diameter fibres ($2\rho \le 0.1\,$mm) is the attenuation due to imperfect reflections of the same order as the bulk absorption. In calorimeters the reflection losses are not relevant for the transmission function, because of the large radii of the fibres used. There, the attenuation length reduces to $\Lambda_m=
\Lambda_{\it bulk} \cos{\theta_{\it crit}}$. The transmission function including all skew ray effects can be found by integrating over the normalised path length distribution $$T = \frac{1}{N} \int_{P=0}^{\infty} e^{-P\, L_F/
\Lambda_{\it bulk}}\, \frac{dN}{dP}\, dP\ ,$$ where $dN$ represents the number of photons per path length interval $dP$, weighted by the exponential bulk attenuation length. Figure \[fig:absorption\] shows the simulated transmission function versus the ratio of fibre to absorption length, $L_F/\Lambda_m$. A simple exponential fit, $T \propto
\exp\left[-L_F/\Lambda_{\it eff}\right]$, applied to the simulated light transmissions for a large number of fibre lengths results in an effective attenuation length of $\Lambda_{\it eff}= 86\%\ \Lambda_{\it
bulk}$. For $L_F/\Lambda_m \ge 0.2$ this number is sufficiently accurate to parameterise the transmission function; at smaller values of $L_F/\Lambda_m$ the light is attenuated faster. The difference from the meridional attenuation length, $\Lambda_m= 93\%\ \Lambda_{\it
bulk}$, is attributed to the skew rays in the tail of the path length distribution.
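The relative importance of reflection losses is easily seen by evaluating $\Lambda_m$ for a few fibre diameters. The sketch below uses $\Lambda_{\it bulk}= 4$ m and $q= 0.9999$ as quoted above; for 1 mm fibres the reflection term is a small correction, while at $2\rho \approx 0.1$ mm it becomes comparable to the bulk absorption.

```python
import math

n_core, n_clad = 1.60, 1.49
cos_tc = n_clad / n_core
sin_tc = math.sqrt(1.0 - cos_tc**2)

def lambda_m(diam_mm, lam_bulk_cm=400.0, q=0.9999):
    """Meridional attenuation length (cm) including imperfect reflections:
    Lambda_m = cos(theta_c) / [1/Lambda_bulk + (1-q) sin(theta_c)/(2 rho)]."""
    rho_cm = 0.05 * diam_mm
    return cos_tc / (1.0 / lam_bulk_cm + (1.0 - q) * sin_tc / (2.0 * rho_cm))

for d in (0.05, 0.1, 0.5, 1.0):                      # fibre diameters in mm
    print(f"2*rho = {d:4.2f} mm:  Lambda_m = {lambda_m(d) / 100.0:.2f} m")
```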
Measurements of the light attenuation in fibres showed the simple model of a single effective attenuation length to be inadequate. A dependence of the attenuation length on distance is usually observed [@Davis1989]. One cause of this effect is the fact that the short wavelength components of the scintillation light are predominantly absorbed. The use of a double spectrometer allows the precise determination of the spectral attenuation length of fibres [@Drexlin1995]. The characteristic shape of plastic spectral attenuation lengths shows a minimum at around 440nm and an increase towards longer wavelengths. This leads to a shift of the average wavelength in the emission spectrum towards longer wavelengths and to an increase in the effective attenuation length. The wavelength dependent quantum efficiency of any photon sensor enhances this effect, so that the integral attenuation length is only a rough quality criterion for the light yield of a fibre.
Transmission of Sharply Curved Fibres
-------------------------------------
One of the most relevant practical issues in implementing optical fibres into compact particle detector systems is macro-bending losses. In general, some design parameters of fibre applications, especially if the overall size of the detector system is important, depend crucially on the minimum permissible radius of curvature. The problem of bending is particularly evident for tile-fibre calorimeters, where wavelength-shifting fibres are embedded in plastic scintillator tiles and are bent at radii of curvature of a few centimetres. Flexibility of the fibres is also essential for space physics experiments or for detectors carried by balloons or aircraft. The routing of fibres to photon sensors requires radii of curvature as small as possible to minimise the weight and the costs associated with the transport of the detector, see e.g. [@Adler2001].
Photons are lost from a fibre core both by refraction and by tunnelling. In the simulation only losses by refraction were considered. The angle of incidence of a light ray at the tensile (outer) side of the fibre is always smaller than at the compressed side, and photons propagate either by reflections on both sides or, in the extreme meridional case, by reflections on the tensile side only. If the fibre is curved over an arc of constant radius of curvature, photons can be refracted, and will then no longer stay trapped, at the very first reflection point on the tensile side. Therefore, the trapping efficiency for photons entering a curved section of fibre towards the tensile side is reduced most.
Figure \[fig:bradius\] displays the explicit dependence of the transmission function for fibres curved over circular arcs of 90$^\circ$ on the radius of curvature to fibre radius ratio for different fibre radii, $\rho=$ 0.2, 0.6, 1.0 and 1.2mm. No further light attenuation is assumed. Evidently, the number of photons which are refracted out of a sharply curved fibre increases very rapidly with decreasing radius of curvature. The losses are dependent only on the curvature to fibre radius ratio, since no inherent length scale is involved, justifying the introduction of this scaling variable. The light loss due to bending of the fibre is about 10% for a radius of curvature of 65 times the fibre radius.
The transmission function in the meridional approximation in the bending plane can be estimated by assuming that all photons with axial angles above a limiting angle, $\theta_0$, which depends on the ratio $R_{\it curv}/\rho$, are refracted out of the fibre: $$T= 1 - \frac{1}{1 + R_{\it curv}/\rho}\
\frac{\cos\theta_{\it crit}}{1 - \cos\theta_{\it crit}}\ .$$ This transmission function is shown in figure \[fig:bradius\] as a dashed line to be compared with the simulation results including all skew rays. It overestimates the light losses, since skew rays are allowed larger axial angles. The meridional approximation in the bending plane is, however, a good approximation for the transmission function because the light losses are dominantly produced by meridional rays [@Winkler1979; @Gloge1972].
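The meridional-approximation estimate is a one-line function. The values below are only illustrative; at $R_{\it curv}/\rho = 65$ it gives a loss of roughly 20%, about twice the 10% found by the full simulation once skew rays are included, consistent with the statement that the approximation overestimates the losses.

```python
cos_tc = 1.49 / 1.60                    # "standard" fibre

def bend_transmission(ratio):
    """Meridional-approximation transmission of a fibre bent over a circular arc,
    as a function of the radius-of-curvature to fibre-radius ratio."""
    return 1.0 - (1.0 / (1.0 + ratio)) * cos_tc / (1.0 - cos_tc)

for ratio in (20, 65, 150, 500):
    print(f"R_curv/rho = {ratio:4d}:  T = {bend_transmission(ratio):.2f}")
```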
A comparable theoretical calculation using a two-dimensional slab model and a generalised Fresnel transmission coefficient has been performed by Badar and co-workers [@Badar1991A]. Their plot of the power contained in the fibre core as a function of the radius of curvature (figure 5 in [@Badar1991A]) shows results on the transmission function in the meridional approximation which are similar to the simulation output. In [@Winkler1979] a ray optics calculation for curved multimode fibres involving skew rays is presented, unfortunately a discussion on the transmission function is missing. Instead, a plot of the power remaining in a curved fibre versus distance is shown which gives complementary information.
For photons entering a curved section of fibre the first point of reflection on the tensile side defines the transition angle, $\Phi_{\it trans}$, measured from the plane of entry. Figure \[fig:bentfibre\] shows a section of a curved fibre and the passage of a meridional ray in the bending plane with maximum axial angle. The angular range of transition angles associated with each ray is called the transition region of the fibre. The simulation results on the transmission as a function of bending angle, $\Phi$, for a “standard” fibre are presented in figure \[fig:bending\]. Once a sharply curved fibre with a ratio $R_{\it curv}/\rho > 83$ is bent through angles $\Phi \simeq
\pi/8$rad, light losses do not increase any further. For ratios $R_{\it curv}/\rho$ smaller than 10 the model is no longer valid to describe the transmission function.
For photons emitted towards the tensile side the transition angle is related to the axial angle. Since the angular phase space density of trapped photons is highest close to the critical angle, the difference $\theta_{\it crit} - \theta_0$ can be used to estimate the transition angle. Photons emitted from the fibre axis towards the compressed side are not lost at reflections on this side. They experience at least one reflection on the tensile side if the bending angle exceeds the limit $\Phi_{\it limit} = \arccos\left[R_{\it curv}/(R_{\it curv} + 2\,
\rho)\right] \approx \arccos\left[1 - 2\, \rho / R_{\it
curv}\right]$. Therefore, a transition in the transmission function should occur at bending angles between $\Phi_{\it limit}/2$, where photons emitted towards the tensile side must have experienced a reflection, and $\Phi_{\it limit}$, where this is true for all photons. For a fibre radius $\rho=$ 0.6mm and radii of curvature $R_{\it curv}=$ 1, 2 and 5cm the above formula leads to transition regions $\Phi \sim 0.44 - 1.06$rad which are indicated in figure \[fig:bending\] by arrows.
Transition and bending losses have been thoroughly investigated using waveguide analysis techniques from which a loss formula in terms of the Poynting vector has been derived [@Marcuse1976; @Gambling1979]. Those studies are difficult to extend to multimode fibres since a large number of guided modes has to be considered. Despite the extensive coverage of theory and experiment in this field, only fragmentary studies on the trapping efficiencies and refraction of skew rays in curved multimode fibres could be found [@Winkler1979; @Badar1991A; @Badar1989; @Badar1991B]. When applying ray optics to curved multimode fibres the use of a two-dimensional model is very common [@Badar1991A; @Badar1989; @Badar1991B]. In contrast, the simulation method presented in this paper follows a three-dimensional approach.
Experimental results on losses in curved multimode fibres along with corresponding predictions are best known for silica fibres with core radii $\rho \approx 50\,\mu$m. Results on multimode plastic fibres are rare. The manufacturer Kuraray has investigated a time dependent drop of the transmission of curved plastic fibres on time scales of several days [@Hara1998]. A dependence of the bending losses on the $S$ parameter was found. This parameter is a characteristic index for the degree of orientation of polystyrene chains along the fibre axis, where fibres are more flexible for larger $S$ values. Consequently, softer and more flexible fibres have been developed by Kuraray (so-called $S$ type fibres). Bending losses of more than 50% in 1mm diameter Bicron BCF-91A fibres have been observed for one turn of 360$^\circ$ with a radius of curvature of 5cm [@Gomes1998]. With the same type of fibres no effect was seen for a radius of 10cm. Calculations on the basis of ray optics for a plastic fibre with $\rho = 0.49\,$mm can be found in [@Badar1991B]. The simulation result on the transmission function in the meridional approximation, $T= 0.35$ at $R_{\it curv}/\rho= 20$, is in good agreement with this two-dimensional calculation. The higher transmission of $T= 0.65$ predicted for this curvature by the full three-dimensional simulation is explained by the fact that skew rays are less sensitive to bending effects. It should be noted that the difference between finite and infinite cladding and oscillatory losses in the transition region have not been investigated in the simulation.
Transmission of Irradiated Fibres
---------------------------------
It is frequently desirable to have fibre detectors very close to beams and targets. This task results in rather demanding specifications. The fibres have to be reasonably immune to radiation or any other ageing effect within the expected period of operation.
Radiation resistance remains an important issue for detectors at high luminosity accelerators. It is influenced by many parameters: dose rate, recovery times, temperature, chemical composition, level of dissolved oxygen and others. Since the effective attenuation length is essentially the only property which is affected by the irradiation, the transmission curves before and after exposure to electromagnetic radiation can be used to study the influence of the deposited energy dose on the light yield. Many data points for various irradiation conditions exist. From this experience one can draw the following simplified conclusions for most modern fibres: after irradiation the attenuation length is reduced significantly compared to the original value, while the scintillation mechanisms themselves are not strongly affected. The attenuation length partially recovers on multiple time scales. The manufacturer Kuraray gives a simple formula for estimating the ratio of attenuation lengths after to before the irradiation as a function of the absorbed dose, $D$ (for $D > 0.1$), in units of krad (1krad $=$ 10Gy) [@Hara1998]: $$\Lambda/\Lambda_0 = 0.80 \pm 0.01 - (0.144 \pm 0.007) \times \log D\ .$$
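The manufacturer's parameterisation is trivial to evaluate; the sketch below uses the central values of the parameters and assumes, as is customary for such dose parameterisations, that the logarithm is to base 10.

```python
import math

def damage_ratio(dose_krad):
    """Kuraray estimate of Lambda/Lambda_0 after irradiation (dose in krad,
    valid for D > 0.1); central values of the quoted parameters."""
    return 0.80 - 0.144 * math.log10(dose_krad)

for dose in (1.0, 10.0, 100.0):          # 10 Gy, 100 Gy and 1 kGy
    print(f"D = {dose:6.1f} krad:  Lambda/Lambda_0 = {damage_ratio(dose):.2f}")
```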
In general, strong dose rate effects are observed and a more accurate description has to include the dynamics of absorption centres. Three types of absorption centres can be formed during irradiation: (1) stable absorption centres leading to permanent damage, (2) radicals (e.g. benzyl radicals of polystyrene) which decay practically at once if oxygen is dissolved in the fibre, and (3) short-lived absorption centres which decay within hours via bi-molecular reactions [@Wick2001]. The short-lived absorption centres in polystyrene mainly absorb red and green light, and it was concluded that these centres are caused by the dopants and not by the base material.
For the quantitative analysis of the radiation induced changes the difference in the absorption coefficients of the irradiated and non-irradiated fibre, $\Delta k(\lambda) = k_{\it after}(\lambda) -
k_{\it before}(\lambda)$, is usually quoted, where $\lambda$ is the wavelength and $k = \Lambda^{-1}$ is the inverse light attenuation length. For many fibres the permanent induced absorption, $\Delta
k_{\it perm}$, that is the damage after the end of the recovery process, rises linearly with the absorbed dose for moderate doses between 0.1 and 6kGy [@Wick2001]. For larger absorbed doses $\Delta k_{\it perm}(D)$ becomes a non-linear function. The annealable part of the absorption, $\Delta k - \Delta k_{\it perm}$, shows a broad maximum in the blue spectral region of many irradiated fibres.
Timing Properties
=================
The timing properties of active fibres are usually defined in terms of a coincidence timing resolution between identical counters. For small counters a timing resolution in the range of $\sigma_{tot} \approx 50
- 100$ps can be achieved. Several effects which depend on the signal amplitude, $A$, contribute to the timing resolution, namely the scintillation process with its decay time $\sigma_{sci}$, the photomultiplier tube with its transit time spread $\sigma_{TTS}$ and the fibre as a light guide with its pulse dispersion $\sigma_{disp}$. The discriminator, which may be either of the leading-edge or the constant-fraction type, and any noise in the electronic circuits also contribute to the timing distribution. They may shift the response time with signal amplitude (“time walk”). Time walk effects can be corrected by various means, either in hardware or in software [@Spieler1982]. Usually, the remaining effect is parameterised by the time spread $\sigma_{\it elec}$. A common way of summarising the different contributions to the timing resolution is the following: $$\sigma= \sqrt{2} \sqrt{\frac{\sigma_{\it sci}^2}{A} +
\frac{\sigma_{\it TTS}^2}{A} + \frac{(L_F \sigma_{\it disp})^2}{A} +
\sigma_{\it elec}^2}\ ,$$ where the factor $\sqrt2$ assumes two identical counters for start and stop time, whose timing resolutions add up quadratically: $\sigma^2 =
\sigma^2_{\it start} + \sigma^2_{\it stop}$.
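As a numerical illustration of the amplitude dependence, the expression above can be evaluated for a set of assumed contributions. The values used below (an effective single-photon scintillation spread of 0.5 ns, a transit time spread of 0.1 ns, a dispersion of 0.05 ns per metre over 1 m of fibre and an electronic term of 20 ps) are not taken from the text; they merely show how the resolution approaches the quoted 50–100 ps range only at large signal amplitudes.

```python
import math

def timing_resolution(A, sigma_sci=0.5, sigma_tts=0.1, sigma_disp=0.05,
                      L=1.0, sigma_elec=0.02):
    """Coincidence resolution (ns) of two identical counters:
    sigma = sqrt(2) * sqrt((sci^2 + tts^2 + (L*disp)^2)/A + elec^2),
    with the signal amplitude A in photoelectrons."""
    return math.sqrt(2.0) * math.sqrt(
        (sigma_sci**2 + sigma_tts**2 + (L * sigma_disp)**2) / A + sigma_elec**2)

for A in (25, 100, 400):
    print(f"A = {A:3d} p.e.:  sigma = {1000.0 * timing_resolution(A):.0f} ps")
```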
Statistical Fluctuations
------------------------
Obviously, the signal amplitude depends on the number of photoelectrons, $N$, appearing during some integration time, $T$. This number fluctuates from one pulse to another. The mean number of photoelectrons per pulse, $\bar{N}$, and its standard deviation, $\sigma_N$, are characteristic of the photoelectron statistics. As the number of photoelectrons increases, a larger number of time intervals is sampled. For a precise quantitative description of the distribution of $N$ one has to use quantum theory, where the number of photoelectrons becomes an operator. However, in a semiclassical model the probability for observing $N$ photoelectrons over a time interval $T$ is given by a Poissonian distribution $P(\bar{N}, N)$. For larger numbers of photoelectrons the distribution approaches a Gaussian, so that the signal-amplitude dependent contributions should vary with $1/\sqrt{\bar{N}}$.
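A quick numerical check of this scaling, assuming purely Poissonian photoelectron statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
for mean_npe in (5, 25, 100):
    n = rng.poisson(mean_npe, 100_000)        # semiclassical photoelectron statistics
    print(f"N_bar = {mean_npe:3d}:  sigma_N/N_bar = {n.std() / n.mean():.3f}   "
          f"1/sqrt(N_bar) = {1.0 / np.sqrt(mean_npe):.3f}")
```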
In active fibres the addition of fluorescent dopants can sharply reduce the decay times of some scintillating base materials by Förster transitions, which couple base and fluorescent dye in extremely short times. The base material polyvinyltoluene can be quenched with benzophenone, for example, to reach a 220ps pulse width. The faster timing comes at the expense of light yield, however. For most plastic fibres the short time behaviour can be described phenomenologically by a single decay constant, $\sigma_{sci}$, usually of the order of a few nanoseconds. The decay times in wavelength-shifting fibres are substantially longer than the scintillator decay constants. Typical wavelength-shifting molecules have decay times of $7-12$ns, dominating the time characteristics of the entire detector. For critical timing situations wavelength-shifting fibres are usually avoided.
The contribution of the photomultiplier tube to the time width of the observed output pulse is determined exclusively by the electron transit time spread, $\sigma_{TTS}$, where the transit time is the time difference between photoemission at the cathode and the arrival of the subsequent electric signal at the anode. Modern compact photomultiplier tubes like the Hamamatsu R-5900 reach transit time spreads of $\sigma_{TTS} \approx 100$ps with anode pulse rise times of $600$ps for single photoelectrons. The timing performance of photomultiplier tubes is an active area of development and it is assumed that even faster tubes will be available. In many cases, the transit time spread of conventional 1$\frac{1}{8}$ inch photomultiplier tubes is well described by a $1/\sqrt{\bar{N}}$ dependence. Sometimes, other dependences of $\sigma_{TTS}$ on $\bar{N}$ are seen. Ref. [@Kume1986] gives an example of a $\sigma_{TTS} \propto 1/\bar{N}^{0.4}$ dependence for a specific photomultiplier tube.
Light Pulse Dispersion
----------------------
A pulse of light, consisting of several photons propagating along a light guide, broadens in time. The chromatic dispersion is due to the spectral width, $\Delta\lambda$, of the emission band. It is a combination of material dispersion and waveguide dispersion. If the core refractive index is explicitly dependent on the wavelength, $n(\lambda)$, photons of different wavelengths have different propagation velocities along the same path, called material dispersion. The broadening of a pulse travelling along a fibre is given by $\Delta \tau = L_F/c_{\it core} \left( \lambda^2
d^2n/d\lambda^2 \right) \Delta\lambda/\lambda$, where $\Delta \lambda$ is the spectral width of the emission peaks of scintillating or wavelength-shifting fibres [@Ghatak1998]. The full widths at half maximum (FWHM) of the emission bands in most fibre polymers (e.g. polystyrene) are approximately $40-50$nm; the material dispersion is of the order of a few ns$/$nm per kilometre of fibre length and is almost negligible for multimode fibres.
Because of the linear dependence of the transit time on the path length the transit time is simply given by $\tau= L_F\,
P(\theta)/c_{\it core}$, where $c_{\it core}$ is the speed of light in the fibre core and $L_F$ is the total axial length of the fibre. The simulation results on the transit time are shown in figure \[fig:timing\]. The FWHM of the pulses in the time spectrum are presented for four different fibre lengths. The resulting dispersion has to be compared with the time dispersion in the meridional approximation which is simply the difference between the shortest transit time $\tau_S = \tau(\theta=0)$ and the longest transit time $\tau_L = \tau(\theta=\theta_{\it crit})$: $\Delta \tau
\equiv \tau_L - \tau_S = L_F/c_{\it core}\ (\sec{\theta_{\it
crit}}-1)$. For “standard” fibres this dispersion evaluates to 197ps for 0.5m, 393ps for 1m, 787ps for 2m and 1181ps for 3m. These numbers are in good agreement with the simulation, although there are tails associated with the propagation of skew rays. Using average attenuation parameters as discussed in section 4, the fraction of photons arriving later than $\tau_L$ decreases from 37.9% for a 0.5m fibre to 32% for a 3m fibre due to the stronger attenuation of the skew rays in the tail. Because of the inter-modal dispersion the pulse broadening in multimode fibres is quite significant, and for longer fibres ($L_F \ge$ 1m) the light dispersion due to path length variations, $\sigma_{path}$, dominates the timing resolution.
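The quoted numbers follow directly from the expression for $\Delta\tau$; the short sketch below evaluates it for the four fibre lengths, assuming a polystyrene core ($n_{core}=1.60$) and an acrylic cladding ($n_{clad}=1.49$) as illustrative “standard” fibre values (these indices are an assumption of the sketch, not a statement of the simulated fibre parameters, but they reproduce the values above to within rounding):

```python
C_VACUUM = 299_792_458.0  # m/s

def meridional_dispersion(L_F, n_core=1.60, n_clad=1.49):
    """Transit-time spread tau_L - tau_S (s) in the meridional approximation."""
    c_core = C_VACUUM / n_core
    sec_theta_crit = n_core / n_clad   # 1/cos(theta_crit), theta_crit = acos(n_clad/n_core)
    return L_F / c_core * (sec_theta_crit - 1.0)

for L in (0.5, 1.0, 2.0, 3.0):
    print(f"L_F = {L:3.1f} m  ->  Delta_tau = {meridional_dispersion(L)*1e12:5.0f} ps")
```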
In the meridional approximation and assuming an isotropic emission, $dN/d\Omega = 1/(4\pi)$, the probability for a photoelectron to arrive before a time $t$ can be derived. The normalised transit time distribution in this approximation is $f(t) = \tau_S \tau_L/(\Delta\tau\, t^2)$, leading to the probability $$p(t) = \int_{\tau_S}^{t} f(t^\prime)\ dt^\prime = \frac{\tau_S \tau_L}{\Delta\tau} \left( \frac{1}{\tau_S} - \frac{1}{t} \right)\ .$$ Then, the average transit times $\langle t \rangle = \int_0^1 t(p)\, dp$ and $\langle t^2 \rangle = \int_0^1 t^2(p)\, dp$ can be calculated, resulting in a time dispersion due to path length differences: $$\sigma_{path}^2 = \langle t^2 \rangle - \langle t \rangle^2 = \tau_S \tau_L - \left( \frac{\tau_S \tau_L}{\Delta \tau} \right)^2 \ln^2{\frac{\tau_L}{\tau_S}}\ .$$ Since $\tau_S \propto L_F$ and $\tau_L \propto L_F$, this expression reproduces the linear dependence of the time dispersion on the distance between the emission point and the photon sensor measured along the fibre axis.
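A direct numerical evaluation of this closed-form expression, again with the illustrative core and cladding indices assumed above, gives the corresponding r.m.s. spreads for the four fibre lengths:

```python
import math

C_VACUUM = 299_792_458.0  # m/s

def sigma_path(L_F, n_core=1.60, n_clad=1.49):
    """r.m.s. transit-time spread (s) from the closed-form expression above."""
    c_core = C_VACUUM / n_core
    tau_S = L_F / c_core                 # shortest (axial) transit time
    tau_L = tau_S * n_core / n_clad      # longest meridional transit time
    dtau = tau_L - tau_S
    return math.sqrt(tau_S * tau_L
                     - (tau_S * tau_L / dtau * math.log(tau_L / tau_S)) ** 2)

for L in (0.5, 1.0, 2.0, 3.0):
    print(f"L_F = {L:3.1f} m  ->  sigma_path = {sigma_path(L)*1e12:5.0f} ps")
```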
In order to reduce noise from dark counts the threshold of a discriminator used with a time-to-digital converter is usually larger than the minimum voltage corresponding to one photoelectron. Assuming that the time measurement requires a fixed number of photoelectrons, $j$, and that $N$ photoelectrons are produced in a given event, simple probabilistic considerations provide the time distribution of photoelectron $j$: $$P(p) = \frac{N!}{(N-j)!\, (j-1)!} (1-p(t))^{N-j}\, p(t)^{j-1}\ .$$ Calculating the time dispersion in the same way as for the single photoelectron case involves Gauss’s hypergeometric function, $_2F_1$, which arises frequently in physical problems, $$_2F_1(1, j, N+1, \Delta \tau / \tau_L) =
\frac{\Gamma(N+1)}{\Gamma(j)\, \Gamma(N+1-j)} \int_0^1
\frac{1}{1 - p \Delta \tau / \tau_L} (1-p)^{N-j}\,
p^{j-1}\, dp\ ,$$ where $\Gamma$ is Euler’s Gamma function. Obviously, the integral is dependent on $j$, and so the statistical behaviour of the timing resolution is a function of the threshold. A parameterisation of the integral with $\sigma_{path} \propto L_F/\sqrt{\bar{N}}$ is common. For a large number of photoelectrons the approximation $\sigma_{path}
\propto L_F/\bar{N}$ is better suited.
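The threshold dependence can be made explicit with a small Monte Carlo experiment: draw $N$ arrival times from $f(t)$ by inverse-transform sampling, record the time of the $j$-th photoelectron, and repeat. The sketch below does this for a 2 m fibre with the same assumed indices; the photoelectron number, threshold values and trial count are arbitrary choices, and scintillator decay and photodetector jitter are deliberately ignored:

```python
import math, random

C_VACUUM = 299_792_458.0
N_CORE, N_CLAD = 1.60, 1.49            # assumed 'standard' fibre indices

def jth_pe_time_spread(L_F, N_pe, j, n_trials=20000, rng=random.Random(1)):
    """Monte Carlo r.m.s. spread (s) of the arrival time of the j-th photoelectron."""
    c_core = C_VACUUM / N_CORE
    tau_S = L_F / c_core
    tau_L = tau_S * N_CORE / N_CLAD
    dtau = tau_L - tau_S
    times = []
    for _ in range(n_trials):
        # inverse transform of p(t): t = tau_S / (1 - u * dtau / tau_L)
        arrivals = sorted(tau_S / (1.0 - rng.random() * dtau / tau_L)
                          for _ in range(N_pe))
        times.append(arrivals[j - 1])
    mean = sum(times) / len(times)
    return math.sqrt(sum((t - mean) ** 2 for t in times) / len(times))

for j in (1, 3, 10):
    s = jth_pe_time_spread(L_F=2.0, N_pe=50, j=j)
    print(f"threshold at photoelectron j = {j:2d}:  sigma ~ {s*1e12:5.1f} ps")
```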
The maximum axial angle allowed by the critical angle condition is smaller in fibres than in bulk scintillators by $$\theta_{\it crit}^{bulk}/\theta_{\it crit}^{fibre} \approx
\frac{\cos^{-1}1/n_{\it core}}{\cos^{-1}n_{\it clad}/n_{\it core}}
\approx \sqrt{\frac{n_{\it core} - 1}{n_{\it core} - n_{\it clad}}}\ ,$$ which equals $\approx 2.3$ for “standard” fibres. Thus, the time dispersion due to path length variation is significantly reduced in fibre bundles. A pioneering work by Kuhlen and co-workers [@Kuhlen1991] has shown that the timing resolution achieved with a scintillating fibre detector is about a factor two better than the timing resolution achieved with a geometrically identical bulk scintillator.
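For reference, the ratio can be evaluated directly; with the assumed indices $n_{core}=1.60$ and $n_{clad}=1.49$ the exact expression gives about 2.4 and the square-root approximation about 2.3, consistent with the value quoted above:

```python
import math

def crit_angle_ratio(n_core=1.60, n_clad=1.49):
    """Bulk-to-fibre ratio of maximum axial angles: exact and approximate forms."""
    exact = math.acos(1.0 / n_core) / math.acos(n_clad / n_core)
    approx = math.sqrt((n_core - 1.0) / (n_core - n_clad))
    return exact, approx

print("exact = %.2f, approximate = %.2f" % crit_angle_ratio())
```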
Conclusions
===========
Since the first demonstration of a scintillating fibre detector in the early 1980s, particle detection and read-out techniques using active fibres have become mainstream. Developments were mainly based on step-index fibres with polystyrene cores. The plastic scintillators in active fibres are composites of more than one type of fluorescent molecules containing aromatic rings. The energy transfer mechanisms in these binary or even tertiary mixtures are complex, involving radiationless internal conversions and overlapping absorption and emission bands.
Nowadays, active optical fibres are an integral part of hadron calorimetry. In addition, tracking detectors comprising thousands of scintillating fibres are frequently built into large detector systems. Wavelength-shifting fibres are found in several modern particle physics experiments around the world. They allow for a read-out with a very high level of hermeticity. Recently, the ongoing interest concentrated on the development of low-mass particle detectors with high timing and spatial resolutions.
Because of the relatively low quantum efficiency of the scintillator and the low trapping efficiencies in fibres, the light yield at the end of a long fibre is typically small. Thus, a detailed understanding of the photon propagation characteristics is essential. In this paper, the propagation of photons in straight and curved optical fibres has been reviewed. The geometrical conditions have been illustrated to explain the quantitative difference between meridional and skew rays. The overall transmission through a fibre cannot be described by a simple exponential function of propagation distance. One contribution to this effect is the large spread in optical path lengths between the most meridional and most skew rays.
Since light pulses are often near the noise level of the measurement system, the light yield can get critically low for sharply curved fibres, e.g. as used in calorimeters with a large hermeticity. A Monte Carlo programme has been used to evaluate the loss of photons propagating in fibres curved in a circular path in one plane. The results show that the loss of photons due to the curvature of the fibre is a simple function of the ratio of the radius of curvature to the fibre radius, and the transmission is $T > 90\%$ if the ratio is $R_{\it curv}/\rho > 65$. The simulations also show that for larger ratios this loss takes place in a transition region during which a new distribution of photon angles is established. Photons which survive the transition region propagate without further losses.
It has long been known that long fibres have better timing resolutions than bulk scintillation counters of the same overall dimensions. Fast photon sensors with transit time spreads $\sigma_{TTS} \approx
0.1-0.5\,$ns in conjunction with plastic scintillators with time constants $\sigma_{\it sci} < 2\,$ns allow the measurement of time differences on the level of $\Delta\tau \simeq 100\,$ps. Several effects are responsible for the signal time spread in active fibres. There are variations in the response time of the scintillator, which include time variations in the energy transfer and the finite decay time of the fluorescent dyes. Effects due to the light detection include variations in the transit time from the photocathode to the first dynode of photomultiplier tubes. Another group of limitations is due to the electronic circuits used to process the signal. One of the main contributions to the finite timing resolution comes from the time dispersion due to path length variation in the fibre. The simulation has been used to investigate the dispersion of transit times of photons propagating in straight fibres. For fibre lengths between 0.5 and 3m, approximately two thirds of the photons arrive within the spread of transit times which would be expected from the use of the simple meridional approximation and the refractive index of a “standard” fibre core. The remainder of the photons arrive in a tail at later times due to their helical paths in the fibre. The fraction of photons in the tail of the distribution decreases only slowly with increasing fibre length and will depend on the attenuation parameters of the fibre.
In conclusion, the timing properties of today’s commercially available fibres and photomultiplier tubes allow resolutions in the sub-nanosecond region and the small detector dimensions make them very attractive for experiments at present and future high luminosity accelerators.
I wish to express my thanks to J.H. Cobb, who initiated my work on optical fibres during my years at Oxford University.
[53]{}
R. Mussa, M. Onorato, N. Pastrone, D. Bettoni, R. Calabrese, B. Camanzi, and E. Luppi, “Development of a cylindrical scintillating fiber tracker for experiment [E835]{} at [FNAL]{},” Nucl. Instr. and Meth. in Phys. Res. [ **A360,**]{} 13–16 (1995).
P. Annis [*et al.*]{}, “The [CHORUS]{} scintillating fiber tracker and opto-electronic readout system,” Nucl. Instr. and Meth. in Phys. Res. [ **A412,**]{} 19–37 (1998).
A. Suzuki [*et al.*]{}, “Design, construction, and operation of SciFi tracking detector for [K2K]{} experiment,” Nucl. Instr. and Meth. in Phys. Res. [ **A453,**]{} 165–176 (2000).
, “A general purpose $pp$ experiment at the [LHC]{},” Technical proposal, CERN/LHCC/94-43, CERN (1994) .
S. Horikawa [*et al.*]{}, “Time resolution of a scintillating fiber detector,” Nucl. Instr. and Meth. in Phys. Res. [**A431,**]{} 177–184 (1999).
S. Sedykh [*et al.*]{}, “Electromagnetic calorimeters for the [BNL]{} muon (g-2) experiment,” Nucl. Instr. and Meth. in Phys. Res. [**A455,**]{} 346–360 (2000).
A. Antonelli [*et al.*]{}, “Measurements of light yield, attenuation length and time response of long samples of “blue” scintillating fibers,” Nucl. Instr. and Meth. in Phys. Res. [**A370,**]{} 367–371 (1996).
, “The [MINOS]{} Detectors,” Technical Design Report, NuMI-L-337, Fermilab (1998) .
, “A large hadron collider beauty experiment for precision measurements of [CP]{} violation and rare decays,” Technical proposal, CERN/LHCC/98-04, CERN (1998) .
, “The Hadron Calorimeter,” Technical Design Report, CERN/LHCC 97-31, [CERN]{} (1997) .
Y. Giomataris, P. Rebourgeard, J. P. Robert, and G. Charpak, “[MICROMEGAS:]{} a high-granularity position-sensitive gaseous detector for high particle-flux environments,” Nucl. Instr. and Meth. in Phys. Res. [**A376,**]{} 29–35 (1996).
F. Sauli, “[GEM:]{} a new concept for electron amplification in gas detectors,” Nucl. Instr. and Meth. in Phys. Res. [**A386,**]{} 531–534 (1997).
E. Snitzer, “Cylindrical dielectric waveguide modes,” J. Opt. Soc. Am. [ **51,**]{} 491–498 (1961).
N. S. Kapany, J. J. Burke, and C. C. Shaw, “Fiber Optics. [X]{}. [E]{}vanescent Boundary Wave Propagation,” J. Opt. Soc. Am. [**53,**]{} 929–935 (1963).
C. Winkler, J. D. Love, and A. K. Ghatak, “Loss calculation in bent multimode optical waveguides,” Opt. Quantum Electron. [**11,**]{} 173–183 (1979).
N. S. Kapany, “Fiber Optics. [I]{}. [O]{}ptical properties of certain dielectric cylinders,” J. Opt. Soc. Am. [**47,**]{} 413–422 (1957).
N. S. Kapany, [*Fibre optics: principles and applications*]{} (Academic Press, London and New York, 1967).
W. B. Allan, [*Fibre optics: theory and practice*]{}, [*Optical Physics and Engineering*]{} (Plenum Press, London and New York, 1973).
A. Ghatak and K. Thyagarajan, [*Introduction to fiber optics*]{} (Cambridge University Press, Cambridge, 1998).
T. O. White, “Scintillating fibres,” Nucl. Instr. and Meth. in Phys. Res. [**A273,**]{} 820–825 (1988).
H. Leutz, “Scintillating Fibres,” Nucl. Instr. and Meth. in Phys. Res. [ **A364,**]{} 422–448 (1995).
J. B. Birks, [*The theory and practice of scintillation counting*]{} (Pergamon Press, Oxford, 1964).
K. Kuroda, D. Sillou, and F. Takeutchi, “New type of position sensitive photomultiplier,” Rev. Sci. Instrum. [**52,**]{} 337–346 (1981).
, “Fast readout of scintillating fiber arrays using position-sensitive photomultipliers,” Nucl. Instr. and Meth. in Phys. Res. [**A357,**]{} 78–86 (1995).
, “Scintillating fiber hodoscopes using position-sensitive photomultipliers,” Nucl. Instr. and Meth. in Phys. Res. [**A372,**]{} 63–69 (1996).
V. Agoritsas [*et al.*]{}, “Read-out of scintillating fibres using a weak cross-talk position-sensitive photomultiplier,” Nucl. Instr. and Meth. in Phys. Res. [**A406,**]{} 393–402 (1998).
, Hamamatsu Photonics, Hertfordshire, United Kingdom, 2000, (catalog TPMO0001E06).
Y. Yoshizawa and J. Takeuchi, “The latest vacuum photodetector,” Nucl. Instr. and Meth. in Phys. Res. [**A387,**]{} 33–37 (1997).
M. Enkelmann, U. Werthenbach, G. Zech, and T. Zeuner, “An optical readout for a fiber tracker,” Nucl. Instr. and Meth. in Phys. Res. [**A412,**]{} 216–222 (1998).
M. Ambrogiani, W. Baldini, D. Bettoni, R. Calabrese, E. Luppi, R. Mussa, and G. Stancari, “Results from the [E835]{} scintillating fiber detector,” Nucl. Instr. and Meth. in Phys. Res. [**A419,**]{} 632–636 (1998).
J. Bähr, H. Bärwolff, V. Kantserov, G. Kell, and R. Nahnhauer, “Investigation of silicon avalanche photodiodes for use in scintillating fiber trackers,” Nucl. Instr. and Meth. in Phys. Res. [**A442,**]{} 203–208 (2000), 2nd Conference on New Developments in Photodetection, Beaune, France, June 21–25, 1999.
C. P. Achenbach and J. H. Cobb, “Computational studies of light acceptance and propagation in straight and curved multimodal fibres,” J. Opt. A: Pure Appl. Opt. [**5,**]{} 239–249 (2003).
W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, [ *Numerical recipes in Fortran77: the art of scientific computing*]{}, 2nd ed. (Cambridge University Press, Cambridge, 1992), Vol. 1 of Fortran Numerical Recipes.
R. J. Potter, “Transmission properties of optical fibers,” J. Opt. Soc. Am. [**51,**]{} 1079–1089 (1961).
N. S. Kapany and D. F. Capellaro, “Fiber Optics. [VII]{}. [I]{}mage Transfer from [L]{}ambertian Emitters,” J. Opt. Soc. Am. [**51,**]{} 23–31 (1961), (appendix: Geometrical optics of straight circular dielectric cylinder).
R. J. Potter, E. Donath, and R. Tynan, “Light-collecting properties of a perfect circular optical fiber,” J. Opt. Soc. Am. [**53,**]{} 256–260 (1963).
K. F. Johnson, “Achieving the theoretical maximum light yield in scintillating fibres through non-uniform doping,” Nucl. Instr. and Meth. in Phys. Res. [**A344,**]{} 432–434 (1994).
M. Kuhlen, M. Moszynski, R. Stroynowski, E. Wicklund, and B. Milliken, “Timing properties of long scintillation counters based on scintillating fibers,” Nucl. Instr. and Meth. in Phys. Res. [**A301,**]{} 223–229 (1991).
C. D’Ambrosio, H. Leutz, and M. Taufer, “Reflection losses in polystyrene fibres,” Nucl. Instr. and Meth. in Phys. Res. [**A306,**]{} 549–556 (1991).
A. J. Davis, P. Hink, W. Binns, J. Epstein, J. Connell, M. Israel, J. Klarmann, V. Vylet, D. Kaplan, and S. Reucroft, “Scintillating optical fiber trajectory detectors,” Nucl. Instr. and Meth. in Phys. Res. [**A276,**]{} 347–358 (1989).
G. Drexlin, V. Eberhard, D. Hunkel, and B. Zeitnitz, “Spectral attenuation length of scintillating fibres,” Nucl. Instr. and Meth. in Phys. Res. [ **A360,**]{} 245–247 (1995).
C. P. Achenbach and J. H. Cobb, “A new airborne detector for atmospheric muons,” In [*Proc. of the 27th Int. Cosmic Ray Conf. (ICRC 2001)*]{}, K.-H. Kampert, G. Hainzelmann, and C. Spiering, eds., pp. 1313–1316 (Copernicus Gesellschaft, Katlenburg-Lindau, 2001).
D. Gloge, “Bending loss in multimode fibers with graded and ungraded core index,” Appl. Opt. [**11,**]{} 2506–2513 (1972).
A. H. Badar, T. S. M. Maclean, H. Ghafoori-Shiraz, and B. K. Gazey, “Bent slab ray theory for power distribution in core and cladding of bent multimode optical fibres,” IEE Proc. J [**138,**]{} 7–12 (1991).
D. Marcuse, “Curvature loss formula for optical fibers,” J. Opt. Soc. Am. [**66,**]{} 216–220 (1976).
W. A. Gambling, H. Matsumura, and C. M. Ragdale, “Curvature and microbending losses in single-mode optical fibres,” Opt. Quantum Electron. [**11,**]{} 43–59 (1979).
A. H. Badar, T. S. M. Maclean, B. K. Gazey, J. F. Miller, and H. Ghafoori-Shiraz, “Radiation from circular bends in multimode and single-mode optical fibres,” IEE Proc. J [**136,**]{} 147–151 (1989).
A. H. Badar and T. S. M. Maclean, “Transition and pure bending losses in multimode and single-mode bent optical fibres,” IEE Proc. J [**138,**]{} 261–268 (1991).
K. Hara, K. Hata, S. Kim, M. Mishina, M. Sano, Y. Seiya, K. Takikawa, M. Tanaka, and K. Yasuoka, “Radiation hardness and mechanical durability of [K]{}uraray optical fibers,” Nucl. Instr. and Meth. in Phys. Res. [**A411,**]{} 31–40 (1998).
A. Gomes, M. David, A. Henriques, and A. Maio, “Comparative study of [WLS]{} fibres for the [ATLAS]{} tile calorimeter,” Nucl. Phys. B (Proc. Suppl.) [ **61,**]{} 106–111 (1998).
K. Wick and T. Zoufal, “Unexpected behaviour of polystyrene-based scintillating fibers during irradiation at low doses and low dose rates,” Nucl. Instr. and Meth. in Phys. Res. [**B185,**]{} 341–345 (2001).
H. Spieler, “Fast timing methods for semiconductor detectors,” IEEE Trans. Nucl. Sci. [**NS-29,**]{} 1142–1157 (1982).
H. Kume, S. Muramatsu, and M. Iida, “Position sensitive photomultiplier tubes for scintillation imaging,” IEEE Trans. Nucl. Sci. [**NS-33,**]{} 359–363 (1986).
[^1]: Present address: Institut f[ü]{}r Kernphysik, Joh. Gutenberg-Universit[ä]{}t Mainz, J J Becher-Weg 45, 55099 Mainz, Germany. Tel.: +49–6131–3925831; fax: +49–6131–3922964. [*E-mail:*]{} [email protected]
[^2]: Scintillation is an example of radioluminescence. Scintillators may be organic and inorganic solids, liquids, and gases.
---
author:
- 'A. Mignano, I. Prandoni, L. Gregorini, P. Parma, H. R. de Ruiter, M. H. Wieringa, G. Vettolani, R. D. Ekers'
bibliography:
- 'atesp\_v4.bib'
date: 'Received -; Accepted -'
title: |
The ATESP 5 GHz radio survey.\
II. Physical properties of the faint radio population [^1]
---
Introduction {#sec:introduction}
============
The faint (sub-mJy) radio population consists of a mixture of different classes of objects. Since the early seventies it has been known that the strongest sources are almost exclusively associated with either active galactic nuclei (AGNs) or giant ellipticals, the latter of which are also known as radio galaxies (99% above 60 mJy, @Windhorst90). More recent work on mJy and sub-mJy sources has revealed that faint sources are also found to be associated with normal elliptical, spiral and star-forming galaxies, with the early type galaxies being the dominant component [@Gruppioni1999; @Georgakakis1999; @Magliocchetti2000; @Prandoni2001b; @Afonso2006], while at $\mu$Jy levels star-forming galaxies prevail (see e.g. @Richards1999).
In spite of the progress made in our understanding of the faint radio population, many questions remain open. For example, the relative fractions of the different types of objects are still quite uncertain, and our knowledge of their dependence on limiting flux density is still incomplete. The reason is, of course, that very little is known about the faint ends of the various luminosity functions, and even less is known about the cosmological evolution of different kinds of objects. This uncertainty is due to the incompleteness of optical identification and spectroscopy, since faint radio sources usually have very faint optical counterparts. Clearly [*very*]{} deep ($I\apprge 25$) optical imaging and spectroscopy, for reasonably large deep radio samples, are critical if one wants to investigate these radio source populations.
Since the radio emission comes from different types of objects an important question is what are the physical processes that trigger this emission. It is natural to assume that in the case of star-forming galaxies the emission traces the history of galaxy formation and subsequent evolution by merging and interaction, while the emission in AGNs will reflect black hole accretion history. To make matters more complicated, both processes may be present at the same time.
Although research in this field proceeds slowly due to very time–consuming spectroscopy, much progress has been made in recent years thanks to strong improvements in the photometric redshift technique. Several multi–colour/multi–object spectroscopy surveys overlapping deep radio fields have recently been undertaken, including the Phoenix Deep Survey [@Hopkins1998; @Georgakakis1999; @Afonso2006] and the Australia Telescope ESO Slice Project (ATESP) survey [@Prandoni2000a; @Prandoni2000b; @Prandoni2001b]. In other cases, deep multi–colour/multi–wavelength surveys have been complemented by deep radio observations (see e.g. the VLA–VIRMOS, @Bondi2003; and the COSMOS, @Schinnerer2006).
Multi–frequency radio observations are also important in measuring the radio spectral index, which may help to constrain the origin of the radio emission in the faint radio sources. This approach is especially meaningful when high resolution radio images are available and radio source structures can be inferred. However, multi–frequency radio information is available for very few, and small, sub-mJy radio samples.
The largest sample with multi–frequency radio coverage available so far is a complete sample of 131 radio sources with $S>0.4$ mJy, extracted from a square degree region observed at both 1.4 and 5 GHz as part of the ATESP radio survey [@Prandoni2000a; @Prandoni2000b; @Prandoni2006].
The $1.4-5$ GHz radio spectral index analysis of the ATESP radio sources was presented in the first paper of this series (@Prandoni2006, hereafter Paper I). We found a flattening of the radio spectra with decreasing radio flux density. At mJy levels most sources have steep spectra ($\alpha \sim -0.7$, assuming $S\sim \nu^{\alpha}$), typical of synchrotron radiation, while at sub-mJy flux densities a composite population is present, with up to $\sim 60\%$ of the sources showing flat ($\alpha > -0.5$) spectra and a significant fraction ($\sim 30\%$) of inverted-spectrum ($\alpha>0$) sources. This flattening at sub-mJy fluxes confirms previous results based on smaller samples (@Donnelly1987 [@Gruppioni1997; @Ciliegi2003]). Flat spectra in radio sources usually indicate the presence of a self-absorbed nuclear core, but they can also be produced on larger scales by thermal emission from stars.
It is possible to combine the spectral index information with other observational properties and infer the nature of the faint radio population. This is especially important with respect to the class of flat/inverted–spectrum sources as it permits us to study the physical processes that trigger the radio emission in those sources. This kind of analysis needs information about the redshifts and types of the galaxies hosting the radio sources. A detailed radio/optical study of the sample above is possible, thanks to the extensive optical/infrared coverage mostly obtained in the ESO *Deep Public Survey* (DPS, @Mignano2007 [@Olsen2006]).
We give a brief discussion of all the data collected so far in Sect. \[sec:datacoverage\], followed by a more detailed analysis of the DPS optical data in Sect. \[sec:dpsanalysis\], where we derive the UBVRI colour catalogue and photometric redshifts for the DPS galaxies in the region covered by the ATESP survey, assessing the reliability of the photometric redshifts themselves. In Sects. \[sec:optid\] and \[sec:radiozphot\], respectively, we use the DPS UBVRIJK optical data to identify the ATESP radio sources and to derive photometric redshifts.
A radio/optical analysis of the optically identified radio sources is presented in Sect. \[sec:comp\], while in Sect. \[sec:nature\] we discuss the nature of the mJy and sub–mJy population on the basis of all the radio and optical data available to the ATESP sample. The main results are briefly summarised in Sect. \[sec:summary\].\
Throughout this paper we use the $\Lambda$CDM model, with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$.
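For later reference, radio luminosities under this cosmology can be obtained from observed flux densities with a short helper such as the one below; the use of astropy and the standard $(1+z)^{-(1+\alpha)}$ k-correction are assumptions of this sketch and are not meant to reproduce the exact procedure adopted in the paper.

```python
import math
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # the cosmology adopted above

def radio_power(flux_mjy, z, alpha=-0.7):
    """Rest-frame radio power (W/Hz) for a flux density in mJy at redshift z."""
    d_l = cosmo.luminosity_distance(z).to(u.m)
    s_nu = (flux_mjy * u.mJy).to(u.W / u.m**2 / u.Hz)
    power = 4 * math.pi * d_l**2 * s_nu * (1 + z) ** (-(1 + alpha))
    return power.to(u.W / u.Hz).value

print(f"P(0.5 mJy, z = 0.5) ~ {radio_power(0.5, 0.5):.2e} W/Hz")
```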
Radio and optical data {#sec:datacoverage}
======================
The ATESP 1.4 and 5 GHz surveys {#sec:atesp}
-------------------------------
As discussed in Paper I, the Australia Telescope Compact Array (ATCA) was used to image, at 5 GHz, part of the $26^\circ\times 1^\circ$ strip of sky previously covered by the 1.4 GHz sub-mJy ATESP survey (@Prandoni2000a [@Prandoni2000b]). In the $2\times 0.5$ sq. deg. area observed at both 1.4 and 5 GHz a total of 131 distinct radio sources are catalogued above a 6$\sigma$-threshold ($S>0.4-0.5$ mJy) at either 1.4 or 5 GHz (see Table 4 of Paper I). In particular we have 89 sources that appear in both the 1.4 and 5 GHz catalogues (@Prandoni2000b and Paper I), while the remaining 42 sources are catalogued only at one radio frequency: 20 sources at 1.4 GHz and 22 sources at 5 GHz. For the sake of the spectral index analysis (see Paper I), we searched for $3\sigma$ ($S\geq 0.2$ mJy) counterparts for these sources at the other radio frequency by directly inspecting the (1.4 or 5 GHz) ATESP radio mosaics. As a result $\geq 3\sigma$ (1.4 or 5 GHz) flux measurements were provided for 29 additional sources (12 catalogued at 1.4 GHz and 17 at 5 GHz), while for 13 (8 catalogued at 1.4 GHz and 5 at 5 GHz) sources (1.4 or 5GHz) $3\sigma$ upper limits were estimated. Among the 131 sources catalogued at 1.4 and/or 5 GHz there are three multiple sources: one is catalogued as triple at 1.4 GHz and as double at 5 GHz and another source is catalogued as double at 1.4 GHz and as a single non–Gaussian (extended) source at 5 GHz.
The optical/infrared DPS survey {#sec:dps}
-------------------------------
The $2\times 0.5$ sq. deg. area imaged at both 1.4 and 5 GHz as part of the ATESP survey overlaps entirely with one sub-region (namely the DEEP1 sub-region, see below) of the ESO DPS survey. The DPS is a multi–colour survey consisting of both optical and near–infrared observations. The DPS was carried out in the optical ($U,B,V,R,I$), using the WFI (Wide Field Imager) camera mounted at the 2.2m ESO telescope, and in the NIR ($J,K_s$), using the SOFI camera mounted at the ESO NTT telescope. For a detailed description of the UBVRIJK filters used for the DPS we refer to @Mignano2007 and @Olsen2006.
The optical (UBVRI) observations cover three distinct $2\times 0.5$sq. deg. regions of sky (named DEEP1, DEEP2, DEEP3). Each of the three regions is covered by four $0.5\deg\times 0.5\deg$ WFI pointings (a, b, c, d). Typical depths of the optical observations are $U_{AB}\sim 25.7$, $B_{AB}\sim 25.5$, $V_{AB}\sim 25.2$, $R_{AB}\sim
24.8$, $I_{AB}\sim 24.1$ [@Mignano2007].
The infrared DPS comprises two strategies: shallow $K_s$–band ($K_{s\,AB}\leq 21.3$) contiguous coverage of about half the WFI fields, complemented by deeper $J$– and $K_s$–band ($J_{AB}\leq 23.4$ and $K_{s\,AB}\leq 22.7$) contiguous coverage ($4\times 4$ SOFI pointings) of the central part of the WFI fields observed in the shallow strategy [@Olsen2006]. In particular for region DEEP1, the one of interest for this work, infrared coverage was proposed for WFI fields DEEP1a and DEEP1b in shallow strategy ($7\times 7$ SOFI pointings), and for the central part of them in deep strategy ($4\times 4$ SOFI pointings).
There are some gaps in the optical/NIR imaging of the three fields of the region DEEP1 (see Table \[tab:dps\_completeness\] for the summary of the observations). Since NIR coverage of each single WFI field is obtained with several contiguous SOFI images, the seeing and the limiting magnitude values reported in the table for $J$ and $K_s$ bands are an average ($\pm$ standard deviation) of all the SOFI images contributing to the WFI field.
Figure \[fig:completDPS\] shows the distribution of the infrared SOFI pointings over the WFI fields DEEP1a (top) and DEEP1b (bottom) for the two available infrared bands (J and K). $K-$band coverage is shown in both strategies: shallow and deep imaging (left and middle panels). $J-$band coverage (right panels) was obtained only for the deep strategy. The infrared frames are represented by the small numbered squares that overlap the corresponding optical WFI frames (big squares).
From Table \[tab:dps\_completeness\] and Fig. \[fig:completDPS\] it is clear that DEEP1a imaging is complete in the optical U, B, and R pass-bands, while no imaging is available in the V-band. The $K_s$–band imaging, on the other hand, is 70% complete in the shallow strategy and 75% complete in the deep strategy, while 100% completeness is reached by the $J$–band imaging.
The optical imaging of DEEP1b is complete, except for the $U$–band imaging, which is slightly shallower than planned ($m_{lim} \sim 24.6$). The deep infrared imaging has a good coverage in both filters ($>$80%), while the shallow $K_s$–band imaging covers only about 55% of the area.
It is interesting to note that, even if not complete, the infrared coverage of DEEP1a and b is distributed in such a way that many of the ATESP radio sources (filled black points in Fig. \[fig:completDPS\]) in the two fields (27 and 26 radio sources for DEEP1a and b respectively) have infrared information. In particular, 75% ($40/53$) of the sources have shallow $K_s$–band coverage, while deep $J$– and $K_s$–band infrared data are available for 100% ($20/20$) of the radio sources located in the central part of the fields.
DEEP1c was only observed in the V band (only down to $m_{lim}\sim 25$) and R band, while no observations are available for field DEEP1d.
Reduced images and single pass-band source catalogues extracted from both the optical and infrared DPS are described in detail in [@Mignano2007] and [@Olsen2006], respectively, and are publicly available at the Centre de Données astronomiques de Strasbourg (CDS).
Field Pass-band Seeing ($\prime\prime$) m$_{lim}$
-------- ----------------- ------------------------- ----------------
DEEP1a $U$ 1.37 25.26
$B$ 1.37 25.85
$R$ 0.87 25.74
$I$ 0.86 23.76
$J$ $0.676\pm 0.094$ $22.17\pm0.23$
$K_{s,deep}$ $0.712\pm0.090 $ $20.07\pm0.23$
$K_{s,shallow}$ $1.275\pm0.066 $ $19.57\pm0.16$
DEEP1b $U$ 1.17 24.62
$B$ 1.43 25.66
$V$ 1.31 25.35
$R$ 1.29 25.32
$I$ 0.97 24.19
$J$ $0.073\pm0.238$ $22.14\pm0.23$
$K_{s,deep}$ $0.911\pm0.208$ $20.24\pm0.31$
$K_{s,shallow}$ $0.890\pm0.198$ $19.38\pm0.29$
DEEP1c $V$ 1.19 25.03
$R$ 0.98 25.43
: DPS optical and infrared data status and main attributes. The table gives in Col. 1 the WFI field, in Col. 2 the pass-band, in Col. 3 the seeing, and in Col. 4 the limiting magnitude ($5\sigma$, 2aperture, Vega system). []{data-label="tab:dps_completeness"}
Field Pass-band Seeing ($\prime\prime$) m$_{lim}$
-------- ----------- ------------------------- ----------- -- --
DEEP1a $V$ 0.98 25.76
DEEP1c $U$ 1.09 25.07
$B$ 1.27 26.56
$I$ 1.21 24.83
: Main attributes of additional optical imaging obtained for fields DEEP1a and DEEP1c. Columns as in table \[tab:dps\_completeness\][]{data-label="tab:addimaging"}
     
Additional optical imaging {#sec:new-wfi}
--------------------------
Since the DPS was not completed, we have undertaken new WFI optical observations in order to collect the missing data necessary to have full colour information for region DEEP1, and hence for our ATESP radio sources. In this framework we have obtained V–band imaging for DEEP1a, and U-, B-, I-band imaging for DEEP1c. All these new observations were taken in collaboration with the group that developed the Garching-Bonn Deep Survey (GaBoDS) data reduction pipeline (@Schirmer2003 [@Erben2005]) and therefore these new data were reduced through that pipeline. The main attributes for this additional imaging are shown in Table \[tab:addimaging\]. We refer to [@Hildebrandt2006] for a detailed description of the data (both reduced images and single pass-band source catalogues). Our multi–colour analysis of ATESP radio sources can rely on full UBVRI information for DEEP1a, b and c (plus infrared information for most of the sources in DEEP1a and b).
Other optical information {#sec:otherdata}
-------------------------
It is worth mentioning that other optical imaging and/or spectroscopic data are available. The 26 square degree area covered by the ATESP survey was chosen to overlap with the region where [@Vettolani97] made the ESP (*ESO Slice Project*) redshift survey. They performed a photometric and spectroscopic study of all galaxies down to $b_J
\sim$ 19.4. The ESP survey yielded 3342 redshifts (@Vettolani98), to a typical depth of $z=0.1$ and a completeness level of 90%.
In the same region lies the *ESO Imaging Survey* (EIS) Patch A ($\sim 3^{\circ}\times 1^{\circ}$ square degrees, centred at $22^h 40^m$, $-40^{\circ}$), mainly consisting of images in the I-band, out of which a galaxy catalogue ($95\%$ complete to $I=22.5$) was extracted (@Nonino99). This catalogue allowed us to identify $\sim 57\%$ of the 386 ATESP sources present in that region and optical spectroscopy was obtained for a complete magnitude-limited ($I<19$) sub-sample of 70 sources (see @Prandoni2001b). Some VLT/NTT spectroscopy is also available for fainter sources in the same region ($\sim 40$ sources with $19<I<21.5$, Prandoni et al., in prep.). However, the 3 square degree ATESP-EIS sample only overlaps partially with the DEEP1 region, covering the fields DEEP1c and DEEP1d.
This paper mainly focuses on the radio/optical analysis of the 85 ATESP radio sources located in DEEP1a, b and c, for which deep multi–colour optical/NIR information can be exploited. However, whenever considered useful, we include in our discussion any optical data (imaging and/or spectroscopy) available to the ATESP sources located in DEEP1d. Such data may come either from the observations mentioned above, or from the literature.
Multi–colour analysis of DEEP1 DPS data {#sec:dpsanalysis}
=======================================
A general discussion of the DPS optical imaging is provided in [@Mignano2007] and in @Hildebrandt2006, where the global quality of the photometry obtained through the EIS and the GaBoDS pipelines, respectively, is discussed. Here, we focus our attention on region DEEP1, which is the region of interest of this work. A careful analysis of the photometry of the single pass-band images covering region DEEP1 is very important since we will use this data later to estimate photometric redshifts for the ATESP radio sources. Also very important is the recipe followed to produce the optical colour catalogues, since reliable galaxy colours are crucial to get reliable photometric redshifts.
Colour catalogues {#sec:colorcat}
-----------------
We used the available UBVRI images to derive overall optical colour catalogues for DEEP1a, b and c.
To obtain a good quality colour catalogue it is clear that one should use UBVRI images reduced in a consistent way. The optical images available to this work were reduced with different pipelines: the EIS pipeline for the images obtained in the framework of the DPS survey and the GaBoDS pipeline for the images obtained later on (UBI images for DEEP1c and V images for DEEP1a). In order to avoid internal inconsistency, we therefore decided to refer to the EIS reduction for DEEP1b (see @Mignano2007) and to the GaBoDS reduction for both DEEP1a and DEEP1c (see @Hildebrandt2006).
The technique of [*reference imaging*]{} (see below) was adopted in this work to produce the colour catalogue, since it provides the most accurate colour estimates, through the measuring of the source flux within the same area in any of the different pass-band images. This is especially important for extended objects.
We selected as the reference image the best seeing single pass-band image. Such a choice allows us to minimise the effect of very close pairs of objects, which are not resolved due to poor seeing. The $I$–band and R–band images for field DEEP1a have very similar seeing values and the colour catalogue of DEEP1a was extracted by using the $R$–band image as the reference. For DEEP1b and DEEP1c best seeing is measured for I– and R–band images, respectively, and the choice of the reference image was done accordingly.
We ran SExtractor (ver. 2.3, @Bertin1996) in the so–called [*double image mode*]{}: detection and object apertures were defined on the reference image, and isophotal magnitudes were then measured within the same apertures for each detected object on the other pass-band images separately.
The optical colour catalogues were then cross–correlated with single pass-band catalogues extracted from the $J$ and $K_s$ images, whenever available. Since the infrared catalogues overlap, it may happen that the same optical object is identified in more than one infrared catalogue. In such cases, the infrared object with the lowest magnitude error was selected.
We did not include the NIR information in the colour catalogue production from the beginning since a) it is available only for limited sub-regions of DEEP1a and DEEP1b fields and b) the data are taken with a different instrument and telescope (SOFI at the 3.6 m) and reduced through a specific EIS pipeline.
Photometric redshifts {#sec:photometric_z}
---------------------
![$U-B$ vs. $B-V$ colour diagram for stars in field DEEP1b. No correction applied (top), correction $U_{corr}=-0.15$ applied (bottom). Green points refer to DPS stars, black dots to modelled stars.[]{data-label="fig:color_checkUBBV"}](8545fig7.jpg)
![$U-B$ vs. $B-V$ colour diagram for stars in field DEEP1b. No correction applied (top), correction $U_{corr}=-0.15$ applied (bottom). Green points refer to DPS stars, black dots to modelled stars.[]{data-label="fig:color_checkUBBV"}](8545fig8.jpg)
![Optical vs. infrared colour-colour diagram for stars in DEEP1b: $V-R$ vs. $R-J$ (top) and $V-R$ vs. $R-K_s$ (bottom). Green points refer to DPS stars, black dots to modelled stars.[]{data-label="fig:color_checkVRRK"}](8545fig9.jpg)
![Optical vs. infrared colour-colour diagram for stars in DEEP1b: $V-R$ vs. $R-J$ (top) and $V-R$ vs. $R-K_s$ (bottom). Green points refer to DPS stars, black dots to modelled stars.[]{data-label="fig:color_checkVRRK"}](8545fg10.jpg)
The success of photometric redshift estimate routines strongly depends on the accuracy of the photometric calibration in the various pass-bands and on the accuracy of the colour estimation. In @Mignano2007 and @Hildebrandt2006, comparisons between the UBVRI colours of stars in the various regions covered by the DPS, and the ones expected from a theoretical model [@Girardi2005], were presented to check for the presence of possible systematic offsets. Here, we report on the results obtained specifically for the DEEP1 region, which is the one of interest to our radio/optical study. From the colour-colour diagram analysis very good agreement was found between the catalogued star colours and the theoretical expectations, except in the case of the U-band for field DEEP1b, where an offset of $\sim 0.15$ mag is present (see Fig. \[fig:color\_checkUBBV\], top panel). After correcting for this offset, a good overlap between observed and expected colours is obtained (see Fig. \[fig:color\_checkUBBV\], bottom panel).
We have also checked the optical (WFI)–infrared (SOFI) colours of the DPS stars, and no appreciable offset was seen. This is shown in Fig. \[fig:color\_checkVRRK\], where $V-R$ vs. $R-J$ and $R-K_s$ are plotted for DEEP1b, chosen as reference.
We also used any spectroscopic data available from the literature in this region to analyse the impact of both the correction applied in the U-band for DEEP1b and the use of NIR colours (when available) in the determination of galaxy photometric redshifts.
It is important to note that most spectra come from the ESP redshift survey (see Sect. \[sec:otherdata\]), which covers a limited redshift range ($z<0.3$, @Vettolani98) and therefore the present comparison mainly probes the most local galaxies of the DPS.
Photometric redshifts for 88 galaxies with spectroscopy information present in fields DEEP1a, b and c were estimated using the public photometric redshift code [*Hyperz*]{} [@Bolzonella2000], by using both the templates created from the synthetic stellar libraries of @BruzualCharlot1993, hereafter BC, and the empirical ones compiled by [@Coleman1980] to represent the local galaxy population (hereafter CWW). We stress that such galaxies are not necessarily associated to ATESP sources.
From this analysis we found a clear improvement in the photometric redshift determination when correcting for the systematic offset in U-band photometry. A further improvement is obtained when adding the NIR ($J$ and $K$) information (when available) to the $UBVRI$ colour catalogue. The $z_{phot}$ vs. $z_{spec}$ linear fit slope gets closer to unity ($a=0.93\pm 0.05$) and the object distribution around the $z_{phot} = z_{spec}$ line gets narrower. The final $z_{phot} - z_{spec}$ diagram is shown in Fig. \[fig:ESPcheckcNIR\]. Dotted lines indicate the range that contains 95% of the objects (z$_{phot}=$ z$_{spec} \pm$ 0.1). Such a range, albeit rather large, is adequate for this kind of study, where errors in luminosity determinations of $\Delta logL$ of the order of $\la 0.5$ are acceptable. The horizontal error bars are not shown since they are negligible: ESP redshifts are characterised by errors of the order of $\sim 60$ km/s on the measured recession velocity, i.e. $\Delta z \sim
2\cdot 10^{-4}$ (@Vettolani98).
As a final remark, we stress that photometric redshifts shown in Fig. \[fig:ESPcheckcNIR\] were obtained using different template sets for different redshift ranges: galaxies with spectroscopic redshift $<0.1$ were fitted by CWW templates, while objects at z$_{spec}\geq 0.1$ by BC templates. This choice, as expected, turned out to provide the best redshift estimates over the two redshift ranges.
Figure \[fig:zdistr\] shows the galaxy photometric redshift distribution obtained from the optical UBVRI colour catalogues in field DEEP1b, chosen as reference. Most of the galaxies lie at $z<1$, as expected, with a significant number of objects extending up to $z\sim 3$. On the other hand, the excess at $z\sim 5.5$ is mainly due to objects classified by [*Hyperz*]{} as Sc galaxies and is clearly spurious. For such objects the photometric redshift determination clearly fails. Noteworthy are the two narrow peaks at $z\sim 0.7$ and $z\sim 1.5$. The latter is also present in fields DEEP1a and DEEP1c, and most probably indicates a degeneracy in the Hyperz routine, due to the fact that the spectral range covered by UBVRI-bands at $z\geq 1.5$ does not probe the Balmer $4000$ Å break. More interesting is the peak at $z\sim
0.7$, mainly composed of early type galaxies, which is not replicated in DEEP1a and c, possibly indicating the presence of real large scale structure.
![z$_{phot}$ vs. z$_{spec}$ for the 88 galaxies in DEEP1a, b, c fields with spectra available. $U$–band magnitudes of DEEP1b objects are corrected and NIR information is used, when available. The solid line indicates $z_{phot}=z_{spec}$ and the dotted lines indicate the range that contains 95% of the objects. The error bars represent the limits of the confidence intervals at 68%[]{data-label="fig:ESPcheckcNIR"}](8545fg11.jpg)
![Galaxy photometric redshift distribution for field DEEP1b.[]{data-label="fig:zdistr"}](8545fg12.jpg)
Optical identification of the ATESP radio sources {#sec:optid}
=================================================
In the following we present the cross-correlation between the ATESP radio sources in fields DEEP1a, b and c and the multi–colour optical/NIR catalogues described in Sect. \[sec:colorcat\].
In the literature different statistical techniques are used to cross–correlate radio and optical catalogues, from the simplest, distance-only based criterion, which considers as *good* any identification within a certain fixed radio–optical distance, to more sophisticated techniques, like the likelihood ratio criteria, based on the probability that a given source, at a certain distance and with a certain magnitude, is the true optical counterpart of the radio source (e.g. @deRuiter1977 [@Ciliegi2003; @Sullivan2004; @Simpson2006]).
In [@Mignano2007] a preliminary optical identification of the ATESP sources with the DPS catalogues was proposed, based on distance alone. However, while this choice proves to be appropriate for shallower optical databases (see e.g. the ATESP-EIS case, @Prandoni2001b), it is not very reliable when dealing, like in this case, with very deep (and therefore crowded) optical catalogues. Hence, we adopt the *likelihood ratio* technique in the form described by [@Sutherland1992] and [@Ciliegi2003].
The likelihood ratio [*LR*]{} is defined as the ratio between the probability that the source is the correct identification and the corresponding probability that the source is a background, unrelated object. A threshold value $L_{th}$ of the likelihood ratio is assumed, above which a counterpart is considered as a good identification and below which is dismissed as spurious.
The sample of accepted identifications thus consists of all the radio–optical associations that have $LR$ $>$ $L_{th}$. $L_{th}$ was chosen to be the value of $LR$ that maximises the function $(C+R)/2$, where $C$ is the completeness and $R$ the overall reliability of the sample (@deRuiter1977).
Optical identifications {#subsec:idproc}
-----------------------
The ATESP radio sources were identified in the same reference pass-band (R or I) as chosen to derive the colour catalogues of DEEP1a, DEEP1b and DEEP1c.
Before proceeding with the optical identification, the presence of possible systematic offsets between the radio and the optical astrometry was verified. We note that radio positions always refer to 5 GHz catalogue positions, unless the source is catalogued only at 1.4 GHz (i.e. $S_{\rm peak}(5\, $GHz)$<6\sigma$, see Paper I), while optical positions refer to the reference pass-band catalogue.
As shown in [@Mignano2007], where a preliminary analysis was given, the median radio-optical offsets for our sample are $<\Delta$RA$> = -0.213''$ and $<\Delta$Dec$> = -0.073''$. The source radio positions were corrected for these median offsets before proceeding with the radio source optical identification.
In computing the $LR$ value for each optical counterpart, the radio and optical positional uncertainties have to be taken into account. Here, we adopted $1 \sigma$ positional errors appropriate for the ATESP (see @Prandoni2000b) and for the DPS (see @Mignano2007) catalogues. In addition, we need to assume an expected [*a priori*]{} identification rate ($Q$). We adopted $Q= 0.7$, i.e. 70% of the radio sources are assumed to be truly identified down to the limiting magnitude of the optical catalogues. This choice is based on previous radio-optical identification studies undertaken down to similar optical depths (see e.g. @Ciliegi2005 [@Sullivan2004]).
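In schematic form, and adopting the standard Sutherland & Saunders expression $LR = q(m)\,f(r)/n(m)$ from the works cited above, the computation for a single candidate can be sketched as follows (an illustration with toy inputs, not the pipeline actually run on the ATESP–DEEP1 sample):

```python
import math

def likelihood_ratio(d_ra, d_dec, mag, sigma_ra, sigma_dec, q_of_m, n_of_m):
    """Schematic LR = q(m) f(r) / n(m) for one optical candidate.

    d_ra, d_dec         : radio-optical offsets (arcsec)
    sigma_ra, sigma_dec : combined 1-sigma positional errors (arcsec)
    q_of_m, n_of_m      : callables for the expected counterpart magnitude
                          distribution and the background surface density
                          (objects per arcsec^2 per magnitude bin)
    """
    f_r = (math.exp(-0.5 * ((d_ra / sigma_ra) ** 2 + (d_dec / sigma_dec) ** 2))
           / (2.0 * math.pi * sigma_ra * sigma_dec))
    return q_of_m(mag) * f_r / n_of_m(mag)

# Toy inputs: q(m) flat and normalised to Q = 0.7 over ten magnitude bins,
# constant background density, threshold L_th = 0.3.
q_toy = lambda m: 0.7 / 10.0
n_toy = lambda m: 2.0e-4                # objects / arcsec^2 / mag
L_TH = 0.3

lr = likelihood_ratio(0.4, -0.2, 22.5, 0.4, 0.4, q_toy, n_toy)
print(f"LR = {lr:.1f} ->", "accepted" if lr > L_TH else "rejected")
```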
Figure \[fig:LRdistr\] shows the distribution of $LR$ values as a function of radio–optical offsets for the 85 radio sources in DEEP1a, b, and c. As expected, $LR$ decreases going to large radio–optical offsets ($>1''$), and the identifications become less reliable. The horizontal solid line represents the assumed threshold $LR$ value, above which optical counterparts are considered as good identifications. The adopted threshold value, $L_{th}=0.3$, was chosen in agreement with similar works reported in the literature [e.g. @Ciliegi2005]. It is worth noting, however, that most of the sources have $LR$ values $\gg$ 10 (see Fig. \[fig:LRdistr\]), which means that most of the optical identifications have a very high probability of being real. As reported in Table \[tab:LRtabstat\], 60 radio sources in DEEP1a, b and c were identified down to $L_{th}=0.3$ (see Col. 3).
In order to check the robustness of this identification technique and its dependence on the assumed parameters, the likelihood ratio analysis was repeated using different values of $Q$ in the range 0.5–1.0. No substantial difference in the final number of identifications and in the associated reliability was found.
The contamination due to possible spurious identifications with $LR>L_{th}$ was estimated by shifting the coordinates of the radio sources by several random offsets and then repeating the identification procedure. The average contamination rate ($\%_{sp}$) was 7.4%, 6.8% and 6.3% for DEEP1a, DEEP1b and DEEP1c, respectively (see Table \[tab:LRtabstat\], Col. 5).
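A minimal sketch of this randomisation test is given below; the 'identify' argument stands for the full likelihood-ratio matching step and is a hypothetical placeholder, as are the number and size of the shifts.

```python
import random

def spurious_rate(radio_positions, optical_catalogue, identify,
                  n_shifts=20, shift_arcsec=60.0, rng=random.Random(0)):
    """Average chance-identification rate from randomly shifted radio positions.

    `identify(positions, catalogue)` is assumed to return the number of
    LR > L_th matches for a list of (ra, dec) positions given in degrees.
    """
    rates = []
    for _ in range(n_shifts):
        d_ra = rng.uniform(-shift_arcsec, shift_arcsec) / 3600.0
        d_dec = rng.uniform(-shift_arcsec, shift_arcsec) / 3600.0
        shifted = [(ra + d_ra, dec + d_dec) for ra, dec in radio_positions]
        rates.append(identify(shifted, optical_catalogue) / len(radio_positions))
    return sum(rates) / len(rates)
```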
![$LR$ values vs. radio–optical offsets for the 85 radio sources located in DEEP1a, b and c. The horizontal solid line indicate the $LR$ value $L_{th}=0.3$, above which counterparts are considered as good identifications (see text for details).[]{data-label="fig:LRdistr"}](8545fg13.jpg)
Field    N$_{RS}$   N$_{id}^{\geq L_{th}}$   C      $\%_{sp}$   N$_{id}^{add}$   N$_{id}^{tot}$   $\%_{id}$
-------- ---------- ------------------------ ------ ----------- ---------------- ---------------- -----------
DEEP1a   27         16                       98.6   7.4         4                20               74.1
DEEP1b   26         21                       99.1   6.8         1                22               84.6
DEEP1c   32         23                       99.0   6.3         1                24               75.0
Total    85         60                       98.9   6.8         6                66               77.6
: The sample identification statistics. Col. 1 lists the field name, Col. 2 the number of radio sources, Col. 3 the number of objects identified using the $LR$ technique, Col. 4 the completeness of the sample, Col. 5 the average contamination rate, Col. 6 the number of additional identifications, Col. 7 the total number of identifications, and Col. 8 the identification rate.[]{data-label="tab:LRtabstat"}
Additional identifications {#subsec-addid}
--------------------------


The optical counterparts of the ATESP radio sources were all visually inspected on the corresponding optical reference-band images, giving particular attention to the multiple and non–Gaussian radio sources, where radio positions might not precisely coincide with the host galaxy core. From this inspection we decided to include among the identified radio sources three extended radio sources with $LR$ values lower than the threshold: in all cases, the optical counterpart located close to the radio barycenter is very likely to be the host galaxy of the radio source. Two (ATESP5 J225034–401936 and ATESP5 J225426–402442) are classical double radio sources (see Fig. \[fig:extended\], left and middle panels) and one (ATESP5 J225505-401301) has the morphology of a low surface brightness wide angle tail (WAT) source, i.e. an extended radio source located in a cluster (see Fig. \[fig:extended\], right panel). This hypothesis is supported by the fact that ATESP5 J225505-401301 is located in a crowded optical field with several optical galaxies in the field having similar photometric redshifts (see Sect. \[sec:dpsanalysis\] for the derivation of photometric redshifts for the optical sample).
In addition, we checked for any possible additional identification in pass-bands other than the reference one ($R$ or $I$). We found that only one extra identification (source ATESP5 J224827-402515 in DEEP1c field) could be recovered when the reference optical catalogue was extracted from the $I-$band image (see Fig. \[fig:nir\], left panel). This identification was originally missed due to the fact that a larger region around the bright star close to the object was masked in the $R-$band image. This means that in general the reference images were chosen appropriately for our scientific application.
A similar check was performed for the $J-$ and $K_{s}-$band infrared images, available for DEEP1a and b. For two sources (ATESP5 J225511-401513 and ATESP5 J225443-401147), with no optical counterpart in the DEEP1a optical images, a possible counterpart was found within a distance of $2''$ in the infrared $K_{s}-$ or $J-$band images (see Fig. \[fig:nir\], middle and right panels). These objects have extremely red colours ($R-K_{s}>5$), probably caused by either high redshifts or reddening due to dust.
Including the six objects discussed above ($N_{id}^{add}$ in Table \[tab:LRtabstat\]), the final identification sample is composed of 66 objects, corresponding to an identification rate of 77.6% (see last column of Table \[tab:LRtabstat\]). On average the completeness $C$ is 98.9% and the contamination rate is 6.8%. Both these quantities refer to the sample of 60 identifications, statistically defined on the base of the likelihood ratio technique. A summary of the sample identification statistics is given in Table \[tab:LRtabstat\].
A list of all the identified radio sources is given in Table \[tab:LRidentification\]. The six objects included a posteriori (see discussion above) are added at the bottom of the table.
A comparison with other similar radio/optical studies is shown in Table \[tab:LRtabcomparison\]. It is worth noting that the identification rate of our sample is consistent with the ones found in similar radio–optical studies taken from the literature. Of particular interest is the comparison with the identification rates reported for the VVDS–VLA sample [@Ciliegi2005], and for the Phoenix survey [@Sullivan2004], where the radio/optical analysis was performed down to the same optical depth.
It is also interesting to compare the present study with the shallower ATESP–EIS sample, where optical identifications were searched down to $I=22.5$ (see @Prandoni2001b). The identification rate increases from $\sim57$% of the ATESP–EIS to $78$% of the ATESP–DEEP1, demonstrating the need for deep follow–up surveys to properly identify the mJy/sub–mJy radio population.
----------------- ----------- ---------- ------------ ----------- ----------
Survey            S$_{lim}$   N$_{RS}$   Area         I$_{lim}$   %$_{id}$
                  (mJy)                  (sq.degr.)
**ATESP–DEEP1**   **0.4**     **85**     **0.75**     **24.3**    **77.6**
VVDS–VLA          0.08        1054       1            24.5        74.0
Phoenix           0.1         839        3            24.5        79.0
VLA–LH            0.05        63         0.03         24          92.0
ATESP–EIS         0.4         386        3            22.5        57.3
----------------- ----------- ---------- ------------ ----------- ----------
: Identification rate in our sample and in other deep radio fields: VVDS–VLA [@Ciliegi2005], Phoenix survey [@Sullivan2004], VLA–LH [@Ciliegi2003] and ATESP–EIS [@Prandoni2001b]. Col. 1 gives the sample name, Col. 2 the radio flux limit, Col. 3 the number of radio sources present in the sample, Col. 4 the area covered by the radio–optical data, Col. 5 the limiting magnitude ($I$), and Col. 6 the identification rate. []{data-label="tab:LRtabcomparison"}
Photometric redshifts for the identified ATESP radio sources {#sec:radiozphot}
============================================================
The optical spectra of radio sources are not exhaustively represented by the standard “stellar” templates used for normal inactive galaxies (Ellipticals, S0, Spirals, Irregulars, Star-bursts). We therefore added to the standard template spectra provided by [*Hyperz*]{} (BC and CWW) a set of spectral templates derived from the SDSS (Sloan Digital Sky Survey, @York2000) quasar samples. In particular we added templates for: a) blue quasars (QSO, @Hatziminaoglou2000); b) composite red quasars (REDQ, @Richards2003); and c) composite broad absorption lines (BAL) quasars (BALQ, @Reichard2003b). These templates are available at the SDSS web pages.
For each identified ATESP radio source [*Hyperz*]{} was used to provide a possible $z_{phot}$ (and corresponding reduced $\chi ^2$ probability) for each set of templates (CWW, BC, blue, red and BAL quasars). Then, the “best” (highest probability) z$_{phot}$ was selected as the correct one, together with the corresponding spectral type.
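As a purely schematic illustration of this selection step (the numbers and the data structure below are invented, and do not reproduce the actual [*Hyperz*]{} output format), one simply keeps, for each source, the template set giving the highest fit probability:

```python
# Schematic "best template" selection: one (z_phot, probability) pair per template set.
# The values below are invented, for illustration only.
candidates = [
    ("CWW",  0.62, 0.81),
    ("BC",   0.58, 0.93),
    ("QSO",  1.45, 0.12),
    ("REDQ", 1.30, 0.05),
    ("BALQ", 1.22, 0.02),
]
best_set, best_z, best_p = max(candidates, key=lambda c: c[2])
print(f"adopted z_phot = {best_z} (from {best_set} templates, P = {best_p})")
```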
In eleven cases it was not possible to assign a reliable $z_{phot}$ and spectral type to the optical radio source counterpart. Typically these are very faint objects (mag$>$24 in detected bands), or objects with very limited colour information (e.g. detected only in NIR pass-bands), or objects with bad photometry due to a nearby star and/or deblending problems. One of these cases could be recovered thanks to the availability of spectroscopic information (source ATESP J224958-395855).
In summary, it was possible to assign a redshift and a spectral type to 56 of the 66 radio sources identified in DEEP1a, b and c (85%). However, if we restrict our analysis to a magnitude-limited $I<23.5$ complete sample, we get a success rate of 97% (56/58 objects with redshift determination).
The relevant spectral parameters obtained for the 56 radio sources for which a redshift and type could be assigned are reported in Table \[tab:LRidentification\].
Spectral types reported in Table \[tab:LRidentification\] (Col. 13) are defined as in [@Prandoni2001b]:
1. *Early type spectra (ETS)*: ellipticals, early spirals (bulge–dominated Sa);
2. *Late type spectra (LTS)*: late spirals (Sb, Sc, Sd) and irregular Magellanic (Im) galaxies;
3. *SB*: star-burst galaxy spectra (typical of HII regions);
4. *PSB*: post star-burst galaxy spectra (K+A and E+A galaxies);
5. *AGNs*: objects with evident characteristics of either Seyfert 1, Seyfert 2, or quasar spectra (respectively labeled as [*Sy1, Sy2, Q*]{});
For the 14 objects with optical spectroscopy available, we have in general a good match between spectral and photometric redshifts ($\Delta z \la 0.1$, see Sect. \[sec:photometric\_z\]). The exception is ATESP J224803-400513, for which we have a $\Delta z \gg 0.1$. In this case, the obtained photometric redshift ($z_{phot}=1.0$) is much lower than the spectroscopic value of 1.72 (the published value, $z_{spec}=2.33$ by @Prandoni2001b, was over-estimated).
We also find very good agreement between photometric and spectral types. There is only one case (source ATESP J225400-402204) where the photometric type (very old Sa) disagrees with the spectral type (LTS). However, note that passively evolving single bursts (defined as [*Burst*]{} in [*Hyperz*]{} BC templates) can be considered early or late, depending on their age. As a general rule, very old galaxies (age $\apprge 1$ Gyr) are included among the ETS, while very young galaxies (age $\apprle 0.1$ Gyr) are included among the LTS. For intermediate cases (ages between 0.1 and 1 Gyr) the classification is not straightforward from wide-band information on the continuum shape alone: without information on the presence of narrow absorption and/or emission lines, it is difficult to distinguish between LTS, ETS and PSB. For the sake of simplicity we decided to make a sharp separation between LTS and ETS at age $=0.3$ Gyr, with the caveat that among such objects we could have some mis–classifications. The value of 0.3 Gyr was chosen from a comparison between spectral type and [*Burst*]{} age in the few cases where spectroscopy was available. One probably mis–classified object is source ATESP5 J225321-402317 (Burst age 0.18 Gyr), which has a linear size of $\sim 200$ kpc and extended radio morphology, clearly indicating an AGN origin of its radio emission (see Fig. \[fig:si\_plots\], middle panel).
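For reference, the adopted age-based separation of the [*Burst*]{} templates can be summarised by the following sketch (how the boundary value of exactly 0.3 Gyr is treated is our assumption):

```python
# Sharp ETS/LTS separation of the Hyperz "Burst" templates at an age of 0.3 Gyr,
# as described in the text; the >= at the boundary is an assumption.
def burst_class(age_gyr):
    return "ETS" if age_gyr >= 0.3 else "LTS"

print(burst_class(10.5), burst_class(0.18))   # ETS LTS (cf. ATESP5 J225321-402317)
```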
The final classification given to the optical counterpart on the basis of the present discussion is reported in the last column of Table \[tab:LRidentification\]. The classes are defined following the spectral type definitions listed above. In the one case where photometric and spectral types disagree, we rely on the latter to define the object class.
To further check the reliability of our photometric redshift determinations, we compared our redshift distribution with the one expected for ETS on the basis of the well-known K–z correlation found for radio source host galaxies (see e.g. @Willott2003) and with the R–z relation found for host galaxies of gigahertz peaked spectrum (GPS) radio sources (see @Snellen1996; @Rigby2007). We find that the photometric redshift distribution obtained for objects classified as ETS in our sample follows, within a $\Delta z\sim 0.1-0.2$ dispersion, the one expected on the basis of the quoted relations.
The ATESP–DEEP1 source properties {#sec:comp}
=================================
------------- ----------- ----------- ---------- ---------- ---------
Sample        $I_{lim}$   ETS         LTS+SB     AGNs       UNCL
                          (%)         (%)        (%)        (%)
ATESP–EIS     $19$        $49\pm8$    $43\pm8$   $9\pm3$    $-$
ATESP–DEEP1   $23.5$      $64\pm10$   $19\pm6$   $14\pm5$   $3\pm2$
------------- ----------- ----------- ---------- ---------- ---------
: The ATESP sample composition.[]{data-label="tab:ATDEEP1comp"}
We exploited the photometric redshift and spectral type determinations for the ATESP sources in the DEEP1a, b and c regions, to study the composition of the ATESP sample and the radio/optical properties of mJy and sub–mJy sources. It is important to note that the ATESP–DEEP1 sample overlaps with the ATESP–EIS sample. This means that photometric (or spectroscopic) redshifts obtained for sources in DEEP1a, b and c could in principle be complemented by the sparse spectroscopic information available from the ATESP–EIS for DEEP1d sources. Nevertheless, in the following we prefer to limit our analysis to the sources in DEEP1a, b and c (see Table \[tab:LRidentification\]), which represent a much more reliable sample, thanks to their very high identification/redshift determination statistics.
Of the 66 identified radio sources, we find that 37 are ETS, 8 are quasars (AGNs), 10 are LTS and 1 is a SB, while 10 objects could not be classified (UNCL).
In Table \[tab:ATDEEP1comp\], the ATESP–DEEP1 composition is compared with the one found for the “brighter” ATESP–EIS sample (70 objects with complete spectroscopy down to $I=19$, @Prandoni2001b). The ATESP–DEEP1 sample provides insight into the composition of the faint radio population associated with optically faint galaxies, although in this comparison we restrict our analysis to the magnitude-limited sample of 58 objects with $I<23.5$, to reduce the number of unclassified objects. Table \[tab:ATDEEP1comp\] shows that, as suggested by previous studies (e.g. @Gruppioni1999 [@Prandoni2001b]), the contribution of star–forming (LTS plus SB) galaxies decreases dramatically with magnitude, going from 43% of the ATESP–EIS “bright” ($I<19$) sample to 19% of the “deeper” ($I<23.5$) ATESP–DEEP1 sample. The fraction of ETS and AGNs, on the other hand, increases towards fainter magnitudes, even though the statistical uncertainties are large.
Redshift distribution
---------------------
Figure \[fig:radiozphotdistr\] shows the redshift distribution of the 56 ATESP radio sources in regions DEEP1a, b and c, for which a reliable redshift estimate was obtained (see Table \[tab:LRidentification\]). Whenever spectroscopy is available (14 objects), we rely on the spectral redshift determination. The distribution of ETS shows a significant peak at $z=0.4$, with a tail extending up to $z\sim2$, while, as expected, quasars typically have higher redshifts ($1<z<2$), and LTS are found at $z\ll1$. This reflects the fact that radio sources triggered by star formation are usually characterised by lower radio powers than sources triggered by AGN activity (see also Fig. \[fig:radiopowerdistr\]). The only star-burst galaxy in the sample has a redshift ($z\sim 2$) and a radio power ($P_{\rm 1.4~GHz}$ close to $10^{26}$ W/Hz, see Fig. \[fig:radiopowerdistr\]) that are much higher than those of the LTS galaxy population. While this could be due to evolutionary effects in the population of the radio–selected star forming galaxies, it is also possible that the photometric classification is wrong. In fact the SB spectra are notoriously similar to narrow–line AGN spectra (Seyfert 2) and photometric techniques based on wide–band colours could easily fail in classifying such objects. In addition, the photometric routine applied to this sample ([*Hyperz*]{}) does not provide template spectra for Seyfert 2 galaxies.
![Redshift distribution for the 56 radio sources in the ATESP–DEEP1 sample with photometric redshift determination. The sample is divided into four different classes. From top to bottom: AGNs, ETS, LTS, and star-burst galaxies.[]{data-label="fig:radiozphotdistr"}](8545fg20.jpg)
Radio and optical luminosities
------------------------------
![1.4 GHz radio power distribution for the 56 ATESP–DEEP1 radio sources with photometric redshift determination. The sample is divided into four different classes. From top to bottom: AGNs, ETS, LTS and star-burst galaxies. Light shading indicates two upper limits. []{data-label="fig:radiopowerdistr"}](8545fg21.jpg)
![Absolute $B$–band magnitude distribution. The sample is divided into four different classes. From top to bottom: AGNs, ETS, LTS and star-burst galaxies.[]{data-label="fig:Babsmagdistr"}](8545fg22.jpg)
![Absolute $R$–band magnitude distribution. The sample is divided into four different classes. From top to bottom: AGNs, ETS, LTS and star-burst galaxies.[]{data-label="fig:Rabsmagdistr"}](8545fg23.jpg)
For the 56 ATESP radio sources in DEEP1a, b and c with redshift determination we derived radio and optical/NIR luminosities. Radio powers were K–corrected by using the 1.4 – 5 GHz radio spectral index of each source (see Table \[tab:LRidentification\]), while absolute magnitudes (computed by [*Hyperz*]{}) were K–corrected on the basis of the optical spectral type.
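For completeness, the radio K–correction can be written as $P_{1.4}=4\pi D_L^2\,S_{1.4}\,(1+z)^{\alpha-1}$, assuming the convention $S_\nu\propto\nu^{-\alpha}$ (with the opposite sign convention the exponent becomes $-(1+\alpha)$). The sketch below is only illustrative: the cosmology entering the luminosity distance is not specified here, and the example numbers are placeholders.

```python
# Hedged sketch of the K-corrected 1.4 GHz radio power, assuming S_nu ∝ nu^(-alpha).
# The luminosity distance must come from an assumed cosmology (not specified here).
import numpy as np

MPC_IN_M = 3.0857e22      # metres per Mpc
MJY_IN_SI = 1e-29         # W m^-2 Hz^-1 per mJy

def radio_power_1p4(S_mJy, z, alpha, D_L_Mpc):
    """Rest-frame 1.4 GHz power in W/Hz."""
    d = D_L_Mpc * MPC_IN_M
    return 4.0 * np.pi * d**2 * S_mJy * MJY_IN_SI * (1.0 + z)**(alpha - 1.0)

# Example: 0.5 mJy at z = 0.4 (D_L ~ 2200 Mpc for a standard flat cosmology), alpha = 0.7
print(f"log P(1.4 GHz) ~ {np.log10(radio_power_1p4(0.5, 0.4, 0.7, 2200.0)):.1f} W/Hz")
```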
Figure \[fig:radiopowerdistr\] shows the 1.4 GHz radio power distribution for the sample. Again, the four classes (AGNs, ETS, LTS, and SB) are shown separately. ETS galaxies mostly have $23< \log P \rm \; (W/Hz) <25$, which are typical values of FRI radio sources [@Fanaroff1974], while AGNs are, as expected, characterised by higher radio powers ($10^{25}-10^{26}$ W/Hz). LTS galaxies, on the other hand, have low radio powers, with 7/10 having $P<10^{24}$ W/Hz, typical of radio sources triggered by star formation (see e.g. @Condon1988).
If we assume that the ETS are triggered by low to medium luminosity AGN activity and put both AGNs and ETS objects in a single class, we find that the sample is largely dominated by galaxies with an active nucleus (78%, see Table \[tab:ATDEEP1comp\]), which further demonstrates that sub–mJy samples like the ATESP are best suited to study the evolutionary behaviour of low–power AGNs.
Figures \[fig:Babsmagdistr\] and \[fig:Rabsmagdistr\] show the absolute magnitude distributions in B– and R–bands for the 56 ATESP–DEEP1 radio sources in fields DEEP1a, b and c with a redshift determination. AGNs are characterised by higher optical luminosities than ETS, LTS and SB galaxies. This is not surprising when we consider that in our sample all AGNs are photometrically and/or spectroscopically classified as quasars (see Table \[tab:LRidentification\]).
Nature of the mJy and sub-mJy radio population {#sec:nature}
==============================================
In order to probe the origin (nuclear or on a larger scale) of the radio emission in mJy and sub–mJy sources and the physical processes responsible for the flattening of the radio spectral index found in sub-mJy samples like the ATESP (see §\[sec:introduction\]), we made an overall comparison of the radio spectral index, the radio morphology and the optical properties of the entire ATESP–DEEP1 sample.
In Fig. \[fig:si\_plots\] (top panel) the radio–to–optical ratio is plotted as a function of spectral index for the whole ATESP–DEEP1 sample (fields a, b, c and d). The radio–to–optical ratio was defined following [@Condon1980], as $R=S\cdot 10^{0.4(m-12.5)}$, where $S$ is the source 1.4 GHz flux density (in mJy) and $m$ is the optical magnitude (here we assume the I–band magnitude). We thus can include sources without known redshifts. In the following we use both DPS and ATESP-EIS optical data (see @Prandoni2001b), when available, while lower limits to $R$ are given whenever a source was not identified down to the limiting magnitude of the optical surveys ($I\sim 22.5$ for EIS–WIDE and $I\sim 24$ for DPS DEEP1). For the sources with spectral type/redshift estimates available (either from multi–colour photometry or spectroscopy) we can distinguish between ETS (red filled circles), LTS plus star-burst galaxies (blue stars) and AGNs (green double triangles).
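In practice the computation of $R$ is immediate; a minimal sketch (with invented example values) is given below, also showing how the optical limiting magnitude translates into a lower limit on $R$ for unidentified sources.

```python
# Radio-to-optical ratio R = S * 10^(0.4 (m - 12.5)), with S the 1.4 GHz flux density
# in mJy and m the I-band magnitude (Condon 1980 definition quoted in the text).
def radio_to_optical_ratio(S_mJy, I_mag):
    return S_mJy * 10.0 ** (0.4 * (I_mag - 12.5))

print(radio_to_optical_ratio(0.6, 19.0))        # identified source: R ~ 240
print(">", radio_to_optical_ratio(0.6, 24.0))   # unidentified, I > 24: lower limit R > ~2.4e4
```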
Figure \[fig:si\_plots\] clearly shows that most of the flat–spectrum sources have high radio–to–optical ratios ($R>1000$), typically associated with the classical powerful radio galaxies and quasars. Flat–spectrum sources with low $R$ values are preferentially identified with ETS, where the radio emission is again probably triggered by nuclear activity (typical radio powers $P \sim 10^{23-25}$ W/Hz, see Fig. \[fig:radiopowerdistr\] and discussion therein). Star–forming galaxies (LTS and SB), on the other hand, are typically associated with steep–spectrum sources, as expected for synchrotron emission in galactic disks or in nuclear star-bursts.
A further radio/optical analysis of the ETS in the ATESP–DEEP1 sample has shown that ETS with flat and/or inverted spectrum are preferentially compact (linear sizes $d< 10-30$ kpc, see Fig. \[fig:si\_plots\], middle panel). Their rather low radio luminosities ($P_{1.4 \rm{GHz}}\sim 10^{22-24}$ WHz$^{-1}$, see Fig. \[fig:si\_plots\], bottom panel) and the absence of emission lines in the optical spectra may suggest that these objects belong to the class of FRI radio galaxies; but FRI radio galaxies are characterised, on average, by steeper radio spectra and larger linear sizes (but see the linear size – radio power relation found for B2 radio galaxies, @deRuiter1990 and references therein).
The compactness of the sources, together with the flat/inverted spectra, suggests core emission with strong synchrotron or free-free self-absorption. This could be associated with either very early phases of nuclear radio-activity (the so-called GHz peaked spectrum - GPS - radio sources, @Odea1998 [@Snellen2000]) or late phases of the evolution of AGNs, characterised by low accretion/radiative efficiency (advection-dominated accretion flow, i.e. ADAF; advection dominated inflow-outflow solutions, i.e. ADIOS). In the first case, however, larger luminosities are expected ($P_{1.4 \rm{GHz}}>10^{25}$ WHz$^{-1}$), while in the latter case very low radio powers are predicted ($P_{5 \rm{GHz}}<10^{21}$ WHz$^{-1}$; see @Doi2005). Another intriguing possibility is that in these sources ADAF and radio jets coexist, as suggested for low luminosity AGNs (LLAGNs, see e.g. @Doi2005 and references therein). This would explain the somewhat brighter luminosities than expected from a simple ADAF and can still be consistent with the presence of flat/inverted radio spectra (see ADAF-jet model by @Falcke1999).\
This class of objects may also be very similar to the composite class of the so-called low power ($P_{408 \rm{MHz}}<10^{25.5}$ WHz$^{-1}$) compact ($<10$ kpc) – LPC – radio sources studied by [@Giroletti2005]. Their host galaxies do not show signatures of strong nuclear activity in the optical (and X-ray) bands. Preliminary results indicate that multiple causes can produce LPC sources: geometrical-relativistic effects (low power BL-Lacertae objects), youth, instabilities in the jets, frustration by a denser than average ISM, and a premature end of nuclear activity.
Summary {#sec:summary}
=======
In this paper we have discussed the nature of the faint, sub-mJy, radio population, using a sample of 131 radio sources that were observed at 1.4 and 5 GHz with the ATCA (the ATESP–DEEP1 sample). A smaller sample of 85 radio sources is covered by deep multi–colour images. These were optically identified down to very faint magnitudes, which was possible thanks to the availability of very deep multi–colour optical material (in U, B, V, R, I, and sometimes J and K bands). The high percentage of identifications ($\sim 78\%$) makes this a sample that is well suited for follow up studies concerning the composition of the sub-mJy population and, in general, the cosmological evolution of the various classes of objects associated with faint radio sources.
We summarise our main results here.
- For 85% of the identification sample we succeeded in deriving reliable photometric redshifts, based on the available accurate colours (UBVRIJK).
- Based on spectral types determined either directly from spectroscopy or from the photometry (or both), we find that at the sub-mJy level the large majority of sources are associated with objects that have early type (64%) and AGN (14%) spectra; these are of course what we would normally call radio galaxies and quasars.
- Although earlier work (based on shallower optical imaging and spectroscopy) revealed the presence of a conspicuous component of late type and star-burst objects, such objects appear to be important only at brighter magnitudes ($I<19$), and are rare at fainter magnitudes ($19<I<23.5$).
- From an overall comparison of the radio spectral index with other radio and optical properties of the entire ATESP–DEEP1 sample, we find that most sources with flat radio spectra have high radio-to-optical ratios, as expected for classical radio galaxies and quasars. Flat-spectrum sources with low radio-to-optical ratios are preferentially associated with ETS, in which the radio emission is most plausibly triggered by nuclear activity as well, while star-forming galaxies are associated to steep-spectrum radio sources.
- ETS with flat or inverted spectra are mostly compact, with linear size $<10-30$ kpc, suggesting core-dominated radio emission. Their low radio luminosities (in the range $10^{22}-10^{24}$ W/Hz at 1.4 GHz) and the absence of emission lines in their spectra (when available) suggest that they are FRI sources, although these would normally have steeper spectra and be more extended. They may therefore represent specific phases in the life of a radio source, or may be similar to the low power compact radio sources discussed by [@Giroletti2005].
AM thanks Luiz da Costa and the ESO Imaging Survey (EIS) Team for their hospitality and for the assistance with the optical and NIR data reduction during his stay in Garching.
[lr|rrrrrrr|rcc|ccc|c]{} & U & B & V & R & I & J & K$_s$ & $z_{phot}$ & SED & Age (Gyr) & $z_{sp}$ & Sp. Type & Notes & Class\
& & & & & & & & & & & & & & &\
ATESP5 &J224750-400148 & $>25.1$& 21.8 & 20.7 & 19.3 & 18.4 & - &- & 0.35 & Burst & 8.5 & 0.442 & ETS & (a) & ETS\
ATESP5 &J224753-400455 & $>25.1$& 23.4 & 22.5 & 21.3 & 20.1 & - & - & 0.61 & Burst & 1.7 & - & - & & ETS\
ATESP &J224759-400825 & $>25.1$& 25.4 & 25.2 & 25.2 & 24.3 & - & - & - & - & - & - & - & & -\
ATESP5 &J224801-400542 & $>25.1$& 23.1 & 22.1 & 20.9 & 19.8 & - & - & 0.56 & Burst & 2.0 & - & - & & ETS\
ATESP &J224803-400513 & 17.7 & 18.4 & 17.9 & 17.6 & 17.2 & - & - & 1.00 & QSO & - & 1.72 & QSO & (bc) & AGN\
ATESP5 &J224809-402211 & 22.5 & 23.0 & 22.7 & 21.5 & 20.8 & - & - & 0.55 & Sc & 4.5 & - & - & & LTS\
ATESP &J224811-402455 & 22.0 & 22.7 & 22.4 & 21.6 & 21.1 & - & - & 0.55 & Burst & 0.05 & - & - && LTS\
ATESP &J224817-400819 & $>$25.1& 23.9 & 22.9 & 21.8 & 20.3 & - & - & 0.80 & Burst & 1.0 & - & - & & ETS\
ATESP5 &J224822-401808 & $>$25.1& 23.5 & 22.2 & 20.6 & 19.4 & - & - & 0.37 & Ell & 6.5 & - & - & & ETS\
ATESP &J224843-400456 & 24.3 & 25.6 & 24.8 & 24.9 & 24.3 & - & - & - & - & - & - & - & & -\
ATESP5 &J224850-400027 & 21.8 & 22.3 & 21.8 & 20.8 & 19.7 & - & - & - & - & - & - & - & & -\
ATESP5 &J224858-402708 & 22.1 & 22.3 & 21.6 & 20.5 & 19.8 & - & - & 0.48 & Burst & 0.09 & - & - && LTS\
ATESP &J224911-400859 & 17.8 & 17.4 & 16.9 & 16.2 & 15.6 & - & - & 0.11 & Sbc & - & 0.065 & LTS & (b) & LTS\
ATESP5 &J224919-400037 & $>$25.1& 22.6 & 21.5 & 20.0 & 19.0 & - & - & 0.35 & Burst & 10.5 & - & - && ETS\
ATESP5 &J224932-395801 & 23.6 & 24.1 & 23.0 & 21.6 & 20.4 & - & - & 0.60 & Burst & 2.0 & 0.713 & ETS & (ad) & ETS\
ATESP5 &J224935-400816 & 18.3 & 17.6 & 16.6 & 15.9 & 15.2 & - & - & 0.11 & Burst & 12.5 & 0.153 & ETS & (a) & ETS\
ATESP5 &J224948-395918 & $>$25.1& 25.3 & 25.9 & 24.5 & 23.5 & - & - & - & - & - & - & - & & -\
ATESP5 &J224951-402035 & 14.6 & 18.6 & 18.4 & 18.0 & 17.6 & - & - & - & - & - & - & - & & -\
ATESP5 &J224958-395855 & 13.6 & 16.7 & 16.4 & 16.3 & 15.2 & - & - & - & - & - & 0.249 & ETS & (b) & ETS\
ATESP5 &J225004-402412 & $>$25.1& 23.4 & 22.3 & 21.2 & 20.0 & - & - & 0.61 & Burst & 1.7 & - & - & & ETS\
ATESP5 &J225008-400425 & 18.5 & 18.0 & 17.1 & 16.4 & 15.7 & - & - & 0.10 & Burst & 3.5 & 0.126 & ETS& (e) & ETS\
ATESP &J225009-400605 & $>$25.1& 25.1 & 23.8 & 22.4 & 21.0 & - & - & 0.74 & Burst & 1.0 & - & - & & ETS\
ATESP5 &J225028-400333 & 20.7 & 21.0 & 20.4 & 19.2 & 18.5 & - &- & 0.51 & Burst & 0.18 & 0.540 & LTS & (b) & LTS\
ATESP5 &J225048-400147 & $>$24.6& $>$25.7 & 26.4 & 25.9 & 23.8 & - & - & - & - & - & - & - & & -\
ATESP5 &J225056-400033 & 23.9 & 24.7 & 24.6 & 23.2 & 22.2 & - & - & 1.43 & REDQ & - & - & - & & AGN\
ATESP5 &J225056-402254 & 21.2 & 20.4 & 19.1 & 18.3 & 17.4 & - & - & 0.21 & Burst & 11.5 & - & - & & ETS\
ATESP5 &J225057-401522 & 15.8 & 15.3 & 14.6 & 14.0 & 13.2 & - & 13.0 & 0.01 & Burst & 0.72 & 0.033 & ETS & (af) & ETS\
ATESP5 &J225058-401645 & $>$24.6& 26.3 & 24.7 & 23.1 & 21.0 & - & 18.6 & 0.96 & Burst & 1.02 & - & - & & ETS\
ATESP5 &J225100-400934 & $>$24.6& $>$25.7 & 26.9 & 24.5 & 22.5 & - & 18.0 & 1.21 & Ell & 5.5 & - & - & & ETS\
ATESP5 &J225112-402230 & 26.0 & $>$25.7 & 27.8 & 26.5 & 24.4 & - & 18.5 & - & - & - & - & - & & -\
ATESP5 &J225122-402524 & 23.1 & 23.7 & 23.5 & 23.1 & 22.6 & - & 19.5 & 2.23 & SB2 & - & - & - & & SB\
ATESP5 &J225138-401747 & 19.4 & 19.3 & 18.4 & 17.8 & 17.0& 16.0 & 14.5 & 0.21 & Burst & 0.26 & 0.235 & LTS & (e) & LTS\
ATESP &J225206-401947 & 20.3 & 20.6 & 20.1 & 19.7 & 19.1 & 18.4 & 17.1 & 2.06 & QSO & - & - & - & & AGN\
ATESP5 &J225217-402135 & 22.7 & 23.8 & 23.4 & 22.9 & 22.3 & 20.9 & 19.2 & 0.93 & BALQ & - & - & - & & AGN\
ATESP5 &J225223-401841 & 15.7 & 15.4 & 14.9 & 14.3 & 13.6 & 12.7 & 11.5 & 0.04 & Sa & 12.5 & 0.033 & ETS & (af) & ETS\
ATESP5 &J225239-401949 & 14.9 & 14.8 & 14.3 & 13.8 & 13.1 & 12.2 & 11.3 & 0.06 & Sa & 8.5 &0.033 & ETS & (e)& ETS\
ATESP5 &J225242-395949 & 23.9 & 24.2 & 22.9 & 21.7 & 20.7 & - & - & 0.41 & Burst & 0.36 & - & - & & ETS\
ATESP5 &J225249-401256 & 24.0 & 25.7 & 24.3 & 22.6 & 20.9 & 19.4 & 17.5 & 0.59 & Ell & 5.5 & - & - & & ETS\
ATESP5 &J225316-401200 & 23.7 & 23.6 & 22.5 & 21.2 & 20.0 & - & - & 0.36 & Ell & 5.5 & - & - & & ETS\
ATESP5 &J225321-402317 & 22.1 & 23.3 & 22.8 & 21.9 & 21.2 & - & - & 0.70 & Burst & 0.18 & - & - & & LTS\
ATESP5 &J225322-401931 & $>$24.6& 25.3 & 23.1 & 21.7 & 20.0 & - & - & 0.30 & Burst & 10.5 & - & - & & ETS\
ATESP5 &J225323-400453 & $>$24.6& 25.0 & 24.0 & 22.8 & 21.5 & - & 18.2 & 0.36 & Sa & 6.5 & - & - & & ETS\
ATESP5 &J225325-400221 & 23.1 & 22.8 & 21.7 & 20.4 & 19.1 & - & 16.1 & 0.37 & Ell & 5.5 & - & - & & ETS\
ATESP5 &J225332-402721 & $>$24.6& 25.6 & 26.9 & 24.2 & 24.0 & - & - & - & - & - & - & - & & -\
ATESP5 &J225344-401928 & 23.8 & 24.9 & 24.4 & 24.1 & 22.7 & - & 18.8 & 1.40 & Ell & 3.5 & - & - & & ETS\
ATESP5 &J225345-401845 & 19.9 & 19.2 & 18.1 & 17.3 & 16.3 & - & 14.3 & 0.29 & Burst & 1.7 & - & - & & ETS\
ATESP &J225351-400441 & $>$25.3& 23.9 & 23.7 & 23.2 & 22.4 & - & - & 2.20 & QSO & - & - & - & & AGN\
ATESP5 &J225353-400154 & 21.3 & 21.3 & 21.4 & 20.9 & 19.7 & - & 17.1 & 2.00 & REDQ & - & - & - & & AGN\
ATESP5 &J225400-402204 & 16.8 & 16.4 & 15.8 & 15.1 & 14.2 & - & - & 0.03 & Sa & 13.5 & 0.033 & LTS & (g) & LTS\
ATESP5 &J225404-402226 & 21.4 & 21.4 & 20.7 & 19.5 & 18.6 & - & 16.2 & 0.43 & Ell & 4.5 & - & - & &ETS\
ATESP5 &J225414-400853 & 15.1 & 15.3 & 15.0 & 14.6 & 14.1& 13.6 & 12.8 & 0.07 & Burst & 0.09 & 0.032 & LTS & (e) & LTS\
ATESP5 &J225430-400334 & $>$25.3& 23.4 & 22.3 & 20.9 & 19.4 & - & 16.4 & 0.56 & Burst & 2.6 & - & - && ETS\
ATESP &J225430-402329 & $>$25.3& 25.6 & 25.1 & 24.1 & 22.3 & - & 18.7 & 1.03 & Ell & 4.5 & - & - & & ETS\
ATESP5 &J225434-401343 & $>$25.3& 25.8 & 25.8 & 24.6 & 23.2 & 20.7 & 19.0 & 1.64 & Burst & 0.36 & - & - & & ETS\
ATESP5 &J225436-400531 & $>$25.3& 22.2 & 21.1 & 19.7 & 18.4 & 17.7 & 16.0 & 0.44 & Burst & 4.5 & - & - & & ETS\
ATESP5 &J225442-400353 & $>$25.3& 25.1 & 24.1 & 22.9 & 21.0 & 19.5 & 17.6 & 0.81 & Ell & 5.5 & - & - & & ETS\
ATESP5 &J225449-400918 & 23.2 & 23.6 & 23.3 & 23.2 & 22.3 & 20.6 & 18.9 & 1.58 & Ell & 2.6 & - & - & & ETS\
ATESP5 &J225509-402658 & 21.6 & 22.3 & 21.8 & 21.0 & 19.4 & - & 16.5 & 0.97 & Burst & 0.13 & - & - & & LTS\
ATESP5 &J225515-401835 & 23.3 & 23.7 & 23.0 & 22.5 & 21.1 & 19.2 & 15.9 & 2.20 & REDQ & - & - & - & & AGN\
ATESP5 &J225529-401101 & $>$25.3& 23.3 & 22.2 & 21.0 & 19.5 & 18.7 & 16.6 & 0.73 & Burst & 0.36 & - & - & & ETS\
& & & & & & & & & & & & & & &\
& & & & & & & & & & & & & & &\
ATESP5 &J224827-402515 & 22.2 & 22.8 & 22.8 & 22.0 & 21.0 & - & - & 1.25 & Burst & 0.36 & - & - & & ETS\
ATESP5 &J225034-401936 & $>$24.6& 24.5 & 24.0 & 22.9 & 21.1 & - & - & 1.17 & Burst & 0.72 & - & - & & ETS\
ATESP5 &J225426-402442 & $>$25.3& 24.1 & 23.9 & 23.4 & 22.2 & - & 18.7 & 1.92 & REDQ & - & - & - & & AGN\
ATESP5 &J225443-401147 & $>$25.3& $>$25.9 & $>$25.8 & $>$25.7& $>$23.8& $>$22.2& 20.0 & - & - & - & - & - && -\
ATESP5 &J225505-401301 & 20.6 & 19.8 & 18.6 & 17.7 & 16.8 & 16.0 & 14.7 & 0.35 & Burst & 1.0 & - & - & & ETS\
ATESP5 &J225511-401513 & $>$25.3& $>$25.9 & $>$25.8 & $>$25.7& $>$23.8& 21.2 & 18.3 &- &- &- & - & - &&-\
& & & & & & & & & & & & & & &\
[^1]: Based on observations carried out at the European Southern Observatory, La Silla, Chile under program Nos. 75.A-0280 and 77.A-0211
---
abstract: 'A complex Hadamard matrix is a square matrix $H\in M_N(\mathbb C)$ whose entries are on the unit circle, $|H_{ij}|=1$, and whose rows are pairwise orthogonal. The main example is the Fourier matrix, $F_N=(w^{ij})$ with $w=e^{2\pi i/N}$. We discuss here the basic theory of such matrices, with emphasis on geometric and analytic aspects.'
address: 'T.B.: Department of Mathematics, University of Cergy-Pontoise, F-95000 Cergy-Pontoise, France. [[email protected]]{}'
author:
- Teo Banica
title: Complex Hadamard matrices and applications
---
Introduction
1\. Hadamard matrices
2\. Complex matrices
3\. Roots of unity
4\. Geometry, defect
5\. Special matrices
6\. Circulant matrices
7\. Bistochastic matrices
8\. Glow computations
9\. Norm maximizers
10\. Quantum groups
11\. Subfactor theory
12\. Fourier models
References
Introduction {#introduction .unnumbered}
============
A complex Hadamard matrix is a square matrix $H\in M_N(\mathbb C)$ whose entries belong to the unit circle in the complex plane, $H_{ij}\in\mathbb T$, and whose rows are pairwise orthogonal with respect to the usual scalar product of $\mathbb C^N$, given by $<x,y>=\sum_ix_i\bar{y}_i$.
The orthogonality condition tells us that the rescaled matrix $U=H/\sqrt{N}$ must be unitary. Thus, these matrices form a real algebraic manifold, given by: $$X_N=M_N(\mathbb T)\cap\sqrt{N}U_N$$
The basic example is the Fourier matrix, $F_N=(w^{ij})$ with $w=e^{2\pi i/N}$. In standard matrix form, and with indices $i,j=0,1,\ldots,N-1$, this matrix is as follows: $$F_N=\begin{pmatrix}
1&1&1&\ldots&1\\
1&w&w^2&\ldots&w^{N-1}\\
1&w^2&w^4&\ldots&w^{2(N-1)}\\
\vdots&\vdots&\vdots&&\vdots\\
1&w^{N-1}&w^{2(N-1)}&\ldots&w^{(N-1)^2}
\end{pmatrix}$$
More generally, we have as example the Fourier coupling of any finite abelian group $G$, regarded via the isomorphism $G\simeq\widehat{G}$ as a square matrix, $F_G\in M_G(\mathbb C)$: $$F_G=<i,j>_{i\in G,j\in\widehat{G}}$$
Observe that for the cyclic group $G=\mathbb Z_N$ we obtain in this way the above standard Fourier matrix $F_N$. In general, we obtain a tensor product of Fourier matrices $F_N$.
There are many other examples of such matrices, for the most part coming from various combinatorial constructions, basically involving design theory, and roots of unity. In addition, there are several deformation procedures for such matrices, leading to some more complicated constructions as well, of a real algebraic geometry flavor.
In general, the complex Hadamard matrices can be thought of as being “generalized Fourier matrices”, of somewhat exotic type. Due to their generalized Fourier nature, these matrices appear in a wide array of questions in mathematics and physics:
[**1. Operator algebras.**]{} One important concept in the theory of von Neumann algebras is that of maximal abelian subalgebra (MASA). In the finite case, where the algebra has a trace, one can talk about pairs of orthogonal MASA. In the simplest case, of the matrix algebra $M_N(\mathbb C)$, the orthogonal MASA are, up to conjugation, $A=\Delta,B=H\Delta H^*$, where $\Delta\subset M_N(\mathbb C)$ are the diagonal matrices, and $H\in M_N(\mathbb C)$ is Hadamard.
[**2. Subfactor theory.**]{} Along the same lines, but at a more advanced level, associated to any Hadamard matrix $H\in M_N(\mathbb C)$ is the square diagram $\mathbb C\subset\Delta,H\Delta H^*\subset M_N(\mathbb C)$ formed by the associated MASA, which is a commuting square in the sense of subfactor theory. The Jones basic construction produces, out of this diagram, an index $N$ subfactor of the Murray-von Neumann factor $R$, whose computation is a key problem.
[**3. Quantum groups.**]{} Associated to any complex Hadamard matrix $H\in M_N(\mathbb C)$ is a certain quantum permutation group $G\subset S_N^+$, obtained by factorizing the flat representation $\pi:C(S_N^+)\to M_N(\mathbb C)$ associated to $H$. As a basic example here, the Fourier matrix $F_G$ produces in this way the group $G$ itself. In general, the above-mentioned subfactor can be recovered from $G$, whose computation is a key problem.
[**4. Lattice models.**]{} According to the work of Jones, the combinatorics of the subfactor associated to an Hadamard matrix $H\in M_N(\mathbb C)$, which by the above can be recovered from the representation theory of the associated quantum permutation group $G\subset S_N^+$, can be thought of as being the combinatorics of a “spin model”, in the context of link invariants, or of statistical mechanics, in an abstract, mathematical sense.
From a more applied point of view, the Hadamard matrices can be used in order to construct mutually unbiased bases (MUB) and other useful objects, which can help in connection with quantum information theory, and other quantum physics questions.
All this is quite recent, basically going back to the 00s. Regarding the known facts about the Hadamard matrices, most of them are in fact of purely mathematical nature. There are indeed many techniques that can be applied, leading to various results:
[**1. Algebra.**]{} In the real case, $H\in M_N(\pm1)$, the study of such matrices goes back to the beginning of the 20th century, and is quite advanced. The main problems, however, namely the Hadamard conjecture (HC) and the circulant Hadamard conjecture (CHC) are not solved yet, with no efficient idea of approach in sight. Part of the real matrix techniques apply quite well to the root of unity case, $H\in M_N(\mathbb Z_s)$, with $s<\infty$.
[**2. Geometry.**]{} As already explained above, the $N\times N$ complex Hadamard matrices form a real algebraic manifold, $X_N=M_N(\mathbb T)\cap\sqrt{N}U_N$. This manifold is highly singular, but several interesting geometric results about it have been obtained, notably about the general structure of the singularity at a given point $H\in X_N$, about the neighborhood of the Fourier matrices $F_G$, and about the various isolated points as well.
[**3. Analysis.**]{} One interesting point of view on the Hadamard matrices, real or complex, comes from the fact that these are precisely the rescaled versions, $H=\sqrt{N}U$, of the matrices which maximize the 1-norm $||U||_1=\sum_{ij}|U_{ij}|$ on $O_N,U_N$ respectively. When looking more generally at the local maximizers of the 1-norm, one is led into a notion of “almost Hadamard matrices”, having interesting algebraic and analytic aspects.
[**4. Probability.**]{} Another speculative approach, this time probabilistic, is by playing a Gale-Berlekamp type game with the matrix, in the hope that the invariants which are obtained in this way are related to the various geometric and quantum algebraic invariants, which are hard to compute. All this is related to the subtle fact that any unitary matrix, and so any complex Hadamard matrix as well, can be put in bistochastic form.
Our aim here is to survey this material, theory and applications. Organizing all this is not easy, and we have chosen an algebra/geometry/analysis/physics lineup for our presentation, vaguely coming from the amount of background which is needed.
The present text is organized in 4 parts, as follows:
1. Sections 1-3 contain basic definitions and various algebraic results.
2. Sections 4-6 deal with differential and algebraic geometric aspects.
3. Sections 7-9 are concerned with various analytic considerations.
4. Sections 10-12 deal with various mathematical physics aspects.
There are of course many aspects of the theory which are missing from our presentation, but we will of course provide some information, comments and references here.
[**Acknowledgements.**]{}
I would like to thank Vaughan Jones for suggesting me, back to a discussion that we had in 1997, when we first met, to look at vertex models, and related topics.
Stepping into bare Hadamard matrices is quite an experience, and very inspiring was the work of Uffe Haagerup on the subject, and his papers [@bha], [@ha1], [@ha2].
The present text is heavily based on a number of research papers on the subject that I wrote or co-signed, mostly during 2005–2015, and I would like to thank my coworkers Julien Bichon, Benoît Collins, Ion Nechita, Remus Nicoară, Duygu Özteke, Lorenzo Pittau, Jean-Marc Schlenker, Adam Skalski and Karol Życzkowski.
Finally, many thanks go to my cats, for advice with hunting techniques, martial arts, and more. When doing linear algebra, all this knowledge is very useful.
Hadamard matrices
=================
We are interested here in the complex Hadamard matrices, but we will start with some beautiful pure mathematics, regarding the real case. The definition that we need, going back to 19th century work of Sylvester [@syl], is as follows:
An Hadamard matrix is a square binary matrix, $$H\in M_N(\pm1)$$ whose rows are pairwise orthogonal, with respect to the scalar product on $\mathbb R^N$.
As a first observation, we do not really need real numbers in order to talk about the Hadamard matrices, because the orthogonality condition tells us that, when comparing two rows, the number of matchings should equal the number of mismatchings. Thus, we can replace if we want the $1,-1$ entries of our matrix by any two symbols, of our choice. Here is an example of an Hadamard matrix, with this convention: $$\begin{matrix}
\heartsuit&\heartsuit&\heartsuit&\heartsuit\\
\heartsuit&\clubsuit&\heartsuit&\clubsuit\\
\heartsuit&\heartsuit&\clubsuit&\clubsuit\\
\heartsuit&\clubsuit&\clubsuit&\heartsuit
\end{matrix}$$
However, it is probably better to run away from this, and use real numbers instead, as in Definition 1.1, with the idea in mind of connecting the Hadamard matrices to the foundations of modern mathematics, namely Calculus 1 and Calculus 2.
So, getting back now to the real numbers, here is a first result:
The set of the $N\times N$ Hadamard matrices is $$Y_N=M_N(\pm 1)\cap\sqrt{N}O_N$$ where $O_N$ is the orthogonal group, the intersection being taken inside $M_N(\mathbb R)$.
Let $H\in M_N(\pm1)$. Since the rows of the rescaled matrix $U=H/\sqrt{N}$ have norm 1, with respect to the usual scalar product on $\mathbb R^N$, we conclude that $H$ is Hadamard precisely when $U$ belongs to the orthogonal group $O_N$, and so when $H\in Y_N$, as claimed.
As an interesting consequence of the above result, which is not exactly obvious when using the design theory approach, we have the following result:
Let $H\in M_N(\pm1)$ be an Hadamard matrix.
1. The columns of $H$ must be pairwise orthogonal.
2. The transpose matrix $H^t\in M_N(\pm1)$ is Hadamard as well.
Since the orthogonal group $O_N$ is stable under transposition, so is the set $Y_N$ constructed in Proposition 1.2, and this gives both the assertions.
Let us study now the examples. There are many such matrices, and in order to cut a bit from the complexity, we can use the following notions:
Two Hadamard matrices are called equivalent, and we write $H\sim K$, when it is possible to pass from $H$ to $K$ via the following operations:
1. Permuting the rows, or the columns.
2. Multiplying the rows or columns by $-1$.
Also, we say that $H$ is dephased when its first row and column consist of $1$ entries.
Observe that we do not include the transposition operation $H\to H^t$ in our list of allowed operations. This is because Proposition 1.3 above, while looking quite elementary, rests however on a deep linear algebra fact, namely that the transpose of an orthogonal matrix is orthogonal as well, and this can produce complications later on.
Regarding the equivalence, there is of course a certain group $G$ acting there, made of two copies of $S_N$, one for the rows and one for the columns, and of two copies of $\mathbb Z_2^N$, once again one for the rows, and one for the columns. The equivalence classes of the Hadamard matrices are then the orbits of the action $G\curvearrowright Y_N$. It is possible to be a bit more explicit here, with a formula for $G$ and so on, but we will not need this.
As for the dephasing, here the terminology comes from physics, or rather from the complex Hadamard matrices. Indeed, when regarding $H\in M_N(\pm1)$ as a complex matrix, $H\in M_N(\mathbb T)$, the $-1$ entries have “phases”, equal to $\pi$, and assuming that $H$ is dephased means to assume that we have no phases, on the first row and the first column.
Observe that, up to the equivalence relation, any Hadamard matrix $H\in M_N(\pm1)$ can be put in dephased form. Moreover, the dephasing operation is unique, if we use only the operations (2) in Definition 1.4, namely row and column multiplications by $-1$.
With these notions in hand, we can formulate our first classification result:
There is only one Hadamard matrix at $N=2$, namely $$W_2=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}$$ up to the above equivalence relation for such matrices.
The matrix in the statement $W_2$, called Walsh matrix, is clearly Hadamard. Conversely, given $H\in M_N(\pm1)$ Hadamard, we can dephase it, as follows: $$\begin{pmatrix}a&b\\c&d\end{pmatrix}
\to\begin{pmatrix}1&1\\ac&bd\end{pmatrix}
\to\begin{pmatrix}1&1\\1&abcd\end{pmatrix}$$
Now since the dephasing operation preserves the class of the Hadamard matrices, we must have $abcd=-1$, and so we obtain by dephasing the matrix $W_2$.
At $N=3$ we cannot have examples, due to the orthogonality condition, which forces $N$ to be even. At $N=4$ now, we have several examples, as for instance: $$W_4=\begin{pmatrix}1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ 1&-1&-1&1\end{pmatrix}$$
This matrix is a particular case of the following construction:
If $H\in M_M(\pm1)$ and $K\in M_N(\pm1)$ are Hadamard matrices, then so is their tensor product, constructed in double index notation as follows: $$H\otimes K\in M_{MN}(\pm1)\quad,\quad (H\otimes K)_{ia,jb}=H_{ij}K_{ab}$$ In particular the Walsh matrices, $W_N=W_2^{\otimes n}$ with $N=2^n$, are all Hadamard.
The matrix in the statement $H\otimes K$ has indeed $\pm1$ entries, and its rows $R_{ia}$ are pairwise orthogonal, as shown by the following computation: $$\begin{aligned}
<R_{ia},R_{kc}>
&=&\sum_{jb}H_{ij}K_{ab}\cdot H_{kj}K_{cb}\\
&=&\sum_jH_{ij}H_{kj}\sum_bK_{ab}K_{cb}\\
&=&MN\delta_{ik}\delta_{ac}\end{aligned}$$
As for the second assertion, it follows from the first one, $W_2$ being Hadamard.
Before going further, we should perhaps clarify a bit our tensor product notations. In order to write $H\in M_N(\pm1)$ the indices of $H$ must belong to $\{1,\ldots,N\}$, or at least to an ordered set $\{I_1,\ldots,I_N\}$. But with double indices we are indeed in this latter situation, because we can use the lexicographic order on these indices. To be more precise, by using the lexicographic order on the double indices, we have the following formula: $$H\otimes K=
\begin{pmatrix}
H_{11}K&\ldots&H_{1M}K\\
\vdots&&\vdots\\
H_{M1}K&\ldots&H_{MM}K
\end{pmatrix}$$
As an example, by tensoring $W_2$ with itself, we obtain the above matrix $W_4$.
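For the reader who wants to experiment, here is a short numerical sketch, in Python, of the Walsh matrices and of the Hadamard condition $HH^t=N1_N$; this is of course just an illustration, not part of the theory.

```python
# Walsh matrices W_{2^n} = W_2^{(tensor) n}, built with Kronecker products in
# lexicographic order, and a direct check of the Hadamard condition H H^t = N * 1_N.
import numpy as np

W2 = np.array([[1, 1], [1, -1]])

def walsh(n):
    H = np.array([[1]])
    for _ in range(n):
        H = np.kron(H, W2)
    return H

def is_hadamard(H):
    N = H.shape[0]
    return bool(np.all(np.abs(H) == 1)) and np.array_equal(H @ H.T, N * np.eye(N, dtype=int))

print(walsh(2))                                       # the matrix W_4 from the text
print(is_hadamard(walsh(2)), is_hadamard(walsh(3)))   # True True
```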
Getting back now to our classification work, here is the result at $N=4$:
There is only one Hadamard matrix at $N=4$, namely $$W_4=W_2\otimes W_2$$ up to the standard equivalence relation for such matrices.
Consider an Hadamard matrix $H\in M_4(\pm1)$, assumed to be dephased: $$H=\begin{pmatrix}1&1&1&1\\ 1&a&b&c\\ 1&d&e&f\\ 1&g&h&i\end{pmatrix}$$
By orthogonality of the first 2 rows we must have $\{a,b,c\}=\{-1,-1,1\}$, and so by permuting the last 3 columns, we can further assume that our matrix is as follows: $$H=\begin{pmatrix}1&1&1&1\\ 1&-1&1&-1\\ 1&m&n&o\\ 1&p&q&r\end{pmatrix}$$
By orthogonality of the first 2 columns we must have $\{m,p\}=\{-1,1\}$, and so by permuting the last 2 rows, we can further assume that our matrix is as follows: $$H=\begin{pmatrix}1&1&1&1\\ 1&-1&1&-1\\ 1&1&x&y\\ 1&-1&z&t\end{pmatrix}$$
But this gives the result, because from the orthogonality of the rows we obtain $x=y=-1$, and then, with these values of $x,y$ plugged in, from the orthogonality of the columns we obtain $z=-1,t=1$. Thus, up to equivalence we have $H=W_4$, as claimed.
The case $N=5$ is excluded, because the orthogonality condition forces $N\in 2\mathbb N$. The point now is that the case $N=6$ is excluded as well, because we have:
The size of an Hadamard matrix must be $N\in\{2\}\cup 4\mathbb N$.
By permuting the rows and columns or by multiplying them by $-1$, as to rearrange the first 3 rows, we can always assume that our matrix looks as follows: $$H=\begin{pmatrix}
1\ldots\ldots 1&1\ldots\ldots 1&1\ldots\ldots 1&1\ldots\ldots 1\\
1\ldots\ldots 1&1\ldots\ldots 1&-1\ldots -1&-1\ldots -1\\
1\ldots\ldots 1&-1\ldots -1&1\ldots\ldots 1&-1\ldots -1\\
\underbrace{\ldots\ldots\ldots}_x&\underbrace{\ldots\ldots\ldots}_y&\underbrace{\ldots\ldots\ldots}_z&\underbrace{\ldots\ldots\ldots}_t
\end{pmatrix}$$
Now if we denote by $x,y,z,t$ the sizes of the 4 block columns, as indicated, the orthogonality conditions between the first 3 rows give the following system of equations: $$(1\perp 2)\quad:\quad x+y=z+t$$ $$(1\perp 3)\quad:\quad x+z=y+t$$ $$(2\perp 3)\quad:\quad x+t=y+z$$
The solution of this system being $x=y=z=t$, we conclude that the size of our matrix $N=x+y+z+t$ must be a multiple of 4, as claimed.
As a consequence, we are led to the study of the Hadamard matrices at: $$N=8,12,16,20,24,\ldots$$
This study can be done either abstractly, via various algebraic methods, or with a computer, and this leads to the conclusion that the number of Hadamard matrices of size $N\in4\mathbb N$ grows with $N$, and this in a rather exponential fashion.
In particular, we are led in this way into the following statement:
There is at least one Hadamard matrix $$H\in M_N(\pm1)$$ for any integer $N\in 4\mathbb N$.
This conjecture, going back to the 19th century, is probably one of the most beautiful statements in combinatorics, linear algebra, and mathematics in general. Quite remarkably, the numeric verification so far goes up to the number of the beast: $$\mathfrak N=666$$
Our purpose now will be that of gathering some evidence for this conjecture. At $N=8$ we have the Walsh matrix $W_8$. Thus, the next existence problem comes at $N=12$. And here, we can use the following key construction, due to Paley [@pal]:
Let $q=p^r$ be an odd prime power, define $\chi:\mathbb F_q\to\{-1,0,1\}$ by $\chi(0)=0$, $\chi(a)=1$ if $a=b^2$ for some $b\neq0$, and $\chi(a)=-1$ otherwise, and finally set $Q_{ab}=\chi(a-b)$. We have then constructions of Hadamard matrices, as follows:
1. Paley $1$: if $q=3(4)$ we have a matrix of size $N=q+1$, as follows: $$P_N^1=1+\begin{pmatrix}
0&1&\ldots&1\\
-1\\
\vdots&&Q\\
-1
\end{pmatrix}$$
2. Paley $2$: if $q=1(4)$ we have a matrix of size $N=2q+2$, as follows: $$P_N^2=\begin{pmatrix}
0&1&\ldots&1\\
1\\
\vdots&&Q\\
1
\end{pmatrix}\quad:\quad 0\to\begin{pmatrix}1&-1\\ -1&-1\end{pmatrix}\quad,\quad\pm1\to\pm\begin{pmatrix}1&1\\1&-1\end{pmatrix}$$
These matrices are skew-symmetric $(H+H^t=2)$, respectively symmetric $(H=H^t)$.
In order to simplify the presentation, we will denote by $1$ all the identity matrices, of any size, and by $\mathbb I$ all the rectangular all-one matrices, of any size as well.
It is elementary to check that the matrix $Q_{ab}=\chi(a-b)$ has the following properties: $$QQ^t=q1-\mathbb I\quad,\quad Q\mathbb I=\mathbb IQ=0$$
In addition, we have the following formulae, which are elementary as well, coming from the fact that $-1$ is a square in $\mathbb F_q$ precisely when $q=1(4)$: $$q=1(4)\implies Q=Q^t\ \ \,$$ $$q=3(4)\implies Q=-Q^t$$
With these observations in hand, the proof goes as follows:
\(1) With our conventions for the symbols $1$ and $\mathbb I$, explained above, the matrix in the statement is as follows: $$P_N^1=\begin{pmatrix}1&\mathbb I\\ -\mathbb I&1+Q\end{pmatrix}$$
With this formula in hand, the Hadamard matrix condition follows from: $$\begin{aligned}
P_N^1(P_N^1)^t
&=&\begin{pmatrix}1&\mathbb I\\ -\mathbb I&1+Q\end{pmatrix}\begin{pmatrix}1&-\mathbb I\\ \mathbb I&1-Q\end{pmatrix}\\
&=&\begin{pmatrix}N&0\\ 0&\mathbb I+1-Q^2\end{pmatrix}\\
&=&\begin{pmatrix}N&0\\ 0&N\end{pmatrix}\end{aligned}$$
\(2) If we denote by $G,F$ the matrices in the statement, which replace respectively the $0,1$ entries, then we have the following formula for our matrix: $$P_N^2=\begin{pmatrix}0&\mathbb I\\ \mathbb I&Q\end{pmatrix}\otimes F+1\otimes G$$
With this formula in hand, the Hadamard matrix condition follows from: $$\begin{aligned}
(P_N^2)^2
&=&\begin{pmatrix}0&\mathbb I\\ \mathbb I&Q\end{pmatrix}^2\otimes F^2+\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\otimes G^2+\begin{pmatrix}0&\mathbb I\\ \mathbb I&Q\end{pmatrix}\otimes(FG+GF)\\
&=&\begin{pmatrix}q&0\\ 0&q\end{pmatrix}\otimes 2+\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\otimes 2+\begin{pmatrix}0&\mathbb I\\ \mathbb I&Q\end{pmatrix}\otimes0\\
&=&\begin{pmatrix}N&0\\ 0&N\end{pmatrix}\end{aligned}$$
Finally, the last assertion is clear, from the above formulae relating $Q,Q^t$.
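Here is a numerical sketch of the Paley 1 construction, in Python, restricted for simplicity to the case where $q$ is prime (the general prime power case would require finite field arithmetic, not implemented here):

```python
# Paley 1 construction for q prime, q = 3 (mod 4): P = 1_N + B, with B built from the
# quadratic character chi (computed via Euler's criterion), exactly as in the theorem.
import numpy as np

def chi(a, q):
    a %= q
    if a == 0:
        return 0
    return 1 if pow(a, (q - 1) // 2, q) == 1 else -1

def paley1(q):
    Q = np.array([[chi(a - b, q) for b in range(q)] for a in range(q)])
    N = q + 1
    B = np.zeros((N, N), dtype=int)
    B[0, 1:] = 1
    B[1:, 0] = -1
    B[1:, 1:] = Q
    return np.eye(N, dtype=int) + B

H = paley1(11)                                               # a 12 x 12 Hadamard matrix
print(np.array_equal(H @ H.T, 12 * np.eye(12, dtype=int)))   # True
```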
These constructions allow us to get well beyond the Walsh matrix level, and we have the following result:
The HC is verified at least up to $N=88$, as follows:
1. At $N=4,8,16,32,64$ we have Walsh matrices.
2. At $N=12,20,24,28,44,48,60,68,72,80,84$ we have Paley 1 matrices.
3. At $N=36,52,76$ we have Paley 2 matrices.
4. At $N=40,56,88$ we have Paley 1 matrices tensored with $W_2$.
However, at $N=92$ these constructions (Walsh, Paley, tensoring) don’t work.
First of all, the numbers in (1-4) are indeed all the multiples of 4, up to 88. As for the various assertions, the proof here goes as follows:
\(1) This is clear.
\(2) Since $N-1$ takes the values $q=11,19,23,27,43,47,59,67,71,79,83$, all prime powers, we can indeed apply the Paley 1 construction, in all these cases.
\(3) Since $N=4(8)$ here, and $N/2-1$ takes the values $q=17,25,37$, all prime powers, we can indeed apply the Paley 2 construction, in these cases.
\(4) At $N=40$ we have indeed $P_{20}^1\otimes W_2$, at $N=56$ we have $P_{28}^1\otimes W_2$, and at $N=88$ we have $P_{44}^1\otimes W_2$.
Finally, we have $92-1=7\times13$, so the Paley 1 construction does not work, and $92/2=46$, so the Paley 2 construction, or tensoring with $W_2$, does not work either.
At $N=92$ the situation is considerably more complicated, and we have:
Assuming that $A,B,C,D\in M_K(\pm1)$ are circulant, symmetric, pairwise commute and satisfy $A^2+B^2+C^2+D^2=4K$, the following $4K\times4K$ matrix $$H=\begin{pmatrix}
A&B&C&D\\
-B&A&-D&C\\
-C&D&A&-B\\
-D&-C&B&A
\end{pmatrix}$$ is Hadamard, called of Williamson type. Moreover, such a matrix exists at $K=23$.
The matrix $H$ can be written as follows, where $1,i,j,k\in M_4(0,1)$, called the quaternion units, are the matrices describing the positions of the $A,B,C,D$ entries: $$H=A\otimes 1+B\otimes i+C\otimes j+D\otimes k$$
Assuming now that $A,B,C,D$ are symmetric, we have: $$\begin{aligned}
HH^t
&=&(A\otimes 1+B\otimes i+C\otimes j+D\otimes k)(A\otimes 1-B\otimes i-C\otimes j-D\otimes k)\\
&=&(A^2+B^2+C^2+D^2)\otimes 1-([A,B]-[C,D])\otimes i\\
&&-([A,C]-[B,D])\otimes j-([A,D]-[B,C])\otimes k\end{aligned}$$
Thus, if we further assume that $A,B,C,D$ pairwise commute, and satisfy the condition $A^2+B^2+C^2+D^2=4K$, we obtain indeed an Hadamard matrix.
In general, finding such matrices is a difficult task, and this is where Williamson’s extra assumption that $A,B,C,D$ should be taken circulant comes from.
Regarding now the $K=23$ construction, which produces an Hadamard matrix of order $N=92$, this comes via a computer search. We refer here to [@bgh], [@wil].
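As an illustration, the assembly of a Williamson type matrix from given blocks, and the verification of the Hadamard condition, can be done as follows; the hard part, namely finding suitable circulant symmetric blocks at $K=23$, is a computer search which is not reproduced here.

```python
# Williamson type matrix assembled from blocks A, B, C, D, following the block pattern
# in the theorem. The trivial K = 1 example A = B = C = D = (1) already satisfies
# A^2 + B^2 + C^2 + D^2 = 4K, and gives a 4 x 4 Hadamard matrix.
import numpy as np

def williamson(A, B, C, D):
    return np.block([
        [ A,  B,  C,  D],
        [-B,  A, -D,  C],
        [-C,  D,  A, -B],
        [-D, -C,  B,  A],
    ])

one = np.array([[1]])
H = williamson(one, one, one, one)
print(np.array_equal(H @ H.T, 4 * np.eye(4, dtype=int)))   # True
```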
Things get even worse at higher values of $N$, where more and more complicated constructions are needed. The whole subject is quite technical, and, as already mentioned, human knowledge here stops so far at $\mathfrak N=666$. See [@aga], [@dda], [@dgo], [@hor], [@kta], [@sya].
As a conceptual finding on this subject, however, we have the recent theory of the cocyclic Hadamard matrices. The basic definition here is as follows:
A cocycle on a finite group $G$ is a matrix $H\in M_G(\pm1)$ satisfying: $$H_{11}=1\quad,\quad H_{gh}H_{gh,k}=H_{g,hk}H_{hk}$$ If the rows of $H$ are pairwise orthogonal, we say that $H$ is a cocyclic Hadamard matrix.
Here the definition of the cocycles is the usual one, with the equations coming from the fact that $F=\mathbb Z_2\times G$ must be a group, with multiplication as follows: $$(u,g)(v,h)=(H_{gh}\cdot uv,gh)$$
As a basic example, the Walsh matrix $H=W_{2^n}$ is cocyclic, coming from the group $G=\mathbb Z_2^n$, with cocycle $H_{gh}=(-1)^{<g,h>}$. As explained in [@dfh], many other known examples of Hadamard matrices are cocyclic, and this leads to the following conjecture:
There is at least one cocyclic Hadamard matrix $H\in M_N(\pm1)$, for any $N\in 4\mathbb N$.
Having such a statement formulated is certainly a big advance with respect to the HC, and this is probably the main achievement of modern Hadamard matrix theory. However, in what regards a potential proof, there is no serious idea here, at least so far.
One potential way of getting away from these questions is that of looking at various special classes of Hadamard matrices. However, this is not really the case, because passed a few trivialities, the existence of special Hadamard matrices is generally subject to an improvement of the HC, as in the cocyclic case, or to difficult non-existence questions.
Illustrating and quite famous here is the situation in the circulant case. Given a vector $\gamma\in(\pm 1)^N$, one can ask whether the matrix $H\in M_N(\pm 1)$ defined by $H_{ij}=\gamma_{j-i}$ is Hadamard or not. Here is a solution to the problem: $$K_4=\begin{pmatrix}-1&1&1&1\\ 1&-1&1&1\\ 1&1&-1&1\\ 1&1&1&-1\end{pmatrix}$$
More generally, any vector $\gamma\in(\pm 1)^4$ satisfying $\sum\gamma_i=\pm 1$ is a solution to the problem. The following conjecture, going back to [@sya], states that there are no other solutions:
There is no circulant Hadamard matrix of size $N\times N$, for any $N\neq 4$.
The fact that such a simple-looking problem is still open might seem quite surprising. Indeed, if we denote by $S\subset\{1,\ldots,N\}$ the set of positions of the $-1$ entries of $\gamma$, the Hadamard matrix condition is simply $|S\cap(S+k)|=|S|-N/4$, for any $k\neq 0$, taken modulo $N$. Thus, the above conjecture simply states that at $N\neq 4$, such a set $S$ cannot exist. Let us record here this latter statement, originally due to Ryser [@rys]:
Given an integer $N>4$, there is no set $S\subset\{1,\ldots,N\}$ satisfying the condition $$|S\cap(S+k)|=|S|-N/4$$ for any $k\neq 0$, taken modulo $N$.
There has been a lot of work on this conjecture, starting with [@rys]. However, as was the case with the HC, all this leads to complicated combinatorics, design theory, algebra and number theory, and so on, and there is no serious idea here, at least so far.
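What can be done easily, however, is a brute-force verification at small $N$. The following sketch, in Python, enumerates all sets $S$ satisfying the above condition (with $\{1,\ldots,N\}$ replaced by $\{0,\ldots,N-1\}$, which changes nothing), and finds solutions only at $N=4$:

```python
# Brute-force search for sets S with |S ∩ (S+k)| = |S| - N/4 for all k != 0 (mod N).
# Feasible only for small N; at N = 4 one finds the 8 solutions |S| = 1, 3, and nothing
# at N = 8, 12, 16, in agreement with the circulant Hadamard conjecture.
from itertools import combinations

def ryser_solutions(N):
    sols = []
    for r in range(N + 1):
        for S in combinations(range(N), r):
            S = set(S)
            if all(len(S & {(s + k) % N for s in S}) == len(S) - N // 4
                   for k in range(1, N)):
                sols.append(sorted(S))
    return sols

for N in (4, 8, 12, 16):
    print(N, len(ryser_solutions(N)))
```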
All this might seem a bit depressing, but there are at least two potential exits from the combinatorial and algebraic theory of Hadamard matrices, namely:
1. Do analysis. There are many things that can be done here, starting with the Hadamard determinant bound [@had], and we will discuss this below. Whether all this can help or not in relation with the HC and CHC remains to be seen, but at least we’ll have some fun, and do some interesting mathematics.
2. Do physics. When allowing the entries of $H$ to be complex numbers, both the HC and the CHC disappear, because the Fourier matrix $F_N=(w^{ij})$ with $w=e^{2\pi i/N}$ is an example of such a matrix at any $N\in\mathbb N$, which in addition can be put in circulant form. We will discuss this later, starting from section 2 below.
So, let us step now into analytic questions. The first result here, found in 1893 by Hadamard [@had], about 25 years after Sylvester’s 1867 founding paper [@syl], and which actually led to such matrices being called Hadamard, is as follows:
Given a matrix $H\in M_N(\pm1)$, we have $$|\det(H)|\leq N^{N/2}$$ with equality precisely when $H$ is Hadamard.
We use here the fact, which often tends to be forgotten, that the determinant of a system of $N$ vectors in $\mathbb R^N$ is the signed volume of the associated parallelepiped: $$\det(H_1,\ldots,H_N)=\pm vol<H_1,\ldots,H_N>$$
This is actually the definition of the determinant, in case you have forgotten the basics (!), with the sign being needed in order to have good additivity properties.
In the case where our vectors take their entries in $\pm1$, we therefore have the following inequality, with equality precisely when our vectors are pairwise orthogonal: $$|\det(H_1,\ldots,H_N)|\leq||H_1||\times\ldots\times||H_N||=(\sqrt{N})^N$$
Thus, we have obtained the result, straight from the definition of $\det$.
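Numerically, the bound and its equality case are easy to illustrate; here is a small sketch, with the Walsh matrix $W_8$ attaining the bound $8^4=4096$, and a random sign matrix staying strictly below it, up to floating point rounding.

```python
# Hadamard determinant bound |det H| <= N^(N/2) for H in M_N(+-1), attained by W_8.
import numpy as np

W2 = np.array([[1, 1], [1, -1]])
W8 = np.kron(np.kron(W2, W2), W2)                # the Walsh matrix W_8
rng = np.random.default_rng(0)
R8 = rng.choice([-1, 1], size=(8, 8))            # a generic sign matrix

print(round(abs(np.linalg.det(W8))))             # 4096 = 8^(8/2), the bound is attained
print(abs(np.linalg.det(R8)) <= 8 ** 4)          # True, and generically strict
```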
The above result is quite interesting, philosophically speaking. Let us recall indeed from Proposition 1.2 that the set formed by the $N\times N$ Hadamard matrices is: $$Y_N=M_N(\pm1)\cap\sqrt{N}O_N$$
Thus, what we have in Theorem 1.17 is an analytic method for locating $Y_N$ inside $M_N(\pm1)$. This suggests doing many geometric and analytic things, as for instance looking at the maximizers of $|\det(H)|$ at values $N\in\mathbb N$ which are not multiples of 4. These latter matrices are called “quasi-Hadamard”, and we refer here to [@pso].
From a “dual” point of view, the question of locating $Y_N$ inside $\sqrt{N}O_N$, once again via analytic methods, makes sense as well. The result here, from [@bcs], is as follows:
Given a matrix $U\in O_N$ we have $$||U||_1\leq N\sqrt{N}$$ with equality precisely when $H=\sqrt{N}U$ is Hadamard.
We have indeed the following estimate, valid for any $U\in O_N$: $$||U||_1=\sum_{ij}|U_{ij}|\leq N\left(\sum_{ij}|U_{ij}|^2\right)^{1/2}=N\sqrt{N}$$
The equality case holds when $|U_{ij}|=1/\sqrt{N}$ for any $i,j$, which amounts to saying that $H=\sqrt{N}U$ must satisfy $H\in M_N(\pm1)$, and so that $H$ must be Hadamard.
As a first comment here, the above Cauchy-Schwarz estimate can be improved with a Hölder estimate, the conclusion being that the rescaled Hadamard matrices maximize the $p$-norm on $O_N$ at any $p\in[1,2)$, and minimize it at any $p\in(2,\infty]$. We will discuss this in section 9 below, with full details, directly in the complex case.
As it was the case with the Hadamard determinant bound, all this suggests doing some further geometry and analysis, this time on the Lie group $O_N$, notably with a notion of “almost Hadamard matrix” at stake. We will be back to this in section 9 below.
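Here is, again as a pure illustration, a small numerical experiment of ours comparing $||U||_1$ for a rescaled Hadamard matrix and for a generic rotation; there is of course nothing canonical about the random model chosen below:

```python
# The 1-norm criterion on O_N: rescaled Hadamard matrices maximize sum |U_ij|.
import numpy as np

F2 = np.array([[1, 1], [1, -1]])
W8 = np.kron(np.kron(F2, F2), F2)
U_had = W8 / np.sqrt(8)                         # in O_8, coming from a Hadamard matrix
print(np.abs(U_had).sum(), 8 * np.sqrt(8))      # both equal 22.627...

rng = np.random.default_rng(1)
U_rand, _ = np.linalg.qr(rng.standard_normal((8, 8)))   # a generic rotation
print(np.abs(U_rand).sum())                     # strictly smaller in general
```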
Let us discuss now, once again as an introduction to analytic topics, yet another such result. We recall that a matrix $H\in M_N(\mathbb R)$ is called row-stochastic when the sums on the rows are all equal, column-stochastic when the same is true for columns, and bistochastic when this is true for both rows and columns, the common sum being the same.
With this terminology, we have the following well-known result:
For an Hadamard matrix $H\in M_N(\pm1)$, the excess, $$E(H)=\sum_{ij}H_{ij}$$ satisfies $|E(H)|\leq N\sqrt{N}$, with equality if and only if $H$ is bistochastic.
In terms of the all-one vector $\xi=(1)_i\in\mathbb R^N$, we have: $$E(H)=\sum_{ij}H_{ij}=\sum_{ij}H_{ij}\xi_j\xi_i=\sum_i(H\xi)_i\xi_i=<H\xi,\xi>$$
Now by using the Cauchy-Schwarz inequality, along with the fact that $U=H/\sqrt{N}$ is orthogonal, and hence of norm 1, we obtain, as claimed: $$|E(H)|\leq||H\xi||\cdot||\xi||\leq||H||\cdot||\xi||^2=N\sqrt{N}$$
Regarding now the equality case, this requires the vectors $H\xi,\xi$ to be proportional, and so our matrix $H$ to be row-stochastic. But since $U=H/\sqrt{N}$ is orthogonal, $H\xi\sim\xi$ is equivalent to $H^t\xi\sim\xi$, and so $H$ must be bistochastic, as claimed.
There are many known interesting results on the bistochastic Hadamard matrices, and we will be back to this in section 7 below, directly in the complex setting.
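As a quick illustration of the above excess bound, here is a sketch of ours checking the circulant matrix $K_4$ from the beginning of this section, which is bistochastic, against the Walsh matrix $W_4$, which is not:

```python
# Excess check: the bistochastic Hadamard matrix K4 attains |E(H)| = N*sqrt(N),
# while W4 stays below the bound.
import numpy as np

K4 = np.array([[-1, 1, 1, 1],
               [ 1,-1, 1, 1],
               [ 1, 1,-1, 1],
               [ 1, 1, 1,-1]])
F2 = np.array([[1, 1], [1, -1]])
W4 = np.kron(F2, F2)

print(K4.sum(), 4 * np.sqrt(4))   # 8 = 8.0, equality in the bistochastic case
print(W4.sum())                   # 4, strictly below the bound
```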
One interesting question, that we would like to discuss now, is that of computing the law of the excess over the equivalence class of $H$. Let us start with:
The glow of $H\in M_N(\pm1)$ is the probability measure $\mu\in\mathcal P(\mathbb Z)$ describing the distribution of the excess, $E=\sum_{ij}H_{ij}$, over the equivalence class of $H$.
Since the excess is invariant under permutations of rows and columns, we can restrict the attention to the matrices $\widetilde{H}\simeq H$ obtained by switching signs on rows and columns. More precisely, let $(a,b)\in\mathbb Z_2^N\times\mathbb Z_2^N$, and consider the following matrix: $$\widetilde{H}_{ij}=a_ib_jH_{ij}$$
We can regard the sum of entries of $\widetilde{H}$ as a random variable, over the group $\mathbb Z_2^N\times\mathbb Z_2^N$, and we have the following equivalent description of the glow:
Given a matrix $H\in M_N(\pm 1)$, if we define $\varphi:\mathbb Z_2^N\times\mathbb Z_2^N\to\mathbb Z$ by $$\varphi(a,b)=\sum_{ij}a_ib_jH_{ij}$$ then the glow $\mu$ is the probability measure on $\mathbb Z$ given by $\mu(\{k\})=P(\varphi=k)$.
The function $\varphi$ in the statement can indeed be regarded as a random variable over the group $\mathbb Z_2^N\times\mathbb Z_2^N$, with this latter group being endowed with its uniform probability measure $P$. The distribution $\mu$ of this variable $\varphi$ is then given by: $$\mu(\{k\})=\frac{1}{4^N}\#\left\{(a,b)\in \mathbb Z_2^N\times\mathbb Z_2^N\Big|\varphi(a,b)=k\right\}$$
By the above discussion, this distribution is exactly the glow.
The terminology in Definition 1.20 comes from the following picture. Assume that we have a square city, with $N$ horizontal streets and $N$ vertical streets, and with street lights at each crossroads. When evening comes the lights are switched on at the positions $(i,j)$ where $H_{ij}=1$, and then, all night long, they are randomly switched on and off, with the help of $2N$ master switches, one at the end of each street: $$\begin{matrix}
\to&&\diamondsuit&\diamondsuit&\diamondsuit&\diamondsuit\\
\to&&\diamondsuit&\times&\diamondsuit&\times\\
\to&&\diamondsuit&\diamondsuit&\times&\times\\
\to&&\diamondsuit&\times&\times&\diamondsuit\\
\\
&&\uparrow&\uparrow&\uparrow&\uparrow
\end{matrix}$$
With this picture in mind, $\mu$ describes indeed the glow of the city.
At a more advanced level now, all this is related to the Gale-Berlekamp game [@fsl], [@rvi], and this is where our main motivation for studying it comes from.
In order to compute the glow, it is useful to have in mind the following picture: $$\begin{matrix}
&&b_1&\ldots&b_N\\
&&\downarrow&&\downarrow\\
(a_1)&\to&H_{11}&\ldots&H_{1N}&\Rightarrow&S_1\\
\vdots&&\vdots&&\vdots&&\vdots\\
(a_N)&\to&H_{N1}&\ldots&H_{NN}&\Rightarrow&S_N
\end{matrix}$$
Here the columns of $H$ have been multiplied by the entries of the horizontal switching vector $b$, the resulting sums on rows are denoted $S_1,\ldots,S_N$, and the vertical switching vector $a$ still has to act on these sums, and produce the glow component at $b$.
With this picture in mind, we first have the following result, from [@ba6]:
The glow of a matrix $H\in M_N(\pm 1)$ is given by $$\mu=\frac{1}{2^N}\sum_{b\in\mathbb Z_2^N}\beta_1(c_1)*\ldots*\beta_N(c_N)$$ where $\beta_r(c)=\left(\frac{\delta_r+\delta_{-r}}{2}\right)^{*c}$, and where $c_r=\#\left\{k\,\big|\,|S_k|=r\right\}$ counts the occurrences of $r$ among $|S_1|,\ldots,|S_N|$, with $S=Hb$.
We use the interpretation of the glow explained above. So, consider the decomposition of the glow over $b$ components: $$\mu=\frac{1}{2^N}\sum_{b\in\mathbb Z_2^N}\mu_b$$
With the notation $S=Hb$, the numbers $S_1,\ldots,S_N$ are the sums on the rows of the matrix $H_{ij}b_j$, and the sums on the rows of $\widetilde{H}_{ij}=H_{ij}a_ib_j$ are then $a_1S_1,\ldots,a_NS_N$. Thus the glow components are given by: $$\mu_b=law\left(\pm S_1\pm S_2\ldots\pm S_N\right)$$
By permuting now the sums on the right, we have the following formula: $$\mu_b=law\big(\underbrace{\pm 0\ldots\pm 0}_{c_0}\underbrace{\pm 1\ldots\pm 1}_{c_1}\ldots\ldots\underbrace{\pm N\ldots\pm N}_{c_N}\big)$$
Now since the $\pm$ variables each follow a Bernoulli law, and these Bernoulli laws are independent, we obtain a convolution product as in the statement.
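The above formula can be tested on small examples. Here is a sketch of ours doing this for the circulant matrix $K_4$, by comparing the brute-force glow with the convolution formula from the proposition:

```python
# Glow of K4: brute force over all pairs (a,b), versus the convolution formula.
import numpy as np
from itertools import product
from collections import Counter

H = np.array([[-1, 1, 1, 1],
              [ 1,-1, 1, 1],
              [ 1, 1,-1, 1],
              [ 1, 1, 1,-1]])
N = 4
signs = [np.array(s) for s in product([1, -1], repeat=N)]

# brute force: E = sum_ij a_i b_j H_ij, over all 4^N pairs (a,b)
brute = Counter(int(a @ H @ b) for a in signs for b in signs)
brute = {k: v / 4 ** N for k, v in brute.items()}

# convolution formula: for each b, convolve the measures (delta_{S_k}+delta_{-S_k})/2
conv = Counter()
for b in signs:
    dist = {0: 1.0}
    for s in (H @ b):
        s, new = int(s), Counter()
        for val, p in dist.items():
            new[val + s] += p / 2
            new[val - s] += p / 2
        dist = new
    for val, p in dist.items():
        conv[val] += p / 2 ** N

print(sorted(brute.items()))     # the support is contained in 4*Z
print(all(abs(brute.get(k, 0) - conv.get(k, 0)) < 1e-12
          for k in set(brute) | set(conv)))          # True
```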
We will need the following elementary fact:
Let $H\in M_N(\pm1)$ be an Hadamard matrix of order $N\geq 4$.
1. The sums of entries on rows $S_1,\ldots,S_N$ are even, and equal modulo $4$.
2. If the sums on the rows $S_1,\ldots,S_N$ are all $0$ modulo $4$, then the number of rows whose sum is $4$ modulo $8$ is odd for $N=4(8)$, and even for $N=0(8)$.
\(1) Let us pick two rows of our matrix, and then permute the columns such that these two rows look as follows: $$\begin{pmatrix}
1\ldots\ldots1&1\ldots\ldots1&-1\ldots-1&-1\ldots-1\\
\underbrace{1\ldots\ldots1}_a&\underbrace{-1\ldots-1}_b&\underbrace{1\ldots\ldots1}_c&\underbrace{-1\ldots-1}_d
\end{pmatrix}$$
We have $a+b+c+d=N$, and by orthogonality $a+d=b+c$, so $a+d=b+c=\frac{N}{2}$. Now since $N/2$ is even, we conclude that $b=c(2)$. But the two row sums are respectively $a+b-c-d$ and $a-b+c-d$, which are both even, and whose difference is $2(b-c)=0(4)$, and this gives the result.
\(2) In the case where $H$ is “row-dephased”, in the sense that its first row consists of $1$ entries only, the row sums are $N,0,\ldots,0$, and so the result holds. In general now, by permuting the columns we can assume that our matrix looks as follows: $$H=\begin{pmatrix}1\ldots\ldots1&-1\ldots-1\\ \underbrace{\vdots}_x&\underbrace{\ \vdots\ }_y\end{pmatrix}$$
We have $x+y=N=0(4)$, and since the first row sum $S_1=x-y$ is by assumption 0 modulo 4, we conclude that $x,y$ are even. In particular, since $y$ is even, the passage from $H$ to its row-dephased version $\widetilde{H}$ can be done via $y/2$ double sign switches.
Now, in view of the above, it is enough to prove that the conclusion in the statement is stable under a double sign switch. So, let $H\in M_N(\pm1)$ be Hadamard, and let us perform on it a double sign switch, say on the first two columns. Depending on the values of the entries on these first two columns, the total sums on the rows change as follows: $$\begin{aligned}
\begin{pmatrix}+&+&\ldots&\ldots\end{pmatrix}&:&S\to S-4\\
\begin{pmatrix}+&-&\ldots&\ldots\end{pmatrix}&:&S\to S\\
\begin{pmatrix}-&+&\ldots&\ldots\end{pmatrix}&:&S\to S\\
\begin{pmatrix}-&-&\ldots&\ldots\end{pmatrix}&:&S\to S+4\end{aligned}$$
We can see that the changes modulo 8 of the row sum $S$ occur precisely in the first and in the fourth case. But, since the first two columns of our matrix $H\in M_N(\pm1)$ are orthogonal, the total number of these cases is even, and this finishes the proof.
Observe that Proposition 1.22 and Proposition 1.23 (1) show that the glow of an Hadamard matrix of order $N\geq 4$ is supported by $4\mathbb Z$. With this in hand, we have:
Let $H\in M_N(\pm1)$ be an Hadamard matrix of order $N\geq 4$, and denote by $\mu^{even},\mu^{odd}$ the mass one-rescaled restrictions of $\mu\in\mathcal P(4\mathbb Z)$ to $8\mathbb Z,8\mathbb Z+4$.
1. At $N=0(8)$ we have $\mu=\frac{3}{4}\mu^{even}+\frac{1}{4}\mu^{odd}$.
2. At $N=4(8)$ we have $\mu=\frac{1}{4}\mu^{even}+\frac{3}{4}\mu^{odd}$.
We use the glow decomposition over $b$ components, from Proposition 1.22: $$\mu=\frac{1}{2^N}\sum_{b\in\mathbb Z_2^N}\mu_b$$
The idea is that the decomposition formula in the statement will occur over averages of the following type, over truncated sign vectors $c\in\mathbb Z_2^{N-1}$: $$\mu'_c=\frac{1}{2}(\mu_{+c}+\mu_{-c})$$
Indeed, we know from Proposition 1.23 (1) that modulo 4, the sums on rows are either $0,\ldots,0$ or $2,\ldots,2$. Now since these two cases are complementary when pairing switch vectors $(+c,-c)$, we can assume that we are in the case $0,\ldots,0$ modulo 4.
Now by looking at this sequence modulo 8, and letting $x$ be the number of 4 components, so that the number of 0 components is $N-x$, we have: $$\frac{1}{2}(\mu_{+c}+\mu_{-c})=\frac{1}{2}\left(law(\underbrace{\pm0\ldots\pm 0}_{N-x}\underbrace{\pm4\ldots\pm 4}_x)+law(\underbrace{\pm 2\ldots\pm 2}_N)\right)$$
Now by using Proposition 1.23 (2), the first summand splits $1-0$ or $0-1$ on $8\mathbb Z,8\mathbb Z+4$, depending on the class of $N$ modulo 8. As for the second summand, since $N$ is even this always splits $\frac{1}{2}-\frac{1}{2}$ on $8\mathbb Z,8\mathbb Z+4$. So, by making the average we obtain either a $\frac{3}{4}-\frac{1}{4}$ or a $\frac{1}{4}-\frac{3}{4}$ splitting on $8\mathbb Z,8\mathbb Z+4$, depending on the class of $N$ modulo 8, as claimed.
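Here is a small numerical check of these proportions, on the Walsh matrices $W_4$ and $W_8$; as before, this is only an illustrative sketch of ours:

```python
# Mass of the glow on 8Z versus 8Z+4, for W4 (N = 4 mod 8) and W8 (N = 0 mod 8).
import numpy as np
from itertools import product

def glow_split(H):
    N = H.shape[0]
    A = np.array(list(product([1, -1], repeat=N)))   # all sign vectors
    E = (A @ H) @ A.T                                 # E[p,q] = sum_ij (a_p)_i H_ij (a_q)_j
    return np.mean(E % 8 == 0), np.mean(E % 8 == 4)

F2 = np.array([[1, 1], [1, -1]])
W4 = np.kron(F2, F2)
W8 = np.kron(W4, F2)
print(glow_split(W4))   # (0.25, 0.75), as predicted at N = 4 mod 8
print(glow_split(W8))   # (0.75, 0.25), as predicted at N = 0 mod 8
```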
Various computer simulations suggest that the measures $\mu^{even},\mu^{odd}$ don’t have further general algebraic properties. Analytically speaking now, we have:
The glow moments of $H\in M_N(\pm1)$ are given by: $$\int_{\mathbb Z_2^N\times\mathbb Z_2^N}\left(\frac{E}{N}\right)^{2p}=(2p)!!+O(N^{-1})$$ In particular the variable $E/N$ becomes Gaussian in the $N\to\infty$ limit.
Let $P_{even}(r)\subset P(r)$ be the set of partitions of $\{1,\ldots,r\}$ having all blocks of even size. The moments of the variable $E=\sum_{ij}a_ib_jH_{ij}$ are then given by: $$\begin{aligned}
\int_{\mathbb Z_2^N\times\mathbb Z_2^N}E^r
&=&\sum_{ix}H_{i_1x_1}\ldots H_{i_rx_r}\int_{\mathbb Z_2^N}a_{i_1}\ldots a_{i_r}\int_{\mathbb Z_2^N}b_{x_1}\ldots b_{x_r}\\
&=&\sum_{\pi,\sigma\in P_{even}(r)}\sum_{\ker i=\pi,\ker x=\sigma}H_{i_1x_1}\ldots H_{i_rx_r}\end{aligned}$$
Thus the moments decompose over partitions $\pi\in P_{even}(r)$, with the contributions being obtained by integrating the following quantities: $$C(\sigma)=\sum_{\ker x=\sigma}\sum_iH_{i_1x_1}\ldots H_{i_rx_r}\cdot a_{i_1}\ldots a_{i_r}$$
Now by Möbius inversion, we obtain a formula as follows: $$\int_{\mathbb Z_2^N\times\mathbb Z_2^N}E^r=\sum_{\pi\in P_{even}(r)}K(\pi)N^{|\pi|}I(\pi)$$
Here $K(\pi)=\sum_{\sigma\in P_{even}(r)}\mu(\pi,\sigma)$, where $\mu$ is the Möbius function of $P_{even}(r)$, and, with the convention that $H_1,\ldots,H_N\in\mathbb Z_2^N$ are the rows of $H$: $$I(\pi)=\sum_i\prod_{b\in\pi}\frac{1}{N}\left\langle\prod_{r\in b}H_{i_r},1\right\rangle$$
With this formula in hand, the first assertion follows, because the biggest elements of the lattice $P_{even}(2p)$ are the $(2p)!!$ partitions consisting of $p$ copies of a $2$-block.
As for the second assertion, this follows from the moment formula, and from the fact that the glow of $H\in M_N(\pm1)$ is real, and symmetric with respect to $0$. See [@ba5].
We will be back to all this in section 8 below, in the complex matrix setting.
Finally, some interesting analytic results can be obtained by exiting the square matrix setting, and looking at the rectangular matrix case. Let us start with:
A partial Hadamard matrix (PHM) is a matrix $$H\in M_{M\times N}(\pm1)$$ having its rows pairwise orthogonal.
These matrices are quite interesting objects, appearing in connection with various questions in combinatorics. The motivating examples are the Hadamard matrices $H\in M_N(\pm1)$, and their $M\times N$ submatrices, with $M\leq N$. See [@hal], [@ito], [@sya], [@ver].
In their paper [@dle], de Launey and Levin were able to count these matrices, in the asymptotic limit $N\in 4\mathbb N$, $N\to\infty$. Their method is based on:
The probability for a random $H\in M_{M\times N}(\pm1)$ to be partial Hadamard equals the probability for a length $N$ random walk with increments drawn from $$E=\left\{(e_i\bar{e}_j)_{i<j}\Big|e\in\mathbb Z_2^M\right\}$$ regarded as a subset of $\mathbb Z_2^{\binom{M}{2}}$, to return to the origin.
Indeed, with $T(e)=(e_i\bar{e}_j)_{i<j}$, a matrix $X=[e_1,\ldots,e_N]\in M_{M\times N}(\mathbb Z_2)$ is partial Hadamard if and only if $T(e_1)+\ldots+T(e_N)=0$, and this gives the result.
As explained in [@dle] the above probability can be indeed computed, and we have:
The probability for a random $H\in M_{M\times N}(\pm1)$ to be PHM is $$P_M\simeq\frac{2^{(M-1)^2}}{\sqrt{(2\pi N)^{\binom{M}{2}}}}$$ in the $N\in 4\mathbb N$, $N\to\infty$ limit.
According to Proposition 1.27 above, we have: $$P_M
=\frac{1}{2^{(M-1)N}}\#\left\{\xi_1,\ldots,\xi_N\in E\Big|\sum_i\xi_i=0\right\}
=\frac{1}{2^{(M-1)N}}\sum_{\xi_1,\ldots,\xi_N\in E}\delta_{\Sigma\xi_i,0}$$
By using the Fourier inversion formula we have, with $D=\binom{M}{2}$: $$\delta_{\Sigma\xi_i,0}=\frac{1}{(2\pi)^D}\int_{[-\pi,\pi]^D}e^{i<\lambda,\Sigma\xi_i>}d\lambda$$
After many non-trivial computations, this leads to the result. See [@dle].
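As an elementary illustration of the above estimate, at $M=2$ the two rows are orthogonal precisely when they agree at exactly $N/2$ positions, so the probability can be computed exactly, and compared with the formula $2/\sqrt{2\pi N}$. Here is a sketch of ours doing this comparison:

```python
# de Launey-Levin estimate at M=2: exact probability that two random sign rows
# of length N are orthogonal, versus the asymptotic formula 2/sqrt(2*pi*N).
from math import comb, pi, sqrt

for N in (20, 100, 400):
    exact = comb(N, N // 2) / 2 ** N
    approx = 2 / sqrt(2 * pi * N)
    print(N, exact, approx)      # the two values get closer and closer
```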
All this is extremely interesting. Let us mention as well that for the general matrices $H\in M_{M\times N}(\pm1)$, which are not necessarily PHM, such statistics can be deduced from the work of Tao-Vu [@tvu]. Finally, there is an extension of the notion of PHM to the complex case, and we will be back to this later on, in section 3 below.
Complex matrices
================
We have seen that the Hadamard matrices $H\in M_N(\pm1)$ are very interesting combinatorial objects. In what follows, we will be interested in their complex versions:
A complex Hadamard matrix is a square complex matrix $$H\in M_N(\mathbb C)$$ whose entries are on the unit circle, $H_{ij}\in\mathbb T$, and whose rows are pairwise orthogonal.
Here, and in what follows, the scalar product is the usual one on $\mathbb C^N$, taken to be linear in the first variable and antilinear in the second one: $$<x,y>=\sum_ix_i\bar{y}_i$$
As basic examples of complex Hadamard matrices, we have of course the real Hadamard matrices, $H\in M_N(\pm1)$. We will see that there are many other examples.
Let us start by extending some basic results from the real case. First, we have:
The set of the $N\times N$ complex Hadamard matrices is the real algebraic manifold $$X_N=M_N(\mathbb T)\cap\sqrt{N}U_N$$ where $U_N$ is the unitary group, the intersection being taken inside $M_N(\mathbb C)$.
Let $H\in M_N(\mathbb T)$. Then $H$ is Hadamard if and only if its rescaling $U=H/\sqrt{N}$ belongs to the unitary group $U_N$, and so when $H\in X_N$, as claimed.
The above manifold $X_N$, while appearing by definition as an intersection of smooth manifolds, is very far from being smooth. We will be back to this, later on.
As a basic consequence of the above result, we have:
Let $H\in M_N(\mathbb C)$ be an Hadamard matrix.
1. The columns of $H$ must be pairwise orthogonal.
2. The matrices $H^t,\bar{H},H^*\in M_N(\mathbb C)$ are Hadamard as well.
We use the well-known fact that if a matrix is unitary, $U\in U_N$, then so is its complex conjugate $\bar{U}=(\bar{U}_{ij})$, the inversion formulae being as follows: $$U^*=U^{-1}\quad,\quad U^t=\bar{U}^{-1}$$
Thus the unitary group $U_N$ is stable under the operations $U\to U^t,U\to\bar{U},U\to U^*$, and it follows that the algebraic manifold $X_N$ constructed in Proposition 2.2 is stable as well under these operations. But this gives all the assertions.
Let us introduce now the following equivalence notion for the complex Hadamard matrices, taking into account some basic operations which can be performed:
Two complex Hadamard matrices are called equivalent, and we write $H\sim K$, when it is possible to pass from $H$ to $K$ via the following operations:
1. Permuting the rows, or permuting the columns.
2. Multiplying the rows or columns by numbers in $\mathbb T$.
Also, we say that $H$ is dephased when its first row and column consist of $1$ entries.
The same remarks as in the real case apply. For instance, we have not taken into account the results in Proposition 2.3 when formulating the above definition, because the operations $H\to H^t,\bar{H},H^*$ are far more subtle than those in (1,2) above.
At the level of the examples now, we have the following basic construction, which works at any $N\in\mathbb N$, in stark contrast with what happens in the real case:
The Fourier matrix, $F_N=(w^{ij})$ with $w=e^{2\pi i/N}$, which in standard matrix form, with indices $i,j=0,1,\ldots,N-1$, is as follows, $$F_N=\begin{pmatrix}
1&1&1&\ldots&1\\
1&w&w^2&\ldots&w^{N-1}\\
1&w^2&w^4&\ldots&w^{2(N-1)}\\
\vdots&\vdots&\vdots&&\vdots\\
1&w^{N-1}&w^{2(N-1)}&\ldots&w^{(N-1)^2}
\end{pmatrix}$$ is a complex Hadamard matrix, in dephased form.
By using the standard fact that the averages of complex numbers correspond to barycenters, we conclude that the scalar products between the rows of $F_N$ are: $$<R_a,R_b>
=\sum_jw^{aj}w^{-bj}
=\sum_jw^{(a-b)j}
=N\delta_{ab}$$
Thus $F_N$ is indeed a complex Hadamard matrix. As for the fact that $F_N$ is dephased, this follows from our convention $i,j=0,1,\ldots,N-1$, which is there for this.
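As a quick computer check of the above computation, here is a short sketch of ours verifying the condition $F_NF_N^*=N1_N$ at a few values of $N$:

```python
# Check that the Fourier matrix F_N is a complex Hadamard matrix: F_N F_N* = N 1_N.
import numpy as np

def fourier(N):
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N)

for N in (3, 5, 8):
    F = fourier(N)
    print(N, np.allclose(F @ F.conj().T, N * np.eye(N)))   # True
```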
Thus, there is no analogue of the HC in the complex case. We will see later on, in section 6 below, that the Fourier matrix $F_N$ can be put in circulant form, so there is no analogue of the CHC either, in this setting. This is of course very good news.
We should mention, however, that the HC and CHC do have some complex extensions, which are of technical nature, by restricting the attention to the Hadamard matrices formed by roots of unity of a given order. We will discuss this in section 3 below.
As a first classification result now, in the complex case, we have:
The Fourier matrices $F_2,F_3$, which are given by $$F_2=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\quad,\quad
F_3=\begin{pmatrix}1&1&1\\ 1&w&w^2\\ 1&w^2&w\end{pmatrix}$$ with $w=e^{2\pi i/3}$ are the only Hadamard matrices at $N=2,3$, up to equivalence.
The proof at $N=2$ is similar to the proof of Proposition 1.5. Regarding now the case $N=3$, consider an Hadamard matrix $H\in M_3(\mathbb T)$, in dephased form: $$H=\begin{pmatrix}1&1&1\\ 1&x&y\\ 1&z&t\end{pmatrix}$$
The orthogonality conditions between the rows of this matrix read: $$(1\perp2)\quad:\quad x+y=-1$$ $$(1\perp3)\quad:\quad z+t=-1$$ $$\ \ \ \,(2\perp3)\quad:\quad x\bar{z}+y\bar{t}=-1$$
Now observe that the equation $p+q=-1$ with $p,q\in\mathbb T$ tells us that the triangle having vertices at $1,p,q$ must be equilateral, and so that $\{p,q\}=\{w,w^2\}$, with $w=e^{2\pi i/3}$.
By using this fact, for the first two equations, we conclude that we must have $\{x,y\}=\{w,w^2\}$ and $\{z,t\}=\{w,w^2\}$. As for the third equation, this tells us that we must have $x\neq z$. Thus, our Hadamard matrix $H$ is either the Fourier matrix $F_3$, or is the matrix obtained from $F_3$ by permuting the last two columns, and we are done.
In order to deal now with the case $N=4$, we already know, from our study in the real case, that we will need tensor products. So, let us formulate:
The tensor product of complex Hadamard matrices is given, in double indices, by $(H\otimes K)_{ia,jb}=H_{ij}K_{ab}$. In other words, we have the formula $$H\otimes K=
\begin{pmatrix}
H_{11}K&\ldots&H_{1M}K\\
\vdots&&\vdots\\
H_{M1}K&\ldots&H_{MM}K
\end{pmatrix}$$ by using the lexicographic order on the double indices.
In order to advance, our first task will be that of tensoring the Fourier matrices. And here, we have the following statement, refining and generalizing Theorem 2.5:
Given a finite abelian group $G$, with dual group $\widehat{G}=\{\chi:G\to\mathbb T\}$, consider the Fourier coupling $\mathcal F_G:G\times\widehat{G}\to\mathbb T$, given by $(i,\chi)\to\chi(i)$.
1. Via the standard isomorphism $G\simeq\widehat{G}$, this Fourier coupling can be regarded as a square matrix, $F_G\in M_G(\mathbb T)$, which is a complex Hadamard matrix.
2. In the case of the cyclic group $G=\mathbb Z_N$ we obtain in this way, via the standard identification $\mathbb Z_N=\{1,\ldots,N\}$, the Fourier matrix $F_N$.
3. In general, when using a decomposition $G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}$, the corresponding Fourier matrix is given by $F_G=F_{N_1}\otimes\ldots\otimes F_{N_k}$.
This follows indeed from some basic facts from group theory:
\(1) With the identification $G\simeq\widehat{G}$ made our matrix is given by $(F_G)_{i\chi}=\chi(i)$, and the scalar products between the rows are computed as follows: $$<R_i,R_j>
=\sum_\chi\chi(i)\overline{\chi(j)}
=\sum_\chi\chi(i-j)
=|G|\cdot\delta_{ij}$$
Thus, we obtain indeed a complex Hadamard matrix.
\(2) This follows from the well-known and elementary fact that, via the identifications $\mathbb Z_N=\widehat{\mathbb Z_N}=\{1,\ldots,N\}$, the Fourier coupling here is $(i,j)\to w^{ij}$, with $w=e^{2\pi i/N}$.
\(3) We use here the following well-known formula, for the duals of products: $$\widehat{H\times K}=\widehat{H}\times\widehat{K}$$
At the level of the corresponding Fourier couplings, we obtain from this: $$F_{H\times K}=F_H\otimes F_K$$
Now by decomposing $G$ into cyclic groups, as in the statement, and by using (2) for the cyclic components, we obtain the formula in the statement.
As a first application of this result, we have:
The Walsh matrix, $W_N$ with $N=2^n$, which is given by $$W_N=\begin{pmatrix}1&1\\1&-1\end{pmatrix}^{\otimes n}$$ is the Fourier matrix of the finite abelian group $K_N=\mathbb Z_2^n$.
We have indeed $W_2=F_2=F_{K_2}$, and by taking tensor powers we obtain from this that we have $W_N=F_{K_N}$, for any $N=2^n$.
By getting back now to our classification work, the possible abelian groups at $N=4$, that we can use, are the cyclic group $\mathbb Z_4$, which produces the Fourier matrix $F_4$, and the Klein group $K_4=\mathbb Z_2\times\mathbb Z_2$, which produces the Walsh matrix $W_4$.
The point, however, is that, besides $F_4,W_4$, there are many other complex Hadamard matrices at $N=4$. Indeed, we can use here the following version of the tensor product construction, coming from Diţă’s paper [@dit], involving parameters:
If $H\in M_M(\mathbb T)$ and $K\in M_N(\mathbb T)$ are Hadamard, then so are the following two matrices, for any choice of a parameter matrix $Q\in M_{M\times N}(\mathbb T)$:
1. $H\otimes_QK\in M_{MN}(\mathbb T)$, given by $(H\otimes_QK)_{ia,jb}=Q_{ib}H_{ij}K_{ab}$.
2. $H\!\!{\ }_Q\!\otimes K\in M_{MN}(\mathbb T)$, given by $(H\!\!{\ }_Q\!\otimes K)_{ia,jb}=Q_{ja}H_{ij}K_{ab}$.
These are called right and left Diţă deformations of $H\otimes K$, with parameter $Q$.
The rows $R_{ia}$ of the matrix $H\otimes_QK$ from (1) are indeed pairwise orthogonal: $$\begin{aligned}
<R_{ia},R_{kc}>
&=&\sum_{jb}Q_{ib}H_{ij}K_{ab}\cdot\bar{Q}_{kb}\bar{H}_{kj}\bar{K}_{cb}\\
&=&M\delta_{ik}\sum_bK_{ab}\bar{K}_{cb}\\
&=&MN\delta_{ik}\delta_{ac}\end{aligned}$$
As for the rows $L_{ia}$ of the matrix $H\!\!{\ }_Q\!\otimes K$ from (2), these are orthogonal as well: $$\begin{aligned}
<L_{ia},L_{kc}>
&=&\sum_{jb}Q_{ja}H_{ij}K_{ab}\cdot\bar{Q}_{jc}\bar{H}_{kj}\bar{K}_{cb}\\
&=&N\delta_{ac}\sum_jH_{ij}\bar{H}_{kj}\\
&=&MN\delta_{ik}\delta_{ac}\end{aligned}$$
Thus, both the matrices in the statement are Hadamard, as claimed.
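The above computation can be illustrated numerically as follows; the sketch below, which is ours, builds $F_2\otimes_QF_3$ for a random parameter matrix $Q\in M_{2\times3}(\mathbb T)$, and checks the Hadamard condition:

```python
# Dita deformation H x_Q K, for H = F_2, K = F_3 and a random Q with entries in T.
import numpy as np

def fourier(N):
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N)

def dita_right(H, K, Q):
    M, N = H.shape[0], K.shape[0]
    out = np.zeros((M * N, M * N), dtype=complex)
    for i in range(M):
        for a in range(N):
            for j in range(M):
                for b in range(N):
                    # (H x_Q K)_{ia,jb} = Q_{ib} H_{ij} K_{ab}, lexicographic indices
                    out[i * N + a, j * N + b] = Q[i, b] * H[i, j] * K[a, b]
    return out

rng = np.random.default_rng(2)
Q = np.exp(2j * np.pi * rng.random((2, 3)))
L = dita_right(fourier(2), fourier(3), Q)
print(np.allclose(L @ L.conj().T, 6 * np.eye(6)))   # True, for any choice of Q
```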
As a first observation, when the parameter matrix is the all-one matrix $\mathbb I\in M_{M\times N}(\mathbb T)$, we obtain in this way the usual tensor product of our matrices: $$H\otimes_{\mathbb I}K=H\!\!{\ }_{\mathbb I}\!\otimes K=H\otimes K$$
As a non-trivial example now, the right deformations of the Walsh matrix $W_4=F_2\otimes F_2$, with arbitrary parameter matrix $Q=(^p_r{\ }^q_s)$, are given by: $$F_2\otimes_QF_2=
\begin{pmatrix}
1&1\\
1&-1
\end{pmatrix}
\otimes_{\begin{pmatrix}
p&q\\
r&s
\end{pmatrix}}
\begin{pmatrix}
1&1\\
1&-1
\end{pmatrix}=\begin{pmatrix}
p&q&p&q\\
p&-q&p&-q\\
r&s&-r&-s\\
r&-s&-r&s
\end{pmatrix}$$
This follows indeed by carefully working out what happens, by using the lexicographic order on the double indices, as explained after Proposition 1.6 above. To be more precise, the usual tensor product $W_4=F_2\otimes F_2$ appears as follows: $$W_4=
\begin{pmatrix}
ia\backslash jb&&00&01&10&11\\
\\
00&&1&1&1&1\\
01&&1&-1&1&-1\\
10&&1&1&-1&-1\\
11&&1&-1&-1&1
\end{pmatrix}$$
The corresponding values of the parameters $Q_{ib}$ to be inserted are as follows: $$(Q_{ib})=\begin{pmatrix}
ia\backslash jb&&00&01&10&11\\
\\
00&&Q_{00}&Q_{01}&Q_{00}&Q_{01}\\
01&&Q_{00}&Q_{01}&Q_{00}&Q_{01}\\
10&&Q_{10}&Q_{11}&Q_{10}&Q_{11}\\
11&&Q_{10}&Q_{11}&Q_{10}&Q_{11}
\end{pmatrix}$$
With the notation $Q=(^p_r{\ }^q_s)$, this latter matrix becomes: $$(Q_{ib})=\begin{pmatrix}
ia\backslash jb&&00&01&10&11\\
\\
00&&p&q&p&q\\
01&&p&q&p&q\\
10&&r&s&r&s\\
11&&r&s&r&s
\end{pmatrix}$$
Now by pointwise multiplying this latter matrix with the matrix $W_4$ given above, we obtain the announced formula for the deformed tensor product $F_2\otimes_QF_2$.
As for the left deformations of $W_4=F_2\otimes F_2$, once again with arbitrary parameter matrix $Q=(^p_r{\ }^q_s)$, these are given by a similar formula, as follows: $$F_2\!\!{\ }_Q\!\otimes F_2=
\begin{pmatrix}
1&1\\
1&-1
\end{pmatrix}
\!{\ }_{\begin{pmatrix}
p&q\\
r&s
\end{pmatrix}}\!\otimes
\begin{pmatrix}
1&1\\
1&-1
\end{pmatrix}=\begin{pmatrix}
p&p&r&r\\
q&-q&s&-s\\
p&p&-r&-r\\
q&-q&-s&s
\end{pmatrix}$$
Observe that this latter matrix is transpose to $F_2\otimes_QF_2$. However, this is something accidental, coming from the fact that $F_2$, and so $W_4$ as well, are self-transpose.
With the above constructions in hand, we have the following result:
The only complex Hadamard matrices at $N=4$ are, up to the standard equivalence relation, the matrices $$F_4^s=\begin{pmatrix}
1&1&1&1\\
1&-1&1&-1\\
1&s&-1&-s\\
1&-s&-1&s
\end{pmatrix}$$ with $s\in\mathbb T$, which appear as right Diţă deformations of $W_4=F_2\otimes F_2$.
First of all, the matrix $F_4^s$ is indeed Hadamard, appearing from the construction in Proposition 2.10, assuming that the parameter matrix there $Q\in M_2(\mathbb T)$ is dephased: $$Q=\begin{pmatrix}1&1\\1&s\end{pmatrix}$$
Observe also that, conversely, any right Diţă deformation of $W_4=F_2\otimes F_2$ is of this form. Indeed, if we consider such a deformation, with general parameter matrix $Q=(^p_r{\ }^q_s)$ as above, by dephasing we obtain an equivalence with $F_4^{s'}$, where $s'=ps/qr$: $$\begin{aligned}
\begin{pmatrix}
p&q&p&q\\
p&-q&p&-q\\
r&s&-r&-s\\
r&-s&-r&s
\end{pmatrix}
&\to&
\begin{pmatrix}
1&1&1&1\\
1&-1&1&-1\\
r/p&s/q&-r/p&-s/q\\
r/p&-s/q&-r/p&s/q
\end{pmatrix}\\
&\to&
\begin{pmatrix}
1&1&1&1\\
1&-1&1&-1\\
1&ps/qr&-1&-ps/qr\\
1&-ps/qr&-1&ps/qr
\end{pmatrix}\end{aligned}$$
It remains to prove that the matrices $F_4^s$ are non-equivalent, and that any complex Hadamard matrix $H\in M_4(\mathbb T)$ is equivalent to one of these matrices $F_4^s$.
But this follows by using the same kind of arguments as in the proof of Proposition 1.7, and from the proof of Proposition 2.6. Indeed, let us first dephase our matrix: $$H=\begin{pmatrix}1&1&1&1\\ 1&a&b&c\\ 1&d&e&f\\ 1&g&h&i\end{pmatrix}$$
We use now the fact, coming from plane geometry, that the solutions $x,y,z,t\in\mathbb T$ of the equation $x+y+z+t=0$ are given by $\{x,y,z,t\}=\{p,q,-p,-q\}$, with $p,q\in\mathbb T$.
In our case, we have $1+a+d+g=0$, and so up to a permutation of the last 3 rows, our matrix must look at follows, for a certain $s\in\mathbb T$: $$H=\begin{pmatrix}1&1&1&1\\ 1&-1&b&c\\ 1&s&e&f\\ 1&-s&h&i\end{pmatrix}$$
In the case $s=\pm1$ we can permute the middle two columns, then repeat the same reasoning, and we end up with the matrix in the statement.
In the case $s\neq\pm1$ we have $1+s+e+f=0$, and so $-1\in\{e,f\}$. Up to a permutation of the last columns, we can assume $e=-1$, and our matrix becomes: $$H=\begin{pmatrix}1&1&1&1\\ 1&-1&b&c\\ 1&s&-1&-s\\ 1&-s&h&i\end{pmatrix}$$
Similarly, from $1-s+h+i=0$ we deduce that $-1\in\{h,i\}$. In the case $h=-1$ our matrix must look as follows, and we are led to the matrix in the statement: $$H=\begin{pmatrix}1&1&1&1\\ 1&-1&b&c\\ 1&s&-1&-s\\ 1&-s&-1&i\end{pmatrix}$$
As for the remaining case $i=-1$, here our matrix must look as follows: $$H=\begin{pmatrix}1&1&1&1\\ 1&-1&b&c\\ 1&s&-1&-s\\ 1&-s&h&-1\end{pmatrix}$$
We obtain from the last column $c=s$, then from the second row $b=-s$, then from the third column $h=s$, and so our matrix must be as follows: $$H=\begin{pmatrix}1&1&1&1\\ 1&-1&-s&s\\ 1&s&-1&-s\\ 1&-s&s&-1\end{pmatrix}$$
But, in order for the second and third row to be orthogonal, we must have $s\in\mathbb R$, and so $s=\pm1$, which contradicts our above assumption $s\neq\pm1$.
Thus, we are done with the proof of the main assertion. As for the fact that the matrices in the statement are indeed not equivalent, this is standard as well. See [@tz1].
At $N=5$ now, the situation is considerably more complicated, with $F_5$ being the only known example, but with the proof of its uniqueness being highly nontrivial.
The key technical result here, due to Haagerup [@ha1], is as follows:
Given an Hadamard matrix $H\in M_5(\mathbb T)$, chosen dephased, $$H=\begin{pmatrix}
1&1&1&1&1\\
1&a&x&*&*\\
1&y&b&*&*\\
1&*&*&*&*\\
1&*&*&*&*
\end{pmatrix}$$ the numbers $a,b,x,y$ must satisfy $(x-y)(x-ab)(y-ab)=0$.
This is something quite surprising, and tricky, the proof in [@ha1] being as follows. Let us look at the upper 3-row truncation of $H$, which is of the following form: $$H'=\begin{pmatrix}
1&1&1&1&1\\
1&a&x&p&q\\
1&y&b&r&s
\end{pmatrix}$$
By using the orthogonality of the rows, we have: $$\begin{aligned}
&&(1+a+x)(1+\bar{b}+\bar{y})(1+\bar{a}y+b\bar{x})\\
&=&-(p+q)(\bar{r}+\bar{s})(\bar{p}r+\bar{q}s)\end{aligned}$$
On the other hand, by using $p,q,r,s\in\mathbb T$, we have: $$\begin{aligned}
&&(p+q)(\bar{r}+\bar{s})(\bar{p}r+\bar{q}s)\\
&=&(r+p\bar{q}s+\bar{p}qr+s)(\bar{r}+\bar{s})\\
&=&1+p\bar{q}\bar{r}s+\bar{p}q+\bar{r}s+r\bar{s}+p\bar{q}+\bar{p}qr\bar{s}+1\\
&=&2Re(1+p\bar{q}+r\bar{s}+p\bar{q}r\bar{s})\\
&=&2Re[(1+p\bar{q})(1+r\bar{s})]\end{aligned}$$
We conclude that we have the following formula, involving $a,b,x,y$ only: $$(1+a+x)(1+\bar{b}+\bar{y})(1+\bar{a}y+b\bar{x})\in\mathbb R$$
Now this is a product of type $(1+\alpha)(1+\beta)(1+\gamma)$, with the first summand being 1, and with the last summand, namely $\alpha\beta\gamma$, being real as well, as shown by the above general $p,q,r,s\in\mathbb T$ computation. Thus, when expanding, we are left with: $$\begin{aligned}
&&(a+x)+(\bar{b}+\bar{y})+(\bar{a}y+b\bar{x})+(a+x)(\bar{b}+\bar{y})\\
&+&(a+x)(\bar{a}y+b\bar{x})+(\bar{b}+\bar{y})(\bar{a}y+b\bar{x})\in\mathbb R\end{aligned}$$
By expanding all the products, our formula looks as follows: $$\begin{aligned}
&&a+x+\bar{b}+\bar{y}+\bar{a}y+b\bar{x}+a\bar{b}+a\bar{y}+\bar{b}x+x\bar{y}\\
&+&y+ab\bar{x}+\bar{a}xy+b+\bar{a}\bar{b}y+\bar{x}+\bar{a}+b\bar{x}\bar{y}\in\mathbb R\end{aligned}$$
By removing from this all terms of type $z+\bar{z}$, we are left with: $$a\bar{b}+x\bar{y}+ab\bar{x}+\bar{a}\bar{b}y+\bar{a}xy+b\bar{x}\bar{y}\in\mathbb R$$
Now by getting back to our Hadamard matrix, all this remains true when transposing it, which amounts to interchanging $x\leftrightarrow y$. Thus, we have as well: $$a\bar{b}+\bar{x}y+ab\bar{y}+\bar{a}\bar{b}x+\bar{a}xy+b\bar{x}\bar{y}\in\mathbb R$$
By subtracting now the two equations that we have, we obtain: $$x\bar{y}-\bar{x}y+ab(\bar{x}-\bar{y})+\bar{a}\bar{b}(y-x)\in\mathbb R$$
Now observe that this number, say $Z$, is purely imaginary, because $\bar{Z}=-Z$. Thus our equation reads $Z=0$. On the other hand, we have the following formula: $$\begin{aligned}
abxyZ
&=&abx^2-aby^2+a^2b^2(y-x)+xy(y-x)\\
&=&(y-x)(a^2b^2+xy-ab(x+y))\\
&=&(y-x)(ab-x)(ab-y)\end{aligned}$$
Thus, our equation $Z=0$ corresponds to the formula in the statement.
By using the above result, we are led to the following theorem, also from [@ha1]:
The only Hadamard matrix at $N=5$ is the Fourier matrix, $$F_5=\begin{pmatrix}
1&1&1&1&1\\
1&w&w^2&w^3&w^4\\
1&w^2&w^4&w&w^3\\
1&w^3&w&w^4&w^2\\
1&w^4&w^3&w^2&w
\end{pmatrix}$$ with $w=e^{2\pi i/5}$, up to the standard equivalence relation for such matrices.
Assume that we have an Hadamard matrix $H\in M_5(\mathbb T)$, chosen dephased, and written as in Proposition 2.12, with emphasis on the upper left $2\times2$ subcorner: $$H=\begin{pmatrix}
1&1&1&1&1\\
1&a&x&*&*\\
1&y&b&*&*\\
1&*&*&*&*\\
1&*&*&*&*
\end{pmatrix}$$
We know from Proposition 2.12, applied to $H$ itself, and to its transpose $H^t$ as well, that the entries $a,b,x,y$ must satisfy the following equations: $$(a-b)(a-xy)(b-xy)=0$$ $$(x-y)(x-ab)(y-ab)=0$$
This is of course something very strong, and these equations are actually valid all across the matrix, by permuting rows and columns. The idea will be that by doing some combinatorics, sometimes combined with a few tricks, this will lead to the result.
Our first claim is that, by doing some combinatorics, we can actually obtain from this $a=b$ and $x=y$, up to the equivalence relation for the Hadamard matrices: $$H\sim\begin{pmatrix}
1&1&1&1&1\\
1&a&x&*&*\\
1&x&a&*&*\\
1&*&*&*&*\\
1&*&*&*&*
\end{pmatrix}$$
Indeed, the above two equations lead to 9 possible cases, the first of which is, as desired, $a=b$ and $x=y$. As for the remaining 8 cases, here once again things are determined by 2 parameters, and in practice, we can always permute the first 3 rows and 3 columns, and then dephase our matrix, so that our matrix takes the above special form.
With this result in hand, the combinatorics of the scalar products between the first 3 rows, and between the first 3 columns as well, becomes something which is quite simple to investigate. By doing a routine study here, and then completing it with a study of the lower right $2\times2$ corner as well, we are led to 2 possible cases, as follows: $$H\sim\begin{pmatrix}
1&1&1&1&1\\
1&a&b&c&d\\
1&b&a&d&c\\
1&c&d&a&b\\
1&d&c&b&a
\end{pmatrix}\qquad:\qquad
H\sim\begin{pmatrix}
1&1&1&1&1\\
1&a&b&c&d\\
1&b&a&d&c\\
1&c&d&b&a\\
1&d&c&a&b
\end{pmatrix}$$
Our claim now is that the first case is in fact not possible. Indeed, we must have: $$\begin{aligned}
a+b+c+d&=&-1\\
2Re(a\bar{b})+2Re(c\bar{d})&=&-1\\
2Re(a\bar{c})+2Re(b\bar{d})&=&-1\\
2Re(a\bar{d})+2Re(b\bar{c})&=&-1\end{aligned}$$
Since we have $|Re(x)|\leq1$ for any $x\in\mathbb T$, we deduce from the second equation that $Re(a\bar{b})\leq 1/2$, and so that the arc length between $a,b$ satisfies $\theta(a,b)\geq\pi/3$. The same argument applies to $c,d$, and to the other pairs of numbers in the last 2 equations.
Now since our equations are invariant under permutations of $a,b,c,d$, we can assume that $a,b,c,d$ are ordered on the circle, and by the above, separated by $\geq\pi/3$ arc lengths. But this implies $\theta(a,c)\geq 2\pi/3$ and $\theta(b,d)\geq 2\pi/3$, which gives $Re(a\bar{c})\leq-1/2$ and $Re(b\bar{d})\leq-1/2$, which contradicts the third equation. Thus, our claim is proved.
Summarizing, we have proved so far that our matrix must be as follows: $$H\sim\begin{pmatrix}
1&1&1&1&1\\
1&a&b&c&d\\
1&b&a&d&c\\
1&c&d&b&a\\
1&d&c&a&b
\end{pmatrix}$$
We are now in position of finishing. The orthogonality equations are as follows: $$\begin{aligned}
a+b+c+d&=&-1\\
2Re(a\bar{b})+2Re(c\bar{d})&=&-1\\
a\bar{c}+c\bar{b}+b\bar{d}+d\bar{a}&=&-1\end{aligned}$$
The third equation can be written in the following equivalent form: $$\begin{aligned}
Re[(a+b)(\bar{c}+\bar{d})]&=&-1\\
Im[(a-b)(\bar{c}-\bar{d})]&=&0\end{aligned}$$
From $a,b,c,d\in\mathbb T$ we obtain $\frac{a+b}{a-b},\frac{c+d}{c-d}\in i\mathbb R$, so we can find $s,t\in\mathbb R$ such that: $$a+b=is(a-b)\quad,\quad c+d=it(c-d)$$
By plugging in these values, our system of equations simplifies, as follows: $$\begin{aligned}
(a+b)+(c+d)&=&-1\\
|a+b|^2+|c+d|^2&=&3\\
(a+b)(\bar{c}+\bar{d})&=&-1\end{aligned}$$
Now observe that the last equation implies in particular that we have: $$|a+b|^2\cdot|c+d|^2=1$$
Thus $|a+b|^2,|c+d|^2$ must be roots of $X^2-3X+1=0$, which gives: $$\Big\{|a+b|\,,\,|c+d|\Big\}=\left\{\frac{\sqrt{5}+1}{2}\,,\,\frac{\sqrt{5}-1}{2}\right\}$$
This is very good news, because we are now into 5-th roots of unity. To be more precise, we have 2 cases to be considered, the first one being as follows, with $z\in\mathbb T$: $$a+b=\frac{\sqrt{5}+1}{2}\,z\quad,\quad c+d=-\frac{\sqrt{5}-1}{2}\,z$$
From $a+b+c+d=-1$ we obtain $z=-1$, and by using this we obtain $b=\bar{a},d=\bar{c}$, and then $Re(a)=\cos(4\pi/5),Re(c)=\cos(2\pi/5)$, and so we have $H\sim F_5$.
As for the second case, with $a,b$ and $c,d$ interchanged, this leads to $H\sim F_5$ as well.
The above result is of course something quite impressive. However, at the level of practical conclusions, we can only say that the $N=5$ case is something very simple.
At $N=6$ now, the situation becomes considerably complicated, with lots of “exotic” solutions, and with the structure of the Hadamard manifold $X_6$ being not understood yet. In fact, this manifold $X_6$ looks as complicated as real algebraic manifolds can get.
The simplest examples of Hadamard matrices at $N=6$ are as follows:
We have the following basic Hadamard matrices, at $N=6$:
1. The Fourier matrix $F_6$.
2. The Diţă deformations of $F_2\otimes F_3$ and of $F_3\otimes F_2$.
3. The Haagerup matrix $H_6^q$.
4. The Tao matrix $T_6$.
All this is elementary, the idea, and formulae of the matrices, being as follows:
\(1) This is something that we know well.
\(2) Consider indeed the dephased Diţă deformations of $F_2\otimes F_3$ and $F_3\otimes F_2$: $$F_6^{(rs)}=F_2
\otimes_{\begin{pmatrix}
1&1&1\\
1&r&s
\end{pmatrix}}
F_3
\qquad,\qquad
F_6^{(^r_s)}=F_3
\otimes_{\begin{pmatrix}
1&1\\
1&r\\
1&s
\end{pmatrix}}F_2$$
Here $r,s$ are two parameters on the unit circle, $r,s\in\mathbb T$. In matrix form: $$F_6^{(rs)}=\begin{pmatrix}
1&1&1&&1&1&1\\
1&w&w^2&&1&w&w^2\\
1&w^2&w&&1&w^2&w\\
\\
1&r&s&&-1&-r&-s\\
1&wr&w^2s&&-1&-wr&-w^2s\\
1&w^2r&ws&&-1&-w^2r&-ws
\end{pmatrix}$$
As for the other deformation, this is given by: $$F_6^{(^r_s)}
=\begin{pmatrix}
1&1&&1&1&&1&1\\
1&-1&&1&-1&&1&-1\\
\\
1&r&&w&wr&&w^2&w^2r\\
1&-r&&w&-wr&&w^2&-w^2r\\
\\
1&s&&w^2&w^2s&&w&ws\\
1&-s&&w^2&-w^2s&&w&-ws
\end{pmatrix}$$
\(3) The matrix here, from [@ha1], is as follows, with $q\in\mathbb T$: $$H_6^q=\begin{pmatrix}
1&1&1&1&1&1\\
1&-1&i&i&-i&-i\\
1&i&-1&-i&q&-q\\
1&i&-i&-1&-q&q\\
1&-i&\bar{q}&-\bar{q}&i&-1\\
1&-i&-\bar{q}&\bar{q}&-1&i
\end{pmatrix}$$
\(4) The matrix here, from [@tao], is as follows, with $w=e^{2\pi i/3}$: $$T_6=\begin{pmatrix}
1&1&1&1&1&1\\
1&1&w&w&w^2&w^2\\
1&w&1&w^2&w^2&w\\
1&w&w^2&1&w&w^2\\
1&w^2&w^2&w&1&w\\
1&w^2&w&w^2&w&1
\end{pmatrix}$$
Observe that both $H_6^q$ and $T_6$ are indeed complex Hadamard matrices.
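Regarding this last observation, here is a quick numerical verification of it, for a generic value $q\in\mathbb T$; the sketch is of course ours, and is not a substitute for the direct check of the scalar products:

```python
# Numerical check that H_6^q (for generic q on the circle) and the Tao matrix T_6
# are complex Hadamard.
import numpy as np

q = np.exp(1.234j)
qb = np.conj(q)
i = 1j
H6q = np.array([
    [1,  1,   1,   1,   1,  1],
    [1, -1,   i,   i,  -i, -i],
    [1,  i,  -1,  -i,   q, -q],
    [1,  i,  -i,  -1,  -q,  q],
    [1, -i,  qb, -qb,   i, -1],
    [1, -i, -qb,  qb,  -1,  i]])

w = np.exp(2j * np.pi / 3)
exp_T6 = np.array([[0,0,0,0,0,0],
                   [0,0,1,1,2,2],
                   [0,1,0,2,2,1],
                   [0,1,2,0,1,2],
                   [0,2,2,1,0,1],
                   [0,2,1,2,1,0]])
T6 = w ** exp_T6

for M in (H6q, T6):
    print(np.allclose(M @ M.conj().T, 6 * np.eye(6)))   # True, True
```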
The point with the matrices in Theorem 2.14 is that they are “regular”, in the sense that the scalar products between rows appear in the simplest possible way, namely from vanishing sums of roots of unity, possibly rotated by a scalar. We will be back to this in section 3 below, with a proof that these matrices are the only regular ones, at $N=6$.
In the non-regular case now, there are many known constructions at $N=6$. Here is one such construction, mildly “exotic”, found by Björck and Fröberg in [@bfr]:
The following is a complex Hadamard matrix, $$BF_6=\begin{pmatrix}
1&ia&-a&-i&-\bar{a}&i\bar{a}\\
i\bar{a}&1&ia&-a&-i&-\bar{a}\\
-\bar{a}&i\bar{a}&1&ia&-a&-i\\
-i&-\bar{a}&i\bar{a}&1&ia&-a\\
-a&-i&-\bar{a}&i\bar{a}&1&ia\\
ia&-a&-i&-\bar{a}&i\bar{a}&1
\end{pmatrix}$$ where $a\in\mathbb T$ is one of the roots of $a^2+(\sqrt{3}-1)a+1=0$.
Observe that the matrix in the statement is circulant, in the sense that the rows appear by cyclically permuting the first row. Thus, we only have to check that the first row is orthogonal to the other 5 rows. But this follows from $a^2+(\sqrt{3}-1)a+1=0$.
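Here is a numerical confirmation of all this, with the root $a$ being computed numerically; again, this is just an illustrative sketch of ours:

```python
# Bjorck-Froberg matrix: build the circulant matrix from its first row, with a
# being a root of a^2 + (sqrt(3)-1)a + 1 = 0, and test the Hadamard condition.
import numpy as np

a = np.roots([1, np.sqrt(3) - 1, 1])[0]          # one of the two roots, with |a| = 1
row = np.array([1, 1j * a, -a, -1j, -np.conj(a), 1j * np.conj(a)])
BF6 = np.array([np.roll(row, k) for k in range(6)])
print(np.isclose(abs(a), 1))                              # True
print(np.allclose(BF6 @ BF6.conj().T, 6 * np.eye(6)))     # True
```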
The obvious question here is perhaps on how Björck and Fröberg were able to construct the above matrix (!) This was done via some general theory for the circulant Hadamard matrices, and some computer simulations. We will discuss this in section 6 below.
Further study in the $N=6$ case leads to a number of horrors, of real algebraic geometric flavor, and we have here, as an illustrating example, the following result from [@ben]:
The self-adjoint $6\times6$ Hadamard matrices are, up to equivalence $$BN_6^q=
\begin{pmatrix}
1&1&1&1&1&1\\
1&-1&\bar{x}&-y&-\bar{x}&y\\
1&x&-1&t&-t&-x\\
1&-\bar{y}&\bar{t}&-1&\bar{y}&-\bar{t}\\
1&-x&-\bar{t}&y&1&\bar{z}\\
1&\bar{y}&-\bar{x}&-t&z&1
\end{pmatrix}$$ with $x,y,z,t\in\mathbb T$ depending on a parameter $q\in\mathbb T$, in a very complicated way.
The study here can be done via a lot of work, and tricks, in the spirit of the Haagerup classification result at $N=5$, and the equations are as follows: $$\begin{aligned}
x&=&\frac{1+2q+q^2-\sqrt{2}\sqrt{1+2q+2q^3+q^4}}{1+2q-q^2}\\
y&=&q\\
z&=&\frac{1+2q-q^2}{q(-1+2q+q^2)}\\
t&=&\frac{1+2q+q^2-\sqrt{2}\sqrt{1+2q+2q^3+q^4}}{-1+2q+q^2}\end{aligned}$$
All this is very technical, and we refer here to [@ben].
There are many other examples at $N=6$, and no classification known. See [@mrs].
Let us discuss now the case $N=7$. We will restrict the attention to the case where the combinatorics comes from roots of unity. We use the following result, from [@sz2]:
If $H\in M_N(\pm 1)$ with $N\geq 8$ is dephased symmetric Hadamard, and $$w=\frac{(1\pm i\sqrt{N-5})^2}{N-4}$$ then the following procedure yields a complex Hadamard matrix $M\in M_{N-1}(\mathbb T)$:
1. Erase the first row and column of $H$.
2. Replace all diagonal $1$ entries with $-w$.
3. Replace all off-diagonal $-1$ entries with $w$.
We know from the proof of Proposition 1.8 that the scalar product between any two rows of $H$, normalized as there, appears as follows: $$P=\frac{N}{4}\cdot1\cdot1+\frac{N}{4}\cdot1\cdot(-1)+\frac{N}{4}\cdot(-1)\cdot1+\frac{N}{4}\cdot(-1)\cdot(-1)=0$$
Let us perform now the above operations (1,2,3), in reverse order. When replacing $-1\to w$, all across the matrix, the above scalar product becomes: $$P'=\frac{N}{4}\cdot1\cdot1+\frac{N}{4}\cdot1\cdot\bar{w}+\frac{N}{4}\cdot w\cdot1+\frac{N}{4}\cdot w\cdot\bar{w}=\frac{N}{2}(1+Re(w))$$
Now when adjusting the diagonal via $w\to-1$ back, and $1\to-w$, this amounts to adding the quantity $-2(1+Re(w))$ to our product. Thus, our product becomes: $$P''=\left(\frac{N}{2}-2\right)(1+Re(w))=\frac{N-4}{2}\left(1+\frac{6-N}{N-4}\right)=1$$
Finally, erasing the first row and column amounts to subtracting 1 from our scalar product. Thus, our scalar product becomes $P'''=1-1=0$, and we are done.
Observe that the number $w$ in the above statement is a root of unity precisely at $N=8$, where the only matrix satisfying the conditions in the statement is the Walsh matrix $W_8$. So, let us apply, as in [@sz2], the above construction to this matrix: [$$\begin{pmatrix}
1&1&1&1&1&1&1&1\\
1&-1&1&-1&1&-1&1&-1\\
1&1&-1&-1&1&1&-1&-1\\
1&-1&-1&1&1&-1&-1&1\\
1&1&1&1&-1&-1&-1&-1\\
1&-1&1&-1&-1&1&-1&1\\
1&1&-1&-1&-1&-1&1&1\\
1&-1&-1&1&-1&1&1&-1
\end{pmatrix}\to
\begin{pmatrix}
*&*&*&*&*&*&*&*\\
*&-1&1&w&1&w&1&w\\
*&1&-1&w&1&1&w&w\\
*&w&w&-w&1&w&w&1\\
*&1&1&1&-1&w&w&w\\
*&w&1&w&w&-w&w&1\\
*&1&w&w&w&w&-w&1\\
*&w&w&1&w&1&1&-1
\end{pmatrix}$$]{}
The matrix on the right is the Petrescu matrix $P_7$, found in [@pet]. Thus, we have:
$P_7$ is the unique matrix formed by roots of unity that can be obtained by the Szöllősi construction. It appears at $N=8$, from $H=W_8$. Its formula is $$(P_7)_{ijk,abc}=
\begin{cases}
-w&{\rm if}\ (ijk)=(abc),\ ia+jb+kc=0(2)\\
w&{\rm if}\ (ijk)\neq(abc),\ ia+jb+kc\neq 0(2)\\
(-1)^{ia+jb+kc}&{\rm otherwise}
\end{cases}$$ where $w=e^{2\pi i/3}$, and with the indices belonging to the set $\{0,1\}^3-\{(0,0,0)\}$.
We know that the Szöllősi construction maps $W_8\to P_7$. Since $(F_2)_{ij}=(-1)^{ij}$, we have $(W_8)_{ijk,abc}=(-1)^{ia+jb+kc}$, and this gives the formula in the statement.
Now observe that we are in the quite special situation $H=F_2\otimes K$, with $K$ being dephased and symmetric. Thus, we can search for a one-parameter affine deformation $K(q)$ which is dephased and symmetric, and then build the following matrix: $$H(q)=\begin{pmatrix}K(q)&K\\ K&-K(\bar{q})\end{pmatrix}$$
In our case, such a deformation $K(q)=W_4(q)$ can be obtained by putting the $q$ parameters in the $2\times 2$ middle block. Now by performing the Szöllősi construction, with the parameters $q,\bar{q}$ left untouched, we obtain the parametric Petrescu matrix [@pet]:
The following is a complex Hadamard matrix, $$P_7^q
=\begin{pmatrix}
-q&q&w&1&w&1&w\\
q&-q&w&1&1&w&w\\
w&w&-w&1&w&w&1\\
1&1&1&-1&w&w&w\\
w&1&w&w&-\bar{q}w&\bar{q}w&1\\
1&w&w&w&\bar{q}w&-\bar{q}w&1\\
w&w&1&w&1&1&-1
\end{pmatrix}$$ where $w=e^{2\pi i/3}$, and $q\in\mathbb T$.
This follows from the above considerations, or from a direct verification of the orthogonality of the rows, which uses either $1-1=0$, or $1+w+w^2=0$.
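For the sake of completeness, here is a numerical check of ours that the above matrix is indeed Hadamard, at a generic value of $q\in\mathbb T$:

```python
# Verification that P_7^q is complex Hadamard for a generic q on the unit circle.
import numpy as np

w = np.exp(2j * np.pi / 3)
q = np.exp(0.7j)
qb = np.conj(q)
P7q = np.array([
    [-q,  q,  w,  1,  w,  1,  w],
    [ q, -q,  w,  1,  1,  w,  w],
    [ w,  w, -w,  1,  w,  w,  1],
    [ 1,  1,  1, -1,  w,  w,  w],
    [ w,  1,  w,  w, -qb * w,  qb * w,  1],
    [ 1,  w,  w,  w,  qb * w, -qb * w,  1],
    [ w,  w,  1,  w,  1,  1, -1]])
print(np.allclose(P7q @ P7q.conj().T, 7 * np.eye(7)))   # True
```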
Observe that the above matrix $P_7^q$ has the property of being “regular”, in the sense that the scalar products between rows appear from vanishing sums of roots of unity, possibly rotated by a scalar. We will be back to this in the next section, with the conjectural statement that $F_7,P_7^q$ are the only regular Hadamard matrices at $N=7$.
Roots of unity
==============
Many interesting examples of complex Hadamard matrices $H\in M_N(\mathbb T)$, including the real ones $H\in M_N(\pm1)$, have as entries roots of unity, of finite order. We discuss here this case, and more generally the “regular” case, where the combinatorics of the scalar products between the rows comes from vanishing sums of roots of unity.
Let us begin with the following definition, going back to the work in [@but]:
An Hadamard matrix is called of Butson type if its entries are roots of unity of finite order. The Butson class $H_N(l)$ consists of the Hadamard matrices $$H\in M_N(\mathbb Z_l)$$ where $\mathbb Z_l$ is the group of the $l$-th roots of unity. The level of a Butson matrix $H\in M_N(\mathbb T)$ is the smallest integer $l\in\mathbb N$ such that $H\in H_N(l)$.
As basic examples, we have the real Hadamard matrices, which form by definition the Butson class $H_N(2)$. The Fourier matrices are Butson matrices as well, because we have $F_N\in H_N(N)$, and more generally $F_G\in H_N(l)$, with $N=|G|$, and with $l\in\mathbb N$ being the smallest common order of the elements of $G$. There are many other examples of such matrices, as for instance those in Theorem 2.14, at roots of unity values of the parameters.
Generally speaking, the main question regarding the Butson matrices is that of understanding when $H_N(l)\neq 0$, via a theorem providing obstructions, and then a conjecture stating that these obstructions are the only ones. Let us begin with:
The following holds, $$H_N(2)\neq\emptyset\implies N\in\{2\}\cup 4\mathbb N$$ due to the orthogonality of the first $3$ rows.
This is something that we know from section 1, with the obstruction, going back to Sylvester’s paper [@syl], being explained in Proposition 1.8 above.
The above obstruction is fully satisfactory, because according to the Hadamard Conjecture, its converse should hold. Thus, we are fully done with the case $l=2$.
Our purpose now will be that of finding analogous statements at $l\geq3$, theorem plus conjecture. At very small values of $l$ this is certainly possible, and in what regards the needed obstructions, we can get away with the following simple fact, from [@but], [@win]:
For a prime power $l=p^a$, the vanishing sums of $l$-th roots of unity $$\lambda_1+\ldots+\lambda_N=0\quad,\quad\lambda_i\in\mathbb Z_l$$ appear as formal sums of rotated full sums of $p$-th roots of unity.
Consider indeed the full sum of $p$-th roots of unity, taken in a formal sense: $$S=\sum_{k=1}^p(e^{2\pi i/p})^k$$
Let also $w=e^{2\pi i/l}$, and for $r\in\{1,2,\ldots ,l/p\}$ denote by $S_p^r=w^r\cdot S$ the above sum, rotated by $w^r$. We must show that any vanishing sum of $l$-th roots of unity appears as a sum of such quantities $S_p^r$, with all this taken of course in a formal sense.
For this purpose, consider the following map, which assigns to the abstract elements of the group ring $\mathbb Z[\mathbb Z_l]$ their precise numeric values, inside $\mathbb Z(w)\subset\mathbb C$: $$\Phi:\mathbb Z[\mathbb Z_l]\to\mathbb Z(w)$$
Our claim is that the elements $\{S_p^r\}$ form a basis of $\ker\Phi$. Indeed, we obviously have $S_p^r\in\ker\Phi$. Also, these elements are linearly independent, because the support of $S_p^r$ contains a unique element of the subset $\{1,2,\ldots ,p^{a-1}\}\subset\mathbb Z_l$, namely the element $r\in\mathbb Z_l$, so all the coefficients of a vanishing linear combination of sums $S_p^r$ must vanish.
Thus, we are left with proving that $\ker\Phi$ is spanned by $\{S_p^r\}$. For this purpose, let us recall that the minimal polynomial of $w$ is as follows: $$\frac{X^{p^{a}}-1}{X^{{p^{a-1}}}-1}=1+X^{p^{a-1}}+X^{2p^{a-1}}+\ldots+X^{(p-1)p^{a-1}}$$
But this shows that $\ker\Phi$ has dimension $p^a-(p^a-p^{a-1})=p^{a-1}$, and since this is exactly the number of the sums $S_p^r$, this finishes the proof of our claim.
Thus, any vanishing sum of $l$-th roots of unity must be of the form $\sum\pm S_p^r$, and the above support considerations show the coefficients must be positive, as desired.
We can now formulate a result in the spirit of Proposition 3.2, as follows:
The following holds, $$H_N(p^a)\neq\emptyset\implies N\in p\mathbb N$$ due to the orthogonality of the first $2$ rows.
This follows indeed from Proposition 3.3, because the scalar product between the first 2 rows of our matrix is a vanishing sum of $l$-th roots of unity.
With these obstructions in hand, we can discuss the case $l\leq5$, as follows:
We have the following results,
1. $H_N(2)\neq\emptyset\implies N\in\{2\}\cup 4\mathbb N$,
2. $H_N(3)\neq\emptyset\implies N\in3\mathbb N$,
3. $H_N(4)\neq\emptyset\implies N\in2\mathbb N$,
4. $H_N(5)\neq\emptyset\implies N\in5\mathbb N$,
with in cases (1,3), a solid conjecture stating that the converse should hold as well.
In this statement (1) is the Sylvester obstruction, and (2,3,4) are particular cases of the Butson obstruction. As for the last assertion, which is of course something rather informal, but which is important for our purposes, the situation is as follows:
\(1) Here, as already mentioned, we have the Hadamard Conjecture, which comes with very solid evidence, as explained in section 1 above.
\(3) Here we have an old conjecture, dealing with complex Hadamard matrices over $\{\pm1,\pm i\}$, going back to the work in [@tur], and called Turyn Conjecture.
At $l=3$ the situation is quite complicated, due to the following result, from [@del]:
The following holds, $$H_N(l)\neq\emptyset\implies\exists\,d\in\mathbb Z[e^{2\pi i/l}],\,|d|^2=N^N$$ due to the orthogonality of all $N$ rows. In particular, when the exponent of $5$ in $N^N$ is odd, as is the case at $N=5,15$, we have $$H_N(6)=\emptyset$$ and so $H_{15}(3)=\emptyset$, which shows that the Butson obstruction is too weak at $l=3$.
The obstruction follows from the unitarity condition $HH^*=N1_N$ for the complex Hadamard matrices, by applying the determinant to it, which gives: $$|{\rm det}(H)|^2=N^N$$
Regarding the second assertion, let $w=e^{2\pi i/3}$, and assume that $d=a+bw+cw^2$ with $a,b,c\in\mathbb Z$ satisfies $|d|^2=0(5)$. We have the following computation: $$\begin{aligned}
|d|^2
&=&(a+bw+cw^2)(a+bw^2+cw)\\
&=&a^2+b^2+c^2-ab-bc-ac\\
&=&\frac{1}{2}[(a-b)^2+(b-c)^2+(c-a)^2]\end{aligned}$$
Thus our condition $|d|^2=0(5)$ leads to the following system, modulo 5: $$x+y+z=0\quad,\quad x^2+y^2+z^2=0$$
But this system has no nonzero solutions. Indeed, let us look at $x^2+y^2+z^2=0$. If this equality appears as $0+0+0=0$ we can divide $x,y,z$ by $5$ and redo the computation, and if not, this equality can only appear as $0+1+(-1)=0$. Thus, modulo permutations, we must have $x=0,y=\pm1,z=\pm2$, which contradicts $x+y+z=0$. We conclude that $x,y,z$ must all be divisible by $5$, so that $25$ divides $|d|^2$, and by iterating, that the exponent of $5$ in $|d|^2$ must be even. Since the exponent of $5$ in $N^N$ is odd at $N=5,15$, there is no $d\in\mathbb Z[w]$ with $|d|^2=N^N$ at these values, as claimed.
Finally, the last assertion follows from $H_{15}(3)\subset H_{15}(6)=\emptyset$.
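The mod 5 analysis in the above proof can be confirmed by a trivial brute-force computation, as in the following small sketch of ours:

```python
# The mod 5 system from the proof: x+y+z = 0, x^2+y^2+z^2 = 0 admits only the
# trivial solution modulo 5.
sols = [(x, y, z) for x in range(5) for y in range(5) for z in range(5)
        if (x + y + z) % 5 == 0 and (x * x + y * y + z * z) % 5 == 0]
print(sols)   # [(0, 0, 0)]
```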
At $l=5$ now, things are a bit unclear, with the converse of Theorem 3.5 (4) being something viable, at the conjectural level, at least to our knowledge.
At $l=6$ the situation becomes again complicated, as follows:
The following holds, due to Haagerup’s $N=5$ classification result, involving the orthogonality of all $5$ rows of the matrix: $$H_5(l)\neq\emptyset\implies 5|l$$ In particular we have $H_5(6)=\emptyset$, which follows by the way from the de Launey obstruction as well, in contrast with the fact that we generally have $H_N(6)\neq\emptyset$.
In this statement the obstruction $H_5(l)\neq\emptyset\implies 5|l$ comes indeed from Haagerup's classification result, explained in Theorem 2.13 above. As for the last assertion, this is something very informal, the situation at small values of $N$ being as follows:
– At $N=2,3,4$ we have the matrices $F_2,F_3,W_4$.
– At $N=6,7,8,9$ we have the matrices $F_6,P_7^1,W_8,F_3\otimes F_3$.
– At $N=10$ we have the following matrix, found in [@bbs] by using a computer, and written in logarithmic form, with $k$ standing for $e^{2k\pi i/6}$: $$X^6_{10}=
\left(\begin{array}{cccccccccccccc}
0&0&0&0&0&0&0&0&0&0\\
0&4&1&5&3&1&3&3&5&1\\
0&1&2&3&5&5&1&3&5&3\\
0&5&3&2&1&5&3&5&3&1\\
0&3&5&1&4&1&1&5&3&3\\
0&3&3&3&3&3&0&0&0&0\\
0&1&1&5&3&4&3&0&2&4\\
0&1&5&3&5&2&4&3&2&0\\
0&5&3&5&1&2&0&2&3&4\\
0&3&5&1&1&4&4&2&0&3
\end{array}\right)$$
We refer to [@bbs] for more details on this topic.
All this is not good news. Indeed, there is no hope of conjecturally solving our $H_N(l)\neq\emptyset$ problem in general, because this would have to take into account, and in a simple and conceptual way, both the subtle arithmetic consequences of the de Launey obstruction, and the Haagerup classification result at $N=5$, and this is something not feasible.
In order to further comment on these difficulties, let us discuss now a generalization of Proposition 3.3 above, and of the related Butson obstruction from Proposition 3.4, which has been our main source of obstructions, so far. Let us start with:
A cycle is a full sum of roots of unity, possibly rotated by a scalar, $$C=q\sum_{k=1}^lw^k\quad,\quad w=e^{2\pi i/l}\quad,\quad q\in\mathbb T$$ and taken in a formal sense. A sum of cycles is a formal sum of cycles.
The actual sum of a cycle, or of a sum of cycles, is of course 0. This is why the word “formal” is there, for reminding us that we are working with formal sums.
As an example, here is a sum of cycles, with $w=e^{2\pi i/6}$, and with $|q|=1$: $$1+w^2+w^4+qw+qw^4=0$$
We know from Proposition 3.3 above that any vanishing sum of $l$-th roots of unity must be a sum of cycles, at least when $l=p^a$ is a prime power. However, this is not the case in general, the simplest counterexample being as follows, with $w=e^{2\pi i/30}$: $$w^5+w^6+w^{12}+w^{18}+w^{24}+w^{25}=0$$
The following deep result on the subject is due to Lam and Leung [@lle]:
Let $l=p_1^{a_1}\ldots p_k^{a_k}$, and assume that $\lambda_i\in\mathbb Z_l$ satisfy $\lambda_1+\ldots+\lambda_N=0$.
1. $\sum\lambda_i$ is a sum of cycles, with $\mathbb Z$ coefficients.
2. If $k\leq 2$ then $\sum\lambda_i$ is a sum of cycles (with $\mathbb N$ coefficients).
3. If $k\geq 3$ then $\sum\lambda_i$ might not decompose as a sum of cycles.
4. $\sum\lambda_i$ has the same length as a sum of cycles: $N\in p_1\mathbb N+\ldots+p_k\mathbb N$.
This is something that we will not really need in what follows, but that we included here, in view of its importance. The idea of the proof is as follows:
\(1) This is a well-known result, which follows from basic number theory, by using arguments in the spirit of those in the proof of Proposition 3.3 above.
\(2) This is something that we already know at $k=1$, from Proposition 3.3. At $k=2$ the proof is more technical, along the same lines. See [@lle].
\(3) The smallest possible $l$ potentially producing a counterexample is $l=2\cdot3\cdot 5=30$, and we have here indeed the sum given above, with $w=e^{2\pi i/30}$.
\(4) This is a deep result, due to Lam and Leung, relying on advanced number theory knowledge. We refer to their paper [@lle] for the proof.
As a consequence of the above result, we have the following generalization of the Butson obstruction, which is something final and optimal on this subject:
Assuming $l=p_1^{a_1}\ldots p_k^{a_k}$, the following must hold, due to the orthogonality of the first $2$ rows: $$H_N(l)\neq\emptyset\implies N\in p_1\mathbb N+\ldots+p_k\mathbb N$$ In the case $k\geq2$, the latter condition is automatically satisfied at $N>>0$.
Here the first assertion, which generalizes the $l=p^a$ obstruction from Proposition 3.4 above, comes from Theorem 3.9 (4), applied to the vanishing sum of $l$-th roots of unity coming from the scalar product between the first 2 rows. As for the second assertion, this is something well-known, coming from basic number theory.
Summarizing, our study so far of the condition $H_N(l)\neq\emptyset$ has led us to an optimal obstruction coming from the first 2 rows, namely the Lam-Leung one, then an obstruction coming from the first 3 rows, namely the Sylvester one, and then two subtle obstructions coming from all $N$ rows, namely the de Launey one, and the Haagerup one.
As an overall conclusion, by contemplating all these obstructions, nothing good in relation with our problem $H_N(l)\neq\emptyset$ is going on at small $N$. So, as a natural and more modest objective, we should perhaps try instead to solve this problem at $N>>0$.
The point indeed is that everything simplifies at $N>>0$, with some of the above obstructions disappearing, and with some other known obstructions, not to be discussed here, disappearing as well. We are therefore led to the following statement:
The following equivalences should hold, in an asymptotic sense, at $N>>0$,
1. $H_N(2)\neq\emptyset\iff 4|N$,
2. $H_N(p^a)\neq\emptyset\iff p|N$, for $p^a\geq3$ prime power,
3. $H_N(l)\neq\emptyset$, with no condition needed on $N$, for $l\in\mathbb N$ not a prime power,
modulo the de Launey obstruction, $|d|^2=N^N$ for some $d\in\mathbb Z[e^{2\pi i/l}]$.
In short, our belief is that when imposing the condition $N>>0$, only the Sylvester, Butson and de Launey obstructions survive. This is of course something quite nice, but as far as a possible proof goes, there is probably no way. Indeed, our above conjecture generalizes the HC in the $N>>0$ regime, which is something beyond reach.
One interesting idea, however, in dealing with such questions, coming from the de Launey-Levin result from [@dle], explained in section 1, is that of looking at the partial Butson matrices, at $N>>0$. Observe in particular that restricting the attention to the rectangular case, even without the $N>>0$ regime, makes the de Launey obstruction from the ABC disappear, since that obstruction uses the orthogonality of all $N$ rows.
We will discuss this later on, at the end of this section. For a number of related considerations, we refer as well to the papers [@del], [@dgo].
Getting away now from all this arithmetic madness, let us discuss, as a more concrete thing, the classification of the regular complex Hadamard matrices of small order. The definition here, which already appeared in the above, is as follows:
A complex Hadamard matrix $H\in M_N(\mathbb T)$ is called regular if the scalar products between rows decompose as sums of cycles.
We should mention that there is some notational clash here, with this notion being sometimes used in order to designate the bistochastic matrices. In this book we use the above notion of regularity, and for the other notion we simply say bistochastic.
Our purpose in what follows will be that of showing that the notion of regularity can lead to full classification results at $N\leq6$, and perhaps at $N=7$ too, and all this while covering most of the interesting complex Hadamard matrices that we met, so far.
As a first observation, supporting this last claim, we have the following result:
The following complex Hadamard matrices are regular:
1. The matrices at $N\leq5$, namely $F_2,F_3,F_4^s,F_5$.
2. The main examples at $N=6$, namely $F_6^{(rs)},F_6^{(^r_s)},H_6^q,T_6$.
3. The main examples at $N=7$, namely $F_7,P_7^q$.
The Fourier matrices $F_N$ are all regular, with the scalar products between rows appearing as certain sums of full sums of $l$-th roots of unity, with $l|N$. As for the other matrices appearing in the statement, with the convention that “cycle structure” means the length of the cycles in the regularity property, the situation is as follows:
\(1) $F_4^s$ has cycle structure $2+2$, and this because the verification of the Hadamard condition is always based on the formula $1+(-1)=0$, rotated by scalars.
\(2) $F_6^{(rs)},F_6^{(^r_s)}$ have mixed cycle structure $2+2+2/3+3$, in the sense that both cases appear, $H_6^q$ has cycle structure $2+2+2$, and $T_6$ has cycle structure $3+3$.
\(3) $P_7^q$ has cycle structure $3+2+2$, its Hadamard property coming from $1+w+w^2=0$, with $w=e^{2\pi i/3}$, and from $1+(-1)=0$, applied twice, rotated by scalars.
Let us discuss now the classification of regular matrices. We first have:
The regular Hadamard matrices at $N\leq 5$ are $$F_2,F_3,F_4^s,F_5$$ up to the equivalence relation for the complex Hadamard matrices.
This is something that we already know, coming from the classification results from section 2, and from Proposition 3.13 (1). However, and here comes our point, proving this result does not in fact need all this, the situation being as follows:
\(1) At $N=2$ the cycle structure can be only 2, and we obtain $F_2$.
\(2) At $N=3$ the cycle structure can be only 3, and we obtain $F_3$.
\(3) At $N=4$ the cycle structure can be only $2+2$, and we obtain $F_4^s$.
\(4) At $N=5$ some elementary combinatorics shows that the cycle structure $3+2$ is excluded. Thus we are left with the cycle structure $5$, and we obtain $F_5$.
Let us discuss now the classification at $N=6$. The result here, from [@bbs], states that the above matrices $F_6^{(rs)},F_6^{(^r_s)},H_6^q,T_6$ are the only solutions. The proof of this fact is quite long and technical, but we will present here its main ideas. Let us start with:
The regular Hadamard matrices at $N=6$ fall into $3$ classes:
1. Cycle structure $3+3$, with $T_6$ being an example.
2. Cycle structure $2+2+2$, with $H_6^q$ being an example.
3. Mixed cycle structure $3+3/2+2+2$, with $F_6^{(rs)},F_6^{(^r_s)}$ being examples.
This is a bit of an empty statement, with the above (1,2,3) possibilities being the only ones, and with the various examples coming from Proposition 3.13 (2).
In order to do the classification, we must prove that the examples in (1,2,3) are the only ones. Let us start with the Tao matrix. The result here is as follows:
The matrix $T_6$ is the only one with cycle structure $3+3$.
The proof of this fact, from [@bbs], is quite long and technical, the idea being that of studying first the $3\times 6$ case, then the $4\times6$ case, and finally the $6\times6$ case.
So, consider first a partial Hadamard matrix $A\in M_{3\times 6}(\mathbb T)$, with the scalar products between rows assumed to be all of type $3+3$. By doing some elementary combinatorics, one can show that, modulo equivalence, either all the entries of $A$ belong to $\mathbb Z_3=\{1,w,w^2\}$, or $A$ has the following special form, for certain parameters $r,s\in\mathbb T$: $$A=\begin{pmatrix}
1&1&1&1&1&1\\
1&w&w^2&r&wr&w^2r\\
1&w^2&w&s&w^2s&ws
\end{pmatrix}$$
With this in hand, we can now investigate the $4\times6$ case. Assume indeed that we have a partial Hadamard matrix $B\in M_{4\times 6}(\mathbb T)$, with the scalar products between rows assumed to be all of type $3+3$. By looking at the 4 submatrices $A^{(1)},A^{(2)},A^{(3)},A^{(4)}$ obtained from $B$ by deleting one row, and applying the above $3\times 6$ result, we are led, after doing some combinatorics, to the conclusion that all the possible parameters disappear: $$B\in M_{4\times 6}(\mathbb Z_3)$$
With this result in hand, we can go now for the general case. Indeed, an Hadamard matrix $M\in M_6(\mathbb T)$ having cycle structure $3+3$ must be as follows: $$M\in M_6(\mathbb Z_3)$$
But the study here is elementary, with $T_6$ as the only solution. See [@bbs].
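As an illustration, the following Python script checks both the Hadamard property and the $3+3$ cycle structure for one standard presentation of the Tao matrix, written here in logarithmic form over the cube roots of unity; this presentation may differ from the one used earlier in this book, but only up to equivalence. The cycle structure check amounts to the fact that each scalar product between distinct rows uses every cube root of unity exactly twice:

```python
import numpy as np
from collections import Counter

w = np.exp(2j * np.pi / 3)

# one standard presentation of the Tao matrix, with entries among 1, w, w^2
logs = [
    [0,0,0,0,0,0],
    [0,0,1,1,2,2],
    [0,1,0,2,2,1],
    [0,1,2,0,1,2],
    [0,2,2,1,0,1],
    [0,2,1,2,1,0],
]
T = w ** np.array(logs)

# Hadamard property: T T^* = 6 * Id, up to rounding errors
print(np.abs(T @ T.conj().T - 6 * np.eye(6)).max())

# cycle structure 3+3: for i != j the entrywise products T_{ik} * conj(T_{jk})
# must consist of each cube root of unity repeated exactly twice
for i in range(6):
    for j in range(6):
        if i != j:
            prods = np.round(np.angle(T[i] * T[j].conj()) / (2 * np.pi / 3)) % 3
            assert Counter(prods.tolist()) == {0.0: 2, 1.0: 2, 2.0: 2}
print("3+3 cycle structure confirmed for this presentation")
```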
Regarding now the Haagerup matrix, the result is similar, as follows:
The matrix $H_6^q$ is the only one with cycle structure $2+2+2$.
The proof here, from [@bbs], uses the same idea as in the proof of Proposition 3.16. The study of the $3\times 6$ partial Hadamard matrices with cycle structure $2+2+2$ leads, up to equivalence, to the following 4 solutions, with $q\in\mathbb T$ being a parameter: $$A_1=\begin{pmatrix}
1&1&1&1&1&1\\
1&-i&1&i&-1&-1\\
1&-1&i&-i&q&-q
\end{pmatrix}$$ $$A_2=\begin{pmatrix}
1&1&1&1&1&1\\
1&1&-1&i&-1&-i\\
1&-1&q&-q&iq&-iq
\end{pmatrix}$$ $$A_3=\begin{pmatrix}
1&1&1&1&1&1\\
1&-1&i&-i&q&-q\\
1&-i&i&-1&-q&q
\end{pmatrix}$$ $$A_4=\begin{pmatrix}
1&1&1&1&1&1\\
1&-i&-1&i&q&-q\\
1&-1&-q&-iq&iq&q
\end{pmatrix}$$
With this result in hand, we can go directly for the $6\times6$ case. Indeed, a careful examination of the $3\times6$ submatrices, and of the way that different parameters can overlap vertically, shows that our matrix must have a $3\times 3$ block decomposition as follows: $$M=\begin{pmatrix}
A&B&C\\
D&xE&yF\\
G&zH&tI
\end{pmatrix}$$
Here $A,\ldots,I$ are $2\times 2$ matrices over $\{\pm 1,\pm i\}$, and $x,y,z,t$ are in $\{1,q\}$. A more careful examination shows that the solution must be of the following form: $$M=\begin{pmatrix}
A&B&C\\
D&E&qF\\
G&qH&qI
\end{pmatrix}$$
More precisely, the matrix must be as follows: $$M=\begin{pmatrix}
1&1&1&1&1&1\\
1&1&-i&i&-1&-1\\
1&i&-1&-i&-q&q\\
1&-i&i&-1&-iq&iq\\
1&-1&q&-iq&iq&-q\\
1&-1&-q&iq&q&-iq
\end{pmatrix}$$
But this matrix is equivalent to $H_6^q$, and we are done. See [@bbs].
Regarding now the mixed case, where both $2+2+2$ and $3+3$ situations can appear, this is a bit more complicated, and requires some preliminary discussion.
We can associate to any mixed Hadamard matrix $M\in M_6(\mathbb C)$ its “row graph”, having the 6 rows as vertices, and with each edge being called “binary” or “ternary”, depending on whether the corresponding scalar product is of type $2+2+2$ or $3+3$.
With this convention, we have the following result:
The row graph of a mixed matrix $M\in M_6(\mathbb C)$ can be:
1. Either the bipartite graph having $3$ binary edges.
2. Or the bipartite graph having $2$ ternary triangles.
This is once again something a bit technical, from [@bbs], the idea being as follows. Let $X$ be the row graph in the statement. By doing some combinatorics, quite long but of very elementary type, we are led to the following conclusions about $X$:
– $X$ has no binary triangle.
– $X$ has no ternary square.
– $X$ has at least one ternary triangle.
With these results in hand, we see that there are only two types of squares in our graph $X$, namely those having 1 binary edge and 5 ternary edges, and those consisting of a ternary triangle, connected to the 4-th point with 3 binary edges.
By looking at pentagons, then hexagons that can be built with these squares, we see that the above two types of squares cannot appear at the same time, and that at the level of hexagons, we have the two solutions in the statement. See [@bbs].
We can now complete our classification at $N=6$, as follows:
The matrices $F_6^{(rs)},F_6^{(^r_s)}$ are the only ones with mixed cycle structure.
According to Proposition 3.18, we have two cases:
\(1) Assume first that the row graph is the bipartite one with 3 binary edges. By permuting the rows, the upper $4\times6$ submatrix of our matrix must be as follows: $$B=\begin{pmatrix}
1&1&1&1&1&1\\
1&w&w^2&r&wr&w^2r\\
1&w^2&w&s&w^2s&ws\\
1&1&1&t&t&t
\end{pmatrix}$$
Now since the scalar product between the first and the fourth row is binary, we must have $t=-1$, so the solution is: $$B=\begin{pmatrix}
1&1&1&1&1&1\\
1&w&w^2&r&wr&w^2r\\
1&w^2&w&s&w^2s&ws\\
1&1&1&-1&-1&-1
\end{pmatrix}$$
We can use the same argument for finding the fifth and sixth row, by arranging the matrix formed by the first three rows such that the second, respectively the third, row consists only of 1’s. This will make some parameters of the form $w,w^2,r,s$ appear in the extra row, and we obtain in this way a matrix which is equivalent to $F_6^{(rs)}$. See [@bbs].
\(2) Assume now that the row graph is the bipartite one with 2 ternary triangles. By permuting the rows, the upper $4\times6$ submatrix of our matrix must be as follows: $$B=\begin{pmatrix}
1&1&1&1&1&1\\
1&1&w&w&w^2&w^2\\
1&1&w^2&w^2&w&w\\
1&-1&r&-r&s&-s
\end{pmatrix}$$
We can use the same argument for finding the fifth and sixth row, and we conclude that the matrix is of the following type: $$M=\begin{pmatrix}
1&1&1&1&1&1\\
1&1&w&w&w^2&w^2\\
1&1&w^2&w^2&w&w\\
1&-1&r&-r&s&-s\\
1&-1&a&-a&b&-b\\
1&-1&c&-c&d&-d
\end{pmatrix}$$
Now since the last three rows must form a ternary triangle, we conclude that the matrix must be of the following form: $$M=\begin{pmatrix}
1&1&1&1&1&1\\
1&1&w&w&w^2&w^2\\
1&1&w^2&w^2&w&w\\
1&-1&r&-r&s&-s\\
1&-1&wr&-wr&w^2s&-w^2s\\
1&-1&w^2r&-w^2r&ws&-ws
\end{pmatrix}$$
But this matrix is equivalent to $F_6^{(^r_s)}$, and we are done. See [@bbs].
Summing up all the above, we have proved the following theorem:
The regular complex Hadamard matrices at $N=6$ are:
1. The deformations $F_6^{(rs)},F_6^{(^r_s)}$ of the Fourier matrix $F_6$.
2. The Haagerup matrix $H_6^q$.
3. The Tao matrix $T_6$.
This follows indeed from the trichotomy from Proposition 3.15, and from the results in Proposition 3.16, Proposition 3.17 and Proposition 3.19. See [@bbs].
All this is quite nice, and our belief is that the $N=7$ classification is doable as well. Here we have 3 possible cycle structures, namely $3+2+2$, $5+2$, $7$, and some elementary number theory shows that $5+2$ is excluded, and that $3+2+2$ and $7$ cannot interact. Thus we have a dichotomy, and our conjecture is as follows:
The regular complex Hadamard matrices at $N=7$ are:
1. The Fourier matrix $F_7$.
2. The Petrescu matrix $P_7^q$.
Regarding (1), one can show indeed that $F_7$ is the only matrix having cycle structure 7, with this being related to some more general results from [@hsc]. As for (2), the problem is that of proving that $P_7^q$ is the only matrix having cycle structure $3+2+2$. The computations here are unfortunately far more involved than those at $N=6$, briefly presented above, and finishing the classification work here is not an easy question.
As a conclusion to all this, when imposing the regularity condition, things simplify a bit, with respect to the general case, according to a kind of $N\to N+1$ rule. To be more precise, the difficulties in the general case are basically of real algebraic geometry nature, and can be labeled as easy at $N\leq4$, hard at $N=5$, and not solved yet at $N=6$. As for the regular case, here the difficulties are basically of design theory nature, and can be labeled as easy at $N\leq5$, hard at $N=6$, and not solved yet at $N=7$.
Besides the classification questions, there are as well a number of theoretical questions in relation with the notion of regularity, that we believe to be very interesting. We have for instance the following conjecture, going back to [@bbs], and then to [@bop]:
The following hold:
1. Any Butson matrix $H\in M_N(\mathbb C)$ is regular.
2. Any regular matrix $H\in M_N(\mathbb C)$ is an affine deformation of a Butson matrix.
In other words, the first conjecture is that a “tricky vanishing sum” of roots of unity, like the $l=30$ one given after Definition 3.8 above, cannot be used in order to construct a complex Hadamard matrix. This is a quite difficult question, coming however with substantial computer evidence. We have no idea on how to approach it. See [@bbs].
As for the second conjecture, this simply comes from the known examples of regular Hadamard matrices, which all appear from certain Butson matrices, by inserting parameters, in an affine way. This conjecture is from [@bop], and we will further discuss the notion of affine deformation, with some general results on the subject, in section 4 below.
We would like to end this section, which was depressingly algebraic and difficult, with no simple and conceptual result in sight, by doing some analysis. As explained after Conjecture 3.11 above (ABC), one way of getting into analysis, in connection with root of unity questions, is that of looking at the partial Butson matrices, at $N>>0$.
The idea here comes of course from the de Launey-Levin counting result from [@dle], explained in section 1 above. Let us first discuss the prime power case. We have:
When $q=p^k$ is a prime power, the standard form of the dephased partial Butson matrices at $M=2$ is $$H=\begin{pmatrix}
1&1&\ldots&1&\ldots&\ldots&1&1&\ldots&1\\
\underbrace{1}_{a_1}&\underbrace{w}_{a_2}&\ldots&\underbrace{w^{q/p-1}}_{a_{q/p}}&\ldots&\ldots&\underbrace{w^{q-q/p}}_{a_1}&\underbrace{w^{q-q/p+1}}_{a_2}&\ldots&\underbrace{w^{q-1}}_{a_{q/p}}
\end{pmatrix}$$ where $w=e^{2\pi i/q}$ and where $a_1,\ldots,a_{q/p}\in\mathbb N$ are multiplicities, summing up to $N/p$.
Indeed, it is well-known that for $q=p^k$ the solutions of $\lambda_1+\ldots+\lambda_N=0$ with $\lambda_i\in\mathbb Z_q$ are, up to permutations of the terms, exactly those in the statement.
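At small $q$ and $N$ this description can be checked by brute force; for instance the following Python lines, given as an illustration, verify it at $q=4$, $N=4$, by comparing the vanishing sums of $4$-th roots of unity with the multiplicity pattern in the statement:

```python
from itertools import product
import cmath

q, N = 4, 4
w = cmath.exp(2j * cmath.pi / q)

for row in product(range(q), repeat=N):
    vanishes = abs(sum(w ** k for k in row)) < 1e-9
    # pattern from Proposition 3.23 at q = 4, p = 2: multiplicities repeat with period q/p = 2
    counts = [row.count(k) for k in range(q)]
    pattern = counts[0] == counts[2] and counts[1] == counts[3]
    assert vanishes == pattern
print("Proposition 3.23 verified at q = 4, N = 4")
```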
Our objective will be to count the matrices in Proposition 3.23. We will need:
We have the estimate $$\sum_{a_1+\ldots+a_s=n}\binom{n}{a_1,\ldots,a_s}^p
\simeq s^{pn}\sqrt{\frac{s^{s(p-1)}}{p^{s-1}(2\pi n)^{(s-1)(p-1)}}}$$ in the $n\to\infty$ limit.
This is proved by Richmond and Shallit in [@rsh] at $p=2$, and the proof in the general case, $p\in\mathbb N$, is similar. More precisely, let us denote by $c_{sp}$ the sum on the left. By setting $a_i=\frac{n}{s}+x_i\sqrt{n}$ and then by using the various formulae in [@rsh], we obtain: $$\begin{aligned}
&&c_{sp}\\
&\simeq&s^{pn}(2\pi n)^{\frac{(1-s)p}{2}}s^{\frac{sp}{2}}\exp\left(-\frac{sp}{2}\sum_{i=1}^sx_i^2\right)\\
&\simeq&s^{pn}(2\pi n)^{\frac{(1-s)p}{2}}s^{\frac{sp}{2}}\underbrace{\int_0^n\ldots\int_0^n}_{s-1}\exp\left(-\frac{sp}{2}\sum_{i=1}^sx_i^2\right)da_1\ldots da_{s-1}\\
&=&s^{pn}(2\pi n)^{\frac{(1-s)p}{2}}s^{\frac{sp}{2}}n^{\frac{s-1}{2}}\underbrace{\int_0^n\ldots\int_0^n}_{s-1}\exp\left(-\frac{sp}{2}\sum_{i=1}^{s-1}x_i^2-\frac{sp}{2}\left(\sum_{i=1}^{s-1}x_i\right)^2\right)dx_1\ldots dx_{s-1}\\
&=&s^{pn}(2\pi n)^{\frac{(1-s)p}{2}}s^{\frac{sp}{2}}n^{\frac{s-1}{2}}\times\pi^{\frac{s-1}{2}}s^{-\frac{1}{2}}\left(\frac{sp}{2}\right)^{\frac{1-s}{2}}\\
&=&s^{pn}(2\pi n)^{\frac{(1-s)p}{2}}s^{\frac{sp}{2}-\frac{1}{2}+\frac{1-s}{2}}\left(\frac{p}{2\pi n}\right)^{\frac{1-s}{2}}\\
&=&s^{pn}(2\pi n)^{\frac{(1-s)(p-1)}{2}}s^{\frac{sp-s}{2}}p^{\frac{1-s}{2}}\end{aligned}$$
Thus we have obtained the formula in the statement, and we are done.
Now with Proposition 3.24 in hand, we can prove:
When $q=p^k$ is a prime power, the probability for a randomly chosen $M\in M_{2\times N}(\mathbb Z_q)$, with $N\in p\mathbb N$, $N\to\infty$, to be partial Butson is: $$P_2\simeq\sqrt{\frac{p^{2-\frac{q}{p}}q^{q-\frac{q}{p}}}{(2\pi N)^{q-\frac{q}{p}}}}$$ In particular, for $q=p$ prime, $P_2\simeq\sqrt{\frac{p^p}{(2\pi N)^{p-1}}}$. Also, for $q=2^k$, $P_2\simeq2\sqrt{\left(\frac{q/2}{2\pi N}\right)^{q/2}}$.
First, the probability $P_M$ for a random $H\in M_{M\times N}(\mathbb Z_q)$ to be a partial Butson matrix is: $$P_M=\frac{1}{q^{MN}}\,\#PBM_{M\times N}$$ where $PBM_{M\times N}$ denotes the set of such partial Butson matrices.
Thus, according to Proposition 3.23, we have the following formula: $$\begin{aligned}
P_2
&=&\frac{1}{q^N}\sum_{a_1+\ldots +a_{q/p}=N/p}\binom{N}{\underbrace{a_1\ldots a_1}_p\ldots\ldots\underbrace{a_{q/p}\ldots a_{q/p}}_p}\\
&=&\frac{1}{q^N}\binom{N}{\underbrace{N/p\ldots N/p}_p}\sum_{a_1+\ldots +a_{q/p}=N/p}\binom{N/p}{a_1\ldots a_{q/p}}^p\\
&=&\frac{1}{p^N}\binom{N}{\underbrace{N/p\ldots N/p}_p}\times\frac{1}{(q/p)^N}\sum_{a_1+\ldots +a_{q/p}=N/p}\binom{N/p}{a_1\ldots a_{q/p}}^p\end{aligned}$$
Now by using the Stirling formula for the left term, and Proposition 3.24 with $s=q/p$ and $n=N/p$ for the right term, we obtain: $$\begin{aligned}
P_2
&=&\sqrt{\frac{p^p}{(2\pi N)^{p-1}}}\times\sqrt{\frac{(q/p)^{\frac{q}{p}(p-1)}}{p^{\frac{q}{p}-1}(2\pi N/p)^{(\frac{q}{p}-1)(p-1)}}}\\
&=&\sqrt{\frac{p^{p-\frac{q}{p}(p-1)-\frac{q}{p}+1+(\frac{q}{p}-1)(p-1)}q^{\frac{q}{p}(p-1)}}{(2\pi N)^{p-1+(\frac{q}{p}-1)(p-1)}}}\\
&=&\sqrt{\frac{p^{2-\frac{q}{p}}q^{q-\frac{q}{p}}}{(2\pi N)^{q-\frac{q}{p}}}}\end{aligned}$$
Thus we have obtained the formula in the statement, and we are done.
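In order to get a feeling for the quality of this estimate, here is a small Python experiment, at $q=p=3$; as explained in the above proof, in the prime case the exact probability reduces to a single multinomial coefficient, which we compute via the log-Gamma function, the values of $N$ used below being of course arbitrary multiples of $p$:

```python
from math import lgamma, exp, log, pi, sqrt

p = 3
for N in [30, 300, 3000]:
    # exact probability: multinomial(N; N/p, ..., N/p) / p^N, via log-Gamma
    log_exact = lgamma(N + 1) - p * lgamma(N / p + 1) - N * log(p)
    exact = exp(log_exact)
    # asymptotic formula from Theorem 3.25, for q = p prime
    approx = sqrt(p ** p / (2 * pi * N) ** (p - 1))
    print(N, exact, approx)
```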
Let us discuss now the case where $M=2$ and $q=p_1^{k_1}p_2^{k_2}$ has two prime factors. We first examine the simplest such case, namely $q=p_1p_2$, with $p_1,p_2$ primes:
When $q=p_1p_2$ is a product of distinct primes, the standard form of the dephased partial Butson matrices at $M=2$ is $$H=\begin{pmatrix}
1&1&\ldots&1&\ldots&\ldots&1&1&\ldots&1\\
\underbrace{1}_{A_{11}}&\underbrace{w}_{A_{12}}&\ldots&\underbrace{w^{p_2-1}}_{A_{1p_2}}&\ldots&\ldots&\underbrace{w^{q-p_2}}_{A_{p_11}}&\underbrace{w^{q-p_2+1}}_{A_{p_12}}&\ldots&\underbrace{w^{q-1}}_{A_{p_1p_2}}
\end{pmatrix}$$ where $w=e^{2\pi i/q}$, and $A\in M_{p_1\times p_2}(\mathbb N)$ is of the form $A_{ij}=B_i+C_j$, with $B_i,C_j\in\mathbb N$.
We use the fact that for $q=p_1p_2$ any vanishing sum of $q$-roots of unity decomposes as a sum of cycles. Now if we denote by $B_i,C_j\in\mathbb N$ the multiplicities of the various $p_2$-cycles and $p_1$-cycles, then we must have $A_{ij}=B_i+C_j$, as claimed.
Regarding the matrices of type $A_{ij}=B_i+C_j$, when taking them over integers, $B_i,C_j\in\mathbb Z$, these form a vector space of dimension $p_1+p_2-1$. Given $A\in M_{p_1\times p_2}(\mathbb Z)$, the “test” for deciding if we have $A_{ij}=B_i+C_j$ or not is $A_{ij}+A_{kl}=A_{il}+A_{kj}$.
The problem comes of course from the assumption $B_i,C_j\geq0$, which is quite a subtle one. In what follows we restrict attention to the case $p_1=2$. Here we have:
For $q=2p$ with $p\geq 3$ prime, $P_2$ equals the probability for a random walk on $\mathbb Z^p$ to end up on the diagonal, i.e. at a position of type $(t,\ldots,t)$, with $t\in\mathbb Z$.
According to Proposition 3.26, we must understand the matrices $A\in M_{2\times p}(\mathbb N)$ which decompose as $A_{ij}=B_i+C_j$, with $B_i,C_j\geq0$. But this is an easy task, because depending on $A_{11}$ vs. $A_{21}$ we have 3 types of solutions, as follows: $$\begin{pmatrix}
a_1&\ldots&a_p\\
a_1&\ldots&a_p
\end{pmatrix}\quad,\quad
\begin{pmatrix}
a_1&\ldots&a_p\\
a_1+t&\ldots&a_p+t
\end{pmatrix}\quad,\quad
\begin{pmatrix}
a_1+t&\ldots&a_p+t\\
a_1&\ldots&a_p
\end{pmatrix}$$
Here $a_i\geq0$ and $t\geq1$. Now since cases 2,3 contribute in the same way, we obtain: $$\begin{aligned}
P_2
&=&\frac{1}{(2p)^N}\sum_{2\Sigma a_i=N}\binom{N}{a_1,a_1,\ldots,a_p,a_p}\\
&+&\frac{2}{(2p)^N}\sum_{t\geq1}\sum_{2\Sigma a_i+pt=N}\binom{N}{a_1,a_1+t,\ldots,a_p,a_p+t}\end{aligned}$$
We can write this formula in a more compact way, as follows: $$P_2=\frac{1}{(2p)^N}\sum_{t\in\mathbb Z}\sum_{2\Sigma a_i+p|t|=N}\binom{N}{a_1,a_1+|t|,\ldots,a_p,a_p+|t|}$$
Now since the sum on the right, when rescaled by $\frac{1}{(2p)^N}$, is exactly the probability for a random walk on $\mathbb Z^p$ to end up at $(t,\ldots,t)$, this gives the result.
According to the above result we have $P_2=\sum_{t\in\mathbb Z}P_2^{(t)}$, where $P_2^{(t)}$ with $t\in\mathbb Z$ is the probability for a random walk on $\mathbb Z^p$ to end up at $(t,\ldots,t)$. Observe that, by using Proposition 3.24 above with $s,p,n$ equal respectively to $p,2,N/2$, we obtain: $$\begin{aligned}
P_2^{(0)}
&=&\frac{1}{(2p)^N}\binom{N}{N/2}\sum_{a_1+\ldots+a_p=N/2}\binom{N/2}{a_1,\ldots,a_p}^2\\
&\simeq&\sqrt{\frac{2}{\pi N}}\times\sqrt{\frac{p^p}{2^{p-1}(\pi N)^{p-1}}}\\
&=&2\sqrt{\left(\frac{p}{2\pi N}\right)^p}\end{aligned}$$
Regarding now the probability $P_2^{(t)}$ of ending up at $(t,\ldots,t)$, in principle for small $t$ this can be estimated by using a modification of the method in [@rsh]. However, it is not clear how to compute the full diagonal return probability in Theorem 3.27.
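Numerically, however, the random walk description in Theorem 3.27 is easy to experiment with. The following Python script, given as an illustration only, compares the exact probability coming from the multinomial formula in the above proof with a Monte Carlo estimate of the diagonal return probability, here at $p=3$ and $N=12$, these values and the number of samples being arbitrary choices:

```python
import random
from itertools import product
from math import factorial

p, N = 3, 12
SAMPLES = 200000

# exact probability, from the multinomial formula in the proof of Theorem 3.27
exact = 0.0
for t in range(-N // p, N // p + 1):
    rest = N - p * abs(t)
    if rest < 0 or rest % 2:
        continue
    S = rest // 2
    for a in product(range(S + 1), repeat=p):
        if sum(a) != S:
            continue
        ways = factorial(N)
        for ai in a:
            ways //= factorial(ai) * factorial(ai + abs(t))
        exact += ways
exact /= (2 * p) ** N

# Monte Carlo estimate: N steps of type +e_i or -e_i, then check whether all coordinates agree
hits = 0
for _ in range(SAMPLES):
    pos = [0] * p
    for _ in range(N):
        i = random.randrange(p)
        pos[i] += random.choice((1, -1))
    hits += all(x == pos[0] for x in pos)

print(exact, hits / SAMPLES)
```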
It is possible to establish a few more results in this direction, and we refer here to [@ba4]. However, the main question remains that of adapting the methods in [@dle] to the root of unity case. As a preliminary observation here, also from [@ba4], we have:
The probability $P_M$ for a random $H\in M_{M\times N}(\mathbb Z_q)$ to be partial Butson equals the probability for a length $N$ random walk with increments drawn from $$E=\left\{(e_i\bar{e}_j)_{i<j}\Big|e\in\mathbb Z_q^M\right\}$$ regarded as a subset of $\mathbb Z_q^{\binom{M}{2}}$, to return to the origin.
Indeed, with $T(e)=(e_i\bar{e}_j)_{i<j}$, a matrix $X=[e_1,\ldots,e_N]\in M_{M\times N}(\mathbb Z_q)$ is partial Butson if and only if $T(e_1)+\ldots+T(e_N)=0$, and this gives the result.
Observe now that, according to the above result, we have: $$\begin{aligned}
P_M
&=&\frac{1}{q^{(M-1)N}}\#\left\{\xi_1,\ldots,\xi_N\in E\Big|\sum_i\xi_i=0\right\}\\
&=&\frac{1}{q^{(M-1)N}}\sum_{\xi_1,\ldots,\xi_N\in E}\delta_{\Sigma\xi_i,0}\end{aligned}$$
The problem is that of continuing this computation, as in the proof of the de Launey-Levin result in [@dle], by using a Fourier inversion formula. More precisely, the next step at $q=2$, which is the key one, is as follows: $$\delta_{\Sigma\xi_i,0}=\frac{1}{(2\pi)^D}\int_{[-\pi,\pi]^D}e^{i<\lambda,\Sigma\xi_i>}d\lambda$$
Here $D=\binom{M}{2}$. The problem is that this formula works when $\Sigma\xi_i$ is real, as is the case in [@dle], but not when $\Sigma\xi_i$ is complex, as is the case in Theorem 3.28.
Geometry, defect
================
In this section and in the next two ones we discuss various geometric aspects of the complex Hadamard matrices. Let us recall that the complex Hadamard manifold appears as an intersection of smooth real algebraic manifolds, as follows: $$X_N=M_N(\mathbb T)\cap\sqrt{N}U_N$$
This intersection is very far from being smooth. Given a point $H\in X_N$, the problem is that of understanding the structure of $X_N$ around $H$, which is often singular.
There are several ways of discussing this question. A quite straightforward approach, going back to the work in [@kar], and then in [@nic], [@tz1], is via the 1-parameter deformations of the complex Hadamard matrices. In what follows we will use an equivalent approach, of a more real algebraic geometry flavor, developed in [@ba2], [@ba3].
We denote by $X_p$ an unspecified neighborhood of a point in a manifold, $p\in X$. Also, for $q\in\mathbb T_1$, meaning that $q\in\mathbb T$ is close to $1$, we define $q^r$ with $r\in\mathbb R$ by $(e^{it})^r=e^{itr}$.
With these conventions, we have the following result:
For $H\in X_N$ and $A\in M_N(\mathbb R)$, the following are equivalent:
1. $H_{ij}^q=H_{ij}q^{A_{ij}}$ is an Hadamard matrix, for any $q\in\mathbb T_1$.
2. $\sum_kH_{ik}\bar{H}_{jk}q^{A_{ik}-A_{jk}}=0$, for any $i\neq j$ and any $q\in\mathbb T_1$.
3. $\sum_kH_{ik}\bar{H}_{jk}\varphi(A_{ik}-A_{jk})=0$, for any $i\neq j$ and any $\varphi:\mathbb R\to\mathbb C$.
4. $\sum_{k\in E_{ij}^r}H_{ik}\bar{H}_{jk}=0$ for any $i\neq j$ and $r\in\mathbb R$, where $E_{ij}^r=\{k|A_{ik}-A_{jk}=r\}$.
These equivalences are all elementary, and can be proved as follows:
$(1)\iff(2)$ Indeed, the scalar products between the rows of $H^q$ are: $$<H^q_i,H^q_j>=\sum_kH_{ik}q^{A_{ik}}\bar{H}_{jk}\bar{q}^{A_{jk}}=\sum_kH_{ik}\bar{H}_{jk}q^{A_{ik}-A_{jk}}$$
$(2)\implies(4)$ This follows from the following formula, and from the fact that the power functions $\{q^r|r\in\mathbb R\}$ over the unit circle $\mathbb T$ are linearly independent: $$\sum_kH_{ik}\bar{H}_{jk}q^{A_{ik}-A_{jk}}=\sum_{r\in\mathbb R}q^r\sum_{k\in E_{ij}^r}H_{ik}\bar{H}_{jk}$$
$(4)\implies(3)$ This follows from the following formula: $$\sum_kH_{ik}\bar{H}_{jk}\varphi(A_{ik}-A_{jk})=\sum_{r\in\mathbb R}\varphi(r)\sum_{k\in E_{ij}^r}H_{ik}\bar{H}_{jk}$$
$(3)\implies(2)$ This simply follows by taking $\varphi(r)=q^r$.
Observe that in the above statement the condition (4) is purely combinatorial.
In order to understand the above deformations, which are “affine” in a certain sense, it is convenient to enlarge the attention to all types of deformations.
We keep using the neighborhood notation $X_p$ introduced above, and we consider functions of type $f:X_p\to Y_q$, which by definition satisfy $f(p)=q$.
With these conventions, let us introduce the following notions:
Let $H\in M_N(\mathbb C)$ be a complex Hadamard matrix.
1. A deformation of $H$ is a smooth function $f:\mathbb T_1\to (X_N)_H$.
2. The deformation is called “affine” if $f_{ij}(q)=H_{ij}q^{A_{ij}}$, with $A\in M_N(\mathbb R)$.
3. We call “trivial” the deformations of type $f_{ij}(q)=H_{ij}q^{a_i+b_j}$, with $a,b\in\mathbb R^N$.
Here the adjective “affine” comes from $f_{ij}(e^{it})=H_{ij}e^{iA_{ij}t}$, because the function $t\to A_{ij}t$ which produces the exponent is indeed affine. As for the adjective “trivial”, this comes from the fact that $f(q)=(H_{ij}q^{a_i+b_j})_{ij}$ is obtained from $H$ by multiplying the rows and columns by certain numbers in $\mathbb T$, so it is automatically Hadamard.
The basic example of an affine deformation comes from the Diţă deformations $H\otimes_QK$, by taking all parameters $q_{ij}\in\mathbb T$ to be powers of $q\in\mathbb T$. As an example, here are the exponent matrices coming from the left and right Diţă deformations of $F_2\otimes F_2$: $$A_l=
\begin{pmatrix}
a&a&b&b\\
c&c&d&d\\
a&a&b&b\\
c&c&d&d
\end{pmatrix}\quad\quad\quad
A_r=
\begin{pmatrix}
a&b&a&b\\
a&b&a&b\\
c&d&c&d\\
c&d&c&d
\end{pmatrix}$$
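As a quick check, the following Python script verifies that the left exponent matrix $A_l$ above indeed produces deformations of $F_2\otimes F_2$ which remain Hadamard, for randomly chosen exponents $a,b,c,d$ and a randomly chosen $q\in\mathbb T$, with the convention $(e^{it})^r=e^{itr}$ from the beginning of this section; the particular numerical values are of course irrelevant:

```python
import numpy as np

# the matrix F_2 x F_2, and the left Dita exponent matrix A_l from above
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=complex)

a, b, c, d = np.random.uniform(-2, 2, size=4)
A = np.array([[a, a, b, b],
              [c, c, d, d],
              [a, a, b, b],
              [c, c, d, d]])

q = np.exp(1j * np.random.uniform(0, 2 * np.pi))
t = np.angle(q)

# the affine deformation H^q_{ij} = H_{ij} q^{A_{ij}}, with q^r := exp(itr) for q = exp(it)
Hq = H * np.exp(1j * t * A)

# Hadamard check: unimodular entries, and Hq Hq^* = 4 * Id, up to rounding errors
print(np.abs(np.abs(Hq) - 1).max(), np.abs(Hq @ Hq.conj().T - 4 * np.eye(4)).max())
```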
In order to investigate the above types of deformations, we will use the corresponding tangent vectors. So, let us recall that the manifold $X_N$ is given by: $$X_N=M_N(\mathbb T)\cap\sqrt{N}U_N$$
This observation leads to the following definition, where in the first part we denote by $T_pX$ the tangent space to a point in a smooth manifold, $p\in X$:
Associated to a point $H\in X_N$ are the following objects:
1. The enveloping tangent space: $\widetilde{T}_HX_N=T_HM_N(\mathbb T)\cap T_H\sqrt{N}U_N$.
2. The tangent cone $T_HX_N$: the set of tangent vectors to the deformations of $H$.
3. The affine tangent cone $T_H^\circ X_N$: same as above, using affine deformations only.
4. The trivial tangent cone $T_H^\times X_N$: as above, using trivial deformations only.
Observe that $\widetilde{T}_HX_N,T_H^\times X_N$ are real linear spaces, and that $T_HX_N,T_H^\circ X_N$ are two-sided cones, in the sense that they satisfy the following condition: $$\lambda\in\mathbb R,A\in T\implies\lambda A\in T$$
Observe also that we have inclusions of cones, as follows: $$T_H^\times X_N\subset T_H^\circ X_N\subset T_HX_N\subset\widetilde{T}_HX_N$$
In more algebraic terms now, these various tangent cones are best described by the corresponding matrices, and we have here the following result:
The cones $T_H^\times X_N\subset T_H^\circ X_N\subset T_HX_N\subset\widetilde{T}_HX_N$ are as follows:
1. $\widetilde{T}_HX_N$ can be identified with the linear space formed by the matrices $A\in M_N(\mathbb R)$ satisfying $\sum_kH_{ik}\bar{H}_{jk}(A_{ik}-A_{jk})=0$, for any $i,j$.
2. $T_HX_N$ consists of those matrices $A\in M_N(\mathbb R)$ appearing as $A_{ij}=g_{ij}'(0)$, where $g:\mathbb R_0\to M_N(\mathbb R)_0$ satisfies $\sum_kH_{ik}\bar{H}_{jk}e^{i(g_{ik}(t)-g_{jk}(t))}=0$ for any $i,j$.
3. $T^\circ_HX_N$ is formed by the matrices $A\in M_N(\mathbb R)$ satisfying $\sum_kH_{ik}\bar{H}_{jk}q^{A_{ik}-A_{jk}}=0$, for any $i\neq j$ and any $q\in\mathbb T$.
4. $T^\times_HX_N$ is formed by the matrices $A\in M_N(\mathbb R)$ which are of the form $A_{ij}=a_i+b_j$, for certain vectors $a,b\in\mathbb R^N$.
All these assertions can be deduced by using basic differential geometry:
\(1) This result is well-known, the idea being as follows. First, $M_N(\mathbb T)$ is defined by the algebraic relations $|H_{ij}|^2=1$, and with $H_{ij}=X_{ij}+iY_{ij}$ we have: $$d|H_{ij}|^2=d(X_{ij}^2+Y_{ij}^2)=2(X_{ij}\dot{X}_{ij}+Y_{ij}\dot{Y}_{ij})$$
Now since an arbitrary vector $\xi\in T_HM_N(\mathbb C)$, written as $\xi=\sum_{ij}\alpha_{ij}\dot{X}_{ij}+\beta_{ij}\dot{Y}_{ij}$, belongs to $T_HM_N(\mathbb T)$ if and only if $<\xi,d|H_{ij}|^2>=0$ for any $i,j$, we obtain: $$T_HM_N(\mathbb T)=\left\{\sum_{ij}A_{ij}(Y_{ij}\dot{X}_{ij}-X_{ij}\dot{Y}_{ij})\Big|A_{ij}\in\mathbb R\right\}$$
We also know that $\sqrt{N}U_N$ is defined by the algebraic relations $<H_i,H_j>=N\delta_{ij}$, where $H_1,\ldots,H_N$ are the rows of $H$. The relations $<H_i,H_i>=N$ being automatic for the matrices $H\in M_N(\mathbb T)$, if for $i\neq j$ we let $L_{ij}=<H_i,H_j>$, then we have: $$\widetilde{T}_HX_N=\left\{\xi\in T_HM_N(\mathbb T)|<\xi,\dot{L}_{ij}>=0,\,\forall i\neq j\right\}$$
On the other hand, differentiating the formula of $L_{ij}$ gives: $$\dot{L}_{ij}=\sum_k(X_{ik}+iY_{ik})(\dot{X}_{jk}-i\dot{Y}_{jk})+(X_{jk}-iY_{jk})(\dot{X}_{ik}+i\dot{Y}_{ik})$$
Now if we pick $\xi\in T_HM_N(\mathbb T)$, written as above in terms of $A\in M_N(\mathbb R)$, we obtain: $$<\xi,\dot{L}_{ij}>=i\sum_k\bar{H}_{ik}H_{jk}(A_{ik}-A_{jk})$$
Thus we have reached the description of $\widetilde{T}_HX_N$ in the statement.
\(2) Pick an arbitrary deformation, and write it as $f_{ij}(e^{it})=H_{ij}e^{ig_{ij}(t)}$. Observe first that the Hadamard condition corresponds to the equations in the statement, namely: $$\sum_kH_{ik}\bar{H}_{jk}e^{i(g_{ik}(t)-g_{jk}(t))}=0$$
Observe also that by differentiating this formula at $t=0$, we obtain: $$\sum_kH_{ik}\bar{H}_{jk}(g_{ik}'(0)-g_{jk}'(0))=0$$
Thus the matrix $A_{ij}=g_{ij}'(0)$ belongs indeed to $\widetilde{T}_HX_N$, so we obtain in this way a certain map $T_HX_N\to\widetilde{T}_HX_N$. In order to check that this map is indeed the correct one, we have to verify that, for any $i,j$, the tangent vector to our deformation is given by: $$\xi_{ij}=g_{ij}'(0)(Y_{ij}\dot{X}_{ij}-X_{ij}\dot{Y}_{ij})$$
But this latter verification is just a one-variable problem. So, by dropping all $i,j$ indices, which is the same as assuming $N=1$, we have to check that for any point $H\in\mathbb T$, written $H=X+iY$, the tangent vector to the deformation $f(e^{it})=He^{ig(t)}$ is: $$\xi=g'(0)(Y\dot{X}-X\dot{Y})$$
But this is clear, because the unit tangent vector at $H\in\mathbb T$ is $\eta=-i(Y\dot{X}-X\dot{Y})$, and its coefficient coming from the deformation is $(e^{ig(t)})'_{|t=0}=-ig'(0)$.
\(3) Observe first that by taking the derivative at $q=1$ of the condition (2) in Proposition 4.1, or just by using the condition (3) there with the function $\varphi(r)=r$, we get: $$\sum_kH_{ik}\bar{H}_{jk}(A_{ik}-A_{jk})=0$$
Thus we have a map $T_H^\circ X_N\to\widetilde{T}_HX_N$, and the fact that this map is indeed the correct one comes for instance from the computation in (2), with $g_{ij}(t)=A_{ij}t$.
\(4) Observe first that the Hadamard matrix condition is satisfied: $$\sum_kH_{ik}\bar{H}_{jk}q^{A_{ik}-A_{jk}}
=q^{a_i-a_j}\sum_kH_{ik}\bar{H}_{jk}
=Nq^{a_i-a_j}\delta_{ij}$$
As for the fact that $T_H^\times X_N$ is indeed the space in the statement, this is clear.
Let $Z_N\subset X_N$ be the real algebraic manifold formed by all the dephased $N\times N$ complex Hadamard matrices. Observe that we have a quotient map $X_N\to Z_N$, obtained by dephasing. With this notation, we have the following refinement of (4) above:
We have a direct sum decomposition of cones $$T_H^\circ X_N=T_H^\times X_N\oplus T_H^\circ Z_N$$ where at right we have the affine tangent cone to the dephased manifold $X_N\to Z_N$.
If we denote by $M_N^\circ(\mathbb R)$ the set of matrices having $0$ outside the first row and column, we have a direct sum decomposition, as follows: $$\widetilde{T}_H^\circ X_N=M_N^\circ(\mathbb R)\oplus\widetilde{T}_H^\circ Z_N$$
Now by looking at the affine cones, and using Theorem 4.4, this gives the result.
Summarizing, we have so far a number of theoretical results about the tangent cones $T_HX_N$ that we are interested in, and their versions coming from the trivial and affine deformations, and from the intersection formula $X_N=M_N(\mathbb T)\cap\sqrt{N}U_N$ as well.
In practice now, past a few special cases where all these cones collapse to the trivial cone $T_H^\times X_N$, which by Proposition 4.5 means that the image of $H\in X_N$ must be isolated in the dephased manifold $X_N\to Z_N$, things are quite difficult to compute.
However, as a concrete numerical invariant arising from all this, which can be effectively computed in many cases of interest, we have, following [@tz1]:
The real dimension $d(H)$ of the enveloping tangent space $$\widetilde{T}_HX_N=T_HM_N(\mathbb T)\cap T_H\sqrt{N}U_N$$ is called undephased defect of a complex Hadamard matrix $H\in X_N$.
In view of Proposition 4.5, it is sometimes convenient to replace $d(H)$ by the related quantity $d'(H)=d(H)-2N+1$, called dephased defect of $H$. See [@tz1]. In what follows we will rather use $d(H)$ as defined above, and simply call it “defect” of $H$.
We already know, from Theorem 4.4, what is the precise geometric meaning of the defect, and how to compute it. Let us record again these results, that we will use many times in what follows, in a slightly different form, closer to the spirit of [@tz1]:
The defect $d(H)$ is the real dimension of the linear space $$\widetilde{T}_HX_N=\left\{A\in M_N(\mathbb R)\Big|\sum_kH_{ik}\bar{H}_{jk}(A_{ik}-A_{jk})=0,\forall i,j\right\}$$ and the elements of this space are those making $H^q_{ij}=H_{ij}q^{A_{ij}}$ Hadamard at order $1$.
Here the first assertion is something that we already know, from Theorem 4.4 (1), and the second assertion follows either from Theorem 4.4 and its proof, or directly from the definition of the enveloping tangent space $\widetilde{T}_HX_N$, as used in Definition 4.6.
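In practice, the defect can be computed numerically straight from this description, as the dimension of the solution space of the above linear system. Here is a short Python function doing this, given as an illustration, and tested on the Fourier matrix $F_4$ and on the real Hadamard matrix $F_2\otimes F_2$; the values obtained, namely $8$ and $10$, agree with the general formulae established later in this section:

```python
import numpy as np

def defect(H):
    """Dimension of the space of real matrices A with
    sum_k H_ik conj(H_jk) (A_ik - A_jk) = 0, for all i, j."""
    N = H.shape[0]
    rows = []
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            # coefficients of the unknown A, viewed as a vector of length N^2
            coeff = np.zeros((N, N), dtype=complex)
            coeff[i, :] += H[i, :] * H[j, :].conj()
            coeff[j, :] -= H[i, :] * H[j, :].conj()
            rows.append(coeff.flatten())
    M = np.array(rows)
    M = np.vstack([M.real, M.imag])          # real-linear system
    return N * N - np.linalg.matrix_rank(M)

F4 = np.array([[1j ** (j * k) for k in range(4)] for j in range(4)])
W4 = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]], dtype=complex)
print(defect(F4), defect(W4))   # prints 8 and 10
```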
Here are a few basic properties of the defect:
Let $H\in X_N$ be a complex Hadamard matrix.
1. If $H\simeq\widetilde{H}$ then $d(H)=d(\widetilde{H})$.
2. We have $2N-1\leq d(H)\leq N^2$.
3. If $d(H)=2N-1$, the image of $H$ in the dephased manifold $X_N\to Z_N$ is isolated.
All these results are elementary, the proof being as follows:
\(1) If we let $K_{ij}=a_ib_jH_{ij}$ with $|a_i|=|b_j|=1$ be a trivial deformation of our matrix $H$, the equations for the enveloping tangent space for $K$ are: $$\sum_ka_ib_kH_{ik}\bar{a}_j\bar{b}_k\bar{H}_{jk}(A_{ik}-A_{jk})=0$$
By simplifying we obtain the equations for $H$, so $d(H)$ is invariant under trivial deformations. Since $d(H)$ is invariant as well by permuting rows or columns, we are done.
\(2) Consider the inclusions $T_H^\times X_N\subset T_HX_N\subset\widetilde{T}_HX_N$. Since $\dim(T_H^\times X_N)=2N-1$, the inequality at left holds indeed. As for the inequality at right, this is clear.
\(3) If $d(H)=2N-1$ then $T_HX_N=T_H^\times X_N$, so any deformation of $H$ is trivial. Thus the image of $H$ in the quotient manifold $X_N\to Z_N$ is indeed isolated, as stated.
Let us discuss now the computation of the defect for the most basic examples of complex Hadamard matrices that we know, namely the real ones, and the Fourier ones.
In order to deal with the real case, it is convenient to modify the general formula from Theorem 4.7 above, via a change of variables, as follows:
We have a linear space isomorphism as follows, $$\widetilde{T}_HX_N\simeq\left\{E\in M_N(\mathbb C)\Big|E=E^*,(EH)_{ij}\bar{H}_{ij}\in\mathbb R,\forall i,j\right\}$$ the correspondences $A\to E$ and $E\to A$ being given by the formulae $$E_{ij}=\sum_kH_{ik}\bar{H}_{jk}A_{ik}\quad,\quad A_{ij}=(EH)_{ij}\bar{H}_{ij}$$ with $A\in\widetilde{T}_HX_N$ being the usual components, from Theorem 4.7 above.
Given a matrix $A\in M_N(\mathbb C)$, if we set $R_{ij}=A_{ij}H_{ij}$ and $E=RH^*$, the correspondence $A\to R\to E$ is then bijective onto $M_N(\mathbb C)$, and we have: $$E_{ij}=\sum_kH_{ik}\bar{H}_{jk}A_{ik}$$
In terms of these new variables, the equations in Theorem 4.7 become: $$E_{ij}=\bar{E}_{ji}$$
Thus, when taking into account these conditions, we are simply left with the conditions $A_{ij}\in\mathbb R$. But these correspond to the conditions $(EH)_{ij}\bar{H}_{ij}\in\mathbb R$, as claimed.
With the above result in hand, we can now compute the defect of the real Hadamard matrices. The result here, from [@sz1], is as follows:
For any real Hadamard matrix $H\in M_N(\pm1)$ we have $$\widetilde{T}_HX_N\simeq M_N(\mathbb R)^{symm}$$ and so the corresponding defect is $d(H)=N(N+1)/2$.
We use Proposition 4.9. Since $H$ is now real the condition $(EH)_{ij}\bar{H}_{ij}\in\mathbb R$ there simply tells us that $E$ must be real, and this gives the result.
We should mention that the above result, as well as the whole basic theory of the tangent cones and defect, can be extended in a quite straightforward way to the case of the partial Hadamard matrices [@bop]. We will be back to this, later on.
Let us discuss now the computation of the defect of the Fourier matrix $F_G$. The main idea here goes back to [@kar], with some supplementary contributions from [@nic]; the main formula, in the cyclic group case, was obtained in [@tz1], the extension to the general case was done in [@ba2], and the corresponding deformations were studied in [@nwh].
As a first result on this subject, we have, following [@tz1]:
For a Fourier matrix $F=F_G$, the matrices $A\in\widetilde{T}_FX_N$, with $N=|G|$, are those of the form $A=PF^*$, with $P\in M_N(\mathbb C)$ satisfying $$P_{ij}=P_{i+j,j}=\bar{P}_{i,-j}$$ where the indices $i,j$ are by definition taken in the group $G$.
We use the system of equations in Theorem 4.7, namely: $$\sum_kF_{ik}\bar{F}_{jk}(A_{ik}-A_{jk})=0$$
By decomposing our finite abelian group as $G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_r}$ we can assume $F=F_{N_1}\otimes\ldots\otimes F_{N_r}$, so that with $w_k=e^{2\pi i/k}$ we have: $$F_{i_1\ldots i_r,j_1\ldots j_r}=(w_{N_1})^{i_1j_1}\ldots (w_{N_r})^{i_rj_r}$$
With $N=N_1\ldots N_r$ and $w=e^{2\pi i/N}$, we obtain: $$F_{i_1\ldots i_r,j_1\ldots j_r}=w^{\left(\frac{i_1j_1}{N_1}+\ldots+\frac{i_rj_r}{N_r}\right)N}$$
Thus the matrix of our system is given by: $$F_{i_1\ldots i_r,k_1\ldots k_r}\bar{F}_{j_1\ldots j_r,k_1\ldots k_r}=w^{\left(\frac{(i_1-j_1)k_1}{N_1}+\ldots+\frac{(i_r-j_r)k_r}{N_r}\right)N}$$
Now by plugging in a multi-indexed matrix $A$, our system becomes: $$\sum_{k_1\ldots k_r}w^{\left(\frac{(i_1-j_1)k_1}{N_1}+\ldots+\frac{(i_r-j_r)k_r}{N_r}\right)N}(A_{i_1\ldots i_r,k_1\ldots k_r}-A_{j_1\ldots j_r,k_1\ldots k_r})=0$$
Now observe that in the above formula we have in fact two matrix multiplications, so our system can be simply written as: $$(AF)_{i_1\ldots i_r,i_1-j_1\ldots i_r-j_r}-(AF)_{j_1\ldots j_r,i_1-j_1\ldots i_r-j_r}=0$$
Now recall that our indices have a “cyclic” meaning, so they belong in fact to the group $G$. So, with $P=AF$, and by using multi-indices, our system is simply: $$P_{i,i-j}=P_{j,i-j}$$
With $i=I+J,j=I$ we obtain the condition $P_{I+J,J}=P_{IJ}$ in the statement.
In addition, $A=PF^*$ must be a real matrix. But, if we set $\tilde{P}_{ij}=\bar{P}_{i,-j}$, we have: $$\begin{aligned}
\overline{(PF^*)}_{i_1\ldots i_r,j_1\ldots j_r}
&=&\sum_{k_1\ldots k_r}\bar{P}_{i_1\ldots i_r,k_1\ldots k_r}F_{j_1\ldots j_r,k_1\ldots k_r}\\
&=&\sum_{k_1\ldots k_r}\tilde{P}_{i_1\ldots i_r,-k_1\ldots -k_r}(F^*)_{-k_1\ldots -k_r,j_1\ldots j_r}\\
&=&(\tilde{P}F^*)_{i_1\ldots i_r,j_1\ldots j_r}\end{aligned}$$
Thus we have $\overline{PF^*}=\tilde{P}F^*$, so the fact that the matrix $PF^*$ is real, which means by definition that we have $\overline{PF^*}=PF^*$, can be reformulated as $\tilde{P}F^*=PF^*$, and hence as $\tilde{P}=P$. So, we obtain the conditions $P_{ij}=\bar{P}_{i,-j}$ in the statement.
We can now compute the defect, and we are led to the following formula:
The defect of a Fourier matrix $F_G$ is given by $$d(F_G)=\sum_{g\in G}\frac{|G|}{ord(g)}$$ and equals as well the number of $1$ entries of the matrix $F_G$.
According to the formula $A=PF^*$ from Theorem 4.11, the defect $d(F_G)$ is the dimension of the real vector space formed by the matrices $P\in M_N(\mathbb C)$ satisfying: $$P_{ij}=P_{i+j,j}=\bar{P}_{i,-j}$$
Here, and in what follows, the various indices $i,j,\ldots$ will be taken in $G$. Now the point is that, in terms of the columns of our matrix $P$, the above conditions are:
\(1) The entries of the $j$-th column of $P$, say $C$, must satisfy $C_i=C_{i+j}$.
\(2) The $(-j)$-th column of $P$ must be conjugate to the $j$-th column of $P$.
Thus, in order to count the above matrices $P$, we can basically fill the columns one by one, by taking into account the above conditions. In order to do so, consider the subgroup $G_2=\{j\in G|2j=0\}$, and then write $G$ as a disjoint union, as follows: $$G=G_2\sqcup X\sqcup(-X)$$
With this notation, the algorithm is as follows. First, for any $j\in G_2$ we must fill the $j$-th column of $P$ with real numbers, according to the periodicity rule $C_i=C_{i+j}$. Then, for any $j\in X$ we must fill the $j$-th column of $P$ with complex numbers, according to the same periodicity rule $C_i=C_{i+j}$. And finally, once this is done, for any $j\in X$ we just have to set the $(-j)$-th column of $P$ to be the conjugate of the $j$-th column.
So, let us compute the number of choices for filling these columns. Our claim is that, when uniformly distributing the choices for the $j$-th and $(-j)$-th columns, for $j\notin G_2$, there are exactly $[G:<j>]$ choices for the $j$-th column, for any $j$. Indeed:
\(1) For the $j$-th column with $j\in G_2$ we must simply pick $N$ real numbers subject to the condition $C_i=C_{i+j}$ for any $i$, so we have indeed $[G:<j>]$ such choices.
\(2) For filling the $j$-th and $(-j)$-th column, with $j\notin G_2$, we must pick $N$ complex numbers subject to the condition $C_i=C_{i+j}$ for any $i$. Now there are $[G:<j>]$ choices for these numbers, so a total of $2[G:<j>]$ choices for their real and imaginary parts, and on average over $j,-j$ this means $[G:<j>]$ choices, and we are done again.
Summarizing, the dimension of the vector space formed by the matrices $P$, which is equal to the number of choices for the real and imaginary parts of the entries of $P$, is: $$d(F_G)=\sum_{j\in G}[G:<j>]$$
But this is exactly the number in the statement.
Regarding now the second assertion, according to the abstract definition of the Fourier matrix $F_G$, from Theorem 2.8 above, the number of $1$ entries of $F_G$ is given by: $$\begin{aligned}
\#(1\in F_G)
&=&\#\left\{(g,\chi)\in G\times\widehat{G}\Big|\chi(g)=1\right\}\\
&=&\sum_{g\in G}\#\left\{\chi\in\widehat{G}\Big|\chi(g)=1\right\}\\
&=&\sum_{g\in G}\frac{|G|}{ord(g)}\end{aligned}$$
Thus, the second assertion follows from the first one.
Let us now finish the work, and explicitly compute the defect of $F_G$. It is convenient to consider the following quantity, which behaves better: $$\delta(G)=\sum_{g\in G}\frac{1}{ord(g)}$$
As a first example, consider a cyclic group $G=\mathbb Z_N$, with $N=p^a$ power of a prime. The count here is very simple, over sets of elements having a given order: $$\begin{aligned}
\delta(\mathbb Z_{p^a})
&=&1+(p-1)p^{-1}+(p^2-p)p^{-2}+\ldots+(p^a-p^{a-1})p^{-a}\\
&=&1+a-\frac{a}{p}\end{aligned}$$
In order to extend this kind of count to the general abelian case, we use two ingredients. First is the following result, which splits the computation over isotypic components:
For any finite groups $G,H$ we have: $$\delta(G\times H)\geq\delta(G)\delta(H)$$ In addition, if $(|G|,|H|)=1$, we have equality.
Indeed, we have the following estimate: $$\begin{aligned}
\delta(G\times H)
&=&\sum_{gh}\frac{1}{ord(g,h)}\\
&=&\sum_{gh}\frac{1}{[ord(g),ord(h)]}\\
&\geq&\sum_{gh}\frac{1}{ord(g)\cdot ord(h)}\\
&=&\delta(G)\delta(H)\end{aligned}$$
Now in the case $(|G|,|H|)=1$, the least common multiple appearing on the right becomes a product, $[ord(g),ord(h)]=ord(g)\cdot ord(h)$, so we have equality.
We deduce from this that we have the following result:
For a finite abelian group $G$ we have $$\delta(G)=\prod_p\delta(G_p)$$ where $G_p$ with $G=\times_pG_p$ are the isotypic components of $G$.
This is clear from Proposition 4.13, because the order of $G_p$ is a power of $p$.
As an illustration, we can recover in this way the defect computation in [@tz2]:
The defect of a usual Fourier matrix $F_N$ is given by $$d(F_N)=N\prod_{i=1}^s\left(1+a_i-\frac{a_i}{p_i}\right)$$ where $N=p_1^{a_1}\ldots p_s^{a_s}$ is the decomposition of $N$ into prime factors.
The underlying group here is the cyclic group $G=\mathbb Z_N$, whose isotypic components are the cyclic groups $G_{p_i}=\mathbb Z_{p_i^{a_i}}$. By applying now Proposition 4.14, and by using the computation for cyclic $p$-groups performed before Proposition 4.13, we obtain: $$d(F_N)=N\prod_{i=1}^s\left(1+p_i^{-1}(p_i-1)a_i\right)$$
But this is exactly the formula in the statement.
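This formula is easy to test against the group-theoretic count from Theorem 4.12, which for the cyclic group $\mathbb Z_N$ reads $d(F_N)=\sum_k\gcd(k,N)$. The following few lines of Python, given as an illustration, check that the two computations agree for all $N\leq100$:

```python
from math import gcd

def defect_group(N):
    # Theorem 4.12: d(F_N) = sum over k in Z_N of N / ord(k), and N / ord(k) = gcd(k, N)
    return sum(gcd(k, N) for k in range(N))

def defect_formula(N):
    # Theorem 4.15: d(F_N) = N * prod(1 + a_i - a_i / p_i), over the prime factors p_i^a_i of N
    d, n, p = float(N), N, 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            d *= 1 + a - a / p
        p += 1
    if n > 1:
        d *= 1 + 1 - 1 / n
    return d

assert all(defect_group(N) == round(defect_formula(N)) for N in range(1, 101))
print("Theorem 4.12 and Theorem 4.15 agree for all N <= 100")
```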
Now back to the general case, where we have an arbitrary Fourier matrix $F_G$, we will need, as a second ingredient for our computation, the following result:
For the $p$-groups, the quantities $$c_k=\#\left\{g\in G\Big|ord(g)\leq p^k\right\}$$ are multiplicative, in the sense that $c_k(G\times H)=c_k(G)c_k(H)$.
Indeed, for a product of $p$-groups we have: $$\begin{aligned}
c_k(G\times H)
&=&\#\left\{(g,h)\Big|ord(g,h)\leq p^k\right\}\\
&=&\#\left\{(g,h)\Big|ord(g)\leq p^k,ord(h)\leq p^k\right\}\\
&=&\#\left\{g\Big|ord(g)\leq p^k\right\}\#\left\{h\Big|ord(h)\leq p^k\right\}\end{aligned}$$
We recognize at right $c_k(G)c_k(H)$, and we are done.
Let us compute now $\delta$ in the general isotypic case:
For $G=\mathbb Z_{p^{a_1}}\times\ldots\times\mathbb Z_{p^{a_r}}$ with $a_1\leq a_2\leq\ldots\leq a_r$ we have $$\delta(G)=1+\sum_{k=1}^rp^{(r-k)a_{k-1}+(a_1+\ldots+a_{k-1})-1}(p^{r-k+1}-1)[a_k-a_{k-1}]_{p^{r-k}}$$ with the convention $a_0=0$, and by using the notation $[a]_q=1+q+q^2+\ldots+q^{a-1}$.
First, in terms of the numbers $c_k$, we have: $$\delta(G)=1+\sum_{k\geq 1}\frac{c_k-c_{k-1}}{p^k}$$
In the case of a cyclic group $G=\mathbb Z_{p^a}$ we have $c_k=p^{\min(k,a)}$. Thus, in the general isotypic case $G=\mathbb Z_{p^{a_1}}\times\ldots\times\mathbb Z_{p^{a_r}}$ we have: $$c_k=p^{\min(k,a_1)}\ldots p^{\min(k,a_r)}=p^{\min(k,a_1)+\ldots+\min(k,a_r)}$$
Now observe that the exponent on the right is a piecewise linear function of $k$. More precisely, by assuming $a_1\leq a_2\leq\ldots\leq a_r$ as in the statement, the exponent is linear on each of the intervals $[0,a_1],[a_1,a_2],\ldots,[a_{r-1},a_r]$. So, the quantity $\delta(G)$ to be computed will be 1 plus the sum of $2r$ geometric progressions, 2 for each interval.
In practice now, the numbers $c_k$ are as follows: $$c_0=1,c_1=p^r,c_2=p^{2r},\ldots,c_{a_1}=p^{ra_1},$$ $$c_{a_1+1}=p^{a_1+(r-1)(a_1+1)},c_{a_1+2}=p^{a_1+(r-1)(a_1+2)},\ldots,c_{a_2}=p^{a_1+(r-1)a_2},$$ $$c_{a_2+1}=p^{a_1+a_2+(r-2)(a_2+1)},c_{a_2+2}=p^{a_1+a_2+(r-2)(a_2+2)},\ldots,c_{a_3}=p^{a_1+a_2+(r-2)a_3},$$ $$\ldots\ldots\ldots$$ $$c_{a_{r-1}+1}=p^{a_1+\ldots+a_{r-1}+(a_{r-1}+1)},c_{a_{r-1}+2}=p^{a_1+\ldots+a_{r-1}+(a_{r-1}+2)},\ldots,c_{a_r}=p^{a_1+\ldots+a_r}$$
Now by separating the positive and negative terms in the above formula of $\delta(G)$, we have indeed $2r$ geometric progressions to be summed, as follows: $$\begin{aligned}
\delta(G)
&=&1+(p^{r-1}+p^{2r-2}+p^{3r-3}+\ldots+p^{a_1r-a_1})\\
&&-(p^{-1}+p^{r-2}+p^{2r-3}+\ldots+p^{(a_1-1)r-a_1})\\
&&+(p^{(r-1)(a_1+1)-1}+p^{(r-1)(a_1+2)-2}+\ldots+p^{a_1+(r-2)a_2})\\
&&-(p^{a_1r-a_1-1}+p^{(r-1)(a_1+1)-2}+\ldots+p^{a_1+(r-1)(a_2-1)-a_2})\\
&&+\ldots\\
&&+(p^{a_1+\ldots+a_{r-1}}+p^{a_1+\ldots+a_{r-1}}+\ldots+p^{a_1+\ldots+a_{r-1}})\\
&&-(p^{a_1+\ldots+a_{r-1}-1}+p^{a_1+\ldots+a_{r-1}-1}+\ldots+p^{a_1+\ldots+a_{r-1}-1})\end{aligned}$$
Now by performing all the sums, we obtain: $$\begin{aligned}
\delta(G)
&=&1+p^{-1}(p^r-1)\frac{p^{(r-1)a_1}-1}{p^{r-1}-1}\\
&&+p^{(r-2)a_1+(a_1-1)}(p^{r-1}-1)\frac{p^{(r-2)(a_2-a_1)}-1}{p^{r-2}-1}\\
&&+p^{(r-3)a_2+(a_1+a_2-1)}(p^{r-2}-1)\frac{p^{(r-3)(a_3-a_2)}-1}{p^{r-3}-1}\\
&&+\ldots\\
&&+p^{a_1+\ldots+a_{r-1}-1}(p-1)(a_r-a_{r-1})\end{aligned}$$
By looking now at the general term, we get the formula in the statement.
Let us go back now to the general defect formula in Theorem 4.12. By putting it together with the various results above, we obtain:
For a finite abelian group $G$, decomposed as $G=\times_pG_p$, we have $$d(F_G)=|G|\prod_p\left( 1+\sum_{k=1}^rp^{(r-k)a_{k-1}+(a_1+\ldots+a_{k-1})-1}(p^{r-k+1}-1)[a_k-a_{k-1}]_{p^{r-k}}\right)$$ where $a_0=0$ and $a_1\leq a_2\leq\ldots\leq a_r$ are such that $G_p=\mathbb Z_{p^{a_1}}\times\ldots\times\mathbb Z_{p^{a_r}}$.
Indeed, we know from Theorem 4.12 that we have $d(F_G)=|G|\delta(G)$, and the result follows from Proposition 4.14 and Proposition 4.17.
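Since the closed formula above is not so easy to parse, a small computer check is perhaps welcome. The following Python script, included as an illustration only, compares the direct count $d(F_G)=\sum_g|G|/ord(g)$ with the formula from Theorem 4.18, on the example $G=\mathbb Z_2\times\mathbb Z_4\times\mathbb Z_9$; the group is encoded by its list of cyclic factors for the direct count, and by the exponents of its isotypic components for the formula:

```python
from math import gcd, lcm
from fractions import Fraction
from itertools import product
from functools import reduce

def defect_direct(components):
    """d(F_G) = sum over g in G of |G| / ord(g), for G = Z_n1 x ... x Z_nk."""
    elements = list(product(*[range(n) for n in components]))
    size = len(elements)
    def order(g):
        return reduce(lcm, (n // gcd(x, n) for x, n in zip(g, components)), 1)
    return sum(size // order(g) for g in elements)

def q_int(a, q):
    # the q-integer [a]_q = 1 + q + ... + q^(a-1)
    return sum(q ** i for i in range(a))

def defect_formula(iso):
    """Theorem 4.18; iso = {p: [a_1 <= ... <= a_r]} encodes the isotypic components."""
    d = Fraction(1)
    for p, exps in iso.items():
        r = len(exps)
        a = [0] + sorted(exps)
        term = Fraction(1)
        for k in range(1, r + 1):
            e = (r - k) * a[k - 1] + sum(a[1:k]) - 1
            term += Fraction(p) ** e * (p ** (r - k + 1) - 1) * q_int(a[k] - a[k - 1], p ** (r - k))
        d *= p ** sum(a) * term
    return int(d)

# example: G = Z_2 x Z_4 x Z_9, where both computations give 588
print(defect_direct([2, 4, 9]), defect_formula({2: [1, 2], 3: [2]}))
```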
As a first illustration, we can recover in this way the formula in Theorem 4.15. Assuming that $N=p_1^{a_1}\ldots p_s^{a_s}$ is the decomposition of $N$ into prime factors, we have: $$\begin{aligned}
d(F_N)
&=&N\prod_{i=1}^s\left(1+p_i^{-1}(p_i-1)a_i\right)\\
&=&N\prod_{i=1}^s\left(1+a_i-\frac{a_i}{p_i}\right)\end{aligned}$$
As a second illustration, for the group $G=\mathbb Z_{p^{a_1}}\times\mathbb Z_{p^{a_2}}$ with $a_1\leq a_2$ we obtain: $$\begin{aligned}
d(F_G)
&=&p^{a_1+a_2}(1+p^{-1}(p^2-1)[a_1]_p+p^{a_1-1}(p-1)(a_2-a_1))\\
&=&p^{a_1+a_2-1}(p+(p^2-1)\frac{p^{a_1}-1}{p-1}+p^{a_1}(p-1)(a_2-a_1))\\
&=&p^{a_1+a_2-1}(p+(p+1)(p^{a_1}-1)+p^{a_1}(p-1)(a_2-a_1))\end{aligned}$$
Finally, let us mention that for general non-abelian groups, there does not seem to be any reasonable algebraic formula for the quantity $\delta(G)$. As an example, consider the dihedral group $D_N$, consisting of $N$ symmetries and $N$ rotations. We have: $$\delta(D_N)=\frac{N}{2}+\delta(\mathbb Z_N)$$
Now by remembering the formula for $\mathbb Z_N$, namely $\delta(\mathbb Z_N)=\prod (1+p_i^{-1}(p_i-1)a_i)$, it is quite clear that the $N/2$ factor can not be incorporated in any nice way. See [@ba2].
Let us prove now, following the paper of Nicoara and White [@nwh], that for the Fourier matrices the defect is “attained”, in the sense that the deformations at order 0 are true deformations, at order $\infty$. This is something quite surprising, and non-trivial.
Let us begin with some generalities. We first recall that we have:
The unitary matrices $U\in U_N$ around $1$ are of the form $$U=e^A$$ with $A$ being an antihermitian matrix, $A=-A^*$, around $0$.
This is something well-known. Indeed, assuming that a matrix $A$ is antihermitian, $A=-A^*$, the matrix $U=e^A$ is then unitary: $$UU^*=e^A(e^A)^*=e^Ae^{A^*}=e^Ae^{-A}=1$$
As for the converse, this follows either by using a dimension argument, which shows that the space of antihermitian matrices is the correct one, or by diagonalizing $U$.
Now back to the Hadamard matrices, we will need to rewrite a part of the basic theory of the defect, using deformations of type $t\to U_tH$. First, we have:
Assume that $H\in M_N(\mathbb C)$ is Hadamard, let $A\in M_N(\mathbb C)$ be antihermitian, and consider the matrix $UH$, where $U=e^{tA}$, with $t\in\mathbb R$.
1. $UH$ is Hadamard when $|\sum_{rs}H_{rq}\bar{H}_{sq}(e^{tA})_{pr}(e^{-tA})_{sp}|=1$, for any $p,q$.
2. $UH$ is Hadamard at order $0$ when $(AH)_{pq}\bar{H}_{pq}\in i\mathbb R$, for any $p,q$.
We already know that $UH$ is unitary, so we must find the conditions which guarantee that we have $UH\in M_N(\mathbb T)$, in general, and then at order 0.
\(1) We have the following computation, valid for any unitary $U$: $$\begin{aligned}
|(UH)_{pq}|^2
&=&(UH)_{pq}\overline{(UH)_{pq}}\\
&=&(UH)_{pq}(H^*U^*)_{qp}\\
&=&\sum_{rs}U_{pr}H_{rq}(H^*)_{qs}(U^*)_{sp}\\
&=&\sum_{rs}H_{rq}\bar{H}_{sq}U_{pr}\bar{U}_{ps}\end{aligned}$$
Now with $U=e^{tA}$ as in the statement, we obtain: $$|(e^{tA}H)_{pq}|^2=\sum_{rs}H_{rq}\bar{H}_{sq}(e^{tA})_{pr}(e^{-tA})_{sp}$$
Thus, we are led to the conclusion in the statement.
\(2) Regarding now the second assertion, by using the product rule, the derivative at $t=0$ of the quantity $(e^{tA})_{pr}(e^{-tA})_{sp}$ equals $A_{pr}\delta_{sp}-\delta_{pr}A_{sp}$, so the derivative of the function computed above, taken at $0$, is as follows: $$\begin{aligned}
\frac{\partial |(e^{tA}H)_{pq}|^2}{\partial t}_{|t=0}
&=&\sum_{rs}H_{rq}\bar{H}_{sq}\left(A_{pr}\delta_{sp}-\delta_{pr}A_{sp}\right)\\
&=&\bar{H}_{pq}\sum_rA_{pr}H_{rq}-H_{pq}\sum_s\bar{H}_{sq}A_{sp}\\
&=&\bar{H}_{pq}(AH)_{pq}+H_{pq}\overline{(AH)_{pq}}\\
&=&2Re\left[(AH)_{pq}\bar{H}_{pq}\right]\end{aligned}$$
Here we have used $\sum_s\bar{H}_{sq}A_{sp}=\overline{\sum_sH_{sq}(A^*)_{ps}}=-\overline{(AH)_{pq}}$, coming from $A^*=-A$. Thus, the vanishing of this derivative, for any $p,q$, which is the order $0$ Hadamard condition, amounts to the condition $(AH)_{pq}\bar{H}_{pq}\in i\mathbb R$ in the statement.
In the Fourier matrix case we can go beyond this, and we have:
Given a Fourier matrix $F_G\in M_G(\mathbb C)$, and an antihermitian matrix $A\in M_G(\mathbb C)$, the matrix $UF_G$, where $U=e^{tA}$ with $t\in\mathbb R$, is Hadamard when $$\left|\sum_m\frac{t^m}{m!}\sum_{k+l=m}\binom{m}{l}\sum_sA^k_{p,s+n}(-A)^l_{sp}\right|=\delta_{n0}$$ for any $n,p$, with the indices being $k,l,m\in\mathbb N$, and $n,p,s\in G$.
According to the formula in the proof of Theorem 4.20 (1), we have: $$\begin{aligned}
|(UF_G)_{pq}|^2
&=&\sum_{rs}(F_G)_{rq}(\overline{F_G})_{sq}(e^{tA})_{pr}(e^{-tA})_{sp}\\
&=&\sum_{rs}<r,q><-s,q>(e^{tA})_{pr}(e^{-tA})_{sp}\\
&=&\sum_{rs}<r-s,q>(e^{tA})_{pr}(e^{-tA})_{sp}\end{aligned}$$
By setting $n=r-s$, we can write this formula in the following way: $$\begin{aligned}
|(UF_G)_{pq}|^2
&=&\sum_{ns}<n,q>(e^{tA})_{p,s+n}(e^{-tA})_{sp}\\
&=&\sum_n<n,q>\sum_s(e^{tA})_{p,s+n}(e^{-tA})_{sp}\end{aligned}$$
Since this quantity must be 1 for any $q$, we must have: $$\sum_s(e^{tA})_{p,s+n}(e^{-tA})_{sp}=\delta_{n0}$$
On the other hand, we have the following computation: $$\begin{aligned}
&&\sum_s(e^{tA})_{p,s+n}(e^{-tA})_{sp}\\
&=&\sum_s\sum_{kl}\frac{(tA)^k_{p,s+n}}{k!}\,\cdot\,\frac{(-tA)^l_{sp}}{l!}\\
&=&\sum_{kl}\frac{1}{k!l!}\sum_s(tA)^k_{p,s+n}(-tA)^l_{sp}\\
&=&\sum_{kl}\frac{t^{k+l}}{k!l!}\sum_sA^k_{p,s+n}(-A)^l_{sp}\\
&=&\sum_mt^m\sum_{k+l=m}\frac{1}{k!l!}\sum_sA^k_{p,s+n}(-A)^l_{sp}\\
&=&\sum_m\frac{t^m}{m!}\sum_{k+l=m}\binom{m}{l}\sum_sA^k_{p,s+n}(-A)^l_{sp}\end{aligned}$$
Thus, we obtain the conclusion in the statement.
Following [@nwh], let us construct now the deformations. The result is as follows:
Let $G$ be a finite abelian group, and for any $g,h\in G$, let us set: $$B_{pq}=\begin{cases}
1&{\rm if}\ \exists k\in\mathbb N,p=h^kg,q=h^{k+1}g\\
0&{\rm otherwise}
\end{cases}$$ When $(g,h)\in G^2$ range in suitable cosets, the unitary matrices $$e^{it(B+B^t)}F_G\quad,\quad e^{t(B-B^t)}F_G$$ are both Hadamard, and they show that the defect of $F_G$ is attained.
The proof of this result, from [@nwh], is quite long and technical, based on the Fourier computation from Proposition 4.21 above, the idea being as follows:
\(1) First of all, an elementary algebraic study shows that when $(g,h)\in G^2$ range in some suitable cosets, coming from the proof of Theorem 4.12, the various matrices $B=B^{gh}$ constructed above are distinct, the matrices $A=i(B+B^t)$ and $A'=B-B^t$ are linearly independent, and the number of such matrices equals the defect of $F_G$.
\(2) It is also standard to check that each $B=(B_{pq})$ is a partial isometry, and that $B^k,B^{*k}$ are given by simple formulae. With these ingredients in hand, the Hadamard property follows from the Fourier computation in the proof of Proposition 4.21. Indeed, we can compute the exponentials there, and eventually use the binomial formula.
\(3) Finally, the matrices in the statement can be shown to be non-equivalent, and this is something more technical, for which we refer to [@nwh]. With this last ingredient in hand, a comparison with Theorem 4.12 shows that the defect of $F_G$ is indeed attained, in the sense that all order 0 deformations are actually true deformations. See [@nwh].
The above result is something quite surprising, which came a long time after the original defect paper [@tz1], and even more time after the early computations in [@kar].
Let us also mention that [@nwh] was written in terms of subfactor-theoretic commuting squares, with a larger class of squares actually under investigation. We will discuss the relation between Hadamard matrices and commuting squares in section 11 below.
Special matrices
================
We have seen in the previous section that the defect theory from [@tz1] can be successfully applied to the real Hadamard matrices, and to the Fourier matrices.
We discuss here a number of more specialized aspects, regarding the tensor products, the Diţă deformations of such tensor products, the Butson and the regular matrices, the master Hadamard matrices, the McNulty-Weigert matrices, and finally the partial Hadamard matrix case, following [@aff], [@ba2], [@ba3], [@bop], [@mwe], [@tz1], [@tz2].
Let us begin with some generalities. The standard defect equations are those in Theorem 4.7, naturally coming from the computations in the proof of Theorem 4.4: $$d(H)=\dim_\mathbb R\left\{A\in M_N(\mathbb R)\Big|\sum_kH_{ik}\bar{H}_{jk}(A_{ik}-A_{jk})=0,\forall i,j\right\}$$
However, we have seen that for various concrete questions, some manipulations on these equations are needed. To be more precise, the study in the real case was based on the transformation $E_{ij}=\sum_kH_{ik}\bar{H}_{jk}A_{ik}$ from Proposition 4.9, the study in the Fourier matrix case was based on the transformation $P=AF_G$ from Theorem 4.11, and the fine study in the Fourier matrix case was based on the $t\to e^{tA}H$ method from Proposition 4.21.
In short, each type of complex Hadamard matrix seems to require its own general theory, and defect manipulations, and there is no way of escaping from this.
In view of this phenomenon, let us first present some further manipulations on the defect equations, which are all quite natural, and potentially useful. First, we have:
The defect $d(H)$ is the corank of the matrix $$Y_{ij,ab}
=(\delta_{ia}-\delta_{ja})\begin{cases}
Re(H_{ib}\bar{H}_{jb})&{\rm if}\ i<j\\
Im(H_{ib}\bar{H}_{jb})&{\rm if}\ i>j\\
*&{\rm if}\ i=j
\end{cases}$$ where $*$ can be any quantity, its coefficient being $0$ anyway.
The matrix of the system defining the enveloping tangent space is: $$X_{ij,ab}=(\delta_{ia}-\delta_{ja})H_{ib}\bar{H}_{jb}$$
However, since we are only looking for real solutions $A\in M_N(\mathbb R)$, we have to take into account the real and imaginary parts. But this is not a problem, because the $(ij)$ equation coincides with the $(ji)$ one, which we can therefore discard. More precisely, if we set $Y$ as above, then we obtain precisely the original system. Thus the defect of $H$ is the corank of $Y$.
As an illustration, for the Fourier matrix $F_N$ we have the following formula, where $e(i,j)\in\{-1,0,1\}$ is negative if $i<j$, null for $i=j$, and positive for $i>j$: $$Y_{ij,ab}=\frac{1}{2}(\delta_{ia}-\delta_{ja})(w^{(i-j)b}+e(i,j)w^{(j-i)b})$$
Observe in particular that for the Fourier matrix $F_2$ we have: $$Y=\begin{pmatrix}0&0&0&0\\ 1&-1&-1&1\\ -1&1&1&-1\\ 0&0&0&0\end{pmatrix}$$
Here the corank is $3$, but, unfortunately, this cannot be seen on the characteristic polynomial, which is $P(\lambda)=\lambda^4$. The problem is that our matrix, and more precisely its middle $2\times 2$ block, is not diagonalizable. This phenomenon seems to hold in general.
A second possible manipulation, which is of interest in connection with quantum groups and subfactor theory, concerns the reformulation of the defect in terms of the profile matrix of $H$. Indeed, it is known that both the quantum group and subfactor associated to $H$ depend only on this profile matrix, and this will be explained in sections 10-11 below.
We do not have an answer here, but our conjecture is as follows:
The profile matrix of $H$, namely $$M_{ia}^{jb}=\sum_kH_{ik}\bar{H}_{jk}\bar{H}_{ak}H_{bk}$$ determines the enveloping tangent space $\widetilde{T}_HX_N$, or at least the defect $d(H)$.
All this is of course related to the general question on whether the associated quantum group or subfactor can “see” the defect, via various representation theory invariants. For a number of further speculations on all this, and on some related glow questions as well, in relation with the general theory in [@dif], [@dsh], we refer to [@ba2], [@ba3], [@ba5].
Let us get back now to our original goal here, namely computing the defect for various classes of special matrices. For the tensor products, we have the following result:
For a tensor product $L=H\otimes K$ we have $$d(L)\geq d(H)d(K)$$ coming from an inclusion of linear spaces $\widetilde{T}_HX_M\otimes\widetilde{T}_KX_N\subset\widetilde{T}_LX_{MN}$.
For a matrix $A=B\otimes C$, we have the following formulae: $$\begin{aligned}
\sum_{kc}(H\otimes K)_{ia,kc}\overline{(H\otimes K)}_{jb,kc}A_{ia,kc}
&=&\sum_kH_{ik}\bar{H}_{jk}B_{ik}\sum_cK_{ac}\bar{K}_{bc}C_{ac}\\
\sum_{kc}(H\otimes K)_{ia,kc}\overline{(H\otimes K)}_{jb,kc}A_{jb,kc}
&=&\sum_kH_{ik}\bar{H}_{jk}B_{jk}\sum_cK_{ac}\bar{K}_{bc}C_{bc}\end{aligned}$$
Now by assuming $B\in\widetilde{T}_HX_M$ and $C\in\widetilde{T}_KX_N$, the two quantities on the right are equal. Thus we have indeed $A\in\widetilde{T}_LX_{MN}$, and we are done.
Observe that we do not have equality in the tensor product estimate, even in very simple cases. For instance if we consider two Fourier matrices $F_2$, we obtain: $$d(F_2\otimes F_2)=10>9=d(F_2)^2$$
In fact, besides the isotypic decomposition results from section 4 above, valid for the Fourier matrices, there does not seem to be anything conceptual on this subject. We will come back to this, however, in Theorem 5.6 below, with a slight advance on all this.
Let us discuss now the Diţă deformations. Here the study is even more difficult, and we basically have just one result, when the deformation matrix is as follows:
A rectangular matrix $Q\in M_{M\times N}(\mathbb T)$ is called “dephased and elsewhere generic” if the entries on its first row and column are all equal to $1$, and the remaining $(M-1)(N-1)$ entries are algebraically independent over $\mathbb Q$.
Here the last condition takes of course into account the fact that the entries of $Q$ themselves have modulus 1, the independence assumption being modulo this fact.
With this convention made, we have the following result:
If $H\in X_M,K\in X_N$ are dephased, of Butson type, and $Q\in M_{M\times N}(\mathbb T)$ is dephased and elsewhere generic, then $A=(A_{ia,kc})$ belongs to $\widetilde{T}_{H\otimes_QK}X_{MN}$ iff $$A_{ac}^{ij}=A_{bc}^{ij}\quad,\quad A_{ac}^{ij}=\overline{A_{ac}^{ji}}\quad,\quad (A_{xy}^{ii})_{xy}\in\widetilde{T}_KX_N$$ hold for any $a,b,c$ and $i\neq j$, where $A_{ac}^{ij}=\sum_kH_{ik}\bar{H}_{jk}A_{ia,kc}$.
Consider the system for the enveloping tangent space, namely: $$\sum_{kc}(H\otimes_QK)_{ia,kc}\overline{(H\otimes_QK)}_{jb,kc}(A_{ia,kc}-A_{jb,kc})=0$$
We have $(H\otimes_QK)_{ia,jb}=q_{ib}H_{ij}K_{ab}$, and so our system is: $$\sum_cq_{ic}\bar{q}_{jc}K_{ac}\bar{K}_{bc}\sum_kH_{ik}\bar{H}_{jk}(A_{ia,kc}-A_{jb,kc})=0$$
Consider now the variables $A_{ac}^{ij}=\sum_kH_{ik}\bar{H}_{jk}A_{ia,kc}$ in the statement. We have: $$\overline{A_{ac}^{ij}}=\sum_k\bar{H}_{ik}H_{jk}A_{ia,kc}=\sum_kH_{jk}\bar{H}_{ik}A_{ia,kc}$$
Thus, in terms of these variables, our system becomes simply: $$\sum_cq_{ic}\bar{q}_{jc}K_{ac}\bar{K}_{bc}(A_{ac}^{ij}-\overline{A_{bc}^{ji}})=0$$
More precisely, the above equations must hold for any $i,j,a,b$. By distinguishing now two cases, depending on whether $i,j$ are equal or not, the situation is as follows:
\(1) Case $i\neq j$. In this case, let us look at the row vector of parameters, namely: $$(q_{ic}\bar{q}_{jc})_c=(1,q_{i1}\bar{q}_{j1},\ldots,q_{iM}\bar{q}_{jM})$$
Now since $Q$ was assumed to be dephased and elsewhere generic, and because of our assumption $i\neq j$, the entries of the above vector are linearly independent over $\bar{\mathbb Q}$. But, since by linear algebra we can restrict attention to the computation of the solutions over $\bar{\mathbb Q}$, the $i\neq j$ part of our system simply becomes $A_{ac}^{ij}=\overline{A_{bc}^{ji}}$, for any $a,b,c$ and any $i\neq j$. Now by making $a,b,c$ vary, we are led to the following equations: $$A_{ac}^{ij}=A_{bc}^{ij},\quad A_{ac}^{ij}=\overline{A_{ac}^{ji}},\quad\forall a,b,c,i\neq j$$
\(2) Case $i=j$. In this case the parameters cancel, and our equations become: $$\sum_cK_{ac}\bar{K}_{bc}(A_{ac}^{ii}-\overline{A_{bc}^{ii}})=0,\quad\forall a,b,i$$
On the other hand, we have $A_{ac}^{ii}=\sum_kA_{ia,kc}$, and so our equations become: $$\sum_cK_{ac}\bar{K}_{bc}(A_{ac}^{ii}-A_{bc}^{ii})=0,\quad\forall a,b,i$$
But these are precisely the equations for the space $\widetilde{T}_KX_N$, and we are done.
Let us go back now to the usual tensor product situation, and look at the affine cones. The problem here is that of finding the biggest subcone of $T_{H\otimes K}^\circ X_{MN}$, obtained by gluing $T_H^\circ X_M,T_K^\circ X_N$. Our answer here, which takes into account the two “semi-trivial” cones coming from the left and right Diţă deformations, is as follows:
The cones $T_H^\circ X_M=\{B\}$ and $T_K^\circ X_N=\{C\}$ glue via the formulae $$A_{ia,jb}=\lambda B_{ij}+\psi_jC_{ab}+X_{ia}+Y_{jb}+F_{aj}$$ $$A_{ia,jb}=\phi_bB_{ij}+\mu C_{ab}+X_{ia}+Y_{jb}+E_{ib}$$ producing in this way two subcones of the affine cone $T_{H\otimes K}^\circ X_{MN}=\{A\}$.
Indeed, the idea is that $X_{ia},Y_{jb}$ are the trivial parameters, and that $E_{ib},F_{aj}$ are the Diţă parameters. In order to prove the result, we use the criterion in Theorem 4.4 (3) above. So, given a matrix $A=(A_{ia,jb})$, consider the following quantity: $$P=\sum_{kc}H_{ik}\bar{H}_{jk}K_{ac}\bar{K}_{bc}q^{A_{ia,kc}-A_{jb,kc}}$$
Let us prove now the first statement, namely that for any choice of matrices $B\in T_H^\circ X_M,C\in T_H^\circ X_N$ and of parameters $\lambda,\psi_j,X_{ia},Y_{jb},F_{aj}$, the first matrix $A=(A_{ia,jb})$ constructed in the statement belongs indeed to $T_{H\otimes K}^\circ X_{MN}$. We have: $$A_{ia,kc}=\lambda B_{ik}+\psi_kC_{ac}+X_{ia}+Y_{kc}+F_{ak}$$ $$A_{jb,kc}=\lambda B_{jk}+\psi_kC_{bc}+X_{jb}+Y_{kc}+F_{bk}$$
Now by subtracting, we obtain: $$A_{ia,kc}-A_{jb,kc}=\lambda(B_{ik}-B_{jk})+\psi_k(C_{ac}-C_{bc})+(X_{ia}-X_{jb})+(F_{ak}-F_{bk})$$
It follows that the above quantity $P$ is given by: $$\begin{aligned}
P
&=&\sum_{kc}H_{ik}\bar{H}_{jk}K_{ac}\bar{K}_{bc}q^{\lambda(B_{ik}-B_{jk})+\psi_k(C_{ac}-C_{bc})+(X_{ia}-X_{jb})+(F_{ak}-F_{bk})}\\
&=&q^{X_{ia}-X_{jb}}\sum_kH_{ik}\bar{H}_{jk}q^{F_{ak}-F_{bk}}q^{\lambda(B_{ik}-B_{jk})}\sum_cK_{ac}\bar{K}_{bc}(q^{\psi_k})^{C_{ac}-C_{bc}}\\
&=&\delta_{ab}q^{X_{ia}-X_{ja}}\sum_kH_{ik}\bar{H}_{jk}(q^\lambda)^{B_{ik}-B_{jk}}\\
&=&\delta_{ab}\delta_{ij}\end{aligned}$$
Thus Theorem 4.4 (3) applies and tells us that we have $A\in T_{H\otimes K}^\circ X_{MN}$, as claimed. In the second case now, the proof is similar. First, we have: $$A_{ia,kc}=\phi_cB_{ik}+\mu C_{ac}+X_{ia}+Y_{kc}+E_{ic}$$ $$A_{jb,kc}=\phi_cB_{jk}+\mu C_{bc}+X_{jb}+Y_{kc}+E_{jc}$$
Thus by subtracting, we obtain: $$A_{ia,kc}-A_{jb,kc}=\phi_c(B_{ik}-B_{jk})+\mu(C_{ac}-C_{bc})+(X_{ia}-X_{jb})+(E_{ic}-E_{jc})$$
It follows that the above quantity $P$ is given by: $$\begin{aligned}
P
&=&\sum_{kc}H_{ik}\bar{H}_{jk}K_{ac}\bar{K}_{bc}q^{\phi_c(B_{ik}-B_{jk})+\mu(C_{ac}-C_{bc})+(X_{ia}-X_{jb})+(E_{ic}-E_{jc})}\\
&=&q^{X_{ia}-X_{jb}}\sum_cK_{ac}\bar{K}_{bc}q^{E_{ic}-E_{jc}}q^{\mu(C_{ac}-C_{bc})}\sum_kH_{ik}\bar{H}_{jk}(q^{\phi_c})^{B_{ik}-B_{jk}}\\
&=&\delta_{ij}q^{X_{ia}-X_{ib}}\sum_cK_{ac}\bar{K}_{bc}(q^\mu)^{C_{ac}-C_{bc}}=\delta_{ij}\delta_{ab}\end{aligned}$$
Thus Theorem 4.4 (3) applies again, and gives the result.
We believe Theorem 5.6 above to be “optimal”, in the sense that nothing more can be said about the affine tangent space $T_{H\otimes K}^\circ X_{MN}$, in the general case. See [@ba3].
Let us discuss now some rationality questions, in relation with:
The rational enveloping tangent space at $H\in X_N$ is $$[\widetilde{T}_HX_N]_{\mathbb Q}=\widetilde{T}_HX_N\cap M_N(\mathbb Q)$$ and the dimension $d_\mathbb Q(H)$ of this space is called rational defect of $H$.
Observe that the first notion can be extended to all the tangent cones at $H$, and by using an arbitrary field $\mathbb K\subset\mathbb C$ instead of $\mathbb Q$. Indeed, we can set: $$[T_H^* X_N]_\mathbb K=T_H^*X_N\cap M_N(\mathbb K)$$
However, in what follows we will be interested only in the objects constructed in Definition 5.7. It follows from definitions that $d_\mathbb Q(H)\leq d(H)$, and we have:
For a regular Hadamard matrix $H\in M_N(\mathbb C)$ we have $$\widetilde{T}_HC_N=\mathbb C\cdot[\widetilde{T}_HC_N]_{\mathbb Q}$$ and so the defect equals the rational defect, $d_\mathbb Q(H)=d(H)$.
For the usual Fourier matrices $F_N$, this definitely holds at $N=p$ prime, because the minimal polynomial of $w$ over $\mathbb Q$ is simply $P(x)=1+x+\ldots+x^{p-1}$. The case $N=p^2$ also has a simple solution, coming from the fact that all the $p\times p$ blocks of our matrix $A$ can be shown to coincide. In general, this method should probably work for $N=p^k$, or even for any $N\in\mathbb N$. So, our conjecture would be that this holds indeed for the Fourier matrices, usual or general, and more generally for the Butson matrices, and even more generally, modulo Conjecture 3.22 above, for the regular matrices. See [@ba3].
Let us discuss now defect computations for a very interesting class of Hadamard matrices, namely the “master” ones, introduced in [@aff]:
A master Hadamard matrix is an Hadamard matrix of the form $$H_{ij}=\lambda_i^{n_j}$$ with $\lambda_i\in\mathbb T,n_j\in\mathbb R$. The associated “master function” is $f(z)=\sum_jz^{n_j}$.
Observe that with $\lambda_i=e^{im_i}$ we have $H_{ij}=e^{im_in_j}$. The basic example of such a matrix is the Fourier matrix $F_N$, having master function as follows: $$f(z)=\frac{z^N-1}{z-1}$$
Observe that, in terms of $f$, the Hadamard condition on $H$ is simply: $$f\left(\frac{\lambda_i}{\lambda_j}\right)=N\delta_{ij}$$
These matrices were introduced in [@aff], the motivating remark there being the fact that the following operator defines a representation of the Temperley-Lieb algebra [@tli]: $$R=\sum_{ij}e_{ij}\otimes\Lambda^{n_i-n_j}$$
At the level of examples, the first observation, from [@aff], is that the standard $4\times 4$ complex Hadamard matrices are, with 2 exceptions, master Hadamard matrices:
The following complex Hadamard matrix, with $|q|=1$, $$F_{2,2}^q=\begin{pmatrix}
1&1&1&1\\
1&-1&1&-1\\
1&q&-1&-q\\
1&-q&-1&q
\end{pmatrix}$$ is a master Hadamard matrix, for any $q\neq\pm1$.
We use the exponentiation convention $(e^{it})^r=e^{itr}$ for $t\in[0,2\pi)$ and $r\in\mathbb R$. Since $q^2\neq1$, we can find $k\in\mathbb R$ such that $q^{2k}=-1$, and so our matrix becomes: $$F_{2,2}^q=\begin{pmatrix}
1^0&1^1&1^{2k}&1^{2k+1}\\
(-1)^0&(-1)^1&(-1)^{2k}&(-1)^{2k+1}\\
q^0&q^1&q^{2k}&q^{2k+1}\\
(-q)^0&(-q)^1&(-q)^{2k}&(-q)^{2k+1}\\
\end{pmatrix}$$
Now if we pick $\lambda\neq1$ and write $1=\lambda^x,-1=\lambda^y,q=\lambda^z,-q=\lambda^t$, we are done.
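As a numerical double-check of the Hadamard property of $F_{2,2}^q$, not needed for the proof above, one can verify the row orthogonality directly, for a sample value of $q$ on the unit circle (the value below is ours, for illustration only), assuming NumPy:

```python
import numpy as np

q = np.exp(0.7j)                  # any q on the unit circle, different from 1 and -1
F = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  q, -1, -q],
              [1, -q, -1,  q]])
assert np.allclose(np.abs(F), 1.0)                  # entries on the unit circle
assert np.allclose(F @ F.conj().T, 4 * np.eye(4))   # rows pairwise orthogonal
```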
We have the following generalization of Proposition 5.10, once again from [@aff]:
$F_M\otimes_QF_N$ is master Hadamard, for any $Q\in M_{M\times N}(\mathbb T)$ of the form $$Q_{ib}=q^{i(Np_b+b)}$$ where $q=e^{2\pi i/MNk}$ with $k\in\mathbb N$, and $p_0,\ldots,p_{N-1}\in\mathbb R$.
The main construction in [@aff] is, in terms of master functions, as follows: $$f(z)=f_M(z^{Nk})f_N(z)$$
Here $k\in\mathbb N$, and the functions on the right are by definition as follows: $$f_M(z)=\sum_iz^{Mr_i+i}\quad,\quad f_N(z)=\sum_az^{Np_a+a}$$
We use the eigenvalues $\lambda_{ia}=q^iw^a$, where $w=e^{2\pi i/N}$, and where $q^{Nk}=\nu$, where $\nu^M=1$. Observe that, according to $f(z)=f_M(z^{Nk})f_N(z)$, the exponents are: $$n_{jb}=Nk(Mr_j+j)+Np_b+b$$
Thus the associated master Hadamard matrix is given by: $$\begin{aligned}
H_{ia,jb}
&=&(q^iw^a)^{Nk(Mr_j+j)+Np_b+b}\\
&=&\nu^{ij}q^{i(Np_b+b)}w^{a(Np_b+b)}\\
&=&\nu^{ij}w^{ab}q^{i(Np_b+b)}\end{aligned}$$
Now since $(F_M\otimes F_N)_{ia,jb}=\nu^{ij}w^{ab}$, we get $H=F_M\otimes_QF_N$ with $Q_{ib}=q^{i(Np_b+b)}$, as claimed. Observe that $Q$ itself is a “master matrix”, because the indices split.
In view of the above examples, and of the lack of other known examples of master Hadamard matrices, the following conjecture was made in [@aff]:
The master Hadamard matrices appear as Diţă deformations of $F_N$.
There is a relation here with the notions of defect and isolation, that we would like to discuss now. First, we have the following defect computation:
The defect of a master Hadamard matrix is given by $$d(H)=\dim_\mathbb R\left\{B\in M_N(\mathbb C)\Big|\bar{B}=\frac{1}{N}BL, (BR)_{i,ij}=(BR)_{j,ij}\ \forall i,j\right\}$$ where $L_{ij}=f(\frac{1}{\lambda_i\lambda_j})$ and $R_{i,jk}=f(\frac{\lambda_j}{\lambda_i\lambda_k})$, $f$ being the master function.
The first order deformation equations are: $$\sum_kH_{ik}\bar{H}_{jk}(A_{ik}-A_{jk})=0$$
With $H_{ij}=\lambda_i^{n_j}$ we have $H_{ik}\bar{H}_{jk}=(\lambda_i/\lambda_j)^{n_k}$, and so the defect is given by: $$d(H)=\dim_\mathbb R\left\{A\in M_N(\mathbb R)\Big|\sum_kA_{ik}\left(\frac{\lambda_i}{\lambda_j}\right)^{n_k}=\sum_kA_{jk}\left(\frac{\lambda_i}{\lambda_j}\right)^{n_k}\ \forall i,j\right\}$$
Now, pick $A\in M_N(\mathbb C)$ and set $B=AH^t$, so that $A=\frac{1}{N}B\bar{H}$. First, we have: $$\begin{aligned}
A\in M_N(\mathbb R)
&\iff&B\bar{H}=\bar{B}H\\
&\iff&\bar{B}=\frac{1}{N}B\bar{H}H^*\end{aligned}$$
On the other hand, the matrix on the right is given by: $$(\bar{H}H^*)_{ij}
=\sum_k\bar{H}_{ik}\bar{H}_{jk}
=\sum_k(\lambda_i\lambda_j)^{-n_k}
=L_{ij}$$
Thus $A\in M_N(\mathbb R)$ if and only the condition $\bar{B}=\frac{1}{N}BL$ in the statement is satisfied. Regarding now the second condition on $A$, observe that with $A=\frac{1}{N}B\bar{H}$ we have: $$\begin{aligned}
\sum_kA_{ik}\left(\frac{\lambda_i}{\lambda_j}\right)^{n_k}
&=&\frac{1}{N}\sum_{ks}B_{is}\left(\frac{\lambda_i}{\lambda_j\lambda_s}\right)^{n_k}\\
&=&\frac{1}{N}\sum_sB_{is}R_{s,ij}\\
&=&\frac{1}{N}(BR)_{i,ij}\end{aligned}$$
Thus the second condition on $A$ reads $(BR)_{i,ij}=(BR)_{j,ij}$, and we are done.
In view of the above results, a conjecture would be that the only isolated master Hadamard matrices are the Fourier matrices $F_p$, with $p$ prime. See [@bop].
Let us discuss now yet another interesting construction of complex Hadamard matrices, due to McNulty and Weigert [@mwe]. The matrices constructed there generalize the Tao matrix $T_6$, and usually have the interesting feature of being isolated.
The construction in [@mwe] uses the theory of MUB (mutually unbiased bases), as developed in [@bbe], [@deb], but we will follow here a more direct approach, using basic Gauss sums, from [@bop].
The starting observation from [@mwe] is as follows:
Assuming that $K\in M_N(\mathbb C)$ is Hadamard, so is the matrix $$H_{ia,jb}=\frac{1}{\sqrt{Q}}K_{ij}(L_i^*R_j)_{ab}$$ provided that $\{L_1,\ldots,L_N\}\subset\sqrt{Q}U_Q$ and $\{R_1,\ldots,R_N\}\subset\sqrt{Q}U_Q$ are such that each of the matrices $\frac{1}{\sqrt{Q}}L_i^*R_j\in\sqrt{Q}U_Q$, with $i,j=1,\ldots,N$, is Hadamard.
The check of the unitarity is done as follows: $$\begin{aligned}
<H_{ia},H_{kc}>
&=&\frac{1}{Q}\sum_{jb}K_{ij}(L_i^*R_j)_{ab}\bar{K}_{kj}\overline{(L_k^*R_j)}_{cb}\\
&=&\sum_jK_{ij}\bar{K}_{kj}(L_i^*L_k)_{ac}\\
&=&N\delta_{ik}(L_i^*L_k)_{ac}\\
&=&NQ\delta_{ik}\delta_{ac}\end{aligned}$$
The entries being in addition on the unit circle, we are done.
As input for the above, we can use the following well-known Fourier construction:
For $q\geq3$ prime, the matrices $\{F_q,DF_q,\ldots,D^{q-1}F_q\}$, where $$D=diag\left(1,1,w,w^3,w^6,w^{10},\ldots,w^{\frac{q^2-1}{8}},\ldots,w^{10},w^6,w^3,w\right)$$ with $w=e^{2\pi i/q}$, are such that $\frac{1}{\sqrt{q}}E_i^*E_j$ is complex Hadamard, for any $i\neq j$.
With $0,1,\ldots,q-1$ as indices, the formula of the above matrix $D$ is: $$D_c=w^{0+1+\ldots+(c-1)}=w^{\frac{c(c-1)}{2}}$$
Since we have $\frac{1}{\sqrt{q}}E_i^*E_j\in\sqrt{q}U_q$, we just need to check that these matrices have entries belonging to $\mathbb T$, for any $i\neq j$. With $k=j-i$, these entries are given by: $$\frac{1}{\sqrt{q}}(E_i^*E_j)_{ab}=\frac{1}{\sqrt{q}}(F_q^*D^kF_q)_{ab}=\frac{1}{\sqrt{q}}\sum_cw^{c(b-a)}D_c^k$$
Now observe that with $s=b-a$, we have the following formula: $$\begin{aligned}
\left|\sum_cw^{cs}D_c^k\right|^2
&=&\sum_{cd}w^{cs-ds}w^{\frac{c(c-1)}{2}\cdot k-\frac{d(d-1)}{2}\cdot k}\\
&=&\sum_{cd}w^{(c-d)\left(\frac{c+d-1}{2}\cdot k+s\right)}\\
&=&\sum_{de}w^{e\left(\frac{2d+e-1}{2}\cdot k+s\right)}\\
&=&\sum_e\left(w^{\frac{e(e-1)}{2}\cdot k+es}\sum_dw^{edk}\right)\\
&=&\sum_ew^{\frac{e(e-1)}{2}\cdot k+es}\cdot q\delta_{e0}\\
&=&q\end{aligned}$$
Thus the entries are on the unit circle, and we are done.
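Here is a quick numerical verification of this proposition, for the sample value $q=7$ (our choice, for illustration), assuming NumPy; the two checks correspond to the entries being on the unit circle, and to the rescaled unitarity:

```python
import numpy as np

q = 7                                                   # an odd prime
w = np.exp(2j * np.pi / q)
F = np.array([[w ** (i * j) for j in range(q)] for i in range(q)])
D = np.diag([w ** (c * (c - 1) // 2) for c in range(q)])

for k in range(1, q):
    G = F.conj().T @ np.linalg.matrix_power(D, k) @ F / np.sqrt(q)
    assert np.allclose(np.abs(G), 1.0)                  # entries on the unit circle
    assert np.allclose(G @ G.conj().T, q * np.eye(q))   # rescaled unitary
```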
We recall that the Legendre symbol is defined as follows: $$\left(\frac{s}{q}\right)=\begin{cases}
0&{\rm if}\ s=0\\
1&{\rm if}\ \exists\,\alpha,s=\alpha^2\\
-1&{\rm if}\not\!\exists\,\alpha,s=\alpha^2
\end{cases}$$
Here, and in what follows, all the numbers are taken modulo $q$. We have:
The matrices $G_k=\frac{1}{\sqrt{q}}F_q^*D^kF_q$, with $D=diag(w^{\frac{c(c-1)}{2}})$ being as above, and with $k\neq0$ are circulant, their first row vectors $V^k$ being given by $$V^k_i=\delta_q\left(\frac{k/2}{q}\right)w^{\frac{q^2-1}{8}\cdot k}\cdot w^{-\frac{\frac{i}{k}(\frac{i}{k}-1)}{2}}$$ where $\delta_q=1$ if $q=1(4)$ and $\delta_q=i$ if $q=3(4)$, and with all inverses being taken in $\mathbb Z_q$.
This is a standard exercise on quadratic Gauss sums. First of all, the matrices $G_k$ in the statement are indeed circulant, their first row vectors being given by: $$V^k_i=\frac{1}{\sqrt{q}}\sum_cw^{\frac{c(c-1)}{2}\cdot k+ic}$$
Let us first compute the square of this quantity. We have: $$(V_i^k)^2=\frac{1}{q}\sum_{cd}w^{\left[\frac{c(c-1)}{2}+\frac{d(d-1)}{2}\right]k+i(c+d)}$$
The point now is that the sum $S$ on the right, which has $q^2$ terms, decomposes as follows, where $x$ is a certain exponent, depending on $q,i,k$: $$S=\begin{cases}
(q-1)(1+w+\ldots+w^{q-1})+qw^x&{\rm if}\ q=1(4)\\
(q+1)(1+w+\ldots+w^{q-1})-qw^x&{\rm if}\ q=3(4)
\end{cases}$$
We conclude that we have a formula as follows, where $\delta_q\in\{1,i\}$ is as in the statement, so that $\delta_q^2\in\{1,-1\}$ is given by $\delta_q^2=1$ if $q=1(4)$ and $\delta_q^2=-1$ if $q=3(4)$: $$(V_i^k)^2=\delta_q^2\,w^x$$
In order to compute now the exponent $x$, we must go back to the above calculation of the sum $S$. We successively have:
– First of all, at $k=1,i=0$ we have $x=\frac{q^2-1}{4}$.
– By translation we obtain $x=\frac{q^2-1}{4}-i(i-1)$, at $k=1$ and any $i$.
– By replacing $w\to w^k$ we obtain $x=\frac{q^2-1}{4}\cdot k-\frac{i}{k}(\frac{i}{k}-1)$, at any $k\neq0$ and any $i$.
Summarizing, we have computed the square of the quantity that we are interested in, the formula being as follows, with $\delta_q$ being as in the statement: $$(V^k_i)^2=\delta_q^2\cdot w^{\frac{q^2-1}{4}\cdot k}\cdot w^{-\frac{i}{k}(\frac{i}{k}-1)}$$
By extracting now the square root, we obtain a formula as follows: $$V^k_i=\pm\delta_q\cdot w^{\frac{q^2-1}{8}\cdot k}\cdot w^{-\frac{\frac{i}{k}(\frac{i}{k}-1)}{2}}$$
The computation of the missing sign is non-trivial, but by using the theory of quadratic Gauss sums, and more specifically a result of Gauss, computing precisely this kind of sign, we conclude that we have indeed a Legendre symbol, $\pm=\left(\frac{k/2}{q}\right)$, as claimed.
Let us combine now all the above results. We obtain the following statement:
Let $q\geq3$ be prime, consider two subsets $S,T\subset\{0,1,\ldots,q-1\}$ satisfying $|S|=|T|$ and $S\cap T=\emptyset$, and write $S=\{s_1,\ldots,s_N\}$ and $T=\{t_1,\ldots,t_N\}$. Then $$H_{ia,jb}=K_{ij}V^{t_j-s_i}_{b-a}$$ where $V$ is as above, is Hadamard, provided that $K\in M_N(\mathbb C)$ is.
This follows indeed by using the general construction in Theorem 5.14 above, with input coming from Proposition 5.15 and Proposition 5.16.
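As a numerical illustration of Theorem 5.17, here is a minimal sketch, assuming NumPy, with the concrete choices $q=7$, $S=\{0,1,2\}$, $T=\{3,4,5\}$ and $K=F_3$ being ours, for illustration only; the vectors $V^k$ are computed directly as Gauss sums, as in the proof of Proposition 5.16:

```python
import numpy as np

q = 7                                   # an odd prime
w = np.exp(2j * np.pi / q)

def V(k):
    # first row of (1/sqrt q) F_q^* D^k F_q, computed directly as a Gauss sum
    return np.array([sum(w ** (k * c * (c - 1) // 2 + i * c) for c in range(q))
                     for i in range(q)]) / np.sqrt(q)

S, T = [0, 1, 2], [3, 4, 5]             # |S| = |T|, disjoint subsets of Z_q
N = len(S)
K = np.array([[np.exp(2j * np.pi * i * j / N) for j in range(N)]
              for i in range(N)])       # K = F_3, which is Hadamard

H = np.zeros((N * q, N * q), dtype=complex)
for i in range(N):
    for j in range(N):
        v = V((T[j] - S[i]) % q)
        for a in range(q):
            for b in range(q):
                H[i * q + a, j * q + b] = K[i, j] * v[(b - a) % q]

assert np.allclose(np.abs(H), 1.0)                         # entries on the unit circle
assert np.allclose(H @ H.conj().T, N * q * np.eye(N * q))  # rows pairwise orthogonal
```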
As explained in [@mwe], the above construction covers many interesting examples of Hadamard matrices, known from [@tz1], [@tz2] to be isolated, such as the Tao matrix: $$T_6=\begin{pmatrix}
1&1&1&1&1&1\\
1&1&w&w&w^2&w^2\\
1&w&1&w^2&w^2&w\\
1&w&w^2&1&w&w^2\\
1&w^2&w^2&w&1&w\\
1&w^2&w&w^2&w&1
\end{pmatrix}$$
In general, in order to find isolated matrices, the idea from [@mwe] is that of starting with an isolated matrix, and then using suitable sets $S,T$. The defect computations are, however, quite difficult. As a concrete statement, we have the following conjecture:
The complex Hadamard matrix constructed in Theorem 5.17 is isolated, provided that:
1. $K$ is an isolated Fourier matrix, of prime order.
2. $S,T$ consist of consecutive odd numbers, and consecutive even numbers.
This statement is supported by the isolation result for $T_6$, and by several computer simulations in [@mwe]. For further details on all this, we refer to [@mwe], and to [@bop].
As a final topic now, we would like to discuss an extension of a part of the above results, to the case of the partial Hadamard matrices. The extension, done in [@bop], is quite straightforward, but a number of subtleties appear.
First of all, we can talk about deformations of PHM, as follows:
Let $H\in X_{M,N}$ be a partial complex Hadamard matrix.
1. A deformation of $H$ is a smooth function $f:\mathbb T_1\to (X_{M,N})_H$.
2. The deformation is called “affine” if $f_{ij}(q)=H_{ij}q^{A_{ij}}$, with $A\in M_{M\times N}(\mathbb R)$.
3. We call “trivial” the deformations $f_{ij}(q)=H_{ij}q^{a_i+b_j}$, with $a\in\mathbb R^M,b\in\mathbb R^N$.
We have $X_{M,N}=M_{M\times N}(\mathbb T)\cap\sqrt{N}U_{M,N}$, where $U_{M,N}\subset M_{M\times N}(\mathbb C)$ is the set of matrices having all rows of norm 1, and pairwise orthogonal. This remark leads us to:
Associated to a point $H\in X_{M,N}$ are:
1. The enveloping tangent space: $\widetilde{T}_HX_{M,N}=T_HM_{M\times N}(\mathbb T)\cap T_H\sqrt{N}U_{M,N}$.
2. The tangent cone $T_HX_{M,N}$: the set of tangent vectors to the deformations of $H$.
3. The affine tangent cone $T_H^\circ X_{M,N}$: same as above, using affine deformations only.
4. The trivial tangent cone $T_H^\times X_{M,N}$: as above, using trivial deformations only.
Observe that $\widetilde{T}_HX_{M,N},T_HX_{M,N}$ are real vector spaces, and that $T_HX_{M,N},T_H^\circ X_{M,N}$ are two-sided cones, $\lambda\in\mathbb R,A\in T\implies\lambda A\in T$. Also, we have inclusions as follows: $$T_H^\times X_{M,N}\subset T_H^\circ X_{M,N}\subset T_HX_{M,N}\subset\widetilde{T}_HX_{M,N}$$
Since $\widetilde{T}_HX_{M,N}$ is a real vector space, of particular interest is the computation of its dimension $d(H)=\dim(\widetilde{T}_HX_{M,N})$, called defect of $H$. We have:
Let $H\in X_{M,N}$, and pick $K\in\sqrt{N}U_N$ extending $H$.
1. $\widetilde{T}_HX_{M,N}\simeq\{A\in M_{M\times N}(\mathbb R)|\sum_kH_{ik}\bar{H}_{jk}(A_{ik}-A_{jk})=0,\forall i,j\}$.
2. $\widetilde{T}_HX_{M,N}\simeq\{E=(X\ Y)\in M_{M\times N}(\mathbb C)|X=X^*,(EK)_{ij}\bar{H}_{ij}\in\mathbb R,\forall i,j\}$.
The correspondence $A\to E$ is given by $E_{ij}=\sum_kH_{ik}\bar{K}_{jk}A_{ik}$, $A_{ij}=(EK)_{ij}\bar{H}_{ij}$.
The proofs here go as in the square case, as follows:
\(1) In the square case this was done in the proof of Theorem 4.4 above, and the extension of the computations there to the rectangular case is straightforward.
\(2) Let us set indeed $R_{ij}=A_{ij}H_{ij}$ and $E=RK^*$. The correspondence $A\to R\to E$ is then bijective, and we have the following formula: $$E_{ij}=\sum_kH_{ik}\bar{K}_{jk}A_{ik}$$
With these changes, the system of equations in (1) becomes $E_{ij}=\bar{E}_{ji}$ for any $i,j$ with $j\leq M$. But this shows that we must have $E=(X\ Y)$ with $X=X^*$, and the condition $A_{ij}\in\mathbb R$ corresponds to the condition $(EK)_{ij}\bar{H}_{ij}\in\mathbb R$, as claimed.
As an illustration, in the real case we obtain the following result:
For an Hadamard matrix $H\in M_{M\times N}(\pm1)$ we have $$\widetilde{T}_HX_{M,N}\simeq M_M(\mathbb R)^{symm}\oplus M_{M\times(N-M)}(\mathbb R)$$ and so the defect is given by $$d(H)=M(M+1)/2+M(N-M)$$ independently of the precise value of $H$.
We use Theorem 5.21 (2). Since $H$ is now real we can pick $K\in\sqrt{N}U_N$ extending it to be real too, and with nonzero entries, so the last condition appearing there, namely $(EK)_{ij}\bar{H}_{ij}\in\mathbb R$, simply tells us that $E$ must be real. Thus we have: $$\widetilde{T}_HX_{M,N}\simeq\{E=(X\ Y)\in M_{M\times N}(\mathbb R)|X=X^*\}$$
But this is the formula in the statement, and we are done.
A matrix $H\in X_{M,N}$ cannot be isolated, simply because the space of its Hadamard equivalents provides a copy $\mathbb T^{M+N-1}\subset X_{M,N}$, passing through $H$. However, if we restrict the attention to the matrices which are dephased, the notion of isolation makes sense:
Let $d(H)=\dim(\widetilde{T}_HX_{M,N})$.
1. This number, called undephased defect of $H$, satisfies $d(H)\geq M+N-1$.
2. If $d(H)=M+N-1$ then $H$ is isolated inside the dephased quotient $X_{M,N}\to Z_{M,N}$.
Once again, the known results in the square case extend:
\(1) We have indeed $\dim(T_H^\times X_{M,N})=M+N-1$, and since the tangent vectors to these trivial deformations belong to $\widetilde{T}_HX_{M,N}$, this gives the result.
\(2) Since $d(H)=M+N-1$, the inclusions $T_H^\times X_{M,N}\subset T_HX_{M,N}\subset\widetilde{T}_HX_{M,N}$ must be equalities, and from $T_HX_{M,N}=T_H^\times X_{M,N}$ we obtain the result.
Finally, still at the theoretical level, we have the following conjecture:
An isolated matrix $H\in Z_{M,N}$ must have minimal defect, namely $d(H)=M+N-1$.
In other words, the conjecture is that if $H\in Z_{M,N}$ has nontrivial first order deformations, then it has nontrivial deformations at any order, including at $\infty$.
In the square matrix case this statement comes with solid evidence, all known examples of complex Hadamard matrices $H\in X_N$ having non-minimal defect being known to admit one-parameter deformations. For more on this subject, see [@tz1], [@tz2].
Let us discuss now some examples of isolated partial Hadamard matrices, and provide some evidence for Conjecture 5.24. We are interested in the following matrices:
The truncated Fourier matrix $F_{S,G}$, with $G$ being a finite abelian group, and with $S\subset G$ being a subset, is constructed as follows:
1. Given $N\in\mathbb N$, we set $F_N=(w^{ij})_{ij}$, where $w=e^{2\pi i/N}$.
2. Assuming $G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_s}$, we set $F_G=F_{N_1}\otimes\ldots\otimes F_{N_s}$.
3. We let $F_{S,G}$ be the submatrix of $F_G$ having $S\subset G$ as row index set.
Observe that $F_N$ is the Fourier matrix of the cyclic group $\mathbb Z_N$. More generally, $F_G$ is the Fourier matrix of the finite abelian group $G$. Observe also that $F_{G,G}=F_G$.
We can compute the defect of $F_{S,G}$ by using Theorem 5.21, and we obtain:
For a truncated Fourier matrix $F=F_{S,G}$ we have the formula $$\widetilde{T}_FX_{M,N}=\left\{A\in M_{M\times N}(\mathbb R)\Big|P=AF^t\ {\rm satisfies}\ P_{ij}=P_{i+j,j}=\bar{P}_{i,-j},\forall i,j\right\}$$ where $M=|S|,N=|G|$, and with all the indices being regarded as group elements.
We use Theorem 5.21 (1). The defect equations there are as follows: $$\sum_kF_{ik}\bar{F}_{jk}(A_{ik}-A_{jk})=0$$
Since for $F=F_{S,G}$ we have $F_{ik}\bar{F}_{jk}=(F^t)_{k,i-j}$, we obtain: $$\widetilde{T}_FX_{M,N}=\left\{A\in M_{M\times N}(\mathbb R)\Big|(AF^t)_{i,i-j}=(AF^t)_{j,i-j},\forall i,j\right\}$$
Now observe that for an arbitrary matrix $P\in M_M(\mathbb C)$, we have: $$\begin{aligned}
P_{i,i-j}=P_{j,i-j},\forall i,j
&\iff&P_{i+j,i}=P_{ji},\forall i,j\\
&\iff&P_{i+j,j}=P_{ij},\forall i,j\end{aligned}$$
We therefore conclude that we have the following equality: $$\widetilde{T}_FX_{M,N}=\left\{A\in M_{M\times N}(\mathbb R)\Big|
P=AF^t\ {\rm satisfies}\ P_{ij}=P_{i+j,j},\forall i,j\right\}$$
Now observe that with $A\in M_{M\times N}(\mathbb R)$ and $P=AF^t\in M_M(\mathbb C)$ as above, we have: $$\begin{aligned}
\bar{P}_{ij}
&=&\sum_kA_{ik}(F^*)_{kj}\\
&=&\sum_kA_{ik}(F^t)_{k,-j}\\
&=&P_{i,-j}\end{aligned}$$
Thus, we obtain the formula in the statement, and we are done.
Let us try to find some explicit examples of isolated matrices, of truncated Fourier type. For this purpose, we can use the following improved version of Theorem 5.26:
The defect of $F=F_{S,G}$ is the number $\dim(K)+\dim(I)$, where $$\begin{aligned}
K&=&\left\{A\in M_{M\times N}(\mathbb R)\Big|AF^t=0\right\}\\
I&=&\left\{P\in L_M\Big|\exists A\in M_{M\times N}(\mathbb R),P=AF^t\right\}\end{aligned}$$ where $L_M$ is the following linear space $$L_M=\left\{P\in M_M(\mathbb C)\big|P_{ij}=P_{i+j,j}=\bar{P}_{i,-j},\forall i,j\right\}$$ with all the indices belonging by definition to the group $G$.
We use the general formula in Theorem 5.26. With the notations there, and with the linear space $L_M$ being as above, we have a linear map as follows: $$\Phi:\widetilde{T}_FX_{M,N}\to L_M\quad,\quad\Phi(A)=AF^t$$
By using this map, we obtain the following equality: $$\dim(\widetilde{T}_FX_{M,N})=\dim(\ker\Phi)+\dim({\rm Im}\,\Phi)$$
Now since the spaces on the right are precisely those in the statement, $\ker\Phi=K$ and ${\rm Im}\, \Phi=I$, by applying Theorem 5.26 we obtain the result.
In order to look now for isolated matrices, the first remark is that since a deformation of $F_G$ will produce a deformation of $F_{S,G}$ too, we must restrict the attention to the case where $G=\mathbb Z_p$, with $p$ prime. And here, we have the following conjecture:
There exists a constant $\varepsilon>0$ such that $F_{S,p}$ is isolated, for any $p$ prime, once $S\subset\mathbb Z_p$ satisfies $|S|\geq(1-\varepsilon)p$.
In principle this conjecture can be approached by using the formula in Theorem 5.27, and we have for instance evidence towards the fact that $F_{p-1,p}$ should be always isolated, that $F_{p-2,p}$ should be isolated too, provided that $p$ is big enough, and so on. However, finding a number $\varepsilon>0$ as above looks like a quite difficult question. See [@bop].
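In this direction, one can also experiment numerically, by computing the defect of $F_{S,p}$ for small primes, via the equations in Theorem 5.21 (1). Here is a minimal sketch, assuming NumPy, which repeats the defect routine sketched earlier in this section, in order to remain self-contained; the values for the truncated matrices are printed without any claim, for comparison with $M+N-1$:

```python
import itertools
import numpy as np

def defect(H):
    # dimension of the enveloping tangent space, via the equations of Theorem 5.21 (1)
    M, N = H.shape
    rows = []
    for i, j in itertools.product(range(M), repeat=2):
        if i == j:
            continue
        row = np.zeros(M * N, dtype=complex)
        for k in range(N):
            c = H[i, k] * np.conj(H[j, k])
            row[i * N + k] += c
            row[j * N + k] -= c
        rows.append(row.real)
        rows.append(row.imag)
    return M * N - np.linalg.matrix_rank(np.array(rows))

p = 5
w = np.exp(2j * np.pi / p)
F = np.array([[w ** (i * j) for j in range(p)] for i in range(p)])

print(defect(F))                         # 2p - 1 = 9, the minimal possible value
for m in range(p - 1, 1, -1):
    print(m, defect(F[list(range(m))]))  # compare with M + N - 1 = m + p - 1
```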
Circulant matrices
==================
We discuss in this section yet another type of special complex Hadamard matrices, namely the circulant ones. There has been a lot of work here, starting with the Circulant Hadamard Conjecture (CHC) in the real case, and with many results in the complex case as well. We will present here the main techniques in dealing with such matrices.
It is convenient to introduce the circulant matrices as follows:
A complex matrix $H\in M_N(\mathbb C)$ is called circulant when we have $$H_{ij}=\gamma_{j-i}$$ for some $\gamma\in\mathbb C^N$, with the matrix indices $i,j\in \{0,1,\ldots,N-1\}$ taken modulo $N$.
Here the index convention is quite standard, as for the Fourier matrices $F_N$, and with this coming from Fourier analysis considerations, that we will get into later on.
Here is a basic, and very fundamental example of a circulant Hadamard matrix, which in addition has real entries, and is symmetric: $$K_4=\begin{pmatrix}-1&1&1&1\\1&-1&1&1\\1&1&-1&1\\1&1&1&-1\end{pmatrix}$$
According to the CHC, explained in section 1, this matrix is, up to equivalence, the only circulant Hadamard matrix $H\in M_N(\pm1)$, regardless of the value of $N\in\mathbb N$.
Our first purpose will be that of showing that the CHC disappears in the complex case, where we have examples at any $N\in\mathbb N$. As a first result here, we have:
The following are circulant and symmetric Hadamard matrices, $$F_2'=\begin{pmatrix}i&1\\1&i\end{pmatrix}\quad,\quad
F_3'=\begin{pmatrix}w&1&1\\1&w&1\\1&1&w\end{pmatrix}\quad,\quad
F_4''=\begin{pmatrix}-1&\nu&1&\nu\\\nu&-1&\nu&1\\1&\nu&-1&\nu\\ \nu&1&\nu&-1\end{pmatrix}$$ where $w=e^{2\pi i/3},\nu=e^{\pi i/4}$, equivalent to the Fourier matrices $F_2,F_3,F_4$.
The orthogonality between rows being clear, we have here complex Hadamard matrices. The fact that we have an equivalence $F_2\sim F_2'$ follows from: $$\begin{pmatrix}1&1\\1&-1\end{pmatrix}
\sim\begin{pmatrix}i&i\\1&-1\end{pmatrix}
\sim\begin{pmatrix}i&1\\1&i\end{pmatrix}$$
At $N=3$ now, the equivalence $F_3\sim F_3'$ can be constructed as follows: $$\begin{pmatrix}1&1&1\\1&w&w^2\\1&w^2&w\end{pmatrix}
\sim\begin{pmatrix}1&1&w\\1&w&1\\w&1&1\end{pmatrix}
\sim\begin{pmatrix}w&1&1\\1&w&1\\1&1&w\end{pmatrix}$$
As for the case $N=4$, here the equivalence $F_4\sim F_4''$ can be constructed as follows, where we use the logarithmic notation $[k]_s=e^{2\pi ki/s}$, with respect to $s=8$: $$\begin{bmatrix}0&0&0&0\\0&2&4&6\\0&4&0&4\\0&6&4&2\end{bmatrix}_8
\sim\begin{bmatrix}0&1&4&1\\1&4&1&0\\4&1&0&1\\1&0&1&4\end{bmatrix}_8
\sim\begin{bmatrix}4&1&0&1\\1&4&1&0\\0&1&4&1\\1&0&1&4\end{bmatrix}_8$$
We will explain later the reasons for denoting this matrix $F_4''$, instead of $F_4'$.
Getting back now to the real circulant matrix $K_4$, this is equivalent to the Fourier matrix $F_G=F_2\otimes F_2$ of the Klein group $G=\mathbb Z_2\times\mathbb Z_2$, as shown by: $$\begin{pmatrix}-1&1&1&1\\1&-1&1&1\\1&1&-1&1\\1&1&1&-1\end{pmatrix}
\sim\begin{pmatrix}
1&1&1&-1\\
1&-1&1&1\\
1&1&-1&1\\
-1&1&1&1
\end{pmatrix}
\sim\begin{pmatrix}
1&1&1&1\\
1&-1&1&-1\\
1&1&-1&-1\\
1&-1&-1&1
\end{pmatrix}$$
In fact, we have the following construction of circulant and symmetric Hadamard matrices at $N=4$, which involves an extra parameter $q\in\mathbb T$:
The following circulant and symmetric matrix is Hadamard, $$K_4^q=\begin{pmatrix}-1&q&1&q\\q&-1&q&1\\1&q&-1&q\\q&1&q&-1\end{pmatrix}$$ for any $q\in\mathbb T$. At $q=1,e^{\pi i/4}$ we recover respectively the matrices $K_4,F_4''$.
The rows of the above matrix are pairwise orthogonal for any $q\in\mathbb C$, and so at $q\in\mathbb T$ we obtain a complex Hadamard matrix. The last assertion is clear.
As a first conclusion, coming from the above considerations, we have:
The complex Hadamard matrices of order $N=2,3,4,5$, namely $$F_2,F_3,F_4^p,F_5$$ can be put, up to equivalence, in circulant and symmetric form.
As explained in section 2 above, according to the result of Haagerup [@ha1], the Hadamard matrices at $N=2,3,4,5$ are, up to equivalence, those in the statement.
At $N=2,3$ the problem is solved by Proposition 6.2 above.
At $N=4$ now, our claim is that we have $K_4^q\sim F_4^s$, with $s=q^{-2}$. Indeed, by multiplying the rows of $K_4^q$, and then the columns, by suitable scalars, we have: $$K_4^q=\begin{pmatrix}-1&q&1&q\\q&-1&q&1\\1&q&-1&q\\q&1&q&-1\end{pmatrix}\sim
\begin{pmatrix}
1&-q&-1&-q\\
1&-\bar{q}&1&\bar{q}\\
1&q&-1&q\\
1&\bar{q}&1&-\bar{q}\end{pmatrix}\sim
\begin{pmatrix}
1&1&1&1\\
1&s&-1&-s\\
1&-1&1&-1\\
1&-s&-1&s\end{pmatrix}$$
On the other hand, by permuting the second and third rows of $F_4^s$, we obtain: $$F_4^s=\begin{pmatrix}
1&1&1&1\\
1&-1&1&-1\\
1&s&-1&-s\\
1&-s&-1&s
\end{pmatrix}\sim
\begin{pmatrix}
1&1&1&1\\
1&s&-1&-s\\
1&-1&1&-1\\
1&-s&-1&s\end{pmatrix}$$
Thus these matrices are equivalent, and the result follows from Proposition 6.3.
At $N=5$, the matrix that we are looking for is as follows, with $w=e^{2\pi i/5}$: $$F_5'=\begin{pmatrix}
w^2&1&w^4&w^4&1\\
1&w^2&1&w^4&w^4\\
w^4&1&w^2&1&w^4\\
w^4&w^4&1&w^2&1\\
1&w^4&w^4&1&w^2
\end{pmatrix}$$
It is indeed clear that this matrix is circulant, symmetric, and complex Hadamard, and the fact that we have $F_5\sim F_5'$ follows either directly, or by using [@ha1].
Let us prove now that any Fourier matrix $F_N$ can be put in circulant and symmetric form. We use Björck’s cyclic root formalism [@bjo], which is as follows:
Assume that $H\in M_N(\mathbb T)$ is circulant, $H_{ij}=\gamma_{j-i}$. Then $H$ is Hadamard if and only if the vector $(z_0,z_1,\ldots,z_{N-1})$ given by $z_i=\gamma_i/\gamma_{i-1}$ satisfies: $$\begin{aligned}
z_0+z_1+\ldots+z_{N-1}&=&0\\
z_0z_1+z_1z_2+\ldots+z_{N-1}z_0&=&0\\
\ldots\\
z_0z_1\ldots z_{N-2}+\ldots+z_{N-1}z_0\ldots z_{N-3}&=&0\\
z_0z_1\ldots z_{N-1}&=&1\end{aligned}$$ If so is the case, we say that $z=(z_0,\ldots,z_{N-1})$ is a cyclic $N$-root.
This follows from a direct computation, the idea being that, with $H_{ij}=\gamma_{j-i}$ as above, the orthogonality conditions between the rows are best written in terms of the variables $z_i=\gamma_i/\gamma_{i-1}$, and correspond to the equations in the statement. See [@bjo].
Observe that, up to a global multiplication by a scalar $w\in\mathbb T$, the first row vector $\gamma=(\gamma_0,\ldots,\gamma_{N-1})$ of the matrix $H\in M_N(\mathbb T)$ constructed above is as follows: $$\gamma=(z_0,z_0z_1,z_0z_1z_2,\ldots\ldots,z_0z_1\ldots z_{N-1})$$
Now back to the Fourier matrices, we have the following result:
Given $N\in\mathbb N$, set $\nu=e^{\pi i/N}$ and $q=\nu^{N-1},w=\nu^2$. Then $$(q,qw,qw^2,\ldots,qw^{N-1})$$ is a cyclic $N$-root, and the corresponding complex Hadamard matrix $F_N'$ is circulant and symmetric, and equivalent to the Fourier matrix $F_N$.
Given $q,w\in\mathbb T$, let us find out when $(q,qw,qw^2,\ldots,qw^{N-1})$ is a cyclic root:
\(1) In order for the $=0$ equations in Theorem 6.5 to be satisfied, the value of $q$ is irrelevant, and $w$ must be a primitive $N$-root of unity.
\(2) As for the $=1$ equation in Theorem 6.5, this states in our case that we must have $q^Nw^{\frac{N(N-1)}{2}}=1$, and so that we must have $q^N=(-1)^{N-1}$.
We conclude that with the values of $q,w\in\mathbb T$ in the statement, we have indeed a cyclic $N$-root. Now construct $H_{ij}=\gamma_{j-i}$ as in Theorem 6.5. We have: $$\begin{aligned}
\gamma_k=\gamma_{-k},\forall k
&\iff&q^{k+1}w^{\frac{k(k+1)}{2}}=q^{-k+1}w^{\frac{k(k-1)}{2}},\forall k\\
&\iff&q^{2k}w^k=1,\forall k\\
&\iff&q^2=w^{-1}\end{aligned}$$
But this latter condition holds indeed, because we have $q^2=\nu^{2N-2}=\nu^{-2}=w^{-1}$. We conclude that our circulant matrix $H$ is symmetric as well, as claimed.
It remains to construct an equivalence $H\sim F_N$. In order to do this, observe that, due to our conventions $q=\nu^{N-1},w=\nu^2$, the first row vector of $H$ is given by: $$\begin{aligned}
\gamma_k
&=&q^{k+1}w^{\frac{k(k+1)}{2}}\\
&=&\nu^{(N-1)(k+1)}\nu^{k(k+1)}\\
&=&\nu^{(N+k-1)(k+1)}\end{aligned}$$
Thus, the entries of $H$ are given by the following formula: $$\begin{aligned}
H_{-i,j}
&=&H_{0,i+j}\\
&=&\nu^{(N+i+j-1)(i+j+1)}\\
&=&\nu^{i^2+j^2+2ij+Ni+Nj+N-1}\\
&=&\nu^{N-1}\cdot\nu^{i^2+Ni}\cdot\nu^{j^2+Nj}\cdot\nu^{2ij}\end{aligned}$$
With this formula in hand, we can now finish. Indeed, the matrix $H=(H_{ij})$ is equivalent to the matrix $H'=(H_{-i,j})$. Now regarding $H'$, observe that in the above formula, the factors $\nu^{N-1}$, $\nu^{i^2+Ni}$, $\nu^{j^2+Nj}$ correspond respectively to a global multiplication by a scalar, and to row and column multiplications by scalars. Thus $H'$ is equivalent to the matrix $H''$ obtained by deleting these factors.
But this latter matrix, given by $H''_{ij}=\nu^{2ij}$ with $\nu=e^{\pi i/N}$, is precisely the Fourier matrix $F_N$, and we are done.
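As a numerical check of the above construction, not needed for the proof, one can verify both the cyclic root equations and the circulant, symmetric, Hadamard properties, say at $N=6$, assuming NumPy:

```python
import numpy as np

N = 6
nu = np.exp(1j * np.pi / N)
q, w = nu ** (N - 1), nu ** 2
z = np.array([q * w ** k for k in range(N)])             # the cyclic N-root of Theorem 6.6

for K in range(1, N):                                    # the "= 0" equations
    assert abs(sum(np.prod([z[(i + m) % N] for m in range(K)])
                   for i in range(N))) < 1e-8
assert abs(np.prod(z) - 1) < 1e-8                        # the "= 1" equation

gamma = np.cumprod(z)                                    # first row (z_0, z_0 z_1, ...)
H = np.array([[gamma[(j - i) % N] for j in range(N)] for i in range(N)])
assert np.allclose(H, H.T)                               # symmetric
assert np.allclose(H @ H.conj().T, N * np.eye(N))        # Hadamard
```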
As an illustration, let us work out the cases $N=2,3,4,5$. We have here:
The matrices $F_N'$ are as follows:
1. At $N=2,3$ we obtain the old matrices $F_2',F_3'$.
2. At $N=4$ we obtain the following matrix, with $\nu=e^{\pi i/4}$: $$F_4'=\begin{pmatrix}
\nu^3&1&\nu^7&1\\
1&\nu^3&1&\nu^7\\
\nu^7&1&\nu^3&1\\
1&\nu^7&1&\nu^3
\end{pmatrix}$$
3. At $N=5$ we obtain the old matrix $F_5'$.
With notations from Theorem 6.6, the proof goes as follows:
\(1) At $N=2$ we have $\nu=i,q=i,w=-1$, so the cyclic root is $(i,-i)$, the first row vector is $(i,1)$, and we obtain indeed the old matrix $F_2'$. At $N=3$ we have $\nu=e^{\pi i/3}$ and $q=w=\nu^2=e^{2\pi i/3}$, the cyclic root is $(w,w^2,1)$, the first row vector is $(w,1,1)$, and we obtain indeed the old matrix $F_3'$.
\(2) At $N=4$ we have $\nu=e^{\pi i/4}$ and $q=\nu^3,w=\nu^2$, the cyclic root is $(\nu^3,\nu^5,\nu^7,\nu)$, the first row vector is $(\nu^3,1,\nu^7,1)$, and we obtain the matrix in the statement.
\(3) At $N=5$ we have $\nu=e^{\pi i/5}$ and $q=\nu^4=w^2$, with $w=\nu^2=e^{2\pi i/5}$, the cyclic root is therefore $(w^2,w^3,w^4,1,w)$, the first row vector is $(w^2,1,w^4,w^4,1)$, and we obtain in this way the old matrix $F_5'$, as claimed.
Regarding the above matrix $F_4'$, observe that this is equivalent to the matrix $F_4''$ from Proposition 6.2, with the equivalence $F_4'\sim F_4''$ being obtained by multiplying everything by $\nu=e^{\pi i/4}$. While both these matrices are circulant and symmetric, and of course equivalent to $F_4$, one of them, namely $F_4'$, is “better” than the other, because the corresponding cyclic root comes from a progression. This is the reason for our notations $F_4',F_4''$.
Let us discuss now the case of the generalized Fourier matrices $F_G$. In this context, the assumption of being circulant is somewhat unnatural, because this comes from a $\mathbb Z_N$ symmetry, and the underlying group is no longer $\mathbb Z_N$. It is possible to fix this issue by talking about $G$-patterned Hadamard matrices, with $G$ being no longer cyclic, but for our purposes here, best is to formulate the result in a weaker form, as follows:
The generalized Fourier matrices $F_G$, associated to the finite abelian groups $G$, can be put in symmetric and bistochastic form.
We know from Theorem 6.6 that any usual Fourier matrix $F_N$ can be put in circulant and symmetric form. Since circulant implies bistochastic, in the sense that the sums on all rows and all columns must be equal, the result holds for $F_N$.
In general now, if we decompose $G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}$, we have: $$F_G=F_{N_1}\otimes\ldots\otimes F_{N_k}$$
Now since the property of being symmetric is stable under taking tensor products, and so is the property of being bistochastic, we therefore obtain the result.
We have as well the following alternative generalization of Theorem 6.6, coming from Backelin’s work in [@bac], and remaining in the circulant and symmetric setting:
Let $M|N$, and set $w=e^{2\pi i/N}$. We have a cyclic root as follows, $$(\underbrace{q_1,\ldots,q_M}_M,\underbrace{q_1w,\ldots,q_Mw}_M,\ldots\ldots,\underbrace{q_1w^{N-1},\ldots,q_Mw^{N-1}}_M)$$ provided that $q_1,\ldots,q_M\in\mathbb T$ satisfy $(q_1\ldots q_M)^N=(-1)^{M(N-1)}$. Moreover, assuming $$q_1q_2=1\quad,\quad q_3q_M=q_4q_{M-1}=\ldots=w$$ which imply $(q_1\ldots q_M)^N=(-1)^{M(N-1)}$, the Hadamard matrix is symmetric.
Let us first check the $=0$ equations for a cyclic root. Given arbitrary numbers $q_1,\ldots,q_M\in\mathbb T$, if we denote by $(z_i)$ the vector in the statement, we have: $$\begin{aligned}
\sum_iz_{i+1}\ldots z_{i+K}
&=&\begin{pmatrix}q_1\ldots q_K+q_2\ldots q_{K+1}+\ldots\ldots+q_{M-K+1}\ldots q_M\\
+q_{M-K+2}\ldots q_Mq_1w+\ldots\ldots+q_Mq_1\ldots q_{K-1}w^{K-1}\end{pmatrix}\\
&&\times(1+w^K+w^{2K}+\ldots+w^{(N-1)K})\end{aligned}$$
Now since the sum on the right vanishes, the $=0$ conditions are satisfied. Regarding now the $=1$ condition, the total product of the numbers $z_i$ is given by: $$\prod_iz_i=(q_1\ldots q_M)^N(1\cdot w\cdot w^2\ldots w^{N-1})^M=(q_1\ldots q_M)^Nw^{\frac{MN(N-1)}{2}}$$
By using $w=e^{2\pi i/N}$ we obtain that the coefficient on the right is: $$w^{\frac{MN(N-1)}{2}}=e^{\frac{2\pi i}{N}\cdot\frac{MN(N-1)}{2}}=e^{\pi iM(N-1)}=(-1)^{M(N-1)}$$
Thus, if $(q_1\ldots q_M)^N=(-1)^{M(N-1)}$, we obtain a cyclic root, as stated. See [@bac], [@fau].
The corresponding first row vector can be written as follows: $$V=\left(\underbrace{q_1,q_1q_2,\ldots,q_1\ldots q_M}_M,\ldots\ldots\ldots,\underbrace{\frac{w^{M-1}}{q_2\ldots q_M},\ldots,\frac{w^2}{q_{M-1}q_M},\frac{w}{q_M},1}_M\right)$$
Thus, the corresponding circulant complex Hadamard matrix is as follows: $$H=\begin{pmatrix}
q_1&q_1q_2&q_1q_2q_3&q_1q_2q_3q_4&q_1q_2q_3q_4q_5&\ldots\\
1&q_1&q_1q_2&q_1q_2q_3&q_1q_2q_3q_4&\ldots\\
\frac{w}{q_M}&1&q_1&q_1q_2&q_1q_2q_3&\ldots\\
\frac{w^2}{q_{M-1}q_M}&\frac{w}{q_M}&1&q_1&q_1q_2&\ldots\\
\frac{w^3}{q_{M-2}q_{M-1}q_M}&\frac{w^2}{q_{M-1}q_M}&\frac{w}{q_M}&1&q_1&\ldots\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots
\end{pmatrix}$$
We are therefore led to the symmetry conditions in the statement, and we are done.
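Here is a similar numerical check for the Backelin construction, assuming NumPy, with the sample choices $M=2$, $N=4$ and $q_1q_2=1$ being ours, for illustration only:

```python
import numpy as np

M, N = 2, 4                        # M | N, as in Theorem 6.9
w = np.exp(2j * np.pi / N)
q1 = np.exp(0.37j)                 # arbitrary unimodular parameter
q = [q1, 1 / q1]                   # q_1 q_2 = 1, so (q_1 q_2)^N = 1 = (-1)^{M(N-1)}

z = np.array([qi * w ** k for k in range(N) for qi in q])    # the length MN vector

L = M * N
for K in range(1, L):              # the "= 0" cyclic root equations
    assert abs(sum(np.prod([z[(i + m) % L] for m in range(K)])
                   for i in range(L))) < 1e-8
assert abs(np.prod(z) - 1) < 1e-8  # the "= 1" equation

gamma = np.cumprod(z)              # first row (q_1, q_1 q_2, ...)
H = np.array([[gamma[(j - i) % L] for j in range(L)] for i in range(L)])
assert np.allclose(H, H.T)                         # symmetric
assert np.allclose(H @ H.conj().T, L * np.eye(L))  # Hadamard
```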
Observe that Theorem 6.9 generalizes both Proposition 6.3, and the construction in Theorem 6.6. Thus, we have here a full generalization of Theorem 6.4. Of course, things do not stop here, and the problem of unifying Theorem 6.8 and Theorem 6.9 remains.
As a conclusion to what we have so far, there is definitely no analogue of the CHC in the general complex setting, due to the fact that $F_N$ can be put in circulant form, and this latter result gives rise to a number of interesting generalizations and questions.
However, still in relation with the CHC, but at a more technical level, the problem of investigating the existence of the circulant Butson matrices appears.
The first result in this direction, due to Turyn [@tur], is as follows:
The size of a circulant Hadamard matrix $$H\in M_N(\pm 1)$$ must be of the form $N=4n^2$, with $n\in\mathbb N$.
Let $a,b\in\mathbb N$ with $a+b=N$ be the number of $1,-1$ entries in the first row of $H$. If we denote by $H_0,\ldots,H_{N-1}$ the rows of $H$, then by summing over columns we get: $$\begin{aligned}
\sum_{i=0}^{N-1}<H_0,H_i>
&=&a(a-b)+b(b-a)\\
&=&(a-b)^2\end{aligned}$$
On the other hand, the quantity on the left is $<H_0,H_0>=N$. Thus $N$ is a square, and together with the fact that $N\in 2\mathbb N$, this gives $N=4n^2$, with $n\in\mathbb N$.
Also found by Turyn in [@tur] is the fact that the above number $n\in\mathbb N$ must be odd, and not a prime power. In the general Butson matrix setting now, we have:
Assume that $H\in H_N(l)$ is circulant, let $w=e^{2\pi {\rm i}/l}$. If $a_0,\ldots,a_{l-1}\in\mathbb N$ with $\sum a_i=N$ are the number of $1,w,\ldots,w^{l-1}$ entries in the first row of $H$, then: $$\sum_{ik}w^ka_ia_{i+k}=N$$ This condition, with $\sum a_i=N$, will be called “Turyn obstruction” on $(N,l)$.
Indeed, by summing over the columns of $H$, we obtain: $$\begin{aligned}
\sum_i<H_0,H_i>
&=&\sum_{ij}<w^i,w^j>a_ia_j\\
&=&\sum_{ij}w^{i-j}a_ia_j\end{aligned}$$
Now since the left term is $<H_0,H_0>=N$, this gives the result.
We can deduce from this a number of concrete obstructions, as follows:
When $l$ is prime, the Turyn obstruction is $\sum_i(a_i-a_{i+k})^2=2N$ for any $k\neq 0$. Also, for small values of $l$, the Turyn obstruction is as follows:
1. At $l=2$ the condition is $(a_0-a_1)^2=N$.
2. At $l=3$ the condition is $(a_0-a_1)^2+(a_1-a_2)^2+(a_2-a_0)^2=2N$.
3. At $l=4$ the condition is $(a_0-a_2)^2+(a_1-a_3)^2=N$.
4. At $l=5$ the condition is $\sum_i(a_i-a_{i+1})^2=\sum_i(a_i-a_{i+2})^2=2N$.
We use the fact, from Proposition 3.3 above, that when $l$ is prime, the vanishing sums of $l$-roots of unity are exactly the sums of the following type, with $c\in\mathbb N$: $$S=c+cw+\ldots+cw^{l-1}$$
Thus the Turyn obstruction is equivalent to the following equations, one for each $k\neq 0$: $$\sum_ia_i^2-\sum_ia_ia_{i+k}=N$$
Now by forming squares, this gives the equations in the statement.
Regarding now the $l=2,3,4,5$ assertions, these follow from the first assertion when $l$ is prime, $l=2,3,5$. Also, at $l=4$ we have $w=i$, so the Turyn obstruction reads: $$(a_0^2+a_1^2+a_2^2+a_3^2)+i\sum a_ia_{i+1}-2(a_0a_2+a_1a_3)-i\sum a_ia_{i+1}=N$$
Thus the imaginary terms cancel, and we obtain the formula in the statement.
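The Turyn obstruction can be explored by brute force, for small $N,l$. Here is a minimal sketch, using only the Python standard library; the search simply enumerates the possible vectors $(a_0,\ldots,a_{l-1})$:

```python
import cmath
import itertools

def turyn_feasible(N, l):
    # brute-force search for (a_0, ..., a_{l-1}) with sum N
    # satisfying the Turyn obstruction  sum_{i,k} w^k a_i a_{i+k} = N
    w = cmath.exp(2j * cmath.pi / l)
    for a in itertools.product(range(N + 1), repeat=l):
        if sum(a) != N:
            continue
        s = sum(w ** k * a[i] * a[(i + k) % l] for i in range(l) for k in range(l))
        if abs(s - N) < 1e-9:
            return a
    return None

print(turyn_feasible(4, 2))   # a solution exists: N = 4 is a perfect square, cf. K_4
print(turyn_feasible(6, 2))   # None, since (a_0 - a_1)^2 = 6 has no integer solution
```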
The above results are of course just some basic, elementary observations on the subject, and the massive amount of work on the CHC has a number of interesting Butson matrix extensions. For some more advanced theory on all this, we refer to [@bs1], [@ckh].
Let us go back now to the pure complex case, and discuss Fourier analytic aspects. From a traditional linear algebra viewpoint, the circulant matrices are best understood as being the matrices which are Fourier-diagonal, and we will exploit this here.
Let us fix $N\in\mathbb N$, and denote by $F=(w^{ij})/\sqrt{N}$ with $w=e^{2\pi i/N}$ the rescaled Fourier matrix. Observe that $F_N=\sqrt{N}F$ is the usual Fourier Hadamard matrix.
Given a vector $q\in\mathbb C^N$, we denote by $Q\in M_N(\mathbb C)$ the diagonal matrix having $q$ as vector of diagonal entries. That is, $Q_{ii}=q_i$, and $Q_{ij}=0$ for $i\neq j$.
With these conventions, the above-mentioned linear algebra result is as follows:
For a complex matrix $H\in M_N(\mathbb C)$, the following are equivalent:
1. $H$ is circulant, $H_{ij}=\xi_{j-i}$ for some $\xi\in\mathbb C^N$.
2. $H$ is Fourier-diagonal, $H=FQF^*$ with $Q$ diagonal.
In addition, the first row vector of $FQF^*$ is given by $\xi=Fq/\sqrt{N}$.
If $H_{ij}=\xi_{j-i}$ is circulant then $Q=F^*HF$ is diagonal, given by: $$Q_{ij}=\frac{1}{N}\sum_{kl}w^{jl-ik}\xi_{l-k}=\delta_{ij}\sum_rw^{jr}\xi_r$$
Also, if $Q=diag(q)$ is diagonal then $H=FQF^*$ is circulant, given by: $$H_{ij}=\sum_kF_{ik}Q_{kk}\bar{F}_{jk}=\frac{1}{N}\sum_kw^{(i-j)k}q_k$$
Observe that this latter formula proves as well the last assertion, $\xi=Fq/\sqrt{N}$.
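As a quick numerical illustration of this equivalence, assuming NumPy, one can check both directions on random data, namely that $FQF^*$ is circulant for diagonal $Q$, and that $F^*HF$ is diagonal for circulant $H$:

```python
import numpy as np

N = 7
w = np.exp(2j * np.pi / N)
F = np.array([[w ** (i * j) for j in range(N)] for i in range(N)]) / np.sqrt(N)
rng = np.random.default_rng(0)

# diagonal -> circulant
q = np.exp(2j * np.pi * rng.random(N))
H = F @ np.diag(q) @ F.conj().T
xi = H[0]
assert np.allclose(H, [[xi[(j - i) % N] for j in range(N)] for i in range(N)])

# circulant -> diagonal
gamma = rng.standard_normal(N) + 1j * rng.standard_normal(N)
C = np.array([[gamma[(j - i) % N] for j in range(N)] for i in range(N)])
Q = F.conj().T @ C @ F
assert np.allclose(Q, np.diag(np.diag(Q)))
```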
In relation now with the orthogonal and unitary matrices, we have:
The various sets of circulant matrices are as follows:
1. $M_N(\mathbb C)^{circ}=\{FQF^*|q\in\mathbb C^N\}$.
2. $U_N^{circ}=\{FQF^*|q\in\mathbb T^N\}$.
3. $O_N^{circ}=\{FQF^*|q\in\mathbb T^N,\bar{q}_i=q_{-i},\forall i\}$.
In addition, the first row vector of $FQF^*$ is given by $\xi=Fq/\sqrt{N}$.
All this follows from Theorem 6.13, as follows:
\(1) This assertion, along with the last one, is Theorem 6.13 itself.
\(2) This is clear from (1), because the eigenvalues must be on the unit circle $\mathbb T$.
\(3) Observe first that for $q\in\mathbb C^N$ we have $\overline{Fq}=F\tilde{q}$, with $\tilde{q}_i=\bar{q}_{-i}$, and so $\xi=Fq$ is real if and only if $\bar{q}_i=q_{-i}$ for any $i$. Together with (2), this gives the result.
Observe that in (3), the equations for the parameter space are $q_0=\bar{q}_0$, $\bar{q}_1=q_{N-1}$, $\bar{q}_2=q_{N-2}$, and so on, with $q_{N/2}=\bar{q}_{N/2}$ appearing as well when $N$ is even. Thus, with the convention $\mathbb Z_\infty=\mathbb T$ we have: $$O_N^{circ}\simeq
\begin{cases}
\mathbb Z_2\times\mathbb Z_\infty^{{(N-1)}/2}&(N\ {\rm odd})\\
\mathbb Z_2^2\times\mathbb Z_\infty^{(N-2)/2}&(N\ {\rm even})
\end{cases}$$
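As an illustration, at $N=2$ the parameters are $q\in\{\pm1\}^2$, and the corresponding matrices $FQF^*$ are the following ones: $$\pm\begin{pmatrix}1&0\\0&1\end{pmatrix}\quad,\quad\pm\begin{pmatrix}0&1\\1&0\end{pmatrix}$$ Thus we have $O_2^{circ}\simeq\mathbb Z_2\times\mathbb Z_2$, in agreement with the above formula.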
In terms of circulant Hadamard matrices, we have the following statement:
The sets of complex and real circulant Hadamard matrices are: $$X_N^{circ}=\{\sqrt{N}FQF^*|q\in\mathbb T^N\}\cap M_N(\mathbb T)$$ $$Y_N^{circ}=\{\sqrt{N}FQF^*|q\in\mathbb T^N,\bar{q}_i=q_{-i}\}\cap M_N(\pm1)$$ In addition, the sets of $q$ parameters are invariant under cyclic permutations, and also under multiplying by numbers in $\mathbb T$, respectively under multiplying by $-1$.
All the assertions are indeed clear from Proposition 6.14 above.
The above statement is of course something quite theoretical in the real case, where the CHC states that we should have $Y_N^{circ}=\emptyset$, at any $N\neq 4$. However, in the complex case all this is useful, and complementary to Björck’s cyclic root formalism.
Let us discuss now a number of geometric and analytic aspects. First, we have the following deep counting result, due to Haagerup [@ha2]:
When $N$ is prime, the number of circulant $N\times N$ complex Hadamard matrices, counted with certain multiplicities, is exactly $\binom{2N-2}{N-1}$.
This is something advanced, using a variety of techniques from Fourier analysis, number theory, complex analysis and algebraic geometry. The idea in [@ha2] is that, when $N$ is prime, Björck’s cyclic root formalism can be further manipulated, by using Fourier transforms, and we are eventually led to a simpler system of equations.
This simplified system can be shown then to have a finite number of solutions, the key ingredient here being a well-known theorem of Chebotarev, which states that when $N$ is prime, all the minors of the Fourier matrix $F_N$ are nonzero.
With this finiteness result in hand, the precise count can be done as well, by using various techniques from classical algebraic geometry. See [@ha2].
When $N$ is not prime, the situation is considerably more complicated, with some values leading to finitely many solutions, and with other values leading to an infinite number of solutions, and with many other new phenomena appearing. See [@bjo], [@bfr], [@bha], [@ha2].
Our belief is that useful here would be an adaptation of the notion of defect, to the circulant case, in the context of the manifolds from Proposition 6.14 and Theorem 6.15. There are some preliminary differential geometry computations to be done here.
We would like to discuss now an alternative take on these questions, based on the estimate $||U||_1\leq N\sqrt{N}$ from Theorem 1.18. This shows that the real Hadamard matrices are the rescaled versions of the maximizers of the 1-norm on $O_N$, and the same proof shows that the complex Hadamard matrices are the rescaled versions of the maximizers of the 1-norm on $U_N$. Following [@bs1], we will apply this philosophy to the circulant case.
We will need in fact more general $p$-norms as well, so let us start with the following result, in the complex case, which is the most general one on the subject:
If $\psi:[0,\infty)\to\mathbb R$ is strictly concave/convex, the quantity $$F(U)=\sum_{ij}\psi(|U_{ij}|^2)$$ over $U_N$ is maximized/minimized precisely by the rescaled Hadamard matrices.
We recall that Jensen’s inequality states that for $\psi$ convex we have: $$\psi\left(\frac{x_1+\ldots+x_n}{n}\right)\leq\frac{\psi(x_1)+\ldots+\psi(x_n)}{n}$$
For $\psi$ concave the reverse inequality holds. Also, the equality case holds either when $\psi$ is linear, or when the numbers $x_1,\ldots,x_n$ are all equal.
In our case, with $n=N^2$ and with $\{x_1,\ldots,x_n\}=\{|U_{ij}|^2|i,j=1,\ldots,N\}$, we obtain that for any convex function $\psi$, the following holds: $$\psi\left(\frac{1}{N}\right)\leq\frac{F(U)}{N^2}$$
Thus we have $F(U)\geq N^2\psi(1/N)$, and by assuming as in the statement that $\psi$ is strictly convex, the equality case holds precisely when the numbers $|U_{ij}|^2$ are all equal, so when $H=\sqrt{N}U$ is Hadamard. The proof for concave functions is similar.
Of particular interest for our considerations are the power functions $\psi(x)=x^{p/2}$, which are concave at $p\in[1,2)$, and convex at $p\in(2,\infty)$. These lead to:
The rescaled versions $U=H/\sqrt{N}$ of the complex Hadamard matrices $H\in M_N(\mathbb C)$ can be characterized as being:
1. The maximizers of the $p$-norm on $U_N$, at any $p\in[1,2)$.
2. The minimizers of the $p$-norm on $U_N$, at any $p\in(2,\infty]$.
Consider indeed the $p$-norm on $U_N$, which at $p\in[1,\infty)$ is given by: $$||U||_p=\left(\sum_{ij}|U_{ij}|^p\right)^{1/p}$$
By the above discussion, involving the functions $\psi(x)=x^{p/2}$, Proposition 6.17 applies and gives the results at $p\in[1,\infty)$, the precise estimates being as follows: $$||U||_p=
\begin{cases}
\leq N^{2/p-1/2}&{\rm if}\ p<2\\
=N^{1/2}&{\rm if}\ p=2\\
\geq N^{2/p-1/2}&{\rm if}\ p>2
\end{cases}$$
As for the case $p=\infty$, this follows with $p\to\infty$, or directly via Cauchy-Schwarz.
As explained in [@bs1], the most adapted exponent for the circulant case is $p=4$, due to a number of simplifications which appear in the Fourier manipulations. So, before discussing this, let us record the $p=4$ particular case of Theorem 6.18:
Given a matrix $U\in U_N$ we have $$||U||_4\geq 1$$ with equality precisely when $H=\sqrt{N}U$ is Hadamard.
This follows from Theorem 6.18, or directly from Cauchy-Schwarz, as follows: $$||U||_4^4=\sum_{ij}|U_{ij}|^4\geq\frac{1}{N^2}\left(\sum_{ij}|U_{ij}|^2\right)^2=1$$
Thus we have $||U||_4\geq 1$, with equality if and only if $H=\sqrt{N}U$ is Hadamard.
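As an illustration, at $N=2$, for the rescaled Fourier matrix $U=F_2/\sqrt{2}$ all the entries have absolute value $1/\sqrt{2}$, while the identity matrix $I_2\in U_2$ has two zero entries, and so: $$\left|\left|\frac{F_2}{\sqrt{2}}\right|\right|_4^4=4\cdot\frac{1}{4}=1\quad,\quad ||I_2||_4^4=2$$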
In the circulant case now, and in Fourier formulation, the estimate is as follows:
Given a vector $q\in\mathbb T^N$, written $q=(q_0,\ldots,q_{N-1})$ consider the following quantity, with all the indices being taken modulo $N$: $$\Phi=\sum_{i+k=j+l}\frac{q_iq_k}{q_jq_l}$$ Then $\Phi$ is real, and we have $\Phi\geq N^2$, with equality if and only if $\sqrt{N}q$ is the eigenvalue vector of a circulant Hadamard matrix $H\in M_N(\mathbb C)$.
By conjugating the formula of $\Phi$ we see that this quantity is indeed real. In fact, $\Phi$ appears by definition as a sum of $N^3$ terms, consisting of $N(2N-1)$ values of $1$ and of $N(N-1)^2$ other complex numbers of modulus 1, coming in pairs $(a,\bar{a})$.
Regarding now the second assertion, by using the various identifications in Theorem 6.13 and Proposition 6.14, and the formula $\xi=Fq/\sqrt{N}$ there, we have: $$\begin{aligned}
||U||_4^4
&=&N\sum_s|\xi_s|^4\\
&=&\frac{1}{N^3}\sum_s|\sum_iw^{si}q_i|^4\\
&=&\frac{1}{N^3}\sum_s\sum_iw^{si}q_i\sum_jw^{-sj}\bar{q}_j\sum_kw^{sk}q_k\sum_lw^{-sl}\bar{q}_l\\
&=&\frac{1}{N^3}\sum_s\sum_{ijkl}w^{(i-j+k-l)s}\frac{q_iq_k}{q_jq_l}\\
&=&\frac{1}{N^2}\sum_{i+k=j+l}\frac{q_iq_k}{q_jq_l}\end{aligned}$$
Thus Proposition 6.19 gives the following estimate: $$\Phi=N^2||U||_4^4\geq N^2$$
Moreover, we have equality precisely in the Hadamard matrix case, as claimed.
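As an illustration, at $N=2$, with $r=q_0/q_1$ the sum defining $\Phi$ consists of $6$ values of $1$, plus the two terms $r^2$ and $\bar{r}^2$, so that: $$\Phi=6+2Re(r^2)\geq4=N^2$$ The equality case appears precisely when $q_0=\pm iq_1$, that is, precisely when $\sqrt{2}q$ is the eigenvalue vector of a $2\times2$ circulant complex Hadamard matrix, in agreement with the statement.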
We have the following more direct explanation of the above result:
With the above notations, we have the formula $$\Phi=N^2+\sum_{i<j}(|\nu_i|^2-|\nu_j|^2)^2$$ where $\nu=(\nu_0,\ldots,\nu_{N-1})$ is the vector given by $\nu=Fq$.
This follows by replacing in the above proof the Cauchy-Schwarz estimate by the corresponding sum of squares. More precisely, we know from the above proof that: $$\Phi=N^3\sum_i|\xi_i|^4$$
On the other hand $U_{ij}=\xi_{j-i}$ being unitary, we have $\sum_i|\xi_i|^2=1$, and so: $$\begin{aligned}
1
&=&\sum_i|\xi_i|^4+\sum_{i\neq j}|\xi_i|^2\cdot|\xi_j|^2\\
&=&N\sum_i|\xi_i|^4-\left((N-1)\sum_i|\xi_i|^4-\sum_{i\neq j}|\xi_i|^2\cdot|\xi_j|^2\right)\\
&=&\frac{1}{N^2}\Phi-\sum_{i<j}(|\xi_i|^2-|\xi_j|^2)^2\end{aligned}$$
Now by multiplying by $N^2$, this gives the formula in the statement.
All this is quite interesting. As an application, in the real Hadamard matrix case, we have the following analytic reformulation of the CHC, from [@bs1]:
For $q\in\mathbb T^N$ satisfying $\bar{q}_i=q_{-i}$, the following quantity is real, $$\Phi=\sum_{i+j+k+l=0}q_iq_jq_kq_l$$ and satisfies $\Phi\geq N^2$. The CHC states that we cannot have equality at $N>4$.
This follows indeed from Theorem 6.20, via the identifications from Theorem 6.15, the parameter space in the real case being $\{q\in\mathbb T^N|\bar{q}_i=q_{-i}\}$.
This is certainly quite nice, and the analytic problem might look quite elementary. However, this is not the case. In fact, we already know from section 1 that the CHC is equivalent to Ryser’s conjecture, which looks elementary as well, and is not.
Following [@bs1], let us further discuss all this. We first have:
Write $\Phi=\Phi_0+\ldots+\Phi_{N-1}$, with each $\Phi_i$ being given by the same formula as $\Phi$, namely $\Phi=\sum_{i+k=j+l}\frac{q_iq_k}{q_jq_l}$, but keeping the index $i$ fixed. Then:
1. The critical points of $\Phi$ are those where $\Phi_i\in\mathbb R$, for any $i$.
2. In the Hadamard case we have $\Phi_i=N$, for any $i$.
This follows by doing some elementary computations, as follows:
\(1) The first observation is that the non-constant terms in the definition of $\Phi$ involving the variable $q_i$ are the terms of the sum $K_i+\bar{K}_i$, where: $$K_i=\sum_{2i=j+l}\frac{q_i^2}{q_jq_l}+2\sum_{k\neq i}\sum_{i+k=j+l}\frac{q_iq_k}{q_jq_l}$$
Thus if we fix $i$ and we write $q_i=e^{i\alpha_i}$, we obtain: $$\begin{aligned}
\frac{\partial\Phi}{\partial\alpha_i}
&=&4Re\left(\sum_k\sum_{i+k=j+l}i\cdot\frac{q_iq_k}{q_jq_l}\right)\\
&=&-4Im\left(\sum_{i+k=j+l}\frac{q_iq_k}{q_jq_l}\right)\\
&=&-4Im(\Phi_i)\end{aligned}$$
Now since the derivative must vanish for any $i$, this gives the result.
\(2) We first perform the end of the Fourier computation in the proof of Theorem 6.20 above backwards, by keeping the index $i$ fixed. We obtain: $$\begin{aligned}
\Phi_i
&=&\sum_{i+k=j+l}\frac{q_iq_k}{q_jq_l}\\
&=&\frac{1}{N}\sum_s\sum_{jkl}w^{(i-j+k-l)s}\frac{q_iq_k}{q_jq_l}\\
&=&\frac{1}{N}\sum_sw^{si}q_i\sum_jw^{-sj}\bar{q}_j\sum_kw^{sk}q_k\sum_lw^{-sl}\bar{q}_l\\
&=&N^2\sum_sw^{si}q_i\bar{\xi}_s\xi_s\bar{\xi}_s\end{aligned}$$
Here we have used the formula $\xi=Fq/\sqrt{N}$. Now by assuming that we are in the Hadamard case, we have $|\xi_s|=1/\sqrt{N}$ for any $s$, and so we obtain: $$\Phi_i=N\sum_s w^{si}q_i\bar{\xi}_s=N\sqrt{N}q_i\overline{(F^*\xi)}_i=Nq_i\bar{q}_i=N$$
Thus, we have obtained the conclusion in the statement.
Let us discuss now a probabilistic approach to all this. Given a compact manifold $X$ endowed with a probability measure, and a continuous function $\Theta:X\to[0,\infty)$, the maximum of this function can be recaptured via the following well-known formula: $$\max\Theta=\lim_{p\to\infty}\left(\int_X\Theta(x)^p\,dx\right)^{1/p}$$
In our case, we are rather interested in computing a minimum, and the result is:
We have the formula $$\min\Phi=N^3-\lim_{p\to\infty}\left(\int_{\mathbb T^N}(N^3-\Phi)^p\,dq\right)^{1/p}$$ where the torus $\mathbb T^N$ is endowed with its usual probability measure.
This follows from the above formula, with $\Theta=N^3-\Phi$. Observe that $\Theta$ is indeed positive, because $\Phi$ is by definition a sum of $N^3$ complex numbers of modulus 1.
Let us restrict now the attention to the problem of computing the moments of $\Phi$, which is more or less the same as computing those of $N^3-\Phi$. We have here:
The moments of $\Phi$ are given by $$\int_{\mathbb T^N}\Phi^p\,dq=\#\left\{ \begin{pmatrix}i_1k_1\ldots i_pk_p\\ j_1l_1\ldots j_pl_p\end{pmatrix}\Big|i_s+k_s=j_s+l_s,[i_1k_1\ldots i_pk_p]=[j_1l_1\ldots j_pl_p]\right\}$$ where the sets between brackets are by definition sets with repetition.
This is indeed clear from the formula of $\Phi$. See [@bs2].
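As an illustration, at $p=1$ the formula counts the pairs $((i,k),(j,l))$ with $i+k=j+l$ and $[ik]=[jl]$, the solutions being $(j,l)=(i,k)$ and $(j,l)=(k,i)$, and so: $$\int_{\mathbb T^N}\Phi\,dq=2N^2-N=N(2N-1)$$ This is in agreement with the proof of Theorem 6.20, where $\Phi$ was decomposed into $N(2N-1)$ values of $1$, plus terms which integrate to $0$.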
Regarding now the real case, an analogue of Proposition 6.25 holds, but the combinatorics does not get any simpler. One idea for dealing with this problem is to consider the “enveloping sum”, obtained from $\Phi$ by dropping the condition $i+k=j+l$: $$\tilde{\Phi}=\sum_{ijkl}\frac{q_iq_k}{q_jq_l}$$
The point is that the moments of $\Phi$ appear as “sub-quantities” of the moments of $\tilde{\Phi}$, so perhaps the question to start with is to understand very well the moments of $\tilde{\Phi}$.
And this latter problem sounds like a quite familiar one, because $\tilde{\Phi}=|\sum_iq_i|^4$.
We will be back to this a bit later. For the moment, let us do some combinatorics:
We have the moment formula $$\int_{\mathbb T^N}\tilde{\Phi}^p\,dq=\sum_{\pi\in P(2p)}\binom{2p}{\pi}\frac{N!}{(N-|\pi|)!}$$ where $\binom{2p}{\pi}=\binom{2p}{b_1,\ldots,b_{|\pi|}}$, with $b_1,\ldots,b_{|\pi|}$ being the lengths of the blocks of $\pi$.
Indeed, by using the same method as for $\Phi$, we obtain: $$\int_{\mathbb T^N}\tilde{\Phi}(q)^p\,dq=\#\left\{ \begin{pmatrix}i_1k_1\ldots i_pk_p\\ j_1l_1\ldots j_pl_p\end{pmatrix}\Big|[i_1k_1\ldots i_pk_p]=[j_1l_1\ldots j_pl_p]\right\}$$
The sets with repetitions on the right are best counted by introducing the corresponding partitions $\pi=\ker\begin{pmatrix}i_1k_1\ldots i_pk_p\end{pmatrix}$, and this gives the formula in the statement.
In order to discuss now the real case, we have to slightly generalize the above result, by computing all the half-moments of $\widetilde{\Phi}$. The result here is best formulated as:
We have the moment formula $$\int_{\mathbb T^N}|\sum_iq_i|^{2p}\,dq=\sum_kC_{pk}\frac{N!}{(N-k)!}$$ where $C_{pk}=\sum_{\pi\in P(p),|\pi|=k}\binom{p}{b_1,\ldots,b_{|\pi|}}$, with $b_1,\ldots,b_{|\pi|}$ being the lengths of the blocks of $\pi$.
This follows indeed exactly as Proposition 6.26 above, by replacing the exponent $p$ by the exponent $p/2$, and by splitting the resulting sum as in the statement.
Observe that the above formula basically gives the moments of $\tilde{\Phi}$, in the real case. Indeed, let us restrict attention to the case $N=2m$. Then, for the purposes of our minimization problem we can assume that our vector is of the following form: $$q=(1,q_1,\ldots,q_{m-1},1,\bar{q}_{m-1},\ldots,\bar{q}_1)$$
So, we are led to the following conclusion, relating the real and complex cases:
Consider the variable $X=q_1+\ldots+q_{m-1}$ over the torus $\mathbb T^{m-1}$.
1. For the complex problem at $N=m-1$, we have $\widetilde{\Phi}=|X|^4$
2. For the real problem at $N=2m$, we have $\widetilde{\Phi}=|2+X+\bar{X}|^4$.
This is indeed clear from the definition of the enveloping sum $\widetilde{\Phi}$.
Finally, here is a random walk formulation of the problem:
The moments of $\Phi$ have the following interpretation:
1. First, the moments of the enveloping sum $\int\widetilde{\Phi}^p$ count the loops of length $4p$ on the standard lattice $\mathbb Z^N\subset\mathbb R^N$, based at the origin.
2. $\int\Phi^p$ counts those loops which are “piecewise balanced”, in the sense that each of the $p$ consecutive $4$-paths forming the loop satisfy $i+k=j+l$ modulo $N$.
The first assertion follows from the formula in the proof of Proposition 6.26, and the second assertion follows from the formula in Proposition 6.25.
This statement looks quite encouraging, but passing from (1) to (2) is quite a delicate task, because in order to interpret the condition $i+k=j+l$ we have to label the coordinate axes of $\mathbb R^N$ by elements of the cyclic group $\mathbb Z_N$, and this is a quite unfamiliar operation. In addition, in the real case the combinatorics becomes more complex due to the symmetries of the parameter space, and no further results are available so far.
Bistochastic matrices
=====================
In this section and the next two we discuss certain analytic aspects of the complex Hadamard matrices, based on the various inequalities obtained in section 1. We will extend these inequalities to the complex case, and discuss them in detail.
As a first, fundamental result, we have:
The complex Hadamard matrices, which form the manifold $$X_N=M_N(\mathbb T)\cap\sqrt{N}U_N$$ can be analytically detected in two ways, as follows:
1. Given $H\in M_N(\mathbb T)$ we have $|\det(H)|\leq N^{N/2}$, with equality precisely when $H\in\sqrt{N}U_N$.
2. Given $H\in\sqrt{N}U_N$ we have $||H||_1\leq N^2$, with equality precisely when $H\in M_N(\mathbb T)$.
This is something that we already know in the real case, from Theorem 1.17 and Theorem 1.18 above, and the proof in the general case is similar:
\(1) This follows indeed as in the real case, because if we denote by $H_1,\ldots,H_N\in\mathbb T^N$ the rows of $H$, then we have, according to the definition of the determinant: $$\begin{aligned}
|\det(H)|
&=&vol<H_1,\ldots,H_N>\\
&\leq&||H_1||\times\ldots\times||H_N||\\
&=&(\sqrt{N})^N\end{aligned}$$
The equality holds when $H_1,\ldots,H_N$ are pairwise orthogonal, as claimed.
\(2) This is something that we discussed in much detail in Proposition 6.17, Theorem 6.18 and Proposition 6.19, and which follows from Cauchy-Schwarz: $$||H||_1=\sum_{ij}|H_{ij}|\leq N\left(\sum_{ij}|H_{ij}|^2\right)^{1/2}=N^2$$
The equality case holds when $|H_{ij}|=1$ for any $i,j$, as claimed.
We will further discuss all this in section 9 below, with a few comments on (1), and with a detailed study of the condition (2), which is something quite fruitful.
Regarding now the third and last basic inequality from the real case, namely the excess estimate from Theorem 1.19, this is something of a different nature, that we will discuss in this section, and in the next one. Let us begin with the following definition:
A complex Hadamard matrix $H\in M_N(\mathbb C)$ is called bistochastic when the sums on all rows and all columns are equal. We denote by $$X_N^{bis}=\left\{H\in X_N\Big|\,H={\rm bistochastic}\right\}$$ the real algebraic manifold formed by such matrices.
The bistochastic Hadamard matrices are quite interesting objects, and include for instance all the circulant Hadamard matrices, that we discussed in section 6. Indeed, assuming that $H_{ij}=\xi_{j-i}$ is circulant, all rows and columns sum up to: $$\lambda=\sum_i\xi_i$$
So, let us first review the material in section 6, from this perspective. As a first and trivial remark, the Fourier matrix $F_2$ looks better in bistochastic form: $$F_2=\begin{pmatrix}1&1\\1&-1\end{pmatrix}\sim
\begin{pmatrix}i&1\\1&i\end{pmatrix}=F_2'$$
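Observe that we have indeed $F_2\sim F_2'$, because $F_2'$ is obtained from $F_2$ by multiplying the first row by $i$, and then the second column by $-i$. Observe also that $F_2'$ is bistochastic, all its row and column sums being equal to $1+i$, with $|1+i|^2=2=N$.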
This is something quite interesting, philosophically speaking. Indeed, we have here a new idea, namely that of studying the Hadamard matrices $H\in M_N(\pm1)$ by putting them in complex bistochastic form, $H'\in M_N(\mathbb T)$, and then studying these latter matrices.
We will see later on that, while certainly being viable, this idea is quite difficult to develop in practice, and is in need of some considerable preliminary work.
Let us keep now reviewing the material in section 6. According to the results there, and to the above-mentioned fact that circulant implies bistochastic, we have:
The class of bistochastic Hadamard matrices is stable under permuting rows and columns, and under taking tensor products. As examples, we have:
1. The circulant and symmetric forms $F_N'$ of the Fourier matrices $F_N$.
2. The bistochastic and symmetric forms $F_G'$ of the Fourier matrices $F_G$.
3. The circulant and symmetric Backelin matrices, having size $MN$ with $M|N$.
In this statement the claim regarding permutations of rows and columns is clear. Assuming now that $H,K$ are bistochastic, with sums $\lambda,\mu$, we have: $$\sum_{ia}(H\otimes K)_{ia,jb}=\sum_{ia}H_{ij}K_{ab}=\sum_iH_{ij}\sum_aK_{ab}=\lambda\mu$$ $$\sum_{jb}(H\otimes K)_{ia,jb}=\sum_{jb}H_{ij}K_{ab}=\sum_jH_{ij}\sum_bK_{ab}=\lambda\mu$$
Thus, the matrix $H\otimes K$ is bistochastic as well. As for the assertions (1,2,3), we already know all this, from Theorem 6.6, Theorem 6.8 and Theorem 6.9 above.
In the above list of examples, (2) is the key entry. Indeed, while many interesting matrices, such as the usual Fourier ones $F_N$, can be put in circulant form, this is something quite exceptional, which does not work any longer when looking at the general Fourier matrices $F_G$. To be more precise, when using a decomposition of type $G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}$, and setting $F_G'=F_{N_1}'\otimes\ldots\otimes F_{N_k}'$, we can only say that $F_G'$ is bistochastic.
As a conclusion, the bistochastic Hadamard matrices are interesting objects, definitely worth some study. So, let us develop now some general theory, for such matrices.
First, we have the following elementary result:
For an Hadamard matrix $H\in M_N(\mathbb C)$, the following are equivalent:
1. $H$ is bistochastic, with sums $\lambda$.
2. $H$ is row-stochastic, with sums $\lambda$, and $|\lambda|^2=N$.
Both the implications are elementary, as follows:
$(1)\implies(2)$ If we denote by $H_1,\ldots,H_N\in\mathbb T^N$ the rows of $H$, we have indeed: $$\begin{aligned}
N
&=&<H_1,H_1>
=\sum_i<H_1,H_i>\\
&=&\sum_i\sum_jH_{1j}\bar{H}_{ij}
=\sum_jH_{1j}\sum_i\bar{H}_{ij}\\
&=&\sum_jH_{1j}\cdot\bar{\lambda}
=\lambda\cdot\bar{\lambda}
=|\lambda|^2\end{aligned}$$
$(2)\implies(1)$ Consider the all-one vector $\xi=(1)_i\in\mathbb C^N$. The fact that $H$ is row-stochastic with sums $\lambda$, respectively column-stochastic with sums $\lambda$, reads: $$\begin{aligned}
\sum_jH_{ij}=\lambda,\forall i\iff\sum_jH_{ij}\xi_j=\lambda\xi_i,\forall i\iff H\xi=\lambda\xi\\
\sum_iH_{ij}=\lambda,\forall j\iff\sum_iH_{ij}\xi_i=\lambda\xi_j,\forall j\iff H^t\xi=\lambda\xi\end{aligned}$$
We must prove that the first condition implies the second one, provided that the row sum $\lambda$ satisfies $|\lambda|^2=N$. But this follows from the following computation: $$\begin{aligned}
H\xi=\lambda\xi
&\implies&H^*H\xi=\lambda H^*\xi\\
&\implies&N\xi=\lambda H^*\xi\\
&\implies&N\xi=\bar{\lambda}H^t\xi\\
&\implies&H^t\xi=\lambda\xi\end{aligned}$$
Here we have used $H^*H=N1_N$, then the fact that $\xi$ and $N$ are real, and finally the assumption $|\lambda|^2=N$. Thus, we have proved both the implications, and we are done.
Here is another basic result, that we will need as well in what follows:
For a complex Hadamard matrix $H\in M_N(\mathbb C)$, and a number $\lambda\in\mathbb C$ satisfying $|\lambda|^2=N$, the following are equivalent:
1. We have $H\sim H'$, with $H'$ being bistochastic, with sums $\lambda$.
2. $K_{ij}=a_ib_jH_{ij}$ is bistochastic with sums $\lambda$, for some $a,b\in\mathbb T^N$.
3. The equation $Hb=\lambda\bar{a}$ has solutions $a,b\in\mathbb T^N$.
Once again, this is an elementary result, the proof being as follows:
$(1)\iff(2)$ Since the permutations of the rows and columns preserve the bistochasticity condition, the equivalence $H\sim H'$ that we are looking for can be assumed to come only from multiplying the rows and columns by numbers in $\mathbb T$. Thus, we are looking for scalars $a_i,b_j\in\mathbb T$ such that $K_{ij}=a_ib_jH_{ij}$ is bistochastic with sums $\lambda$, as claimed.
$(2)\iff(3)$ The row sums of the matrix $K_{ij}=a_ib_jH_{ij}$ are given by: $$\sum_jK_{ij}=\sum_ja_ib_jH_{ij}=a_i(Hb)_i$$
Thus $K$ is row-stochastic with sums $\lambda$ precisely when $Hb=\lambda\bar{a}$, and by using the equivalence in Proposition 7.4, we obtain the result.
Finally, here is an extension of the excess inequality from Theorem 1.19 above:
For a complex Hadamard matrix $H\in M_N(\mathbb C)$, the excess, $$E(H)=\sum_{ij}H_{ij}$$ satisfies $|E(H)|\leq N\sqrt{N}$, with equality if and only if $H$ is bistochastic.
In terms of the all-one vector $\xi=(1)_i\in\mathbb C^N$, we have: $$E(H)=\sum_{ij}H_{ij}=\sum_{ij}H_{ij}\xi_j\bar{\xi}_i=\sum_i(H\xi)_i\bar{\xi}_i=<H\xi,\xi>$$
Now by using the Cauchy-Schwarz inequality, along with the fact that $U=H/\sqrt{N}$ is unitary, and hence of norm 1, we obtain, as claimed: $$|E(H)|\leq||H\xi||\cdot||\xi||\leq||H||\cdot||\xi||^2=N\sqrt{N}$$
Regarding now the equality case, this requires the vectors $H\xi,\xi$ to be proportional, and so our matrix $H$ to be row-stochastic. Moreover, if we assume $H\xi=\lambda\xi$, the above computation gives $|\lambda|^2=N$, and by Proposition 7.4, we obtain the result.
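As an illustration, for the bistochastic form $F_2'$ of the Fourier matrix $F_2$, discussed in the above, the excess is $E(F_2')=2+2i$, and so $|E(F_2')|=2\sqrt{2}=N\sqrt{N}$, as it should be.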
The above estimate is potentially quite useful, because it allows us to analytically locate the bistochastic Hadamard manifold $X_N^{bis}$ inside the whole Hadamard manifold $X_N$, a bit in the spirit of the two analytic methods in Theorem 7.1. We will be back to this later, with a number of probabilistic results on the subject.
Let us go back now to the fundamental question, which already appeared several times in the above, of putting an arbitrary Hadamard matrix in bistochastic form. As already explained, we are interested in solving this question in general, and in particular in the real case, with potential complex reformulations of the HC and CHC at stake.
What we know so far on this subject can be summarized as follows:
An Hadamard matrix $H\in M_N(\mathbb C)$ can be put in bistochastic form when one of the following conditions is satisfied:
1. The equations $|(Ha)_i|=\sqrt{N}$, with $i=1,\ldots,N$, have solutions $a\in\mathbb T^N$.
2. The quantity $|E|$ attains its maximum $N\sqrt{N}$ over the equivalence class of $H$.
This follows indeed from Proposition 7.4 and Proposition 7.5, and from Theorem 7.6 above.
Thus, we have two approaches to the problem, one algebraic, and one analytic.
Let us first discuss the algebraic approach, coming from (1) above. What we have there is a certain system of $N$ equations, having as unknowns $N$ real variables, namely the phases of $a_1,\ldots,a_N$. This system is highly non-linear, but can be solved, however, via a certain non-explicit method, as explained by Idel and Wolf in [@iwo].
In order to discuss this material, which is quite advanced, let us begin with some preliminaries. The complex projective space appears by definition as follows: $$P^{N-1}_\mathbb C=(\mathbb C^N-\{0\})\big/<x=\lambda y>$$
Inside this projective space, we have the Clifford torus, constructed as follows: $$\mathbb T^{N-1}=\left\{(z_1,\ldots,z_N)\in P^{N-1}_\mathbb C\Big||z_1|=\ldots=|z_N|\right\}$$
With these conventions, we have the following result, from [@iwo]:
For a unitary matrix $U\in U_N$, the following are equivalent:
1. There exist $L,R\in U_N$ diagonal such that $U'=LUR$ is bistochastic.
2. The standard torus $\mathbb T^N\subset\mathbb C^N$ satisfies $\mathbb T^N\cap U\mathbb T^N\neq\emptyset$.
3. The Clifford torus $\mathbb T^{N-1}\subset P^{N-1}_\mathbb C$ satisfies $\mathbb T^{N-1}\cap U\mathbb T^{N-1}\neq\emptyset$.
These equivalences are all elementary, as follows:
$(1)\implies(2)$ Assuming that $U'=LUR$ is bistochastic, which in terms of the all-1 vector $\xi$ means $U'\xi=\xi$, if we set $f=R\xi\in\mathbb T^N$ we have: $$Uf=\bar{L}U'\bar{R}f=\bar{L}U'\xi=\bar{L}\xi\in\mathbb T^N$$
Thus we have $Uf\in\mathbb T^N\cap U\mathbb T^N$, which gives the conclusion.
$(2)\implies(1)$ Given an element in $\mathbb T^N\cap U\mathbb T^N$, let us write it as $Ug$, with $g\in\mathbb T^N$, so that both $g$ and $Ug$ have entries in $\mathbb T$. We can then define $R,L$ as follows: $$R=diag(g_1,\ldots,g_N)\quad,\quad\bar{L}=diag((Ug)_1,\ldots,(Ug)_N)$$
We have then $R\xi=g$ and $\bar{L}\xi=Ug$, and so $U'=LUR$ is bistochastic, because: $$U'\xi=LUR\xi=LUg=\xi$$
$(2)\implies(3)$ This is clear, because $\mathbb T^{N-1}\subset P^{N-1}_\mathbb C$ appears as the projective image of $\mathbb T^N\subset\mathbb C^N$, and so $\mathbb T^{N-1}\cap U\mathbb T^{N-1}$ appears as the projective image of $\mathbb T^N\cap U\mathbb T^N$.
$(3)\implies(2)$ We have indeed the following equivalence: $$\mathbb T^{N-1}\cap U\mathbb T^{N-1}\neq\emptyset
\iff\exists\lambda\neq 0,\lambda\mathbb T^N\cap U\mathbb T^N\neq\emptyset$$
But $U\in U_N$ implies $|\lambda|=1$, and this gives the result.
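As a simple illustration, at $N=2$, for the rescaled Fourier matrix $U=F_2/\sqrt{2}$ we can take $g=(1,i)$, because $Ug=((1+i)/\sqrt{2},(1-i)/\sqrt{2})$ has entries in $\mathbb T$. With $R,L$ constructed as in the above proof, we obtain: $$U'=LUR=\frac{1}{2}\begin{pmatrix}1-i&1+i\\1+i&1-i\end{pmatrix}$$ This matrix is indeed bistochastic, with all row and column sums equal to $1$, and $\sqrt{2}U'$ is a bistochastic complex Hadamard matrix, equivalent to the matrix $F_2'$ from the beginning of this section.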
The point now is that the condition (3) above is something familiar in symplectic geometry, and known to hold for any $U\in U_N$. Thus, following [@iwo], we have:
Any unitary matrix $U\in U_N$ can be put in bistochastic form, $$U'=LUR$$ with $L,R\in U_N$ being both diagonal, via a certain non-explicit method.
As already mentioned, the condition $\mathbb T^{N-1}\cap U\mathbb T^{N-1}\neq\emptyset$ in Proposition 7.8 (3) is something quite natural in symplectic geometry. To be more precise, $\mathbb T^{N-1}\subset P^{N-1}_\mathbb C$ is a Lagrangian submanifold, $\mathbb T^{N-1}\to U\mathbb T^{N-1}$ is a Hamiltonian isotopy, and a result from [@bep], [@cho] states that $\mathbb T^{N-1}$ cannot be displaced from itself via a Hamiltonian isotopy.
Thus, the results in [@bep], [@cho] tell us that $\mathbb T^{N-1}\cap U\mathbb T^{N-1}\neq\emptyset$ holds indeed, for any $U\in U_N$. We therefore obtain the result, via Proposition 7.8. See [@iwo].
In relation now with our Hadamard matrix questions, we have:
Any complex Hadamard matrix can be put in bistochastic form, up to the standard equivalence relations for such matrices.
This follows indeed from Theorem 7.9, because if $H=\sqrt{N}U$ is Hadamard then so is $H'=\sqrt{N}U'$, and with the remark that, in what regards the equivalence relation, we just need the multiplication of the rows and columns by scalars in $\mathbb T$.
All this is extremely interesting, but unfortunately, not explicit. As explained in [@iwo], the various technical results from [@bep], [@cho] show that in the generic, “transverse” situation, there are at least $2^{N-1}$ ways of putting a unitary matrix $U\in U_N$ in bistochastic form, and this modulo the obvious transformation $U\to zU$, with $|z|=1$.
Summarizing, the question of explicitly putting the Hadamard matrices $H\in M_N(\mathbb C)$ in bistochastic form remains open, and open as well is the question of finding a simpler proof of the fact that this can indeed be done, without using [@bep], [@cho].
Regarding this latter question, a possible approach comes from the excess result from Theorem 7.6 above. Indeed, in view of the remark there, it is enough to show that the law of $|E|$ over the equivalence class of $H$ has $N\sqrt{N}$ as upper support bound.
In order to comment on this, let us first formulate:
The glow of $H\in M_N(\mathbb C)$ is the measure $\mu\in\mathcal P(\mathbb C)$ given by: $$\int_\mathbb C\varphi(x)d\mu(x)=\int_{\mathbb T^N\times\mathbb T^N}\varphi\left(\sum_{ij}a_ib_jH_{ij}\right)d(a,b)$$ That is, the glow is the law of $E=\sum_{ij}H_{ij}$, over the equivalence class of $H$.
In this definition $H$ can be any complex matrix, but the equivalence relation is the one for the complex Hadamard matrices. To be more precise, let us call two matrices $H,K\in M_N(\mathbb C)$ equivalent if one can pass from one to the other by permuting rows and columns, or by multiplying the rows and columns by numbers in $\mathbb T$. Now since permuting rows and columns does not change the quantity $E=\sum_{ij}H_{ij}$, we can restrict attention from the full equivalence group $G=(S_N\rtimes\mathbb T^N)\times(S_N\rtimes\mathbb T^N)$ to the smaller group $G'=\mathbb T^N\times\mathbb T^N$, and we obtain the measure $\mu$ in Definition 7.11.
As in the real case, the terminology comes from a picture of the following type, with the stars $*$ representing the entries of our matrix, and with the switches being supposed now to be continuous, randomly changing the phases of the concerned entries: $$\begin{matrix}
\to&&*&*&*&*\\
\to&&*&*&*&*\\
\to&&*&*&*&*\\
\to&&*&*&*&*\\
\\
&&\uparrow&\uparrow&\uparrow&\uparrow
\end{matrix}$$
In short, what we have here is a complex generalization of the Gale-Berlekamp game [@fsl], [@rvi], and this is where the main motivation for studying the glow comes from.
We are in fact interested in computing a real measure, because we have:
The laws $\mu,\mu^+$ of $E,|E|$ over the torus $\mathbb T^N\times\mathbb T^N$ are related by $$\mu=\varepsilon\times\mu^+$$ where $\times$ is the multiplicative convolution, and $\varepsilon$ is the uniform measure on $\mathbb T$.
We have $E(\lambda H)=\lambda E(H)$ for any $\lambda\in\mathbb T$, and so $\mu=law(E)$ is invariant under the action of $\mathbb T$. Thus $\mu$ must decompose as $\mu=\varepsilon\times\mu^+$, where $\mu^+$ is a certain probability measure on $[0,\infty)$, and this measure $\mu^+$ is the measure in the statement.
In particular, we can see from the above result that the glow is invariant under rotations. With this observation made, we can formulate the following result:
The glow of any Hadamard matrix $H\in M_N(\mathbb C)$, or more generally of any $H\in\sqrt{N}U_N$, satisfies the following conditions, where $\mathbb D$ is the unit disk, $$N\sqrt{N}\,\mathbb T\subset supp(\mu)\subset N\sqrt{N}\,\mathbb D$$ with the inclusion on the right coming from Cauchy-Schwarz, and with the inclusion on the left corresponding to the fact that $H$ can be put in bistochastic form.
In this statement the inclusion on the right comes indeed from Cauchy-Schwarz, as explained in the proof of Theorem 7.6 above, with the remark that the computation there only uses the fact that the rescaled matrix $U=H/\sqrt{N}$ is unitary.
Regarding now the inclusion on the left, we know from Theorem 7.9 that $H$ can be put in bistochastic form. According to Proposition 7.7, this tells us that we have: $$N\sqrt{N}\,\mathbb T\cap supp(\mu)\neq\emptyset$$
Now by using the rotational invariance of the glow, and hence of its support, coming from Proposition 7.12, we obtain from this $N\sqrt{N}\,\mathbb T\subset supp(\mu)$, as claimed.
The challenging question is that of obtaining a proof of the above result by using probabilistic techniques. Indeed, as explained at the end of section 6 above, the support of a measure can be recaptured from the moments, simply by computing a limit. Thus, knowing the moments of the glow well enough would solve the problem.
Regarding the moments of the glow, the formula is as follows:
For $H\in M_N(\mathbb T)$ the even moments of $|E|$ are given by $$\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\sum_{[i]=[k],[j]=[l]}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}$$ where the sets between brackets are by definition sets with repetition.
We have indeed the following computation: $$\begin{aligned}
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
&=&\int_{\mathbb T^N\times\mathbb T^N}\Big|\sum_{ij}H_{ij}a_ib_j\Big|^{2p}\\
&=&\int_{\mathbb T^N\times\mathbb T^N}\left(\sum_{ijkl}\frac{H_{ij}}{H_{kl}}\cdot\frac{a_ib_j}{a_kb_l}\right)^p\\
&=&\sum_{ijkl}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\int_{\mathbb T^N}\frac{a_{i_1}\ldots a_{i_p}}{a_{k_1}\ldots a_{k_p}}\int_{\mathbb T^N}\frac{b_{j_1}\ldots b_{j_p}}{b_{l_1}\ldots b_{l_p}}\end{aligned}$$
Now since the integrals at right equal respectively the Kronecker symbols $\delta_{[i],[k]}$ and $\delta_{[j],[l]}$, we are led to the formula in the statement.
With this formula in hand, the main result, regarding the fact that the complex Hadamard matrices can be put in bistochastic form, reformulates as follows:
For a complex Hadamard matrix $H\in M_N(\mathbb T)$ we have $$\lim_{p\to\infty}\left(\sum_{[i]=[k],[j]=[l]}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\right)^{1/p}=N^3$$ coming from the fact that $H$ can be put in bistochastic form.
This follows from the well-known fact that the maximum of a continuous function $\Theta:X\to[0,\infty)$ can be recaptured via the following formula: $$\max(\Theta)=\lim_{p\to\infty}\left(\int_X\Theta(x)^p\,dx\right)^{1/p}$$
With $X=\mathbb T^N\times\mathbb T^N$ and with $\Theta=|E|^2$, we conclude that the limit in the statement is the square of the upper bound of the glow. But, according to Theorem 7.13, this upper bound is known to be $\leq N^3$ by Cauchy-Schwarz, and the equality holds by [@iwo].
To conclude now, the challenging question is that of finding a direct proof for Theorem 7.15. All this would provide an alternative approach to the results in [@iwo], which would be of course still not explicit, but which would use at least some more familiar tools.
We will discuss such questions in section 9 below, with the remark however that, the problems at fixed $N\in\mathbb N$ being quite difficult, we will only perform an $N\to\infty$ study.
Getting away now from these difficult questions, we have nothing concrete so far, besides the list of examples from Theorem 7.3, coming from the circulant matrix considerations in section 6. So, our purpose will be that of extending that list.
A first natural question is that of looking at the Butson matrix case. We have here the following result, extending the finding from Proposition 7.4 above:
Assuming that $H_N(l)$ contains a bistochastic matrix, the equations $$\begin{aligned}
a_0+a_1+\ldots+a_{l-1}&=&N\\
|a_0+a_1w+\ldots+a_{l-1}w^{l-1}|^2&=&N\end{aligned}$$ must have solutions, over the positive integers.
This is a reformulation of the equality $|\lambda|^2=N$, from Proposition 7.4 above. Indeed, if we set $w=e^{2\pi i/l}$, and we denote by $a_i\in\mathbb N$ the number of $w^i$ entries appearing in the first row of our matrix, then the row sum of the matrix is given by: $$\lambda=a_0+a_1w+\ldots+a_{l-1}w^{l-1}$$
Thus, we obtain the system of equations in the statement.
The point now is that, in practice, we are led precisely to the Turyn obstructions from section 6 above. At very small values of $l$, the obstructions are as follows:
Assuming that $H_N(l)$ contains a bistochastic matrix, the following equations must have solutions, over the integers:
1. $l=2$: $4n^2=N$.
2. $l=3$: $x^2+y^2+z^2=2N$, with $x+y+z=0$.
3. $l=4$: $a^2+b^2=N$.
This follows indeed from the results that we have:
\(1) This is something well-known, which follows from Proposition 7.16.
\(2) This is best viewed by using Proposition 7.16, and the following formula, that we already know, from section 3 above: $$|a+bw+cw^2|^2=\frac{1}{2}[(a-b)^2+(b-c)^2+(c-a)^2]$$
At the level of the concrete obstructions, we must have for instance $5\!\!\not|N$. Indeed, this follows as in the proof of the de Launey obstruction for $H_N(3)$ with $5|N$.
\(3) This follows again from Proposition 7.16, and from $|a+ib|^2=a^2+b^2$.
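As an illustration, at $N=8$ the $l=2$ condition $4n^2=N$ has no solution, and so $H_8(2)$ contains no bistochastic matrix. Indeed, a matrix $H\in M_8(\pm1)$ having all row sums equal to $\lambda$ would require $\lambda^2=8$, with $\lambda\in\mathbb Z$, which is impossible. At $N=4$, on the other hand, the condition is satisfied, with $n=1$, in agreement with the fact that the circulant Hadamard matrix of order $4$ is bistochastic, with all row and column sums equal to $2$.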
As a conclusion, nothing much interesting is going on in the Butson matrix case, with various arithmetic obstructions, that we partly already met, appearing here. See [@kse].
In order to reach, however, a number of positive results, beyond those in Theorem 7.3, we can investigate various special classes of matrices, such as the Diţă products.
In order to formulate our results, we will use the following notion:
We say that a complex Hadamard matrix $H\in M_N(\mathbb C)$ is in “almost bistochastic form” when all the row sums belong to $\sqrt{N}\cdot\mathbb T$.
Observe that, assuming that this condition holds, the matrix $H$ can be put in bistochastic form, just by multiplying its rows by suitable numbers from $\mathbb T$.
We will be particularly interested here in the special situation where the affine deformations $H^q\in M_N(\mathbb C)$ of a given complex Hadamard matrix $H\in M_N(\mathbb C)$ can be put in almost bistochastic form, independently of the value of the parameter $q$.
For the simplest deformations, namely those of $F_2\otimes F_2$, this is indeed the case:
The deformations of $F_2\otimes F_2$, with parameter matrix $Q=(^p_r{\ }^q_s)$, $$F_2\otimes_QF_2=
\begin{pmatrix}
p&q&p&q\\
p&-q&p&-q\\
r&s&-r&-s\\
r&-s&-r&s
\end{pmatrix}$$ can be put in almost bistochastic form, independently of the value of $Q$.
By multiplying the columns of the matrix in the statement with $1,1,-1,1$ respectively, we obtain the following matrix: $$F_2\otimes''_QF_2=
\begin{pmatrix}
p&q&-p&q\\
p&-q&-p&-q\\
r&s&r&-s\\
r&-s&r&s
\end{pmatrix}$$
The row sums of this matrix being $2q,-2q,2r,2r\in2\mathbb T$, we are done.
We will see later on that the above matrix $F_2\otimes''_QF_2$ is equivalent to a certain matrix $F_2\otimes'_QF_2$, which looks a bit more complicated, but is part of a series $F_N\otimes'_QF_N$.
Now back to the general case, we have the following result:
A deformed tensor product $H\otimes_QK$ can be put in bistochastic form when there exist numbers $x^i_a\in\mathbb T$ such that with $$G_{ib}=\frac{(K^*x^i)_b}{Q_{ib}}$$ we have $|(H^*G)_{ib}|=\sqrt{MN}$, for any $i,b$.
The deformed tensor product $L=H\otimes_QK$ is given by $L_{ia,jb}=Q_{ib}H_{ij}K_{ab}$. By multiplying the columns by scalars $R_{jb}\in\mathbb T$, this matrix becomes: $$L'_{ia,jb}=R_{jb}Q_{ib}H_{ij}K_{ab}$$
The row sums of this matrix are given by: $$\begin{aligned}
S_{ia}'
&=&\sum_{jb}R_{jb}Q_{ib}H_{ij}K_{ab}\\
&=&\sum_bK_{ab}Q_{ib}\sum_jH_{ij}R_{jb}\\
&=&\sum_bK_{ab}Q_{ib}(HR)_{ib}\end{aligned}$$
In terms of the variables $C^i_b=Q_{ib}(HR)_{ib}$, these rows sums are given by: $$S_{ia}'=\sum_bK_{ab}C^i_b=(KC^i)_a$$
Thus $H\otimes_QK$ can be put in bistochastic form when we can find scalars $R_{jb}\in\mathbb T$ and $x^i_a\in\mathbb T$ such that, with $C^i_b=Q_{ib}(HR)_{ib}$, the following condition is satisfied: $$(KC^i)_a=\sqrt{MN}x^i_a\quad,\quad\forall i,a$$
But this condition is equivalent to $KC^i=\sqrt{MN}x^i$ for any $i$, and by multiplying to the left by the adjoint matrix $K^*$, we are led to the following condition: $$\sqrt{N}C^i=\sqrt{M}K^*x^i\quad,\quad\forall i$$
Now by recalling that $C^i_b=Q_{ib}(HR)_{ib}$, this condition is equivalent to: $$\sqrt{N}Q_{ib}(HR)_{ib}=\sqrt{M}(K^*x^i)_b\quad,\quad\forall i,b$$
With $G_{ib}=(K^*x^i)_b/Q_{ib}$ as in the statement, this condition reads: $$\sqrt{N}(HR)_{ib}=\sqrt{M}G_{ib}\quad,\quad\forall i,b$$
But this condition is equivalent to $\sqrt{N}HR=\sqrt{M}G$, and by multiplying to the left by the adjoint matrix $H^*$, we are led to the following condition: $$\sqrt{MN}R=H^*G$$
Thus, we have obtained the condition in the statement.
As an illustration for this result, assume that $H,K$ can be put in bistochastic form, by using vectors $y\in\mathbb T^M,z\in\mathbb T^N$. If we set $x^i_a=y_iz_a$, with $Q=1$ we have: $$G_{ib}=(K^*x^i)_b=[K^*(y_iz)]_b=y_i(K^*z)_b$$
We therefore obtain the following formula: $$\begin{aligned}
(H^*G)_{ib}
&=&\sum_j(H^*)_{ij}G_{jb}\\
&=&\sum_j(H^*)_{ij}y_j(K^*z)_b\\
&=&(H^*y)_i(K^*z)_b\end{aligned}$$
Thus the usual tensor product $H\otimes K$ can be put in bistochastic form as well.
In the case $H=F_M$ the equations simplify, and we have:
A deformed tensor product $F_M\otimes_QK$ can be put in bistochastic form when there exist numbers $x^i_a\in\mathbb T$ such that with $$G_{ib}=\frac{(K^*x^i)_b}{Q_{ib}}$$ we have the following formulae, with $l$ being taken modulo $M$: $$\sum_jG_{jb}\bar{G}_{j+l,b}=MN\delta_{l,0}\quad,\quad\forall l,b$$ Moreover, the $M\times N$ matrix $|G_{jb}|^2$ is row-stochastic with sums $N^2$, and the $l=0$ equations state that this matrix must be column-stochastic, with sums $MN$.
With notations from Theorem 7.20, and with $w=e^{2\pi i/M}$, we have: $$(H^*G)_{ib}=\sum_jw^{-ij}G_{jb}$$
The absolute value of this number can be computed as follows: $$\begin{aligned}
|(H^*G)_{ib}|^2
&=&\sum_{jk}w^{i(k-j)}G_{jb}\bar{G}_{kb}\\
&=&\sum_{jl}w^{il}G_{jb}\bar{G}_{j+l,b}\\
&=&\sum_lw^{il}\sum_jG_{jb}\bar{G}_{j+l,b}\end{aligned}$$
If we denote by $v^b_l$ the sum on the right, we obtain: $$|(H^*G)_{ib}|^2=\sum_lw^{il}v^b_l=(F_Mv^b)_i$$
Now if we denote by $\xi$ the all-one vector in $\mathbb C^M$, the condition $|(H^*G)_{ib}|=\sqrt{MN}$ for any $i,b$ found in Theorem 7.20 above reformulates as follows: $$F_Mv^b=MN\xi\quad,\quad\forall b$$
By multiplying to the left by $F_M^*/M$, this condition is equivalent to: $$v^b=NF_M^*\xi=\begin{pmatrix}MN\\0\\ \vdots\\0\end{pmatrix}\quad,\quad\forall b$$
Let us examine the first equation, $v^b_0=MN$. By definition of $v^b_l$, we have: $$v^b_0=\sum_jG_{jb}\bar{G}_{jb}=\sum_j|G_{jb}|^2$$
Now recall from Theorem 7.20 that we have $G_{jb}=(K^*x^j)_b/Q_{jb}$, for certain numbers $x^j_b\in\mathbb T$. Since we have $Q_{jb}\in\mathbb T$ and $K^*/\sqrt{N}\in U_N$, we obtain: $$\begin{aligned}
\sum_b|G_{jb}|^2
&=&\sum_b|(K^*x^j)_b|^2\\
&=&||K^*x^j||_2^2\\
&=&N||x^j||_2^2\\
&=&N^2\end{aligned}$$
Thus the $M\times N$ matrix $|G_{jb}|^2$ is row-stochastic, with sums $N^2$, and our equations $v^b_0=MN$ for any $b$ state that this matrix must be column-stochastic, with sums $MN$.
Regarding now the other equations that we found, namely $v^b_l=0$ for $l\neq0$, by definition of $v^b_l$ and of the variables $G_{jb}$, these state that we must have: $$\sum_jG_{jb}\bar{G}_{j+l,b}=0\quad,\quad\forall l\neq0,\forall b$$
Thus, we are led to the conditions in the statement.
As an illustration for this result, let us go back to the $Q=1$ situation, explained after Theorem 7.20. By using the formula $G_{ib}=y_i(K^*z)_b$ there, we have: $$\begin{aligned}
\sum_jG_{jb}\bar{G}_{j+l,b}
&=&\sum_jy_j(K^*z)_b\,\overline{y}_{j+l}\overline{(K^*z)_b}\\
&=&|(K^*z)_b|^2\sum_j\frac{y_j}{y_{j+l}}\\
&=&M\cdot N\delta_{l,0}\end{aligned}$$
Here the last equality comes from the assumption that $y$ puts $F_M$ in bistochastic form, which makes the autocorrelations $\sum_jy_j\bar{y}_{j+l}$ vanish at $l\neq0$. Thus, if $K$ can be put in bistochastic form, then so can $F_M\otimes K$.
As a second illustration, let us go back to the matrices $F_2\otimes''_QF_2$ from the proof of Proposition 7.19 above. The vector of the row sums being $S=(2q,-2q,2r,2r)$, we have $x=(q,-q,r,r)$, and so we obtain the following formulae for the entries of $G$: $$G_{0b}=\frac{\left[\begin{pmatrix}1&1\\1&-1\end{pmatrix}\begin{pmatrix}q\\-q\end{pmatrix}\right]_b}{Q_{0b}}=\frac{\begin{pmatrix}0\\2q\end{pmatrix}_b}{Q_{0b}}$$ $$G_{1b}=\frac{\left[\begin{pmatrix}1&1\\1&-1\end{pmatrix}\begin{pmatrix}r\\r\end{pmatrix}\right]_b}{Q_{1b}}=\frac{\begin{pmatrix}2r\\0\end{pmatrix}_b}{Q_{1b}}$$
Thus, in this case the matrix $G$ is as follows, independently of $Q$: $$G=\begin{pmatrix}0&2\\2&0\end{pmatrix}$$
In particular, we see that the conditions in Proposition 7.21 are satisfied.
As a main application now, we have the following result:
The Diţă deformations $F_N\otimes_QF_N$ can be put in almost bistochastic form, independently of the value of the parameter matrix $Q\in M_N(\mathbb T)$.
We use Proposition 7.21 above, with $M=N$, and with $K=F_N$. Let $w=e^{2\pi i/N}$, and consider the vectors $x^i\in\mathbb T^N$ given by: $$x^i=(w^{(i-1)a})_a$$
Since $K^*K=N1_N$, and $x^i$ are the column vectors of $K$, shifted by 1, we have: $$K^*x^0=\begin{pmatrix}0\\0\\ \vdots\\0\\N\end{pmatrix}\quad,\quad
K^*x^1=\begin{pmatrix}N\\0\\ \vdots\\0\\0\end{pmatrix}\quad,\ \ldots\ ,\quad
K^*x^{N-1}=\begin{pmatrix}0\\0\\ \vdots\\N\\0\end{pmatrix}$$
We conclude that we have $(K^*x^i)_b=N\delta_{i-1,b}$, and so the matrix $G$ is given by: $$G_{ib}=\frac{N\delta_{i-1,b}}{Q_{ib}}$$
With this formula in hand, the sums in Proposition 7.21 are given by: $$\sum_jG_{jb}\bar{G}_{j+l,b}=\sum_j\frac{N\delta_{j-1,b}}{Q_{jb}}\cdot\frac{N\delta_{j+l-1,b}}{Q_{j+l,b}}$$
In the case $l\neq0$ we clearly get $0$, because the products of Kronecker symbols are $0$. In the case $l=0$ the denominators are $|Q_{jb}|^2=1$, and we obtain: $$\sum_jG_{jb}\bar{G}_{jb}=N^2\sum_j\delta_{j-1,b}=N^2$$
Thus, the conditions in Proposition 7.21 are satisfied, and we obtain the result.
Here is an equivalent formulation of the above result:
The matrix $F_N\otimes'_QF_N$, with $Q\in M_N(\mathbb T)$, defined by $$(F_N\otimes'_QF_N)_{ia,jb}=\frac{w^{ij+ab}}{w^{bj+j}}\cdot\frac{Q_{ib}}{Q_{b+1,b}}$$ where $w=e^{2\pi i/N}$ is almost bistochastic, and equivalent to $F_N\otimes_QF_N$.
Our claim is that this is the matrix constructed in the proof of Theorem 7.22. Indeed, let us first go back to the proof of Theorem 7.20. In the case $M=N$ and $H=K=F_N$, the Diţă deformation $L=H\otimes_QK$ studied there is given by: $$L_{ia,jb}=Q_{ib}H_{ij}K_{ab}=w^{ij+ab}Q_{ib}$$
As explained in the proof of Theorem 7.22, if the conditions in the statement there are satisfied, then the matrix $L_{ia,jb}'=R_{jb}L_{ia,jb}$ is almost bistochastic, where: $$\sqrt{MN}\cdot R=H^*G$$
In our case now, $M=N$ and $H=K=F_N$, we know from the proof of Proposition 7.21 that the choice of $G$ which makes work Theorem 7.22 is as follows: $$G_{ib}=\frac{N\delta_{i-1,b}}{Q_{ib}}$$
With this formula in hand, we can compute the matrix $R$, as follows: $$\begin{aligned}
R_{jb}
&=&\frac{1}{N}(H^*G)_{jb}\\
&=&\frac{1}{N}\sum_iw^{-ij}G_{ib}\\
&=&\sum_iw^{-ij}\cdot\frac{\delta_{i-1,b}}{Q_{ib}}\\
&=&\frac{w^{-(b+1)j}}{Q_{b+1,b}}\end{aligned}$$
Thus, the modified version of $F_N\otimes_QF_N$ which is almost bistochastic is given by: $$\begin{aligned}
L_{ia,jb}'
&=&R_{jb}L_{ia,jb}\\
&=&\frac{w^{-(b+1)j}}{Q_{b+1,b}}\cdot w^{ij+ab}Q_{ib}\\
&=&\frac{w^{ij+ab}}{w^{bj+j}}\cdot\frac{Q_{ib}}{Q_{b+1,b}}\end{aligned}$$
Thus we have obtained the formula in the statement, and we are done.
As an illustration, let us work out the case $N=2$. Here we have $w=-1$, and with $Q=(^p_r{\ }^q_s)$, and then with $u=\frac{p}{r},v=\frac{s}{q}$, we obtain the following matrix: $$F_2\otimes'_QF_2=\begin{pmatrix}
\frac{p}{r}&\frac{q}{q}&-\frac{p}{r}&\frac{q}{q}\\
\frac{p}{r}&-\frac{q}{q}&-\frac{p}{r}&-\frac{q}{q}\\
\frac{r}{r}&\frac{s}{q}&\frac{r}{r}&-\frac{s}{q}\\
\frac{r}{r}&-\frac{s}{q}&\frac{r}{r}&\frac{s}{q}
\end{pmatrix}
=\begin{pmatrix}
u&1&-u&1\\
u&-1&-u&-1\\
1&v&1&-v\\
1&-v&1&v
\end{pmatrix}$$
Observe that this matrix is indeed almost bistochastic, with row sums $2,-2,2,2$.
It is quite unclear how to get beyond these results. An interesting question here would probably be that of focusing on the real case, and seeing whether the Hadamard matrices there, $H\in M_N(\pm1)$, can be put in bistochastic form in an explicit way.
This is certainly true for the Walsh matrices, but for the other basic examples, such as the Paley or the Williamson matrices, no results seem to be known so far.
Having such a theory would be potentially very interesting, with a complex reformulation of the HC and of the other real Hadamard questions at stake.
Glow computations
=================
We discuss here the computation of the glow, in the $N\to\infty$ limit. We have seen in section 1 that, in what concerns the real glow of the real Hadamard matrices, with $N\to\infty$ we obtain a real Gaussian measure. Our main purpose here will be that of establishing a similar result in the complex case, stating that for the complex glow of the complex Hadamard matrices, with $N\to\infty$ we obtain a complex Gaussian measure.
The computations in the complex case are considerably more involved than those in the real case, and will require a lot of combinatorics. Also, we will investigate the problem of getting beyond the $N\to\infty$ limiting result, with formulae at order $1,2,3,4$.
As a first motivation for all this, we have the Gale-Berlekamp game [@fsl], [@rvi]. Another motivation comes from the questions regarding the bistochastic matrices, in relation with [@iwo], explained in section 7. Finally, we have the question of connecting and computing the defect, and other invariants of the Hadamard matrices, in terms of the glow.
Let us begin by reviewing the few theoretical things that we know about the glow, from section 7 above. The main results there can be summarized as follows:
The glow of $H\in M_N(\mathbb C)$, which is the law $\mu\in\mathcal P(\mathbb C)$ of the excess $$E=\sum_{ij}H_{ij}$$ over the Hadamard equivalence class of $H$, has the following properties:
1. $\mu=\varepsilon\times\mu^+$, where $\mu^+=law(|E|)$.
2. $\mu$ is invariant under rotations.
3. $H\in\sqrt{N}U_N$ implies $supp(\mu)\subset N\sqrt{N}\,\mathbb D$.
4. $H\in\sqrt{N}U_N$ implies as well $N\sqrt{N}\,\mathbb T\subset supp(\mu)$.
We already know all this from section 7, the idea being as follows:
\(1) This follows by using $H\to zH$ with $|z|=1$, as explained in Proposition 7.12.
\(2) This follows from (1), the convolution with $\varepsilon$ bringing the invariance.
\(3) This follows from Cauchy-Schwarz, as explained in Theorem 7.13.
\(4) This is something highly non-trivial, coming from [@iwo].
In what follows we will be mainly interested in the Hadamard matrix case, but since the computations here are quite difficult, let us begin our study with other matrices.
It is convenient to normalize our matrices, by assuming that the corresponding $2$-norm $||H||_2=\sqrt{\sum_{ij}|H_{ij}|^2}$ takes the value $||H||_2=N$. Note that this is always the case with the Hadamard matrices, and more generally with the matrices $H\in\sqrt{N}U_N$.
We recall that the complex Gaussian distribution $\mathcal C$ is the law of $z=\frac{1}{\sqrt{2}}(x+iy)$, where $x,y$ are independent standard Gaussian variables. In order to detect this distribution, we can use the moment method, and the well-known formula $\mathbb E(|z|^{2p})=p!$.
Finally, we use the symbol $\sim$ to denote an equality of distributions. We have:
We have the following computations:
1. For the rescaled identity $\widetilde{I}_N=\sqrt{N}I_N$ we have $E\sim\sqrt{N}(q_1+\ldots +q_N)$, with $q\in\mathbb T^N$ random. With $N\to\infty$ we have $E/N\sim\mathcal C$.
2. For the flat matrix $J_N=(1)_{ij}$ we have $E\sim(a_1+\ldots+a_N)(b_1+\ldots+b_N)$, with $(a,b)\in\mathbb T^N\times\mathbb T^N$ random. With $N\to\infty$ we have $E/N\sim\mathcal C\times\mathcal C$.
We use Theorem 8.1, and the moment method:
\(1) Here we have $E=\sqrt{N}\sum_{i}a_ib_i$, with $a,b\in\mathbb T^N$ random, and with $q_i=a_ib_i$ this gives the first assertion. Let us estimate now the moments of $|E|^2$. We have: $$\begin{aligned}
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
&=&N^p\int_{\mathbb T^N}|q_1+\ldots+q_N|^{2p}dq\\
&=&N^p\int_{\mathbb T^N}\sum_{ij}\frac{q_{i_1}\ldots q_{i_p}}{q_{j_1}\ldots q_{j_p}}\,dq\\
&=&N^p\#\left\{(i,j)\in\{1,\ldots,N\}^p\times\{1,\ldots,N\}^p\Big|[i_1,\ldots,i_p]=[j_1,\ldots,j_p]\right\}\\
&\simeq&N^p\cdot p!N(N-1)\ldots(N-p+1)\\
&\simeq&N^p\cdot p!N^p\\
&=&p!N^{2p}\end{aligned}$$
Here, and in what follows, the sets between brackets are by definition sets with repetition, and the middle estimate comes from the fact that, with $N\to\infty$, only the multi-indices $i=(i_1,\ldots,i_p)$ having distinct entries contribute. But this gives the result.
\(2) Here we have $E=\sum_{ij}a_ib_j=\sum_ia_i\sum_jb_j$, and this gives the first assertion. Now since $a,b\in\mathbb T^N$ are independent, so are the quantities $\sum_ia_i,\sum_jb_j$, so we have: $$\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\left(\int_{\mathbb T^N}|q_1+\ldots+q_N|^{2p}dq\right)^2\simeq (p!N^{p})^2$$
Here we have used the estimate in the proof of (1), and this gives the result.
As a first conclusion, the glow is intimately related to the basic hypertoral law, namely that of $q_1+\ldots+q_N$, with $q\in\mathbb T^N$ random. Observe that at $N=1$ this hypertoral law is simply $\delta_1$, and that at $N=2$ we obtain the following law: $$\begin{aligned}
law|1+q|
&=&law\sqrt{(1+e^{it})(1+e^{-it})}\\
&=&law\sqrt{2+2\cos t}\\
&=&law\left(2\left|\cos\frac{t}{2}\right|\right)\end{aligned}$$
In general, the law of $\sum q_i$ is known to be related to the Pólya random walk [@pol]. Also, as explained for instance in section 6, the moments of this law are: $$\int_{\mathbb T^N}|q_1+\ldots+q_N|^{2p}dq=\sum_{\pi\in P(p)}\binom{p}{\pi}\frac{N!}{(N-|\pi|)!}$$
As a second conclusion, even under the normalization $||H||_2=N$, the glow can behave quite differently in the $N\to\infty$ limit. So, let us restrict now the attention to the Hadamard matrices. At $N=2$ we only have $F_2$ to investigate, the result being:
For the Fourier matrix $F_2$ we have $$|E|^2=4+2Re(\alpha-\beta)$$ for certain variables $\alpha,\beta\in\mathbb T$ which are uniform, and independent.
The matrix that we are interested in, namely the Fourier matrix $F_2$ altered by a vertical switching vector $(a,b)$ and a horizontal switching vector $(c,d)$, is: $$\widetilde{F}_2=\begin{pmatrix}ac&ad\\bc&-bd\end{pmatrix}$$
With this notation, we have the following formula: $$|E|^2
=|ac+ad+bc-bd|^2
=4+\frac{ad}{bc}+\frac{bc}{ad}-\frac{bd}{ac}-\frac{ac}{bd}$$
For proving that $\alpha=\frac{ad}{bc}$ and $\beta=\frac{bd}{ac}$ are independent, we use the moment method: $$\int_{\mathbb T^4}\left(\frac{ad}{bc}\right)^p\left(\frac{bd}{ac}\right)^q
=\int_{\mathbb T}a^{p-q}\int_{\mathbb T}b^{q-p}\int_{\mathbb T}c^{-p-q}\int_{\mathbb T}d^{p+q}\\
=\delta_{p,q,0}$$
Thus $\alpha,\beta$ are indeed independent, and we are done.
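As an illustration, the first even moments of the glow of $F_2$ can be computed from this. Since $\alpha,\beta$ are independent and uniform on $\mathbb T$, the variable $X=Re(\alpha-\beta)$ satisfies $\int X=0$ and $\int X^2=1$, and so: $$\int_{\mathbb T^2\times\mathbb T^2}|E|^2=4\quad,\quad\int_{\mathbb T^2\times\mathbb T^2}|E|^4=16+4\int X^2=20$$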
Observe that $law(|E|^2)$, and hence $law(E)$, is uniquely determined by the above result. It is possible of course to derive from this some more concrete formulae, but let us look instead at the case $N=3$. Here the matrix that we are interested in is: $$\widetilde{F}_3=\begin{pmatrix}ad&ae&af\\ bd&wbe&w^2bf\\ cd&w^2ce&wcf\end{pmatrix}$$
Thus, we would like to compute the law of the following quantity: $$|E|=|ad+ae+af+bd+wbe+w^2bf+cd+w^2ce+wcf|$$
The problem is that when trying to compute $|E|^2$, the terms won’t cancel much. More precisely, we have $|E|^2=9+C_0+C_1w+C_2w^2$, where $C_0,C_1,C_2$ are as follows: $$\begin{aligned}
C_0&=&\frac{ae}{bd}+\frac{ae}{cd}+\frac{af}{bd}+\frac{af}{cd}+\frac{bd}{ae}+\frac{bd}{af}
+\frac{be}{cf}+\frac{bf}{ce}+\frac{cd}{ae}+\frac{cd}{af}+\frac{ce}{bf}+\frac{cf}{be}\\
C_1&=&\frac{ad}{bf}+\frac{ad}{ce}+\frac{ae}{bf}+\frac{af}{ce}+\frac{bd}{ce}+\frac{be}{ad}
+\frac{be}{af}+\frac{be}{cd}+\frac{cd}{bf}+\frac{cf}{ad}+\frac{cf}{ae}+\frac{cf}{bd}\\
C_2&=&\frac{ad}{be}+\frac{ad}{cf}+\frac{ae}{cf}+\frac{af}{be}+\frac{bd}{cf}+\frac{bf}{ad}
+\frac{bf}{ae}+\frac{bf}{cd}+\frac{cd}{be}+\frac{ce}{ad}+\frac{ce}{af}+\frac{ce}{bd}\end{aligned}$$
In short, all this leads nowhere, and the exact study stops at $F_2$.
In general now, one idea is that of using Bernoulli-type variables coming from the row sums, as in the real case. We have here the following result:
The glow of $H\in M_N(\mathbb C)$ is given by the formula $$law(E)=\int_{a\in\mathbb T^N}B((Ha)_1,\ldots,(Ha)_N)$$ where $B(c_1,\ldots,c_N)=law(\sum_i\lambda_ic_i)$, with $\lambda\in\mathbb T^N$ random.
This is indeed clear from the formula $E=<a,Hb>$, because when $a\in\mathbb T^N$ is fixed, $E$ follows the law $B((Ha)_1,\ldots,(Ha)_N)$ in the statement.
Observe that we can write $B(c_1,\ldots,c_N)=\varepsilon\times\beta(|c_1|,\ldots,|c_N|)$, where the measure $\beta(r_1,\ldots,r_N)\in\mathcal P(\mathbb R_+)$ with $r_1,\ldots,r_N\geq 0$ is given by $\beta(r_1,\ldots,r_N)=law|\sum_i\lambda_ir_i|$. Regarding now the computation of $\beta$, we have: $$\beta(r_1,\ldots,r_N)=law\sqrt{\sum_{ij}\frac{\lambda_i}{\lambda_j}\cdot r_ir_j}$$
Consider now the following variable, which is easily seen, for instance by using the moment method, to be uniform over the projective torus $\mathbb T^{N-1}=\mathbb T^N/\mathbb T$: $$(\mu_1,\mu_2,\ldots,\mu_N)=\left(\frac{\lambda_1}{\lambda_2},\frac{\lambda_2}{\lambda_3},\ldots,\frac{\lambda_N}{\lambda_1}\right)$$
Now since we have $\lambda_i/\lambda_j=\mu_i\mu_{i+1}\ldots\mu_j$, with the convention $\mu_i\ldots\mu_j=\overline{\mu_j\ldots\mu_i}$ for $i>j$, this gives the following formula, with $\mu\in\mathbb T^{N-1}$ random: $$\beta(r_1,\ldots,r_N)=law\sqrt{\sum_{ij}\mu_i\mu_{i+1}\ldots\mu_j\cdot r_ir_j}$$
It is possible to further study the laws $\beta$ by using this formula. However, in practice, it is more convenient to use the complex measures $B$ from Theorem 8.4.
Let us end these preliminaries with a discussion of the “arithmetic” version of the problem, which makes the link with the Gale-Berlekamp switching game [@fsl], [@rvi] and with the work in section 1. We have the following unifying formalism:
Given $H\in M_N(\mathbb C)$ and $s\in\mathbb N\cup\{\infty\}$, we define $\mu_s\in\mathcal P(\mathbb C)$ by $$\int_\mathbb C\varphi(x)d\mu_s(x)=\int_{\mathbb Z^N_s\times\mathbb Z^N_s}\varphi\left(\sum_{ij}a_ib_jH_{ij}\right)d(a,b)$$ where $\mathbb Z_s\subset\mathbb T$ is the group of the $s$-roots of unity, with the convention $\mathbb Z_\infty=\mathbb T$.
Observe that at $s=\infty$ we obtain the measure in Theorem 8.1. Also, at $s=2$ and for a usual Hadamard matrix, $H\in M_N(\pm1)$, we obtain the measure from section 1.
Observe that for $H\in M_N(\pm1)$, knowing $\mu_2$ is the same as knowing the statistics of the number of one entries, $|1\in H|$. This follows indeed from the following formula: $$\sum_{ij}H_{ij}=|1\in H|-|-1\in H|=2|1\in H|-N^2$$
More generally, at $s=p$ prime, we have the following result:
When $s$ is prime and $H\in M_N(\mathbb Z_s)$, the statistics of the number of one entries, $|1\in H|$, can be recovered from that of the total sum, $E=\sum_{ij}H_{ij}$.
The problem here is of vectorial nature, so given $V\in\mathbb Z_s^n$, we would like to compare the quantities $|1\in V|$ and $\sum V_i$. Let us write, up to permutations: $$V=(\underbrace{1\ldots1}_{a_0}\,\underbrace{w\ldots w}_{a_1}\ldots\ldots\underbrace{w^{s-1}\ldots w^{s-1}}_{a_{s-1}})$$
We have then $|1\in V|=a_0$ and $\sum V_i=a_0+a_1w+\ldots+a_{s-1}w^{s-1}$, and we also know that $a_0+a_1+\ldots+a_{s-1}=n$. Now when $s$ is prime, the only ambiguity in recovering $a_0$ from $a_0+a_1w+\ldots+a_{s-1}w^{s-1}$ can come from $1+w+\ldots+w^{s-1}=0$. But since the sum of the numbers $a_i$ is fixed, $a_0+a_1+\ldots+a_{s-1}=n$, this ambiguity disappears.
As an illustration, at $s=3$ we can write $E=a+bw+cw^2$, and we have: $$Re(E)=a-\frac{b+c}{2}=a-\frac{n-a}{2}=\frac{3a-n}{2}$$
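Here is a small numerical illustration of this recovery procedure at $s=3$ (a Python sketch, assuming numpy; the vector $V$ and its length $n$ are arbitrary choices):
\begin{verbatim}
# Recovering the number of 1 entries of a vector over Z_3 from its total sum E.
import numpy as np
rng = np.random.default_rng(2)
w = np.exp(2j * np.pi / 3)
n = 10
V = w ** rng.integers(0, 3, size=n)       # random vector with entries in Z_3
E = V.sum()
a = np.count_nonzero(np.isclose(V, 1))    # true number of 1 entries
print(a, (2 * E.real + n) / 3)            # both values coincide
\end{verbatim}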
At $s=5$, however, it is not clear how to explicitly solve the problem. In fact, the problem of finding an explicit formula is related to the question of relating the complex probability measures $\mu_s=law(E)$ for a given matrix $H\in M_N(\mathbb T)$, at various values of $s\in\mathbb N\cup\{\infty\}$. Note that Theorem 8.6 tells us that for a matrix $H\in M_N(\mathbb Z_s)$ with $s$ prime, the measure $\mu_2$ can be recaptured from the knowledge of $\mu_s$.
Let us investigate now the glow of the complex Hadamard matrices, by using the moment method. We use the moment formula from section 7, namely:
For $H\in M_N(\mathbb T)$ the even moments of $|E|$ are given by $$\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\sum_{[i]=[k],[j]=[l]}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}$$ where the sets between brackets are by definition sets with repetition.
As explained in section 7, with $E=\sum_{ij}H_{ij}a_ib_j$ we obtain: $$\begin{aligned}
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
&=&\int_{\mathbb T^N\times\mathbb T^N}\left(\sum_{ijkl}\frac{H_{ij}}{H_{kl}}\cdot\frac{a_ib_j}{a_kb_l}\right)^p\\
&=&\sum_{ijkl}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\int_{\mathbb T^N}\frac{a_{i_1}\ldots a_{i_p}}{a_{k_1}\ldots a_{k_p}}\int_{\mathbb T^N}\frac{b_{j_1}\ldots b_{j_p}}{b_{l_1}\ldots b_{l_p}}\end{aligned}$$
The integrals on the right being $\delta_{[i],[k]}$ and $\delta_{[j],[l]}$, we obtain the result.
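This formula can be tested numerically on small matrices. The following Python sketch (assuming numpy) evaluates the combinatorial sum by brute force, for a random $H\in M_N(\mathbb T)$, and compares it with a Monte Carlo estimate of the integral:
\begin{verbatim}
# Brute-force evaluation of the combinatorial moment formula, versus Monte Carlo.
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
N, p = 3, 2
H = np.exp(2j * np.pi * rng.random((N, N)))

def comb_sum(H, p):
    N = H.shape[0]
    total = 0
    for i in product(range(N), repeat=p):
        for k in product(range(N), repeat=p):
            if sorted(i) != sorted(k):
                continue
            for j in product(range(N), repeat=p):
                for l in product(range(N), repeat=p):
                    if sorted(j) != sorted(l):
                        continue
                    num = np.prod([H[i[r], j[r]] for r in range(p)])
                    den = np.prod([H[k[r], l[r]] for r in range(p)])
                    total += num / den
    return total.real

samples = 200000
a = np.exp(2j * np.pi * rng.random((samples, N)))
b = np.exp(2j * np.pi * rng.random((samples, N)))
E = np.einsum('si,ij,sj->s', a, H, b)
print(comb_sum(H, p), np.mean(np.abs(E) ** (2 * p)))   # close values
\end{verbatim}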
As a first application, let us investigate the tensor products. We have:
The even moments of $|E|$ for $L=H\otimes K$ are given by $$\int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p}=\sum_{[ia]=[kc],[jb]=[ld]}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\cdot\frac{K_{a_1b_1}\ldots K_{a_pb_p}}{K_{c_1d_1}\ldots K_{c_pd_p}}$$ where the sets between brackets are as usual sets with repetition.
With $L=H\otimes K$, the formula in Proposition 8.7 reads: $$\int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p}=\sum_{[ia]=[kc],[jb]=[ld]}\frac{L_{i_1a_1,j_1b_1}\ldots L_{i_pa_p,j_pb_p}}{L_{k_1c_1,l_1d_1}\ldots L_{k_pc_p,l_pd_p}}$$
But this gives the formula in the statement, and we are done.
The above result is quite bad news. Indeed, we cannot reconstruct the glow of $H\otimes K$ from that of $H,K$, because the indices “get mixed”.
Let us develop now some moment machinery. Let $P(p)$ be the set of partitions of $\{1,\ldots,p\}$, with its standard order relation $\leq$, which is such that $\sqcap\!\!\sqcap\ldots\leq\pi\leq|\ |\ldots|$, for any $\pi\in P(p)$. We denote by $\mu(\pi,\sigma)$ the associated Möbius function, given by: $$\mu(\pi,\sigma)=\begin{cases}
1&{\rm if}\ \pi=\sigma\\
-\sum_{\pi\leq\tau<\sigma}\mu(\pi,\tau)&{\rm if}\ \pi<\sigma\\
0&{\rm if}\ \pi\not\leq\sigma
\end{cases}$$
For $\pi\in P(p)$ we set $\binom{p}{\pi}=\binom{p}{b_1\ldots b_{|\pi|}}=\frac{p!}{b_1!\ldots b_{|\pi|}!}$, where $b_1,\ldots,b_{|\pi|}$ are the block lengths. Finally, we use the following notation, where $H_1,\ldots,H_N\in\mathbb T^N$ are the rows of $H$: $$H_\pi(i)=\bigotimes_{\beta\in\pi}\prod_{r\in\beta}H_{i_r}$$
With these notations, we have the following result:
The glow moments of a matrix $H\in M_N(\mathbb T)$ are given by $$\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\sum_{\pi\in P(p)}K(\pi)N^{|\pi|}I(\pi)$$ where $K(\pi)=\sum_{\sigma\in P(p)}\mu(\pi,\sigma)\binom{p}{\sigma}$ and $I(\pi)=\frac{1}{N^{|\pi|}}\sum_{[i]=[j]}<H_\pi(i),H_\pi(j)>$.
We know from Proposition 8.7 that the moments are given by: $$\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
=\sum_{[i]=[j],[x]=[y]}\frac{H_{i_1x_1}\ldots H_{i_px_p}}{H_{j_1y_1}\ldots H_{j_py_p}}$$
With $\sigma=\ker x,\rho=\ker y$, we deduce that the moments of $|E|^2$ decompose over partitions, $\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\int_{\mathbb T^N}\sum_{\sigma,\rho\in P(p)}C(\sigma,\rho)$, with the contributions being as follows: $$C(\sigma,\rho)=\sum_{\ker x=\sigma,\ker y=\rho}\delta_{[x],[y]}\sum_{ij}
\frac{H_{i_1x_1}\ldots H_{i_px_p}}{H_{j_1y_1}\ldots H_{j_py_p}}
\cdot\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}}$$
We have $C(\sigma,\rho)=0$ unless $\sigma\sim\rho$, in the sense that $\sigma,\rho$ must have the same block structure. The point now is that the sums of type $\sum_{\ker x=\sigma}$ can be computed by using the Möbius inversion formula. We obtain a formula as follows: $$C(\sigma,\rho)=\delta_{\sigma\sim\rho}\sum_{\pi\leq\sigma}\mu(\pi,\sigma)\prod_{\beta\in\pi}C_{|\beta|}(a)$$
Here the functions on the right are by definition given by: $$\begin{aligned}
C_r(a)
&=&\sum_x\sum_{ij}\frac{H_{i_1x}\ldots H_{i_rx}}{H_{j_1x}\ldots H_{j_rx}}\cdot\frac{a_{i_1}\ldots a_{i_r}}{a_{j_1}\ldots a_{j_r}}\\
&=&\sum_{ij}<H_{i_1}\ldots H_{i_r},H_{j_1}\ldots H_{j_r}>\cdot\frac{a_{i_1}\ldots a_{i_r}}{a_{j_1}\ldots a_{j_r}}\end{aligned}$$
Now since there are $\binom{p}{\sigma}$ partitions having the same block structure as $\sigma$, we obtain: $$\begin{aligned}
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
&=&\int_{\mathbb T^N}\sum_{\pi\in P(p)}\left(\sum_{\sigma\sim\rho}\sum_{\pi\leq\sigma}\mu(\pi,\sigma)\right)\prod_{\beta\in\pi}C_{|\beta|}(a)\\
&=&\sum_{\pi\in P(p)}\left(\sum_{\sigma\in P(p)}\mu(\pi,\sigma)\binom{p}{\sigma}\right)\int_{\mathbb T^N}\prod_{\beta\in\pi}C_{|\beta|}(a)\end{aligned}$$
But this gives the formula in the statement, and we are done.
Let us discuss now the asymptotic behavior of the glow. For this purpose, we first study the coefficients $K(\pi)$ in Theorem 8.9. We have here the following result:
$K(\pi)=\sum_{\pi\leq\sigma}\mu(\pi,\sigma)\binom{p}{\sigma}$ has the following properties:
1. $\widetilde{K}(\pi)=\frac{K(\pi)}{p!}$ is multiplicative: $\widetilde{K}(\pi\pi')=\widetilde{K}(\pi)\widetilde{K}(\pi')$.
2. $K(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}(-1)^{|\sigma|-1}(|\sigma|-1)!\binom{p}{\sigma}$.
3. $K(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^p(-1)^{r-1}(r-1)!C_{pr}$, where $C_{pr}=\frac{1}{r!}\sum_{p=a_1+\ldots+a_r}\binom{p}{a_1,\ldots,a_r}^2$.
\(1) We use the fact that $\mu(\pi\pi',\sigma\sigma')=\mu(\pi,\sigma)\mu(\pi',\sigma')$, which is a well-known property of the Möbius function, and can be proved by induction. Now if $b_1,\ldots,b_s$ and $c_1,\ldots,c_t$ are the block lengths of $\sigma,\sigma'$, we obtain, as claimed: $$\begin{aligned}
\widetilde{K}(\pi\pi')
&=&\sum_{\pi\pi'\leq\sigma\sigma'}\mu(\pi\pi',\sigma\sigma')\cdot\frac{1}{b_1!\ldots b_s!}\cdot\frac{1}{c_1!\ldots c_t!}\\
&=&\sum_{\pi\leq\sigma,\pi'\leq\sigma'}\mu(\pi,\sigma)\mu(\pi',\sigma')\cdot\frac{1}{b_1!\ldots b_s!}\cdot\frac{1}{c_1!\ldots c_t!}\\
&=&\widetilde{K}(\pi)\widetilde{K}(\pi')\end{aligned}$$
\(2) We use here the formula $\mu(\sqcap\!\!\sqcap\ldots\sqcap,\sigma)=(-1)^{|\sigma|-1}(|\sigma|-1)!$, which once again is well-known, and can be proved by induction on $|\sigma|$. We obtain, as claimed: $$K(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}\mu(\sqcap\!\!\sqcap\ldots\sqcap,\sigma)\binom{p}{\sigma}=\sum_{\sigma\in P(p)}(-1)^{|\sigma|-1}(|\sigma|-1)!\binom{p}{\sigma}$$
\(3) By using the formula in (2), and summing over $r=|\sigma|$, we obtain: $$K(\sqcap\!\!\sqcap\ldots\sqcap)
=\sum_{r=1}^p(-1)^{r-1}(r-1)!\sum_{|\sigma|=r}\binom{p}{\sigma}$$
Now if we denote by $a_1,\ldots,a_r$ with $a_i\geq1$ the block lengths of $\sigma$, then $\binom{p}{\sigma}=\binom{p}{a_1,\ldots,a_r}$. On the other hand, the ordered sequences of disjoint blocks covering $\{1,\ldots,p\}$ and having lengths $a_1,\ldots,a_r$ are counted by $\binom{p}{a_1,\ldots,a_r}$, and each partition $\sigma$ with $|\sigma|=r$ arises from exactly $r!$ such ordered sequences. We therefore have $\sum_{|\sigma|=r}\binom{p}{\sigma}=\frac{1}{r!}\sum_{p=a_1+\ldots+a_r}\binom{p}{a_1,\ldots,a_r}^2$, and this gives the result.
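As an illustration, the following small Python computation, which is not part of the argument, evaluates $K(\sqcap\!\!\sqcap\ldots\sqcap)$ via formula (2), for $p=2,3,4$; one obtains $K/p!=-1/2,\,2/3,\,-11/8$, values which will reappear later in this section:
\begin{verbatim}
# Evaluation of K for the one-block partition, via the Mobius-type formula (2).
from math import factorial

def set_partitions(elements):
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for k in range(len(smaller)):
            yield smaller[:k] + [[first] + smaller[k]] + smaller[k+1:]
        yield [[first]] + smaller

def K_one_block(p):
    total = 0
    for sigma in set_partitions(list(range(p))):
        multinomial = factorial(p)
        for block in sigma:
            multinomial //= factorial(len(block))
        total += (-1) ** (len(sigma) - 1) * factorial(len(sigma) - 1) * multinomial
    return total

for p in (2, 3, 4):
    print(p, K_one_block(p), K_one_block(p) / factorial(p))
\end{verbatim}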
Now let us take a closer look at the integrals $I(\pi)$. We have here:
Consider the one-block partition $\sqcap\!\!\sqcap\ldots\sqcap\in P(p)$.
1. $I(\sqcap\!\!\sqcap\ldots\sqcap)=\#\{i,j\in\{1,\ldots,N\}^p|[i]=[j]\}$.
2. $I(\sqcap\!\!\sqcap\ldots\sqcap)=\int_{\mathbb T^N}|\sum_ia_i|^{2p}da$.
3. $I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}\binom{p}{\sigma}\frac{N!}{(N-|\sigma|)!}$.
4. $I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^{p}C_{pr}\frac{N!}{(N-r)!}$, where $C_{pr}=\frac{1}{r!}\sum_{p=b_1+\ldots+b_r}\binom{p}{b_1,\ldots,b_r}^2$.
\(1) This follows indeed from the following computation: $$I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{[i]=[j]}\frac{1}{N}<H_{i_1}\ldots H_{i_p},H_{j_1}\ldots H_{j_p}>=\sum_{[i]=[j]}1$$
\(2) This follows from the following computation: $$\int_{\mathbb T^N}\left|\sum_ia_i\right|^{2p}=\int_{\mathbb T^N}\sum_{ij}\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}}da=\#\left\{i,j\Big|[i]=[j]\right\}$$
\(3) If we let $\sigma=\ker i$ in the above formula of $I(\sqcap\!\!\sqcap\ldots\sqcap)$, we obtain: $$I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}\#\left\{i,j\Big|\ker i=\sigma,[i]=[j]\right\}$$
Now since there are $\frac{N!}{(N-|\sigma|)!}$ choices for $i$, and then $\binom{p}{\sigma}$ for $j$, this gives the result.
\(4) If we set $r=|\sigma|$, the formula in (3) becomes: $$I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^{p}\frac{N!}{(N-r)!}\sum_{\sigma\in P(p),|\sigma|=r}\binom{p}{\sigma}$$
Now since the ordered sequences of disjoint blocks covering $\{1,\ldots,p\}$ and having lengths $b_1,\ldots,b_r$ are counted by $\binom{p}{b_1,\ldots,b_r}$, and since each partition $\sigma$ with $|\sigma|=r$ arises from exactly $r!$ such sequences, the sum on the right equals $\frac{1}{r!}\sum_{p=b_1+\ldots+b_r}\binom{p}{b_1,\ldots,b_r}^2$, as claimed.
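As a sanity check of the above counting, here is a short Python sketch comparing the direct count of pairs of multi-indices with $[i]=[j]$ with the sum in (4), the numbers $C_{pr}$ being computed from compositions as above:
\begin{verbatim}
# Direct count of pairs with [i]=[j], versus the formula with the numbers C_{pr}.
from math import factorial, prod
from itertools import product

def direct_count(N, p):
    idx = list(product(range(N), repeat=p))
    return sum(1 for i in idx for j in idx if sorted(i) == sorted(j))

def compositions(p, r):
    # ordered sequences (b_1,...,b_r) of positive integers summing to p
    if r == 1:
        yield (p,)
        return
    for first in range(1, p - r + 2):
        for rest in compositions(p - first, r - 1):
            yield (first,) + rest

def formula(N, p):
    total = 0
    for r in range(1, p + 1):
        C = sum(factorial(p) ** 2 // prod(factorial(b) for b in bs) ** 2
                for bs in compositions(p, r)) // factorial(r)
        falling = 1
        for k in range(r):
            falling *= (N - k)
        total += C * falling
    return total

for N, p in [(3, 2), (4, 3), (5, 4)]:
    print(N, p, direct_count(N, p), formula(N, p))   # equal values
\end{verbatim}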
In general, the integrals $I(\pi)$ can be estimated as follows:
Let $H\in M_N(\mathbb T)$, having its rows pairwise orthogonal.
1. $I(|\,|\,\ldots|)=N^p$.
2. $I(|\,|\,\ldots|\ \pi)=N^aI(\pi)$, for any $\pi\in P(p-a)$.
3. $|I(\pi)|\lesssim p!N^p$, for any $\pi\in P(p)$.
\(1) Since the rows of $H$ are pairwise orthogonal, we have: $$I(|\,|\ldots|)=\sum_{[i]=[j]}\prod_{r=1}^p\delta_{i_r,j_r}=\sum_{[i]=[j]}\delta_{ij}=\sum_i1=N^p$$
\(2) This follows by the same computation as the above one for (1).
\(3) We have indeed the following estimate: $$|I(\pi)|
\leq\sum_{[i]=[j]}\prod_{\beta\in\pi}1=\sum_{[i]=[j]}1=\#\left\{i,j\in\{1,\ldots,N\}\Big|[i]=[j]\right\}\simeq p!N^p$$
Thus we have obtained the formula in the statement, and we are done.
We have now all needed ingredients for a universality result:
The glow of a complex Hadamard matrix $H\in M_N(\mathbb T)$ is given by: $$\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}\left(\frac{|E|}{N}\right)^{2p}=1-\binom{p}{2}N^{-1}+O(N^{-2})$$ In particular, $E/N$ becomes complex Gaussian in the $N\to\infty$ limit.
We use the moment formula in Theorem 8.9. By using Proposition 8.12 (3), we conclude that only the partitions having $p$ or $p-1$ blocks contribute to the two leading orders, so: $$\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=K(|\,|\ldots|)N^pI(|\,|\ldots|)+\binom{p}{2}K(\sqcap|\ldots|)N^{p-1}I(\sqcap|\ldots|)+O(N^{2p-2})$$
Now by dividing by $N^{2p}$ and then by using the various formulae in Proposition 8.10, Proposition 8.11 and Proposition 8.12 above, we obtain, as claimed: $$\int_{\mathbb T^N\times\mathbb T^N}\left(\frac{|E|}{N}\right)^{2p}
=p!-\binom{p}{2}\frac{p!}{2}\cdot\frac{2N-1}{N^2}+O(N^{-2})$$
Finally, since the law of $E$ is invariant under centered rotations in the complex plane, this moment formula gives as well the last assertion.
Let us study now the glow of the Fourier matrices, $F=F_G$. We use the standard formulae $F_{ix}F_{iy}=F_{i,x+y}$, $\overline{F}_{ix}=F_{i,-x}$ and $\sum_xF_{ix}=N\delta_{i0}$. We first have:
For a Fourier matrix $F_G$ we have $$I(\pi)=\#\left\{i,j\Big|[i]=[j],\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r,\forall\beta\in\pi\right\}$$ with all the indices, and with the sums at right, taken inside $G$.
The basic components of the integrals $I(\pi)$ are given by: $$\frac{1}{N}\left\langle\prod_{r\in\beta}F_{i_r},\prod_{r\in\beta}F_{j_r}\right\rangle=\frac{1}{N}\left\langle F_{\sum_{r\in\beta}i_r},F_{\sum_{r\in\beta}j_r}\right\rangle=\delta_{\sum_{r\in\beta}i_r,\sum_{r\in\beta}j_r}$$
But this gives the formula in the statement, and we are done.
We have the following interpretation of the above integrals:
For any partition $\pi$ we have the formula $$I(\pi)=\int_{\mathbb T^N}\prod_{\beta\in\pi}\left(\frac{1}{N^2}\sum_{ij}|H_{ij}|^{2|\beta|}\right)da$$ where $H=F^*AF$, with $F=F_G$ and $A=diag(a_0,\ldots,a_{N-1})$.
We have the following computation: $$\begin{aligned}
H=F^*AF
&\implies&|H_{xy}|^2=\sum_{ij}\frac{F_{iy}F_{jx}}{F_{ix}F_{jy}}\cdot\frac{a_i}{a_j}\\
&\implies&|H_{xy}|^{2p}=\sum_{ij}\frac{F_{j_1x}\ldots F_{j_px}}{F_{i_1x}\ldots F_{i_px}}\cdot\frac{F_{i_1y}\ldots F_{i_py}}{F_{j_1y}\ldots F_{j_py}}\cdot\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}}\\
&\implies&\sum_{xy}|H_{xy}|^{2p}=\sum_{ij}\left|<F_{i_1}\ldots F_{i_p},F_{j_1}\ldots F_{j_p}>\right|^2\cdot\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}}\end{aligned}$$
But this gives the formula in the statement, and we are done.
Regarding now the glow estimates, we first have the following result:
For $F_G$ we have the estimate $$I(\pi)=b_1!\ldots b_{|\pi|}!N^p+O(N^{p-1})$$ where $b_1,\ldots,b_{|\pi|}$ with $b_1+\ldots+b_{|\pi|}=p$ are the block lengths of $\pi$.
With $\sigma=\ker i$ we obtain: $$I(\pi)=\sum_{\sigma\in P(p)}\#\left\{i,j\Big|\ker i=\sigma,[i]=[j],\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r,\forall\beta\in\pi\right\}$$
Since there are $\frac{N!}{(N-|\sigma|)!}\simeq N^{|\sigma|}$ choices for $i$ satisfying $\ker i=\sigma$, and then there are $\binom{p}{\sigma}=O(1)$ choices for $j$ satisfying $[i]=[j]$, we conclude that the main contribution comes from $\sigma=|\,|\ldots|$, and so we have: $$I(\pi)=\#\left\{i,j\Big|\ker i=|\,|\ldots|,[i]=[j],\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r,\forall\beta\in\pi\right\}+O(N^{p-1})$$
Now the condition $\ker i=|\,|\ldots|$ tells us that $i$ must have distinct entries, and there are $\frac{N!}{(N-p)!}\simeq N^p$ choices for such multi-indices $i$. Regarding now the indices $j$, the main contribution comes from those obtained from $i$ by permuting the entries over the blocks of $\pi$, and since there are $b_1!\ldots b_{|\pi|}!$ choices here, this gives the result.
At the second order now, the estimate is as follows:
For $F_G$ we have the formula $$\frac{I(\pi)}{b_1!\ldots b_s!N^p}=1+\left(\sum_{i<j}\sum_{c\geq2}\binom{b_i}{c}\binom{b_j}{c}-\frac{1}{2}\sum_i\binom{b_i}{2}\right)N^{-1}+O(N^{-2})$$ where $b_1,\ldots,b_s$ are the block lengths of $\pi\in P(p)$.
Let us define the “non-arithmetic” part of $I(\pi)$ as follows: $$I^\circ(\pi)=\#\left\{i,j\Big|[i_r|r\in\beta]=[j_r|r\in\beta],\forall\beta\in\pi\right\}$$
We then have the following formula: $$I^\circ(\pi)=\prod_{\beta\in\pi}\#\left\{i,j\in\{1,\ldots,N\}^{|\beta|}\Big|[i]=[j]\right\}=\prod_{\beta\in\pi}I(\beta)$$
Also, Proposition 8.16 shows that we have the following estimate: $$I(\pi)=I^\circ(\pi)+O(N^{p-1})$$
Our claim now is that we have the following formula: $$\frac{I(\pi)-I^\circ(\pi)}{b_1!\ldots b_s!N^p}=\sum_{i<j}\sum_{c\geq2}\binom{b_i}{c}\binom{b_j}{c}N^{-1}+O(N^{-2})$$
Indeed, according to Proposition 8.16, we have a formula of the following type: $$I(\pi)=I^\circ(\pi)+I^1(\pi)+O(N^{p-2})$$
More precisely, this formula holds indeed, with $I^1(\pi)$ coming from $i_1,\ldots,i_p$ distinct, $[i]=[j]$, and with one constraint of type $\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r$, with $[i_r|r\in\beta]\neq[j_r|r\in\beta]$. Now observe that for a two-block partition $\pi=(a,b)$ this constraint is implemented, up to permutations which leave invariant the blocks of $\pi$, as follows: $$\begin{matrix}
i_1\ldots i_c&k_1\ldots k_{a-c}&&j_1\ldots j_c&l_1\ldots l_{b-c}\\
\underbrace{j_1\ldots j_c}_c&\underbrace{k_1\ldots k_{a-c}}_{a-c}&&\underbrace{i_1\ldots i_c}_c&\underbrace{l_1\ldots l_{b-c}}_{b-c}
\end{matrix}$$
Let us compute now $I^1(a,b)$. We cannot have $c=0,1$, and once $c\geq2$ is given, we have $\binom{a}{c},\binom{b}{c}$ choices for the positions of the $i,j$ variables in the upper row, then $N^{p-1}+O(N^{p-2})$ choices for the variables in the upper row, and then finally we have $a!b!$ permutations which can produce the lower row. We therefore obtain: $$I^1(a,b)=a!b!\sum_{c\geq2}\binom{a}{c}\binom{b}{c}N^{p-1}+O(N^{p-2})$$
In the general case now, a similar discussion applies. Indeed, the constraint of type $\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r$ with $[i_r|r\in\beta]\neq[j_r|r\in\beta]$ cannot affect $\leq1$ blocks, because we are not in the non-arithmetic case, and cannot affect either $\geq3$ blocks, because affecting $\geq3$ blocks would require $\geq2$ constraints. Thus this condition affects exactly $2$ blocks, and if we let $i<j$ be the indices in $\{1,\ldots,s\}$ corresponding to these 2 blocks, we obtain: $$I^1(\pi)=b_1!\ldots b_s!\sum_{i<j}\sum_{c\geq2}\binom{b_i}{c}\binom{b_j}{c}N^{p-1}+O(N^{p-2})$$
But this proves the above claim. Let us estimate now $I(\sqcap\!\!\sqcap\ldots\sqcap)$. We have: $$\begin{aligned}
I(\sqcap\!\!\sqcap\ldots\sqcap)
&=&p!\frac{N!}{(N-p)!}+\binom{p}{2}\frac{p!}{2}\cdot\frac{N!}{(N-p+1)!}+O(N^{p-2})\\
&=&p!N^p\left(1-\binom{p}{2}N^{-1}+O(N^{-2})\right)+\binom{p}{2}\frac{p!}{2}N^{p-1}+O(N^{p-2})\\
&=&p!N^p\left(1-\frac{1}{2}\binom{p}{2}N^{-1}+O(N^{-2})\right)\end{aligned}$$
Now by using the formula $I^\circ(\pi)=\prod_{\beta\in\pi}I(\beta)$, we obtain: $$I^\circ(\pi)=b_1!\ldots b_s!N^p\left(1-\frac{1}{2}\sum_i\binom{b_i}{2}N^{-1}+O(N^{-2})\right)$$
By plugging this quantity into the above estimate, we obtain the result.
In order to estimate the glow, we will need the explicit formula of $I(\sqcap\sqcap)$:
For $F_G$ with $G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}$ we have the formula $$I(\sqcap\sqcap)=N(4N^3-11N+2^e+7)$$ where $e\in\{0,1,\ldots,k\}$ is the number of even numbers among $N_1,\ldots,N_k$.
We use the fact that, when dealing with the conditions $\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r$ defining the quantities $I(\pi)$, one can always erase some of the variables $i_r,j_r$, so as to reduce to the “purely arithmetic” case, $\{i_r|r\in\beta\}\cap\{j_r|r\in\beta\}=\emptyset$. We have: $$I(\sqcap\sqcap)=I^\circ(\sqcap\sqcap)+I^{ari}(\sqcap\sqcap)$$
Let us compute now $I^{ari}(\sqcap\sqcap)$. There are 3 contributions to this quantity, namely:
\(1) First, there are the contributions coming from the pairs $i\neq j$ with $2i=2j$. Since $2(i_1,\ldots,i_k)=2(j_1,\ldots,j_k)$ corresponds to the collection of conditions $2i_r=2j_r$, inside $\mathbb Z_{N_r}$, which each have 1 or 2 solutions, depending on whether $N_r$ is odd or even, the contribution here is: $$\begin{aligned}
I^{ari}_1(\sqcap\sqcap)
&=&\#\{i\neq j|2i=2j\}\\
&=&\#\{i,j|2i=2j\}-\#\{i,j|i=j\}\\
&=&2^eN-N\\
&=&(2^e-1)N\end{aligned}$$
\(2) Second, there are the contributions coming from the triples $i,j,k$ distinct, with $2i=j+k$. The contribution here is: $$\begin{aligned}
I^{ari}_2(\sqcap\sqcap)
&=&4\#\{i,j,k\ {\rm distinct}|2i=j+k\}\\
&=&4\#\{i\neq j|2i-j\neq i,j\}\\
&=&4\#\{i\neq j|2i\neq 2j\}\\
&=&4(\#\{i,j|i\neq j\}-\#\{i\neq j|2i=2j\})\\
&=&4(N(N-1)-(2^e-1)N)\\
&=&4N(N-2^e)\end{aligned}$$
\(3) Finally, there are the contributions coming from the quadruples $i,j,k,l$ distinct, with $i+j=k+l$. The contribution here is: $$\begin{aligned}
I^{ari}_3(\sqcap\sqcap)
&=&4\#\{i,j,k,l\ {\rm distinct}|i+j=k+l\}\\
&=&4\#\{i,j,k\ {\rm distinct}|i+j-k\neq i,j,k\}\\
&=&4\#\{i,j,k\ {\rm distinct}|i+j-k\neq k\}\\
&=&4\#\{i,j,k\ {\rm distinct}|i\neq 2k-j\}\end{aligned}$$
We can split this quantity over two cases, $2j\neq 2k$ and $2j=2k$, and we obtain: $$\begin{aligned}
I^{ari}_3(\sqcap\sqcap)
&=&4(\#\{i,j,k\ {\rm distinct}|2j\neq 2k,i\neq 2k-j\}\\
&&+\#\{i,j,k\ {\rm distinct}|2j=2k,i\neq 2k-j\})\end{aligned}$$
The point now is that in the first case, $2j\neq 2k$, the numbers $j,k,2k-j$ are distinct, while in the second case, $2j=2k$, we simply have $2k-j=j$. Thus, we obtain: $$\begin{aligned}
I^{ari}_3(\sqcap\sqcap)
&=&4\left(\sum_{j\neq k,2j\neq 2k}\#\{i|i\neq j,k,2k-j\}+\sum_{j\neq k,2j=2k}\#\{i|i\neq j,k\}\right)\\
&=&4(N(N-2^e)(N-3)+N(2^e-1)(N-2))\\
&=&4N(N(N-3)-2^e(N-3)+2^e(N-2)-(N-2))\\
&=&4N(N^2-4N+2^e+2)\end{aligned}$$
We can now compute the arithmetic part. This is given by: $$\begin{aligned}
I^{ari}(\sqcap\sqcap)
&=&(2^e-1)N+4N(N-2^e)+4N(N^2-4N+2^e+2)\\
&=&N(2^e-1+4(N-2^e)+4(N^2-4N+2^e+2))\\
&=&N(4N^2-12N+2^e+7)\end{aligned}$$
Thus the integral to be computed is given by: $$\begin{aligned}
I(\sqcap\sqcap)
&=&N^2(2N-1)^2+N(4N^2-12N+2^e+7)\\
&=&N(4N^3-4N^2+N+4N^2-12N+2^e+7)\\
&=&N(4N^3-11N+2^e+7)\end{aligned}$$
Thus we have reached the formula in the statement, and we are done.
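The above formula can be verified by brute force, for small cyclic groups $G=\mathbb Z_N$. The following plain Python sketch counts the pairs of multi-indices subject to the multiset and sum conditions describing $I(\sqcap\sqcap)$, and compares the count with $N(4N^3-11N+2^e+7)$:
\begin{verbatim}
# Brute-force check of the formula for I over Z_N (e = 1 if N is even, 0 if odd).
from itertools import product

def I_two_blocks(N):
    count = 0
    for i in product(range(N), repeat=4):
        for j in product(range(N), repeat=4):
            if sorted(i) != sorted(j):
                continue
            s1 = (i[0] + i[1]) % N == (j[0] + j[1]) % N
            s2 = (i[2] + i[3]) % N == (j[2] + j[3]) % N
            if s1 and s2:
                count += 1
    return count

for N in (2, 3, 4, 5):
    e = 1 if N % 2 == 0 else 0
    print(N, I_two_blocks(N), N * (4 * N**3 - 11 * N + 2**e + 7))
\end{verbatim}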
We have the following asymptotic result:
The glow of $F_G$, with $|G|=N$, is given by $$\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}\left(\frac{|E|}{N}\right)^{2p}=1-K_1N^{-1}+K_2N^{-2}-K_3N^{-3}+O(N^{-4})$$ with $K_1=\binom{p}{2}$, $K_2=\binom{p}{2}\frac{3p^2+p-8}{12}$, $K_3=\binom{p}{3}\frac{p^3+4p^2+p-18}{8}$.
We use the quantities $\widetilde{K}(\pi)=\frac{K(\pi)}{p!},\widetilde{I}(\pi)=\frac{I(\pi)}{N^p}$, which are such that $\widetilde{K}(\pi|\ldots|)=\widetilde{K}(\pi),\widetilde{I}(\pi|\ldots|)=\widetilde{I}(\pi)$. In terms of $J(\sigma)=\binom{p}{\sigma}\widetilde{K}(\sigma)\widetilde{I}(\sigma)$, we have: $$\begin{aligned}
\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
&=&J(\emptyset)\\
&+&N^{-1}J(\sqcap)\\
&+&N^{-2}\left(J(\sqcap\!\sqcap)+J(\sqcap\sqcap)\right)\\
&+&N^{-3}\left(J(\sqcap\!\!\sqcap\!\!\sqcap)+J(\sqcap\!\!\sqcap\sqcap)+J(\sqcap\sqcap\sqcap)\right)\\
&+&O(N^{-4})\end{aligned}$$
We have $\widetilde{K}_0=\widetilde{K}_1=1$, $\widetilde{K}_2=\frac{1}{2}-1=-\frac{1}{2}$, $\widetilde{K}_3=\frac{1}{6}-\frac{3}{2}+2=\frac{2}{3}$ and: $$\widetilde{K}_4=\frac{1}{24}-\frac{4}{6}-\frac{3}{4}+\frac{12}{2}-6=-\frac{11}{8}$$
Regarding now the numbers $C_{pr}$ from Proposition 8.11, these are given by: $$C_{p1}=1,C_{p2}=\frac{1}{2}\binom{2p}{p}-1,\ldots\ldots,C_{p,p-1}=\frac{p!}{2}\binom{p}{2},C_{pp}=p!$$
We deduce that $I(|)=N$, $I(\sqcap)=N(2N-1)$, $I(\sqcap\!\sqcap)=N(6N^2-9N+4)$ and: $$I(\sqcap\!\!\sqcap\!\!\sqcap)=N(24N^3-72N^2+82N-33)$$
By using Proposition 8.17 and Proposition 8.18, we obtain the following formula: $$\begin{aligned}
\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
&=&1-\frac{1}{2}\binom{p}{2}(2N^{-1}-N^{-2})+\frac{2}{3}\binom{p}{3}(6N^{-2}-9N^{-3})\\
&+&3\binom{p}{4}N^{-2}-33\binom{p}{4}N^{-3}-40\binom{p}{5}N^{-3}\\
&-&15\binom{p}{6}N^{-3}+O(N^{-4})\end{aligned}$$
But this gives the formulae of $K_1,K_2,K_3$ in the statement, and we are done.
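As a numerical illustration, and with the usual disclaimer that this is not part of the proof, the following Python sketch (assuming numpy) computes the exact moment by brute force, using the combinatorial moment formula from the beginning of this section, and compares it with the expansion above; the agreement is essentially exact at $p=2$, and holds up to $O(N^{-4})$ at $p=3$:
\begin{verbatim}
# Exact glow moments of the Fourier matrix F_N, versus the asymptotic expansion.
import numpy as np
from math import comb, factorial, prod
from itertools import product

def glow_moment(N, p):
    w = np.exp(2j * np.pi / N)
    F = [[w ** (a * b) for b in range(N)] for a in range(N)]   # Fourier matrix
    tuples = list(product(range(N), repeat=p))
    pairs = [(i, k) for i in tuples for k in tuples if sorted(i) == sorted(k)]
    total = 0
    for i, k in pairs:
        for j, l in pairs:
            total += prod(F[i[r]][j[r]] / F[k[r]][l[r]] for r in range(p))
    return total.real

for N, p in [(8, 2), (5, 3)]:
    K1 = comb(p, 2)
    K2 = comb(p, 2) * (3 * p**2 + p - 8) / 12
    K3 = comb(p, 3) * (p**3 + 4 * p**2 + p - 18) / 8
    lhs = glow_moment(N, p) / (factorial(p) * N ** (2 * p))
    print(N, p, lhs, 1 - K1 / N + K2 / N**2 - K3 / N**3)
\end{verbatim}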
It is possible to compute the next term as well, the result being:
Let $G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}$ be a finite abelian group, and set $N=N_1\ldots N_k$. Then the glow of the associated Fourier matrix $F_G$ is given by $$\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}\left(\frac{|E|}{N}\right)^{2p}=1-K_1N^{-1}+K_2N^{-2}-K_3N^{-3}+K_4N^{-4}+O(N^{-5})$$ where the quantities $K_1,K_2,K_3,K_4$ are given by $$\begin{aligned}
K_1&=&\binom{p}{2}\\
K_2&=&\binom{p}{2}\frac{3p^2+p-8}{12}\\
K_3&=&\binom{p}{3}\frac{p^3+4p^2+p-18}{8}\\
K_4&=&\frac{8}{3}\binom{p}{3}+\frac{3}{4}\left(121+\frac{2^e}{N}\right)\binom{p}{4}+416\binom{p}{5}+\frac{2915}{2}\binom{p}{6}+40\binom{p}{7}+105\binom{p}{8}\end{aligned}$$ where $e\in\{0,1,\ldots,k\}$ is the number of even numbers among $N_1,\ldots,N_k$.
This is something that we already know, up to order 3, and the next coefficient $K_4$ can be computed in a similar way, based on results that we already have. Skipping the technical details here, we obtain the formula for $K_4$ in the statement.
The passage from Theorem 8.19 to Theorem 8.20 is quite interesting, because it shows that the glow of the Fourier matrices $F_G$ is not polynomial in $N=|G|$. When restricting the attention to the usual Fourier matrices $F_N$, the glow up to order 4 is polynomial both in $N$ odd, and in $N$ even, but it is not clear what happens at higher order.
An interesting question here is that of computing the glow of the Walsh matrices. For such a matrix $W_N$, with $N=2^n$, the underlying group is $G=\mathbb Z_2^n$, and the numbers $C_I(J_1,\ldots,J_r)=\#\left\{(a_i)_{i\in I}\in G\ {\rm distinct}\Big|\sum_{j\in J_s}a_j=0,\forall s\right\}$ are polynomial in $N=2^n$. This suggests that the integrals $I(\pi)$, and hence the glow, should be polynomial in $N$.
There are many interesting questions in relation with the glow. As already mentioned in the beginning of this section, a motivation for all this comes from [@iwo]. Also, we have as well the question of connecting the various invariants of the Hadamard matrices, such as the defect, or the algebraic invariants from sections 10-12 below, to the glow.
Norm maximizers
===============
We discuss here some further analytic questions. We know from Theorem 7.1 above that the Hadamard manifold $X_N=M_N(\mathbb T)\cap\sqrt{N}U_N$ can be analytically computed in two possible ways. The first method is via the Hadamard determinant bound: $$X_N=\left\{H\in M_N(\mathbb T)\Big|\,|\det(H)|=N^{N/2}\right\}$$
This method, going back to Hadamard’s 1893 paper [@had] is of course something well-known, and we will not further comment on this. We will focus instead on the second possible method, which is something more recent [@bcs], via the 1-norm bound: $$X_N=\left\{H\in\sqrt{N}U_N\Big|\,||H||_1=N^2\right\}$$
This formula suggests a systematic study of the 1-norm on $U_N$, with results about critical points, and local maximizers. In addition, in connection with the real Hadamard matrix problematics, we would like to study as well the 1-norm on $O_N$, where the critical points and local maximizers might differ from those over $U_N$.
We have already met such questions, in sections 1,6,7 above, in various technical contexts. Let us begin with a short summary of the results that we already have:
Let $U\in U_N$, and set $H=\sqrt{N}U$.
1. For $p\in[1,2)$ we have $||U||_p\leq N^{2/p-1/2}$, with equality when $H$ is Hadamard.
2. For $p\in(2,\infty]$ we have $||U||_p\geq N^{2/p-1/2}$, with equality when $H$ is Hadamard.
As explained in section 6 above, all this follows from the Hölder inequality, with the remark that at $p=1,4$ we just need the Cauchy-Schwarz inequality.
We have chosen here to talk about $p$-norms instead of just the 1-norm, and this, for two reasons. First, we have seen in section 6 that for certain technical questions regarding the circulant matrices, the natural norm to be used is in fact the 4-norm. And second, we will formulate in what follows certain conjectures regarding the 1-norm, and these conjectures are probably easier to investigate first in the $p$-norm setting, with $p$ arbitrary.
The above result suggests the following definition:
Given $U\in U_N$, the matrix $H=\sqrt{N}U$ is called:
1. Almost Hadamard, if $U$ locally maximizes the $1$-norm on $U_N$.
2. $p$-almost Hadamard, with $p<2$, if $U$ locally maximizes the $p$-norm on $U_N$.
3. $p$-almost Hadamard, with $p>2$, if $U$ locally minimizes the $p$-norm on $U_N$.
4. Absolute almost Hadamard, if it is $p$-almost Hadamard at any $p\neq2$.
We have as well real versions of these notions, with $U_N$ replaced by $O_N$.
All this might seem a bit complicated, but this is the best way of presenting things. We are mainly interested in (1), but as mentioned above, the exponent $p=4$ from (3) is interesting as well, and once we have (3) we must formulate (2) as well, and finally (4) is a useful thing too, because the absolute case is sometimes easier to study.
As for the “doubling” of all these notions, via the last sentence, this is necessary too, because given a function $F:U_N\to\mathbb R$, an element $U\in O_N$ can be a local extremum of the restriction $F_{|O_N}:O_N\to\mathbb R$, but not of the function $F$ itself. And, we will see in what follows that this is the case, and in a quite surprising way, with the $p$-norms.
Let us first study the critical points. Things are quite tricky here, and complete results are available so far only at $p=1$. In the real case, following [@bcs], we have:
If $U\in O_N$ locally maximizes the $1$-norm, then $U_{ij}\neq 0$ for any $i,j$.
Assume that $U$ has a 0 entry. By permuting the rows we can assume that this 0 entry is in the first row, having under it a nonzero entry in the second row.
We denote by $U_1,\ldots,U_N$ the rows of $U$. By permuting the columns we can assume that we have a block decomposition of the following type: $$\begin{pmatrix}U_1\\ U_2\end{pmatrix}
=\begin{pmatrix}
0&0&Y&A&B\\
0&X&0&C&D
\end{pmatrix}$$
Here $X,Y,A,B,C,D$ are certain vectors with nonzero entries, with $A,B,C,D$ chosen such that each entry of $A$ has the same sign as the corresponding entry of $C$, and each entry of $B$ has sign opposite to the sign of the corresponding entry of $D$.
Our above assumption states that $X$ is not the null vector.
For $t>0$ small consider the matrix $U^t$ obtained by rotating by $t$ the first two rows of $U$. In row notation, this matrix is given by: $$U^t=\begin{pmatrix}
\cos t&\sin t\\
-\sin t&\cos t\\
&&1\\
&&&\ddots\\
&&&&1\end{pmatrix}
\begin{pmatrix}
U_1\\ U_2\\ U_3\\ \ldots\\ U_N
\end{pmatrix}
=\begin{pmatrix}
\cos t\cdot U_1+\sin t\cdot U_2\\ -\sin t\cdot U_1+\cos t\cdot U_2\\ U_3\\ \ldots\\ U_N
\end{pmatrix}$$
We make the convention that the lower-case letters denote the 1-norms of the corresponding upper-case vectors. According to the above sign conventions, we have: $$\begin{aligned}
||U^t||_1
&=&||\cos t\cdot U_1+\sin t\cdot U_2||_1+||-\sin t\cdot U_1+\cos t\cdot U_2||_1+\sum_{i=3}^Nu_i\\
&=&(\cos t+\sin t)(x+y+b+c)+(\cos t-\sin t)(a+d)+\sum_{i=3}^Nu_i\\
&=&||U||_1+(\cos t+\sin t-1)(x+y+b+c)+(\cos t-\sin t-1)(a+d)\end{aligned}$$
By using $\sin t=t+O(t^2)$ and $\cos t=1+O(t^2)$ we obtain: $$\begin{aligned}
||U^t||_1
&=&||U||_1+t(x+y+b+c)-t(a+d)+O(t^2)\\
&=&||U||_1+t(x+y+b+c-a-d)+O(t^2)\end{aligned}$$
In order to conclude, we have to prove that $U$ cannot be a local maximizer of the $1$-norm. This will basically follow by comparing the norm of $U$ to the norm of $U^t$, with $t\neq0$ small. However, since in the above computation it was technically convenient to assume $t>0$, we actually have three cases:
Case 1: $b+c>a+d$. Here for $t>0$ small enough the above formula shows that we have $||U^t||_1>||U||_1$, and we are done.
Case 2: $b+c=a+d$. Here we use the fact that $X$ is not null, which gives $x>0$. Once again for $t>0$ small enough we have $||U^t||_1>||U||_1$, and we are done.
Case 3: $b+c<a+d$. In this case we can interchange the first two rows of $U$ and restart the whole procedure: we fall in Case 1, and we are done again.
In the complex case, following [@bn3], we have a similar result:
If $U\in U_N$ locally maximizes the $1$-norm, then $U_{ij}\neq 0$ for any $i,j$.
We use the same method as in the real case, namely a “rotation trick”. Let us denote by $U_1,\ldots,U_N$ the rows of $U$, and let us perform a rotation of $U_1,U_2$: $$\begin{bmatrix}U^t_1\\ U^t_2\end{bmatrix}
=\begin{bmatrix}\cos t\cdot U_1-\sin t\cdot U_2\\ \sin t\cdot U_1+\cos t\cdot U_2\end{bmatrix}$$
In order to compute the 1-norm, let us permute the columns of $U$, in such a way that the first two rows look as follows, with $X,Y,A,B$ having nonzero entries: $$\begin{bmatrix}U_1\\ U_2\end{bmatrix}
=\begin{bmatrix}0&0&Y&A\\0&X&0&B\end{bmatrix}$$
The rotated matrix will look then as follows: $$\begin{bmatrix}U_1^t\\ U_2^t\end{bmatrix}
=\begin{bmatrix}
0&-\sin t\cdot X&\cos t\cdot Y&\cos t\cdot A-\sin t\cdot B\\
0&\cos t\cdot X&\sin t\cdot Y&\sin t\cdot A+\cos t\cdot B\end{bmatrix}$$
Our claim is that $X,Y$ must be empty. Indeed, if $A$ and $B$ are not empty, let us fix a column index $k$ for both $A,B$, and set $\alpha=A_k$, $\beta=B_k$. We have then: $$\begin{aligned}
|(U_1^t)_k|+|(U_2^t)_k|
&=&|\cos t\cdot\alpha-\sin t\cdot\beta|+|\sin t\cdot\alpha+\cos t\cdot\beta|\\
&=&\sqrt{\cos^2t\cdot|\alpha|^2+\sin^2t\cdot|\beta|^2-\sin t\cos t(\alpha\bar{\beta}+\beta\bar{\alpha})}\\
&+&\sqrt{\sin^2t\cdot|\alpha|^2+\cos^2t\cdot|\beta|^2+\sin t\cos t(\alpha\bar{\beta}+\beta\bar{\alpha})}\end{aligned}$$
Since $\alpha,\beta\neq 0$, the above function is differentiable at $t=0$, and we obtain: $$\begin{aligned}
\frac{\partial\left(|(U_1^t)_k|+|(U_2^t)_k|\right)}{\partial t}
&=&\frac{\sin 2t(|\beta|^2-|\alpha|^2)-\cos 2t(\alpha\bar{\beta}+\beta\bar{\alpha})}{2\sqrt{\cos^2t\cdot|\alpha|^2+\sin^2t\cdot|\beta|^2-\sin t\cos t(\alpha\bar{\beta}+\beta\bar{\alpha})}}\\
&+&\frac{\sin 2t(|\alpha|^2-|\beta|^2)+\cos 2t(\alpha\bar{\beta}+\beta\bar{\alpha})}{2\sqrt{\sin^2t\cdot|\alpha|^2+\cos^2t\cdot|\beta|^2+\sin t\cos t(\alpha\bar{\beta}+\beta\bar{\alpha})}}\end{aligned}$$
Thus at $t=0$, we obtain the following formula: $$\frac{\partial\left(|(U_1^t)_k|+|(U_2^t)_k|\right)}{\partial t}(0)=\frac{\alpha\bar{\beta}+\beta\bar{\alpha}}{2}\left(\frac{1}{|\beta|}-\frac{1}{|\alpha|}\right)$$
Now since $U$ locally maximizes the 1-norm, both directional derivatives of $||U^t||_1$ must be negative in the limit $t\to 0$. On the other hand, if we denote by $C$ the contribution coming from the right (which might be zero in the case where $A$ and $B$ are empty), i.e. the sum over $k$ of the above quantities, we have: $$\begin{aligned}
\frac{\partial||U^t||_1}{\partial t}_{\big|t=0^+}
&=&\frac{\partial}{\partial t}_{\big|t=0^+}(|\cos t|+|\sin t|)(||X||_1+||Y||_1)+C\\
&=&(-\sin t + \cos t)_{\big|t=0}(||X||_1+||Y||_1)+C\\
&=&||X||_1+||Y||_1+C\end{aligned}$$
As for the derivative at left, this is given by the following formula: $$\begin{aligned}
\frac{\partial||U^t||_1}{\partial t}_{\big|t=0^-}
&=&\frac{\partial}{\partial t}_{\big|t=0^-}(|\cos t|+|\sin t|)(||X||_1+||Y||_1)+C\\
&=&(-\sin t - \cos t)_{\big|t=0}(||X||_1+||Y||_1)+C\\
&=&-||X||_1-||Y||_1+C\end{aligned}$$
We therefore obtain the following inequalities, where $C$ is as above: $$\begin{aligned}
||X||_1+||Y||_1+C &\leq& 0\\
-||X||_1-||Y||_1+C&\leq& 0\end{aligned}$$
Consider now the matrix obtained from $U$ by interchanging $U_1,U_2$. Since this matrix must be as well a local maximizer of the 1-norm, and since the above formula shows that $C$ changes its sign when interchanging $U_1,U_2$, we obtain: $$\begin{aligned}
||X||_1+||Y||_1-C &\leq& 0\\
-||X||_1-||Y||_1-C&\leq& 0\end{aligned}$$
The four inequalities that we have give altogether $||X||_1+||Y||_1=C=0$, and from $||X||_1+||Y||_1=0$ we obtain that both $X,Y$ must be empty, as claimed.
As a conclusion, up to a permutation of the columns, the first two rows must be of the following form, with $A,B$ having only nonzero entries: $$\begin{bmatrix}U_1\\ U_2\end{bmatrix}
=\begin{bmatrix}0&A\\0&B\end{bmatrix}$$
By permuting the rows of $U$, the same must hold for any two rows $U_i,U_j$. Now since $U$ cannot have a zero column, we conclude that $U$ cannot have zero entries, as claimed.
Let us compute now the critical points. Following [@bn3], we have:
Let $\varphi:[0,\infty)\to\mathbb R$ be a differentiable function. A matrix $U\in U_N^*$ is a critical point of the quantity $$F(U)=\sum_{ij}\varphi(|U_{ij}|)$$ precisely when $WU^*$ is self-adjoint, where $W_{ij}={\rm sgn}(U_{ij})\varphi'(|U_{ij}|)$.
We regard $U_N$ as a real algebraic manifold, with coordinates $U_{ij},\bar{U}_{ij}$. This manifold consists by definition of the zeroes of the following polynomials: $$A_{ij}=\sum_kU_{ik}\bar{U}_{jk}-\delta_{ij}$$
Since $U_N$ is smooth, and so is a differential manifold in the usual sense, it follows from the general theory of Lagrange multipliers that a given matrix $U\in U_N$ is a critical point of $F$ precisely when the condition $dF\in span(dA_{ij})$ is satisfied.
Regarding the space $span(dA_{ij})$, this consists of the following quantities: $$\begin{aligned}
\sum_{ij}M_{ij}dA_{ij}
&=&\sum_{ijk}M_{ij}(U_{ik}d\bar{U}_{jk}+\bar{U}_{jk}dU_{ik})\\
&=&\sum_{jk}(M^tU)_{jk}d\bar{U}_{jk}+\sum_{ik}(M\bar{U})_{ik}dU_{ik}\\
&=&\sum_{ij}(M^tU)_{ij}d\bar{U}_{ij}+\sum_{ij}(M\bar{U})_{ij}dU_{ij}\end{aligned}$$
In order to compute $dF$, observe first that, with $S_{ij}=sgn(U_{ij})$, we have: $$d|U_{ij}|=d\sqrt{U_{ij}\bar{U}_{ij}}=\frac{U_{ij}d\bar{U}_{ij}+\bar{U}_{ij}dU_{ij}}{2|U_{ij}|}=\frac{1}{2}(S_{ij}d\bar{U}_{ij}+\bar{S}_{ij}dU_{ij})$$
We therefore obtain, with $W_{ij}=sgn(U_{ij})\varphi'(|U_{ij}|)$ as in the statement: $$dF=\sum_{ij}d\left(\varphi(|U_{ij}|)\right)=\sum_{ij}\varphi'(|U_{ij}|)d|U_{ij}|=\frac{1}{2}\sum_{ij}W_{ij}d\bar{U}_{ij}+\bar{W}_{ij}dU_{ij}$$
We conclude that $U\in U_N$ is a critical point of $F$ if and only if there exists a matrix $M\in M_N(\mathbb C)$ such that the following two conditions are satisfied: $$W=2M^tU\quad,\quad\bar{W}=2M\bar{U}$$
Now observe that these two equations can be written as follows: $$M^t=\frac{1}{2}WU^*\quad,\quad M^t=\frac{1}{2}UW^*$$
Summing up, the critical point condition on $U\in U_N$ simply reads $WU^*=UW^*$, which means that the matrix $WU^*$ must be self-adjoint, as claimed.
In order to process the above result, we can use the following notion:
Given $U\in U_N$, we consider its “color decomposition” $U=\sum_{r>0}rU_r$, with $U_r\in M_N(\mathbb T\cup\{0\})$ containing the phase components at $r>0$, and we call $U$:
1. Semi-balanced, if $U_rU^*$ and $U^*U_r$, with $r>0$, are all self-adjoint.
2. Balanced, if $U_rU_s^*$ and $U_r^*U_s$, with $r,s>0$, are all self-adjoint.
These conditions are quite natural, because for a unitary matrix $U\in U_N$, the relations $UU^*=U^*U=1$ translate as follows, in terms of the color decomposition: $$\sum_{r>0}rU_rU^*=\sum_{r>0}rU^*U_r=1$$ $$\sum_{r,s>0}rsU_rU_s^*=\sum_{r,s>0}rsU_r^*U_s=1$$
Thus, our balancing conditions express the fact that the various components of the above sums are all self-adjoint. Now back to our critical point questions, we have:
For a matrix $U\in U_N^*$, the following are equivalent:
1. $U$ is a critical point of $F(U)=\sum_{ij}\varphi(|U_{ij}|)$, for any $\varphi:[0,\infty)\to\mathbb R$.
2. $U$ is a critical point of all the $p$-norms, with $p\in[1,\infty)$.
3. $U$ is semi-balanced, in the above sense.
We use Theorem 9.5 above. The matrix constructed there is given by: $$\begin{aligned}
(WU^*)_{ij}
&=&\sum_k{\rm sgn}(U_{ik})\varphi'(|U_{ik}|)\bar{U}_{jk}\\
&=&\sum_{r>0}\varphi'(r)\sum_{k,|U_{ik}|=r}{\rm sgn}(U_{ik})\bar{U}_{jk}\\
&=&\sum_{r>0}\varphi'(r)\sum_k(U_r)_{ik}\bar{U}_{jk}\\
&=&\sum_{r>0}\varphi'(r)(U_rU^*)_{ij}\end{aligned}$$
Thus we have $WU^*=\sum_{r>0}\varphi'(r)U_rU^*$, and when $\varphi:[0,\infty)\to\mathbb R$ varies, either as an arbitrary differentiable function, or as a power function $\varphi(x)=x^p$ with $p\in[1,\infty)$, the individual components of this sum must be all self-adjoint, as desired.
In practice now, most of the known examples of semi-balanced matrices are actually balanced. We have the following collection of simple facts, regarding such matrices:
The class of balanced matrices is as follows:
1. It contains the matrices $U=H/\sqrt{N}$, with $H\in M_N(\mathbb C)$ Hadamard.
2. It is stable under transposition, complex conjugation, and taking adjoints.
3. It is stable under taking tensor products.
4. It is stable under the Hadamard equivalence relation.
5. It contains the matrix $V_N=\frac{1}{N}(2\mathbb I_N-N1_N)$, where $\mathbb I_N$ is the all-$1$ matrix.
All these results are elementary, the proof being as follows:
\(1) Here $U\in U_N$ follows from the Hadamard condition, and since there is only one color component, namely $U_{1/\sqrt{N}}=H$, the balancing condition is satisfied as well.
\(2) Assuming that $U=\sum_{r>0}rU_r$ is a color decomposition of a given matrix $U\in U_N$, the following are color decompositions too, and this gives the assertions: $$U^t=\sum_{r>0}rU_r^t\quad,\quad\bar{U}=\sum_{r>0}r\bar{U}_r\quad,\quad U^*=\sum_{r>0}rU_r^*$$
\(3) Assuming that $U=\sum_{r>0}rU_r$ and $V=\sum_{s>0}sV_s$ are the color decompositions of two given unitary matrices $U,V$, we have: $$U\otimes V=\sum_{r,s>0}rs\cdot U_r\otimes V_s=\sum_{p>0}p\sum_{p=rs}U_r\otimes V_s$$
Thus the color components of $W=U\otimes V$ are the matrices $W_p=\sum_{p=rs}U_r\otimes V_s$, and it follows that if $U,V$ are both balanced, then so is $W=U\otimes V$.
\(4) We recall that the Hadamard equivalence consists in permuting rows and columns, and switching signs on rows and columns. Since all these operations correspond to certain conjugations at the level of the matrices $U_rU_s^*,U_r^*U_s$, we obtain the result.
\(5) The matrix in the statement, which goes back to [@bnz], is as follows: $$V_N=\frac{1}{N}
\begin{pmatrix}
2-N&2&\ldots&2\\
2&2-N&\ldots&2\\
\ldots&\ldots&\ldots&\ldots\\
2&2&\ldots&2-N
\end{pmatrix}$$
Observe that this matrix is indeed unitary, its rows being of norm one, and pairwise orthogonal. The color components of this matrix being $V_{1-2/N}=-1_N$ and $V_{2/N}=\mathbb I_N-1_N$ (at $N\geq3$), it follows that this matrix is balanced as well, as claimed.
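Here is a short numerical check, in Python with numpy, that $V_N$ is indeed unitary, and that the products of its color components are self-adjoint, as required:
\begin{verbatim}
# V_N is orthogonal, and the products of its color components are self-adjoint.
import numpy as np
N = 5
J = np.ones((N, N))          # the all-1 matrix, denoted I_N in the text
I = np.eye(N)                # the identity matrix 1_N
V = (2 * J - N * I) / N
print(np.allclose(V @ V.T, I))                      # unitarity
components = [I, J - I]                             # color components, up to sign
for A in components:
    for B in components:
        print(np.allclose(A @ B.T, (A @ B.T).T))    # self-adjointness
\end{verbatim}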
Let us look now more in detail at $V_N$, and at the matrices having similar properties. Following [@bnz], let us call $(a,b,c)$ pattern any matrix $M\in M_N(0,1)$, with $N=a+2b+c$, such that any two rows look as follows, up to a permutation of the columns: $$\begin{matrix}
0\ldots 0&0\ldots 0&1\ldots 1&1\ldots 1\\
\underbrace{0\ldots 0}_a&\underbrace{1\ldots 1}_b&\underbrace{0\ldots 0}_b&\underbrace{1\ldots 1}_c
\end{matrix}$$
As explained in [@bnz], there are many interesting examples of $(a,b,c)$ patterns, coming from the balanced incomplete block designs (BIBD), and all these examples can produce two-entry unitary matrices, by replacing the $0,1$ entries with suitable numbers $x,y$.
Now back to the matrix $V_N$ from Proposition 9.8 (5), observe that this matrix comes from a $(0,1,N-2)$ pattern. And also, independently of this, this matrix has the remarkable property of being at the same time circulant and self-adjoint.
We have in fact the following result, generalizing Proposition 9.8 (5):
The following matrices are balanced:
1. The orthogonal matrices coming from $(a,b,c)$ patterns.
2. The unitary matrices which are circulant and self-adjoint.
These observations basically go back to [@bnz], the proofs being as follows:
\(1) If we denote by $P,Q\in M_N(0,1)$ the matrices describing the positions of the $0,1$ entries inside the pattern, then we have the following formulae: $$\begin{aligned}
PP^t=P^tP&=&a\mathbb I_N+b1_N\\
QQ^t=Q^tQ&=&c\mathbb I_N+b1_N\\
PQ^t=P^tQ=QP^t=Q^tP&=&b\mathbb I_N-b1_N\end{aligned}$$
Since all these matrices are symmetric, $U$ is balanced, as claimed.
\(2) Assume that $U\in U_N$ is circulant, $U_{ij}=\gamma_{j-i}$, and in addition self-adjoint, which means $\bar{\gamma}_i=\gamma_{-i}$. Consider the following sets, which must satisfy $D_r=-D_r$: $$D_r=\{k:|\gamma_k|=r\}$$
In terms of these sets, we have the following formula: $$\begin{aligned}
(U_rU_s^*)_{ij}
&=&\sum_k(U_r)_{ik}(\bar{U}_s)_{jk}\\
&=&\sum_k\delta_{|\gamma_{k-i}|,r}\,{\rm sgn}(\gamma_{k-i})\cdot\delta_{|\gamma_{k-j}|,s}\,{\rm sgn}(\bar{\gamma}_{k-j})\\
&=&\sum_{k\in(D_r+i)\cap(D_s+j)}{\rm sgn}(\gamma_{k-i})\,{\rm sgn}(\bar{\gamma}_{k-j})\end{aligned}$$
With $k=i+j-m$ we obtain, by using $D_r=-D_r$, and then $\bar{\gamma}_i=\gamma_{-i}$: $$\begin{aligned}
(U_rU_s^*)_{ij}
&=&\sum_{m\in(-D_r+j)\cap(-D_s+i)}{\rm sgn}(\gamma_{j-m})\,{\rm sgn}(\bar{\gamma}_{i-m})\\
&=&\sum_{m\in(D_r+j)\cap(D_s+i)}{\rm sgn}(\gamma_{j-m})\,{\rm sgn}(\bar{\gamma}_{i-m})\\
&=&\sum_{m\in(D_r+j)\cap(D_s+i)}{\rm sgn}(\bar{\gamma}_{m-j})\,{\rm sgn}(\gamma_{m-i})\end{aligned}$$
Now by interchanging $i\leftrightarrow j$, and with $m\to k$, this formula becomes: $$(U_rU_s^*)_{ji}=\sum_{k\in(D_r+i)\cap(D_s+j)}{\rm sgn}(\bar{\gamma}_{k-i})\,{\rm sgn}(\gamma_{k-j})$$
We recognize here the complex conjugate of $(U_rU_s^*)_{ij}$, as previously computed above, and we therefore deduce that $U_rU_s^*$ is self-adjoint. The proof for $U_r^*U_s$ is similar.
Summarizing, the study of the critical points alone leads to some interesting combinatorics. There are of course many questions regarding the balanced and semi-balanced matrices, for instance in connection with design theory. See [@bn3].
Let us compute now derivatives. As in Theorem 9.5, it is convenient to do the computations in a more general framework, where we have a function as follows: $$F(U)=\sum_{ij}\psi(|U_{ij}|^2)$$
In order to study the local extrema of these quantities, consider the following function, depending on $t>0$ small: $$f(t)=F(Ue^{tA})=\sum_{ij}\psi(|(Ue^{tA})_{ij}|^2)$$
Here $U\in U_N$ is an arbitrary unitary matrix, and $A\in M_N(\mathbb C)$ is assumed to be anti-hermitian, $A^*=-A$, with this latter assumption needed for having $e^A\in U_N$.
Let us first compute the derivative of $f$. We have:
We have the following formula, $$f'(t)=2\sum_{ij}\psi'(|(Ue^{tA})_{ij}|^2)Re\left[(UAe^{tA})_{ij}\overline{(Ue^{tA})_{ij}}\right]$$ valid for any $U\in U_N$, and any $A\in M_N(\mathbb C)$ anti-hermitian.
The matrices $U,e^{tA}$ being both unitary, we have: $$\begin{aligned}
|(Ue^{tA})_{ij}|^2
&=&(Ue^{tA})_{ij}\overline{(Ue^{tA})_{ij}}\\
&=&(Ue^{tA})_{ij}((Ue^{tA})^*)_{ji}\\
&=&(Ue^{tA})_{ij}(e^{tA^*}U^*)_{ji}\\
&=&(Ue^{tA})_{ij}(e^{-tA}U^*)_{ji}\end{aligned}$$
We can now differentiate our function $f$, and by using once again the unitarity of the matrices $U,e^{tA}$, along with the formula $A^*=-A$, we obtain: $$\begin{aligned}
f'(t)
&=&\sum_{ij}\psi'(|(Ue^{tA})_{ij}|^2)\left[(UAe^{tA})_{ij}(e^{-tA}U^*)_{ji}-(Ue^{tA})_{ij}(e^{-tA}AU^*)_{ji}\right]\\
&=&\sum_{ij}\psi'(|(Ue^{tA})_{ij}|^2)\left[(UAe^{tA})_{ij}\overline{((e^{-tA}U^*)^*)_{ij}}-(Ue^{tA})_{ij}\overline{((e^{-tA}AU^*)^*)_{ij}}\right]\\
&=&\sum_{ij}\psi'(|(Ue^{tA})_{ij}|^2)\left[(UAe^{tA})_{ij}\overline{(Ue^{tA})_{ij}}+(Ue^{tA})_{ij}\overline{(UAe^{tA})_{ij}}\right]\end{aligned}$$
But this gives the formula in the statement, and we are done.
Before computing the second derivative, let us evaluate $f'(0)$. In terms of the color decomposition $U=\sum_{r>0}rU_r$ of our matrix, the result is as follows:
We have the following formula, $$f'(0)=2\sum_{r>0}r\psi'(r^2)Re\left[Tr(U_r^*UA)\right]$$ where $U_r\in M_N(\mathbb T\cup\{0\})$ are the color components of $U$.
We use the formula in Proposition 9.10 above. At $t=0$, we obtain: $$f'(0)=2\sum_{ij}\psi'(|U_{ij}|^2)Re\left[(UA)_{ij}\overline{U}_{ij}\right]$$
Consider now the color decomposition of $U$. We have the following formulae: $$\begin{aligned}
U_{ij}=\sum_{r>0}r(U_r)_{ij}
&\implies&|U_{ij}|^2=\sum_{r>0}r^2|(U_r)_{ij}|\\
&\implies&\psi'(|U_{ij}|^2)=\sum_{r>0}\psi'(r^2)|(U_r)_{ij}|\end{aligned}$$
Now by getting back to the above formula of $f'(0)$, we obtain: $$f'(0)=2\sum_{r>0}\psi'(r^2)\sum_{ij}Re\left[(UA)_{ij}\overline{U}_{ij}|(U_r)_{ij}|\right]$$
Our claim now is that we have $\overline{U}_{ij}|(U_r)_{ij}|=r\overline{(U_r)}_{ij}$. Indeed, in the case $|U_{ij}|\neq r$ this formula reads $\overline{U}_{ij}\cdot 0=r\cdot 0$, which is true, and in the case $|U_{ij}|=r$ this formula reads $r\bar{S}_{ij}\cdot 1=r\cdot\bar{S}_{ij}$, which is once again true. We therefore conclude that we have: $$f'(0)=2\sum_{r>0}r\psi'(r^2)\sum_{ij}Re\left[(UA)_{ij}\overline{(U_r)}_{ij}\right]$$
But this gives the formula in the statement, and we are done.
Let us compute now the second derivative. The result here is as follows:
We have the following formula, $$\begin{aligned}
f''(0)
&=&4\sum_{ij}\psi''(|U_{ij}|^2)Re\left[(UA)_{ij}\overline{U}_{ij}\right]^2\\
&&+2\sum_{ij}\psi'(|U_{ij}|^2)Re\left[(UA^2)_{ij}\overline{U}_{ij}\right]\\
&&+2\sum_{ij}\psi'(|U_{ij}|^2)|(UA)_{ij}|^2\end{aligned}$$ valid for any $U\in U_N$, and any $A\in M_N(\mathbb C)$ anti-hermitian.
We use the formula in Proposition 9.10 above, namely: $$f'(t)=2\sum_{ij}\psi'(|(Ue^{tA})_{ij}|^2)Re\left[(UAe^{tA})_{ij}\overline{(Ue^{tA})_{ij}}\right]$$
Since the real part on the right, or rather its double, appears as the derivative of the quantity $|(Ue^{tA})_{ij}|^2$, when differentiating a second time, we obtain: $$\begin{aligned}
f''(t)
&=&4\sum_{ij}\psi''(|(Ue^{tA})_{ij}|^2)Re\left[(UAe^{tA})_{ij}\overline{(Ue^{tA})_{ij}}\right]^2\\
&&+2\sum_{ij}\psi'(|(Ue^{tA})_{ij}|^2)Re\left[(UAe^{tA})_{ij}\overline{(Ue^{tA})_{ij}}\right]'\end{aligned}$$
In order to compute now the missing derivative, observe that we have: $$\begin{aligned}
\left[(UAe^{tA})_{ij}\overline{(Ue^{tA})_{ij}}\right]'
&=&(UA^2e^{tA})_{ij}\overline{(Ue^{tA})_{ij}}+(UAe^{tA})_{ij}\overline{(UAe^{tA})_{ij}}\\
&=&(UA^2e^{tA})_{ij}\overline{(Ue^{tA})_{ij}}+|(UAe^{tA})_{ij}|^2\end{aligned}$$
Summing up, we have obtained the following formula: $$\begin{aligned}
f''(t)
&=&4\sum_{ij}\psi''(|(Ue^{tA})_{ij}|^2)Re\left[(UAe^{tA})_{ij}\overline{(Ue^{tA})_{ij}}\right]^2\\
&&+2\sum_{ij}\psi'(|(Ue^{tA})_{ij}|^2)Re\left[(UA^2e^{tA})_{ij}\overline{(Ue^{tA})_{ij}}\right]\\
&&+2\sum_{ij}\psi'(|(Ue^{tA})_{ij}|^2)|(UAe^{tA})_{ij}|^2\end{aligned}$$
But at $t=0$ this gives the formula in the statement, and we are done.
For the function $\psi(x)=\sqrt{x}$, corresponding to the functional $F(U)=||U||_1$, there are some simplifications, that we will work out now in detail. First, we have:
Let $U \in U_N^*$. For the function $F(U)=||U||_1$ we have the formula $$f''(0)=Re\left[Tr(S^*UA^2)\right]+\sum_{ij}\frac{Im\left[(UA)_{ij}\overline{S}_{ij}\right]^2}{|U_{ij}|}$$ valid for any anti-hermitian matrix $A$, where $U_{ij}=S_{ij}|U_{ij}|$.
We use the formula in Proposition 9.12 above, with $\psi(x)=\sqrt{x}$. The derivatives are here $\psi'(x)=\frac{1}{2\sqrt{x}}$ and $\psi''(x)=-\frac{1}{4x\sqrt{x}}$, and we obtain: $$\begin{aligned}
f''(0)
&=&-\sum_{ij}\frac{Re\left[(UA)_{ij}\overline{U}_{ij}\right]^2}{|U_{ij}|^3}
+\sum_{ij}\frac{Re\left[(UA^2)_{ij}\overline{U}_{ij}\right]}{|U_{ij}|}
+\sum_{ij}\frac{|(UA)_{ij}|^2}{|U_{ij}|}\\
&=&-\sum_{ij}\frac{Re\left[(UA)_{ij}\overline{S}_{ij}\right]^2}{|U_{ij}|}
+\sum_{ij}Re\left[(UA^2)_{ij}\overline{S}_{ij}\right]
+\sum_{ij}\frac{|(UA)_{ij}|^2}{|U_{ij}|}\\
&=&Re\left[Tr(S^*UA^2)\right]+\sum_{ij}\frac{|(UA)_{ij}|^2-Re\left[(UA)_{ij}\overline{S}_{ij}\right]^2}{|U_{ij}|}\end{aligned}$$
But this gives the formula in the statement, and we are done.
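As an illustration, one can test this formula against a finite-difference approximation of $f''(0)$. The following Python sketch (assuming numpy and scipy are available) does this at a rescaled Fourier matrix, with a random anti-hermitian matrix $A$:
\begin{verbatim}
# Finite-difference check of the second derivative formula for the 1-norm.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
N = 3
U = np.exp(2j * np.pi * np.outer(range(N), range(N)) / N) / np.sqrt(N)
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = M - M.conj().T                                   # anti-hermitian matrix

S = U / np.abs(U)                                    # S_ij = sgn(U_ij)
UA = U @ A
formula = (np.real(np.trace(S.conj().T @ U @ A @ A))
           + np.sum(np.imag(UA * S.conj()) ** 2 / np.abs(U)))

f = lambda t: np.sum(np.abs(U @ expm(t * A)))        # f(t) = ||U e^{tA}||_1
h = 1e-4
print(formula, (f(h) - 2 * f(0) + f(-h)) / h**2)     # close values
\end{verbatim}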
We are therefore led to the following result, regarding the 1-norm:
A matrix $U\in U_N^*$ locally maximizes the one-norm on $U_N$ precisely when $S^*U$ is self-adjoint, where $S_{ij}={\rm sgn}(U_{ij})$, and when $$Tr(S^*UA^2)+\sum_{ij}\frac{Im\left[(UA)_{ij}\overline{S}_{ij}\right]^2}{|U_{ij}|}\leq0$$ holds, for any anti-hermitian matrix $A\in M_N(\mathbb C)$.
According to Theorem 9.5 and Proposition 9.13, the local maximizer condition requires $X=S^*U$ to be self-adjoint, and the following inequality to be satisfied: $$Re\left[Tr(S^*UA^2)\right]+\sum_{ij}\frac{Im\left[(UA)_{ij}\overline{S}_{ij}\right]^2}{|U_{ij}|}\leq0$$
Now observe that since both $X$ and $A^2$ are self-adjoint, we have: $$Re\left[Tr(XA^2)\right]=\frac{1}{2}\left[Tr(XA^2)+Tr(A^2X)\right]=Tr(XA^2)$$
Thus we can remove the real part, and we obtain the inequality in the statement.
In order to further improve the above result, we will need:
For a self-adjoint matrix $X\in M_N(\mathbb C)$, the following are equivalent:
1. $Tr(XA^2)\leq0$, for any anti-hermitian matrix $A\in M_N(\mathbb C)$.
2. $Tr(XB^2)\geq0$, for any hermitian matrix $B\in M_N(\mathbb C)$.
3. $Tr(XC)\geq0$, for any positive matrix $C\in M_N(\mathbb C)$.
4. $X\geq0$.
These equivalences are well-known, the proof being as follows:
$(1)\implies(2)$ follows by taking $B=iA$.
$(2)\implies(3)$ follows by taking $C=B^2$.
$(3)\implies(4)$ follows by diagonalizing $X$, and then taking $C$ to be diagonal.
$(4)\implies(1)$ is clear as well, because with $Y=\sqrt{X}$ we have: $$Tr(XA^2)=Tr(Y^2A^2)=Tr(YA^2Y)=-Tr((YA)(YA)^*)\leq0$$
Thus, the above four conditions are indeed equivalent.
We would like to discuss as well the real case, and we will need here:
For a symmetric matrix $X\in M_N(\mathbb R)$, the following are equivalent:
1. $Tr(XA^2)\leq0$, for any antisymmetric matrix $A$.
2. The sum of the two smallest eigenvalues of $X$ is positive.
In terms of the vector $a=\sum_{ij}A_{ij}e_i\otimes e_j$, which is antisymmetric, we have: $$Tr(XA^2)=<X,A^2>=-<AX,A>=-<a,(1\otimes X)a>$$
Thus the condition (1) is equivalent to $P(1\otimes X)P$ being positive, with $P$ being the orthogonal projection on the antisymmetric subspace in $\mathbb R^N\otimes\mathbb R^N$.
For any two eigenvectors $x_i \perp x_j$ of $X$, with eigenvalues $\lambda_i, \lambda_j$, we have: $$\begin{aligned}
P(1\otimes X)P(x_i\otimes x_j-x_j\otimes x_i)
&=&P(\lambda_j x_i\otimes x_j-\lambda_i x_j\otimes x_i)\\
&=&\frac{\lambda_i +\lambda_j}{2}(x_i\otimes x_j-x_j\otimes x_i)\end{aligned}$$
Thus, we obtain the conclusion in the statement.
Following [@bn3], we can now formulate a final result on the subject, as follows:
Given $U\in U_N$, set $S_{ij}={\rm sgn}(U_{ij})$, and $X=S^*U$.
1. $U$ locally maximizes the $1$-norm on $U_N$ precisely when $X\geq0$, and when $$\Phi(U,B)=Tr(XB^2)-\sum_{ij}\frac{Re\left[(UB)_{ij}\overline{S}_{ij}\right]^2}{|U_{ij}|}$$ is positive, for any hermitian matrix $B\in M_N(\mathbb C)$.
2. If $U\in O_N$, this matrix locally maximizes the $1$-norm on $O_N$ precisely when $X$ is self-adjoint, and the sum of its two smallest eigenvalues is positive.
Here (1) follows from Theorem 9.14, by setting $A=iB$, and by using Proposition 9.15, which shows that we must have indeed $X\geq0$. As for (2), this follows from (1), with the remark that the right term vanishes, and from Proposition 9.16.
In practice now, let us first discuss the real case. The following result, involving the notion of $(a,b,c)$ pattern appearing in Theorem 9.9, was proved in [@bnz]:
If $U=U(x,y)$ is orthogonal, coming from an $(a,b,c)$ pattern, with $$(N(a-b)+2b)|x|+(N(c-b)+2b)|y|\geq 0$$ the matrix $H=\sqrt{N}U$ is almost Hadamard, in a real sense.
Since any row of $U$ consists of $a+b$ copies of $x$ and $b+c$ copies of $y$, we have: $$(SU^t)_{ij}
=\begin{cases}
(a+b)|x|+(b+c)|y|&(i=j)\\
(a-b)|x|+(c-b)|y|&(i\neq j)
\end{cases}$$
Now observe that we can write the matrix $SU^t$ as follows: $$\begin{aligned}
SU^t
&=&2b(|x|+|y|)1_N+((a-b)|x|+(c-b)|y|)NJ_N\\
&=&2b(|x|+|y|)(1_N-J_N)+((N(a-b)+2b)|x|+(N(c-b)+2b)|y|)J_N\end{aligned}$$
Since $1_N-J_N,J_N$ are orthogonal projections, we have $SU^t>0$ if and only if the coefficients of these matrices are both positive, and this gives the result.
As a basic example for the above construction, we have the following matrix: $$K_N=\frac{1}{\sqrt{N}}
\begin{pmatrix}
2-N&2&\ldots&2\\
2&2-N&\ldots&2\\
\ldots&\ldots&\ldots&\ldots\\
2&2&\ldots&2-N
\end{pmatrix}$$
We should mention that this matrix is in fact absolute almost Hadamard, in the real sense, as explained in [@moh]. There are many other interesting examples, coming from various block design constructions, and we refer here to [@bnz].
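As a sanity check, the above criterion can be run on $K_N$ itself; the following sketch, assuming numpy, verifies that $U=K_N/\sqrt{N}$ is orthogonal, and that $X=S^tU$ is symmetric and positive definite:

```python
# Sketch: the matrix K_N, and the almost Hadamard criterion applied to it.
import numpy as np

def K(N):
    M = np.full((N, N), 2.0)
    np.fill_diagonal(M, 2.0 - N)
    return M / np.sqrt(N)

for N in [3, 5, 8]:
    U = K(N) / np.sqrt(N)
    X = np.sign(U).T @ U
    print(N,
          np.allclose(U @ U.T, np.eye(N)),       # U is orthogonal
          np.allclose(X, X.T),                   # X = S^t U is symmetric
          np.linalg.eigvalsh(X).min() > 0)       # and positive definite
```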
Observe now that our basic example, namely the above matrix $K_N$, is at the same time circulant and symmetric. We have in fact the following result, also from [@bnz]:
Consider a circulant matrix $H\in M_N(\mathbb R^*)$, written $H_{ij}=\gamma_{j-i}$. If the following conditions are satisfied, $H$ is almost Hadamard, in a real sense:
1. The vector $q=F^*\gamma$ satisfies $q\in\mathbb T^N$.
2. With $\varepsilon={\rm sgn}(\gamma)$, $\rho_i=\sum_r\varepsilon_r\gamma_{i+r}$ and $\nu=F^*\rho$, we have $\nu>0$.
We use the Fourier transform theory from Theorem 6.13 above. As a first observation, with $U=H/\sqrt{N}$, the orthogonality of $U$ is equivalent to the condition (1). Regarding now the condition $SU^t>0$, this is equivalent to $S^tU>0$. But: $$(S^tH)_{ij}=\sum_kS_{ki}H_{kj}=\sum_k\varepsilon_{i-k}\gamma_{j-k}=\sum_r\varepsilon_r\gamma_{j-i+r}=\rho_{j-i}$$
Thus $S^tU$ is circulant, with $\rho/\sqrt{N}$ as first row. We therefore have $S^tU=FLF^*$ with $L=diag(\nu)$ and $\nu=F^*\rho$, so $S^tU>0$ iff $\nu>0$, and we are done. See [@bnz].
As an example here, consider the following vector, having length $N=2n+1$: $$q=(-1)^n(1,-1,1,\ldots,-1,1,1,-1,\ldots,1,-1)$$
This vector satisfies the conditions of Theorem 9.19, and produces the following circulant $N\times N$ real almost Hadamard matrix, from [@bnz]: $$L_N=\frac{1}{N}
\begin{pmatrix}
1&-\cos^{-1}\frac{\pi}{N}&\cos^{-1}\frac{2\pi}{N}&\ldots\ldots&\cos^{-1}\frac{(N-1)\pi}{N}\\
\cos^{-1}\frac{(N-1)\pi}{N}&1&-\cos^{-1}\frac{\pi}{N}&\ldots\ldots&-\cos^{-1}\frac{(N-2)\pi}{N}\\
-\cos^{-1}\frac{(N-2)\pi}{N}&\cos^{-1}\frac{(N-1)\pi}{N}&1&\ldots\ldots&\cos^{-1}\frac{(N-3)\pi}{N}\\
\vdots&\vdots&\vdots&&\vdots\\
\vdots&\vdots&\vdots&&\vdots\\
-\cos^{-1}\frac{\pi}{N}&\cos^{-1}\frac{2\pi}{N}&-\cos^{-1}\frac{3\pi}{N}&\ldots\ldots&1
\end{pmatrix}$$
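As a numerical sketch, assuming numpy, and reading $\cos^{-1}$ above as $1/\cos$, with the circulant pattern $(L_N)_{ij}=\frac{(-1)^k}{N\cos\frac{k\pi}{N}}$, where $k=(j-i)\,{\rm mod}\,N$, one can check the orthogonality, and the positivity condition used above:

```python
# Sketch: the circulant matrix L_N, with N odd, read off as indicated above.
import numpy as np

def L(N):
    k = (np.arange(N)[None, :] - np.arange(N)[:, None]) % N
    return (-1.0) ** k / (N * np.cos(k * np.pi / N))

for N in [3, 5, 7, 9]:
    LN = L(N)
    X = np.sign(LN).T @ LN
    print(N,
          np.allclose(LN @ LN.T, np.eye(N)),                 # orthogonal
          np.linalg.eigvalsh((X + X.T) / 2).min() > 0)       # S^t L_N > 0
```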
We refer to the paper [@bnz] for further details on all this, and for some other basic facts regarding the almost Hadamard matrices, in the real case. Some further analytic facts are available from [@bcs], [@bn1], [@bs1], [@moh]. There is as well a concrete application of all this, to the minors of the Hadamard matrices, in the spirit of [@kms], available from [@bs2].
Following now [@bn3], let us discuss the complex case. Quite surprisingly, the above “basic” matrix $K_N$ is not an almost Hadamard matrix in the complex sense. That is, while $K_N/\sqrt{N}$ locally maximizes the 1-norm on $O_N$, it does not do so over $U_N$.
In fact, the same happens for the various matrices coming from Theorem 9.18 and Theorem 9.19 above, as well as for the various complex matrices obtained via straightforward complex extensions of these constructions: none of them is almost Hadamard, in the complex sense.
We are therefore led to the following statement, from [@bn3]:
Any local maximizer of the $1$-norm on $U_N$ must be a global maximizer, i.e. must be a rescaled Hadamard matrix.
In other words, our conjecture would be that, in the complex setting, almost Hadamard implies Hadamard. This would be of course something very useful.
Regarding now the known verifications of the AHC, as already mentioned above, these basically concern the natural “candidates” coming from Theorem 9.18 and Theorem 9.19, as well as some straightforward complex generalizations of these candidates.
All this is quite technical, and generally speaking, we refer here to [@bn3]. Let us mention, however, that the main idea that emerges from [@bn3] is that of using a method based on a random derivative, pointing towards a suitable homogeneous space coset.
In order to explain this, let $OSC_N\subset USC_N$ be the sets of orthogonal symmetric circulant matrices, and of unitary self-adjoint circulant matrices. Via the Fourier transform identifications from Theorem 6.13, the inclusion $OSC_N\subset USC_N$ corresponds then to the following inclusion, where $\mathbb Z_2^{(N+e)/2}$, with $e=0,1$, stands for the subgroup $\{p\in\mathbb Z_2^N|p_k=p_{-k}\}$: $$\mathbb Z_2^{(N+e)/2}\subset\mathbb Z_2^N$$
Let us consider as well the set $USB_N$ consisting of unitary bistochastic self-adjoint matrices. The various results in [@bn3] suggest the following statement:
Given $U\in USB_N$ satisfying $S^*U\geq0$, there exists a simple function $B\to B^U$, probably either the identity or the passage to another coset, such that $$\int_{OSC_N}\Phi(U,B^U)dB\leq0$$ and such that the equality can only be attained when $H=\sqrt{N}U$ is Hadamard.
Observe that, in view of Theorem 9.17 above, this would more or less prove the AHC, modulo a remaining extension from $USB_N$ to the group $U_N$ itself.
As already mentioned, this latter conjecture is supported by the computations in [@bn3], which either use this idea, or can be reformulated in this spirit.
Regarding the applications, the situation is of course very different from the one in the real case. Assuming that the AHC holds indeed, we would have here a new approach to the complex Hadamard matrices, which is by construction analytic and local. This would be of course something quite powerful, potentially reshaping the whole subject.
Quantum groups
==============
We discuss in what follows the relation between the Hadamard matrices $H\in M_N(\mathbb C)$ and the quantum permutation groups $G\subset S_N^+$, and its potential applications to certain mathematical physics questions. There is a lot of material to be surveyed here, and we will insist on mathematical aspects, regarding the correspondence $H\to G$.
We will need many preliminaries, first concerning the operator algebras, then the compact quantum groups in the sense of Woronowicz [@wo1], [@wo2], and then the matrix modelling questions for such quantum groups. Once done with this, we will be able to talk about quantum permutations, and their relation with the Hadamard matrices.
Let $H$ be a Hilbert space. We denote by $B(H)$ the algebra of bounded operators $T:H\to H$, with usual norm and involution. The algebra $B(H)$, as well as any of its unital subalgebras $A\subset B(H)$ which are complete, and stable under $*$, fit into:
A unital $C^*$-algebra is a complex algebra with unit $A$, having:
1. A norm $a\to||a||$, making it a Banach algebra (the Cauchy sequences converge).
2. An involution $a\to a^*$, which satisfies $||aa^*||=||a||^2$, for any $a\in A$.
In what follows we will often omit the adjective “unital”, because the passage to the non-unital case would bring nothing interesting, in connection with our questions.
Generally speaking, the elements $a\in A$ are best thought of as being some kind of “generalized operators”, on some Hilbert space which is not present. By using this idea, one can emulate spectral theory in this setting, in the following way:
Given $a\in A$, define its spectrum as $\sigma(a)=\{\lambda\in\mathbb C|a-\lambda\not\in A^{-1}\}$, and its spectral radius $\rho(a)$ as the radius of the smallest centered disk containing $\sigma(a)$.
1. The spectrum of a norm one element is in the unit disk.
2. The spectrum of a unitary element ($a^*=a^{-1}$) is on the unit circle.
3. The spectrum of a self-adjoint element ($a=a^*$) consists of real numbers.
4. The spectral radius of a normal element ($aa^*=a^*a$) is equal to its norm.
All this is standard, by using $\sigma(f(a))=f(\sigma(a))$ for any $f\in\mathbb C[X]$, and more generally for any $f\in\mathbb C(X)$ having poles outside $\sigma(a)$, which is elementary:
\(1) This simply comes from $\frac{1}{1-a}=1+a+a^2+\ldots$ for any $||a||<1$.
\(2) This follows from $\sigma(a)^{-1}=\sigma(a^{-1})=\sigma(a^*)=\overline{\sigma(a)}$.
\(3) This follows from (2), by using $f(z)=(z+it)/(z-it)$, with $t\in\mathbb R$.
\(4) We have $\rho(a)\leq ||a||$ from (1). Conversely, given $\rho>\rho(a)$, we have: $$\frac{1}{2\pi i}\int_{|z|=\rho}\frac{z^n}{z -a}\,dz =\sum_{k=0}^\infty\left(\frac{1}{2\pi i}\int_{|z|=\rho}z^{n-k-1}dz\right) a^k=a^n$$
By applying the norm and taking $n$-th roots we obtain $\rho\geq\lim_{n\to\infty}||a^n||^{1/n}$, and then by using $||aa^*||=||a||^2$ we obtain $\rho\geq ||a||$, and so $\rho(a)\geq||a||$, as desired.
With these preliminaries in hand, we can now formulate some theorems. The basic facts about the $C^*$-algebras, that we will need here, can be summarized as:
The $C^*$-algebras have the following properties:
1. The commutative ones are those of the form $C(X)$, with $X$ compact space.
2. Any such algebra $A$ embeds as $A\subset B(H)$, for some Hilbert space $H$.
3. In finite dimensions, these are the direct sums of matrix algebras.
All this is standard, the idea being as follows:
\(1) Given a compact space $X$, the algebra $C(X)$ of continuous functions $f:X\to\mathbb C$ is indeed a $C^*$-algebra, with norm $||f||=\sup_{x\in X}|f(x)|$, and involution $f^*(x)=\overline{f(x)}$. Observe that this algebra is indeed commutative, because $f(x)g(x)=g(x)f(x)$.
Conversely, if $A$ is commutative, we can define $X=Spec(A)$ to be the space of characters $\chi :A\to\mathbb C$, with topology making continuous all evaluation maps $ev_a:\chi\to\chi(a)$. We have then a morphism of algebras $ev:A\to C(X)$ given by $a\to ev_a$, and Proposition 10.2 (3) shows that $ev$ is a $*$-morphism, Proposition 10.2 (4) shows that $ev$ is isometric, and finally the Stone-Weierstrass theorem shows that $ev$ is surjective.
\(2) This is standard for $A=C(X)$, where we can pick a probability measure on $X$, and set $H=L^2(X)$, and use the embedding $A\subset B(H)$ given by $f\to(g\to fg)$.
In the general case, where $A$ is no longer commutative, the proof is quite similar, by emulating basic measure theory in the abstract $C^*$-algebra setting.
\(3) Assuming that $A$ is finite dimensional, we can first decompose its unit as $1=p_1+\ldots+p_k$, with $p_i\in A$ being minimal projections. Each of the linear spaces $A_i=p_iAp_i$ is then a non-unital $*$-subalgebra of $A$, and we have a non-unital $*$-algebra sum decomposition $A=A_1\oplus\ldots\oplus A_k$. On the other hand, since each $p_i$ is minimal, we have unital $*$-algebra isomorphisms $A_i\simeq M_{r_i}(\mathbb C)$, where $r_i=rank(p_i)$. Thus, we obtain a $C^*$-algebra isomorphism $A\simeq M_{r_1}(\mathbb C)\oplus\ldots\oplus M_{r_k}(\mathbb C)$, as desired.
All the above is of course quite brief, but details on all this can be found in any book on operator algebras. For a slightly longer proof of (1), called Gelfand theorem, and which is the key result that we will need here, we refer for instance to [@bbc].
As a conclusion to all this, given a $C^*$-algebra $A$, we can think of it as being of the form $A=C(X)$, with $X$ being a “compact quantum space”. We will be interested here in the case where $X$ is a “compact quantum group”. The axioms for the corresponding $C^*$-algebras, found by Woronowicz in [@wo1], are, in a soft form, as follows:
A Woronowicz algebra is a $C^*$-algebra $A$, given with a unitary matrix $u\in M_N(A)$ whose coefficients generate $A$, such that the formulae $$\Delta(u_{ij})=\sum_ku_{ik}\otimes u_{kj}\quad,\quad
\varepsilon(u_{ij})=\delta_{ij}\quad,\quad
S(u_{ij})=u_{ji}^*$$ define morphisms of $C^*$-algebras $\Delta:A\to A\otimes A$, $\varepsilon:A\to\mathbb C$, $S:A\to A^{opp}$.
The morphisms $\Delta,\varepsilon,S$ are called comultiplication, counit and antipode.
We say that $A$ is cocommutative when $\Sigma\Delta=\Delta$, where $\Sigma(a\otimes b)=b\otimes a$ is the flip. We have the following result, which justifies the terminology and axioms:
The following are Woronowicz algebras:
1. $C(G)$, with $G\subset U_N$ compact Lie group. Here the structural maps are: $$\begin{aligned}
\Delta(\varphi)&=&(g,h)\to \varphi(gh)\\
\varepsilon(\varphi)&=&\varphi(1)\\
S(\varphi)&=&g\to\varphi(g^{-1})\end{aligned}$$
2. $C^*(\Gamma)$, with $F_N\to\Gamma$ finitely generated group. Here the structural maps are: $$\begin{aligned}
\Delta(g)&=&g\otimes g\\
\varepsilon(g)&=&1\\
S(g)&=&g^{-1}\end{aligned}$$
Moreover, we obtain in this way all the commutative/cocommutative algebras.
In both cases, we have to exhibit a certain matrix $u$. For the first assertion, we can use the matrix $u=(u_{ij})$ formed by matrix coordinates of $G$, given by: $$g=\begin{pmatrix}
u_{11}(g)&\ldots&u_{1N}(g)\\
\vdots&&\vdots\\
u_{N1}(g)&\ldots&u_{NN}(g)
\end{pmatrix}$$
For the second assertion, we can use the diagonal matrix formed by generators: $$u=\begin{pmatrix}
g_1&&0\\
&\ddots&\\
0&&g_N
\end{pmatrix}$$
Finally, the last assertion follows from the Gelfand theorem, in the commutative case, and in the cocommutative case, this follows from the results of Woronowicz in [@wo1].
In general now, the structural maps $\Delta,\varepsilon,S$ have the following properties:
Let $(A,u)$ be a Woronowicz algebra.
1. $\Delta,\varepsilon$ satisfy the usual axioms for a comultiplication and a counit, namely: $$\begin{aligned}
(\Delta\otimes id)\Delta&=&(id\otimes \Delta)\Delta\\
(\varepsilon\otimes id)\Delta&=&(id\otimes\varepsilon)\Delta=id\end{aligned}$$
2. $S$ satisfies the antipode axiom, on the $*$-subalgebra generated by entries of $u$: $$m(S\otimes id)\Delta=m(id\otimes S)\Delta=\varepsilon(.)1$$
3. In addition, the square of the antipode is the identity, $S^2=id$.
The two comultiplication axioms follow from: $$\begin{aligned}
(\Delta\otimes id)\Delta(u_{ij})&=&(id\otimes \Delta)\Delta(u_{ij})=\sum_{kl}u_{ik}\otimes u_{kl}\otimes u_{lj}\\
(\varepsilon\otimes id)\Delta(u_{ij})&=&(id\otimes\varepsilon)\Delta(u_{ij})=u_{ij}\end{aligned}$$
As for the antipode formulae, the verification here is similar.
Summarizing, the Woronowicz algebras appear to have very nice properties. In view of Proposition 10.5 above, we can now formulate the following definition:
Given a Woronowicz algebra $A$, we formally write $$A=C(G)=C^*(\Gamma)$$ and call $G$ compact quantum group, and $\Gamma$ discrete quantum group.
When $A$ is both commutative and cocommutative, $G$ is a compact abelian group, $\Gamma$ is a discrete abelian group, and these groups are dual to each other, $G=\widehat{\Gamma},\Gamma=\widehat{G}$. In general, we still agree to write $G=\widehat{\Gamma},\Gamma=\widehat{G}$, but in a formal sense.
With this in mind, let us call now corepresentation of $A$ any unitary matrix $v\in M_n(A)$ satisfying the same conditions as those satisfied by $u$, namely: $$\Delta(v_{ij})=\sum_kv_{ik}\otimes v_{kj}\quad,\quad\varepsilon(v_{ij})=\delta_{ij}\quad,\quad S(v_{ij})=v_{ji}^*$$
These corepresentations can be thought of as corresponding to the unitary representations of the underlying compact quantum group $G$. As main examples, we have $u=(u_{ij})$ itself, its conjugate $\bar{u}=(u_{ij}^*)$, as well as any tensor product between $u,\bar{u}$.
We have the following key result, due to Woronowicz [@wo1]:
Any Woronowicz algebra has a unique Haar integration functional, $$\left(\int_G\otimes id\right)\Delta=\left(id\otimes\int_G\right)\Delta=\int_G(.)1$$ which can be constructed by starting with any faithful positive form $\varphi\in A^*$, and setting $$\int_G=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\varphi^{*k}$$ where $\phi*\psi=(\phi\otimes\psi)\Delta$. Moreover, for any corepresentation $v\in M_n(\mathbb C)\otimes A$ we have $$\left(id\otimes\int_G\right)v=P$$ where $P$ is the orthogonal projection onto $Fix(v)=\{\xi\in\mathbb C^n|v\xi=\xi\}$.
Following [@wo1], this can be done in 3 steps, as follows:
\(1) Given $\varphi\in A^*$, our claim is that the following limit converges, for any $a\in A$: $$\int_\varphi a=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\varphi^{*k}(a)$$
Indeed, by linearity we can assume that $a$ is a coefficient of a corepresentation, $a=(\tau\otimes id)v$. But in this case, an elementary computation shows that we have the following formula, where $P_\varphi$ is the orthogonal projection onto the $1$-eigenspace of $(id\otimes\varphi)v$: $$\left(id\otimes\int_\varphi\right)v=P_\varphi$$
\(2) Since $v\xi=\xi$ implies $[(id\otimes\varphi)v]\xi=\xi$, we have $P_\varphi\geq P$, where $P$ is the orthogonal projection onto the space $Fix(v)=\{\xi\in\mathbb C^n|v\xi=\xi\}$. The point now is that when $\varphi\in A^*$ is faithful, by using a positivity trick, one can prove that we have $P_\varphi=P$. Thus our linear form $\int_\varphi$ is independent of $\varphi$, and is given on coefficients $a=(\tau\otimes id)v$ by: $$\left(id\otimes\int_\varphi\right)v=P$$
\(3) With the above formula in hand, the left and right invariance of $\int_G=\int_\varphi$ is clear on coefficients, and so in general, and this gives all the assertions. See [@wo1].
The above result is something quite fundamental, and as a main application, one can develop a Peter-Weyl type theory for the corepresentations of $A$. See [@wo1].
Finally, we will need some general theory regarding the random matrix models for the Woronowicz algebras. The idea here is very simple, namely that of modelling the coordinates $u_{ij}\in A$ by certain concrete variables $U_{ij}\in B$. Our favorite type of algebras being the random matrix ones, $B=M_K(C(T))$, we are led into:
A matrix model for $A=C(G)$ is a morphism of $C^*$-algebras $$\pi:C(G)\to M_K(C(T))$$ where $T$ is a compact space, and $K\geq1$ is an integer.
The “best” models are of course the faithful ones, $\pi:C(G)\subset M_K(C(T))$. However, this formalism is quite restrictive, not covering many interesting examples.
In order to fix this, let us look at the group dual case, $A=C^*(\Gamma)$, with $\Gamma$ being a usual discrete group. We know that a model $\pi:C^*(\Gamma)\to M_K(C(T))$ must come from a group representation $\rho:\Gamma\to C(T,U_K)$. Now observe that when $\rho$ is faithful, the representation $\pi$ is in general not faithful, for instance because when $T=\{.\}$ its target algebra is finite dimensional. On the other hand, this representation “reminds” $\Gamma$, and so can be used in order to fully understand $\Gamma$. This leads to the following definition:
Let $\pi:C(G)\to M_K(C(T))$ be a matrix model.
1. The Hopf image of $\pi$ is the smallest quotient Hopf $C^*$-algebra $C(G)\to C(H)$ producing a factorization of type $\pi:C(G)\to C(H)\to M_K(C(T))$.
2. When the inclusion $H\subset G$ is an isomorphism, i.e. when there is no non-trivial factorization as above, we say that $\pi$ is inner faithful.
In the case where $G=\widehat{\Gamma}$ is a group dual, $\pi$ must come from a group representation $\rho:\Gamma\to C(T,U_K)$, and the above factorization is simply the one obtained by taking the image, $\rho:\Gamma\to\Lambda\subset C(T,U_K)$. Thus $\pi$ is inner faithful when $\Gamma\subset C(T,U_K)$.
Also, given a compact group $G$, and elements $g_1,\ldots,g_K\in G$, we have a representation $\pi:C(G)\to\mathbb C^K$, given by $f\to(f(g_1),\ldots,f(g_K))$. The minimal factorization of $\pi$ is then via $C(H)$, with $H=\overline{<g_1,\ldots,g_K>}$, and $\pi$ is inner faithful when $G=H$.
In general, the existence and uniqueness of the Hopf image comes from dividing $C(G)$ by a suitable ideal. We refer to [@bbi] for more details regarding this construction.
We will be interested here in the quantum permutation groups, and their relation with the Hadamard matrices. The following key definition is due to Wang [@wan]:
A magic unitary matrix is a square matrix over a $C^*$-algebra, $$u\in M_N(A)$$ whose entries are projections, summing up to $1$ on each row and each column.
The basic examples of such matrices come from the usual permutation groups, $G\subset S_N$. Indeed, given such subgroup, the following matrix is magic: $$u_{ij}=\chi\left(\sigma\in G\Big|\sigma(j)=i\right)$$
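As an illustration, here is a small numerical sketch, in Python, of this magic matrix for $G=S_3$, with each $u_{ij}$ realized as a diagonal $0$-$1$ matrix on $\ell^2(G)$; nothing beyond the above formula is used:

```python
# Sketch: the magic unitary u_ij = chi(sigma in G | sigma(j) = i), for G = S_3.
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))          # the 6 elements of S_3
N, n = 3, len(G)

# u[i][j] is the diagonal projection supported on those sigma with sigma(j) = i
u = [[np.diag([1.0 if sigma[j] == i else 0.0 for sigma in G])
      for j in range(N)] for i in range(N)]

print(all(np.allclose(u[i][j] @ u[i][j], u[i][j]) for i in range(N) for j in range(N)))
print(all(np.allclose(sum(u[i][j] for j in range(N)), np.eye(n)) for i in range(N)))
print(all(np.allclose(sum(u[i][j] for i in range(N)), np.eye(n)) for j in range(N)))
```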
This leads us into the following key definition, due to Wang [@wan] as well:
$C(S_N^+)$ is the universal $C^*$-algebra generated by the entries of a $N\times N$ magic unitary matrix $u=(u_{ij})$, with the morphisms given by $$\Delta(u_{ij})=\sum_ku_{ik}\otimes u_{kj}\quad,\quad\varepsilon(u_{ij})=\delta_{ij}\quad,\quad S(u_{ij})=u_{ji}$$ as comultiplication, counit and antipode.
This algebra satisfies the axioms in Definition 10.4, and the underlying compact quantum group $S_N^+$ is called quantum permutation group. Quite surprisingly, we have:
We have an embedding $S_N\subset S_N^+$, given at the algebra level by: $$u_{ij}\to\chi\left(\sigma\Big|\sigma(j)=i\right)$$ This is an isomorphism at $N\leq3$, but not at $N\geq4$, where $S_N^+$ is not classical, nor finite.
The fact that we have indeed an embedding as above is clear. Regarding now the second assertion, we can prove this in four steps, as follows:
\(1) At $N=2$, the fact that $S_2^+$ is indeed classical, and hence collapses to $S_2$, is trivial, because the $2\times2$ magic matrices are as follows, with $p$ being a projection: $$U=\begin{pmatrix}p&1-p\\1-p&p\end{pmatrix}$$
\(2) At $N=3$, it is enough to check that $u_{11},u_{22}$ commute. But this follows from: $$\begin{aligned}
u_{11}u_{22}
&=&u_{11}u_{22}(u_{11}+u_{12}+u_{13})\\
&=&u_{11}u_{22}u_{11}+u_{11}u_{22}u_{13}\\
&=&u_{11}u_{22}u_{11}+u_{11}(1-u_{21}-u_{23})u_{13}\\
&=&u_{11}u_{22}u_{11}\end{aligned}$$
Indeed, by applying the involution to this formula, we obtain from this that we have $u_{22}u_{11}=u_{11}u_{22}u_{11}$ as well, and so we get $u_{11}u_{22}=u_{22}u_{11}$, as desired.
\(3) At $N=4$, consider the following matrix, with $p,q$ being projections: $$U=\begin{pmatrix}
p&1-p&0&0\\
1-p&p&0&0\\
0&0&q&1-q\\
0&0&1-q&q
\end{pmatrix}$$
This matrix is then magic, and if we choose $p,q$ such that the algebra $<p,q>$ is infinite dimensional, we conclude that $C(S_4^+)$ is infinite dimensional as well.
\(4) At $N\geq5$, we can use the standard embedding $S_4^+\subset S_N^+$, obtained at the level of the corresponding magic matrices in the following way: $$u\to\begin{pmatrix}u&0\\ 0&1_{N-4}\end{pmatrix}$$
Indeed, with this in hand, the fact that $S_4^+$ is a non-classical, infinite compact quantum group implies that $S_N^+$ with $N\geq5$ has these two properties as well. See [@wan].
At a more advanced level, one can prove that $S_4^+\simeq SO_3^{-1}$. At $N\geq5$ the quantum group $S_N^+$ still has the same fusion rules as $SO_3$, but is not coamenable. See [@bbc].
In relation now with the complex Hadamard matrices, the connection with the quantum permutations is immediate, coming from the following observation:
If $H\in M_N(\mathbb C)$ is Hadamard, the rank one projections $$P_{ij}=Proj\left(\frac{H_i}{H_j}\right)$$ where $H_1,\ldots,H_N\in\mathbb T^N$ are the rows of $H$, form a magic unitary.
This is clear, the verification for the rows being as follows: $$\left<\frac{H_i}{H_j},\frac{H_i}{H_k}\right>=\sum_l\frac{H_{il}}{H_{jl}}\cdot\frac{H_{kl}}{H_{il}}=\sum_l\frac{H_{kl}}{H_{jl}}=N\delta_{jk}$$
The verification for the columns is similar, as follows: $$\left<\frac{H_i}{H_j},\frac{H_k}{H_j}\right>=\sum_l\frac{H_{il}}{H_{jl}}\cdot\frac{H_{jl}}{H_{kl}}=\sum_l\frac{H_{il}}{H_{kl}}=N\delta_{ik}$$
Thus, we have indeed a magic unitary, as claimed.
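Numerically, this is easy to see as well; here is a sketch, assuming numpy, with the Fourier matrix $F_4$ in the role of $H$:

```python
# Sketch: the projections P_ij = Proj(H_i/H_j) form a magic unitary, for H = F_4.
import numpy as np

N = 4
w = np.exp(2j * np.pi / N)
H = w ** np.outer(np.arange(N), np.arange(N))        # the Fourier matrix F_N

def proj(v):
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())                     # rank one projection onto v

P = [[proj(H[i] / H[j]) for j in range(N)] for i in range(N)]

print(all(np.allclose(P[i][j] @ P[i][j], P[i][j]) for i in range(N) for j in range(N)))
print(all(np.allclose(sum(P[i][j] for j in range(N)), np.eye(N)) for i in range(N)))
print(all(np.allclose(sum(P[i][j] for i in range(N)), np.eye(N)) for j in range(N)))
```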
Summarizing, any complex Hadamard matrix produces a representation of the quantum permutation algebra $C(S_N^+)$. Thus, we can apply the Hopf image construction from Definition 10.10, and we are led in this way into the following notion:
To any Hadamard matrix $H\in M_N(\mathbb C)$ we associate the quantum permutation group $G\subset S_N^+$ given by the following Hopf image factorization, $$\xymatrix{C(S_N^+)\ar[rr]^{\pi}\ar[rd]&&M_N(\mathbb C)\\&C(G)\ar[ur]&}$$ where $\pi(u_{ij})=Proj(H_i/H_j)$, with $H_1,\ldots,H_N\in\mathbb T^N$ being the rows of $H$.
Our claim now is that this construction $H\to G$ is something really useful, with $G$ encoding the combinatorics of $H$. To be more precise, philosophically speaking, the idea will be that “$H$ can be thought of as being a kind of Fourier matrix for $G$”.
There are several results supporting this, with the main evidence coming from the following result, which collects the basic known results regarding the construction:
The construction $H\to G$ has the following properties:
1. For a Fourier matrix $H=F_G$ we obtain the group $G$ itself, acting on itself.
2. For $H\not\in\{F_G\}$, the quantum group $G$ is not classical, nor a group dual.
3. For a tensor product $H=H'\otimes H''$ we obtain a product, $G=G'\times G''$.
All this material is standard, and elementary, as follows:
\(1) Let us first discuss the cyclic group case, $H=F_N$. Here the rows of $H$ are given by $H_i=\rho^i$, where $\rho=(1,w,w^2,\ldots,w^{N-1})$. Thus, we have the following formula: $$\frac{H_i}{H_j}=\rho^{i-j}$$
It follows that the corresponding rank 1 projections $P_{ij}=Proj(H_i/H_j)$ form a circulant matrix, all of whose entries commute. Since the entries commute, the corresponding quantum group must satisfy $G\subset S_N$. Now by taking into account the circulant property of $P=(P_{ij})$ as well, we are led to the conclusion that we have $G=\mathbb Z_N$.
In the general case now, where $H=F_G$, with $G$ being an arbitrary finite abelian group, the result can be proved either by extending the above proof, or by decomposing $G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}$ and using (3) below, whose proof is independent from the rest.
\(2) This is something more tricky, needing some general study of the representations whose Hopf images are commutative, or cocommutative. For details here, along with a number of supplementary facts on the construction $H\to G$, we refer to [@bbs], [@bni].
\(3) Assume that we have a tensor product $H=H'\otimes H''$, and let $G,G',G''$ be the associated quantum permutation groups. We have then a diagram as follows: $$\xymatrix@R=45pt@C25pt{
C(S_{N'}^+)\otimes C(S_{N''}^+)\ar[r]&C(G')\otimes C(G'')\ar[r]&M_{N'}(\mathbb C)\otimes M_{N''}(\mathbb C)\ar[d]\\
C(S_N^+)\ar[u]\ar[r]&C(G)\ar[r]&M_N(\mathbb C)
}$$
Here all the maps are the canonical ones, with those on the left and on the right coming from $N=N'N''$. At the level of standard generators, the diagram is as follows: $$\xymatrix@R=45pt@C65pt{
u_{ij}'\otimes u_{ab}''\ar[r]&w_{ij}'\otimes w_{ab}''\ar[r]&P_{ij}'\otimes P_{ab}''\ar[d]\\
u_{ia,jb}\ar[u]\ar[r]&w_{ia,jb}\ar[r]&P_{ia,jb}
}$$
Now observe that this diagram commutes. We conclude that the representation associated to $H$ factorizes indeed through $C(G')\otimes C(G'')$, and this gives the result.
Generally speaking, going beyond Theorem 10.16 is a quite difficult question. There are several computations available here, for the most part regarding the deformations of the Fourier matrices, and we will be back to all this later on, in section 12 below.
We would like to end this section with two theoretical extensions of the construction $H\to G$ from Definition 10.15, which are both quite interesting. A first idea, from [@ba7], is that of using complex Hadamard matrices with noncommutative entries.
Consider an arbitrary unital $C^*$-algebra $A$. Two row or column vectors over this algebra, say $a=(a_1,\ldots,a_N)$ and $b=(b_1,\ldots,b_N)$, are called orthogonal when: $$\sum_ia_ib_i^*=\sum_ia_i^*b_i=0$$
Observe that by applying the involution, we have as well $\sum_ib_ia_i^*=\sum_ib_i^*a_i=0$.
With this notion in hand, we can formulate:
An Hadamard matrix over a unital $C^*$-algebra $A$ is a square matrix $H\in M_N(A)$ satisfying the following conditions:
1. All the entries of $H$ are unitaries, $H_{ij}\in U_A$.
2. These entries commute on all rows and all columns of $H$.
3. The rows and columns of $H$ are pairwise orthogonal.
As a first remark, in the simplest case $A=\mathbb C$ the unitary group is the unit circle in the complex plane, $U_\mathbb C=\mathbb T$, and we obtain the usual complex Hadamard matrices.
In the general commutative case, $A=C(X)$, our Hadamard matrix must be made of “fibers”, one for each point $x\in X$. Thus, we must have $H=\{H^x|x\in X\}$, with $H^x$ being complex Hadamard matrices, depending continuously on $x\in X$.
When $A$ is not commutative, we can have many interesting examples, which can be quite far away from the usual Hadamard matrices. We will be back to this later.
In general now, observe that if $H=(H_{ij})$ is Hadamard, then so are the matrices $\bar{H}=(H_{ij}^*)$, $H^t=(H_{ji})$ and $H^*=(H_{ji}^*)$. In addition, we have the following result:
The class of Hadamard matrices $H\in M_N(A)$ is stable under:
1. Permuting the rows or columns.
2. Multiplying the rows or columns by central unitaries.
When successively combining these two operations, we obtain an equivalence relation.
This is clear indeed from definitions.
Observe that in the commutative case $A=C(X)$ any unitary is central, so we can multiply the rows or columns by any unitary. In particular in this case we can always “dephase” the matrix, i.e. assume that its first row and column consist of $1$ entries.
Let us discuss now the tensor products, and their deformations. Following [@dit], the deformed tensor products are constructed as follows:
Let $H\in M_N(A)$ and $K\in M_M(A)$ be Hadamard matrices, and $Q\in M_{N\times M}(U_A)$. Then the “deformed tensor product” $H\otimes_QK\in M_{NM}(A)$, given by $$(H\otimes_QK)_{ia,jb}=Q_{ib}H_{ij}K_{ab}$$ is an Hadamard matrix as well, provided that the entries of $Q$ commute on rows and columns, and that the algebras $<H_{ij}>$, $<K_{ab}>$, $<Q_{ib}>$ pairwise commute.
First, the entries of $L=H\otimes_QK$ are unitaries, and its rows are orthogonal: $$\begin{aligned}
\sum_{jb}L_{ia,jb}L_{kc,jb}^*
&=&\sum_{jb}Q_{ib}H_{ij}K_{ab}\cdot Q_{kb}^*K_{cb}^*H_{kj}^*\\
&=&N\delta_{ik}\sum_bQ_{ib}K_{ab}\cdot Q_{kb}^*K_{cb}^*\\
&=&N\delta_{ik}\sum_bK_{ab}K_{cb}^*\\
&=&NM\cdot\delta_{ik}\delta_{ac}\end{aligned}$$
The orthogonality of columns can be checked as follows: $$\begin{aligned}
\sum_{ia}L_{ia,jb}L_{ia,kc}^*
&=&\sum_{ia}Q_{ib}H_{ij}K_{ab}\cdot Q_{ic}^*K_{ac}^*H_{ik}^*\\
&=&M\delta_{bc}\sum_iQ_{ib}H_{ij}\cdot Q_{ic}^*H_{ik}^*\\
&=&M\delta_{bc}\sum_iH_{ij}H_{ik}^*\\
&=&NM\cdot\delta_{jk}\delta_{bc}\end{aligned}$$
For the commutation on rows we use in addition the commutation on rows for $Q$: $$\begin{aligned}
L_{ia,jb}L_{kc,jb}
&=&Q_{ib}H_{ij}K_{ab}\cdot Q_{kb}H_{kj}K_{cb}\\
&=&Q_{ib}Q_{kb}\cdot H_{ij}H_{kj}\cdot K_{ab}K_{cb}\\
&=&Q_{kb}Q_{ib}\cdot H_{kj}H_{ij}\cdot K_{cb}K_{ab}\\
&=&Q_{kb}H_{kj}K_{cb}\cdot Q_{ib}H_{ij}K_{ab}\\
&=&L_{kc,jb}L_{ia,jb}\end{aligned}$$
The commutation on columns is similar, using the commutation on columns for $Q$: $$\begin{aligned}
L_{ia,jb}L_{ia,kc}
&=&Q_{ib}H_{ij}K_{ab}\cdot Q_{ic}H_{ik}K_{ac}\\
&=&Q_{ib}Q_{ic}\cdot H_{ij}H_{ik}\cdot K_{ab}K_{ac}\\
&=&Q_{ic}Q_{ib}\cdot H_{ik}H_{ij}\cdot K_{ac}K_{ab}\\
&=&Q_{ic}H_{ik}K_{ac}\cdot Q_{ib}H_{ij}K_{ab}\\
&=&L_{ia,kc}L_{ia,jb}\end{aligned}$$
Thus all the axioms are satisfied, and $L$ is indeed Hadamard.
As a basic example, we have the following construction:
The following matrix is Hadamard, $$M=\begin{pmatrix}x&y&x&y\\ x&-y&x&-y\\ z&t&-z&-t\\ z&-t&-z&t\end{pmatrix}$$ for any unitaries $x,y,z,t$ satisfying $[x,y]=[x,z]=[y,t]=[z,t]=0$.
This follows indeed from Theorem 10.19, because we have: $$\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\otimes_{\begin{pmatrix}x&y\\ z&t\end{pmatrix}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}
=\begin{pmatrix}x&y&x&y\\ x&-y&x&-y\\ z&t&-z&-t\\ z&-t&-z&t\end{pmatrix}$$
In addition, the commutation relations in Theorem 10.19 are satisfied indeed.
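In the scalar case $A=\mathbb C$, where the commutation assumptions are automatic, all this can be tested numerically; here is a sketch, assuming numpy, with $H=K=F_2$ and a random matrix of phases $Q$, which reproduces in particular the pattern of the above $4\times4$ matrix:

```python
# Sketch: the deformed tensor product (H x_Q K)_{ia,jb} = Q_{ib} H_{ij} K_{ab},
# in the scalar case, where the outcome should again be complex Hadamard.
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]], dtype=complex)       # F_2
K = np.array([[1, 1], [1, -1]], dtype=complex)       # F_2
Q = np.exp(2j * np.pi * rng.random((2, 2)))          # random phases, in the role of x,y,z,t

N, M = H.shape[0], K.shape[0]
L = np.zeros((N * M, N * M), dtype=complex)
for i in range(N):
    for a in range(M):
        for j in range(N):
            for b in range(M):
                L[i * M + a, j * M + b] = Q[i, b] * H[i, j] * K[a, b]

print(np.allclose(np.abs(L), 1))                           # entries on the unit circle
print(np.allclose(L @ L.conj().T, N * M * np.eye(N * M)))  # rows pairwise orthogonal
```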
The generalized Hadamard matrices produce quantum groups, as follows:
If $H\in M_N(A)$ is Hadamard, the following matrices $P_{ij}\in M_N(A)$ form altogether a magic matrix $P=(P_{ij})$, over the algebra $M_N(A)$: $$(P_{ij})_{ab}=\frac{1}{N}H_{ia}H_{ja}^*H_{jb}H_{ib}^*$$ Thus, we can let $\pi:C(S_N^+)\to M_N(A)$ be the representation associated to $P$, and then factorize $\pi:C(S_N^+)\to C(G)\to M_N(A)$, with $G\subset S_N^+$ chosen minimal.
The magic condition can be checked in three steps, as follows:
\(1) Let us first check that each $P_{ij}$ is a projection, i.e. that we have $P_{ij}=P_{ij}^*=P_{ij}^2$. Regarding the first condition, namely $P_{ij}=P_{ij}^*$, this simply follows from: $$\begin{aligned}
(P_{ij})_{ba}^*
&=&\frac{1}{N}(H_{ib}H_{jb}^*H_{ja}H_{ia}^*)^*\\
&=&\frac{1}{N}H_{ia}H_{ja}^*H_{jb}H_{ib}^*\\
&=&(P_{ij})_{ab}\end{aligned}$$
As for the second condition, $P_{ij}=P_{ij}^2$, this follows from the fact that all the entries $H_{ij}$ are assumed to be unitaries, i.e. follows from axiom (1) in Definition 10.17: $$\begin{aligned}
(P_{ij}^2)_{ab}
&=&\sum_c(P_{ij})_{ac}(P_{ij})_{cb}\\
&=&\frac{1}{N^2}\sum_cH_{ia}H_{ja}^*H_{jc}H_{ic}^*H_{ic}H_{jc}^*H_{jb}H_{ib}^*\\
&=&\frac{1}{N}H_{ia}H_{ja}^*H_{jb}H_{ib}^*\\
&=&(P_{ij})_{ab}\end{aligned}$$
\(2) Let us check now the fact that the entries of $P$ sum up to 1 on each row. For this purpose we use the equality $H^*H=N1_N$, coming from the axiom (3), which gives: $$\begin{aligned}
(\sum_jP_{ij})_{ab}
&=&\frac{1}{N}\sum_jH_{ia}H_{ja}^*H_{jb}H_{ib}^*\\
&=&\frac{1}{N}H_{ia}(H^*H)_{ab}H_{ib}^*\\
&=&\delta_{ab}H_{ia}H_{ib}^*\\
&=&\delta_{ab}\end{aligned}$$
\(3) Finally, let us check that the entries of $P$ sum up to 1 on each column. This is the trickiest check, because it involves, besides axiom (1) and the formula $H^t\bar{H}=N1_N$ coming from axiom (3), the commutation on the columns of $H$, coming from axiom (2): $$\begin{aligned}
(\sum_iP_{ij})_{ab}
&=&\frac{1}{N}\sum_iH_{ia}H_{ja}^*H_{jb}H_{ib}^*\\
&=&\frac{1}{N}\sum_iH_{ja}^*H_{ia}H_{ib}^*H_{jb}\\
&=&\frac{1}{N}H_{ja}^*(H^t\bar{H})_{ab}H_{jb}\\
&=&\delta_{ab}H_{ja}^*H_{jb}\\
&=&\delta_{ab}\end{aligned}$$
Thus $P$ is indeed a magic matrix in the above sense, and we are done.
As an illustration, consider a usual Hadamard matrix $H\in M_N(\mathbb C)$. If we denote its rows by $H_1,\ldots,H_N$ and we consider the vectors $\xi_{ij}=H_i/H_j$, then we have: $$\xi_{ij}=\left(\frac{H_{i1}}{H_{j1}},\ldots,\frac{H_{iN}}{H_{jN}}\right)$$
Thus the orthogonal projection on this vector $\xi_{ij}$ is given by: $$(P_{\xi_{ij}})_{ab}
=\frac{1}{||\xi_{ij}||^2}(\xi_{ij})_a\overline{(\xi_{ij})_b}
=\frac{1}{N}H_{ia}H_{ja}^*H_{jb}H_{ib}^*
=(P_{ij})_{ab}$$
We conclude that we have $P_{ij}=P_{\xi_{ij}}$ for any $i,j$, so our construction from Theorem 10.21 is compatible with the construction for the usual complex Hadamard matrices.
In general, computing $G$ is a quite difficult question, and the answer for instance for the matrices in Proposition 10.20 is not known. We refer to [@ba7] for more on this.
Let us discuss now another generalization of the construction $H\to G$. The idea, following [@bsk], will be that of looking at the partial Hadamard matrices (PHM), and their connection with the partial permutations. Let us start with:
A partial permutation of $\{1,\ldots,N\}$ is a bijection $\sigma:X\simeq Y$, with $X,Y\subset\{1,\ldots,N\}$. We denote by $\widetilde{S}_N$ the set formed by such partial permutations.
Observe that we have $S_N\subset\widetilde{S}_N$. The embedding $u:S_N\subset M_N(0,1)$ given by permutation matrices can be extended to an embedding $u:\widetilde{S}_N\subset M_N(0,1)$, as follows: $$u_{ij}(\sigma)=
\begin{cases}
1&{\rm if}\ \sigma(j)=i\\
0&{\rm otherwise}
\end{cases}$$
By looking at the image of this embedding, we see that $\widetilde{S}_N$ is in bijection with the matrices $M\in M_N(0,1)$ having at most one 1 entry on each row and column.
In analogy with Wang’s theory in [@wan], we have the following definition:
A submagic matrix is a matrix $u\in M_N(A)$ whose entries are projections, which are pairwise orthogonal on rows and columns. We let $C(\widetilde{S}_N^+)$ be the universal $C^*$-algebra generated by the entries of a $N\times N$ submagic matrix.
The algebra $C(\widetilde{S}_N^+)$ has a comultiplication given by $\Delta(u_{ij})=\sum_ku_{ik}\otimes u_{kj}$, and a counit given by $\varepsilon(u_{ij})=\delta_{ij}$. Thus $\widetilde{S}_N^+$ is a quantum semigroup, and we have maps as follows, with the bialgebras at left corresponding to the quantum semigroups at right: $$\begin{matrix}
C(\widetilde{S}_N^+)&\to&C(S_N^+)\\
\\
\downarrow&&\downarrow\\
\\
C(\widetilde{S}_N)&\to&C(S_N)
\end{matrix}
\quad \quad \quad:\quad \quad\quad
\begin{matrix}
\widetilde{S}_N^+&\supset&S_N^+\\
\\
\cup&&\cup\\
\\
\widetilde{S}_N&\supset&S_N
\end{matrix}$$
The relation of all this with the PHM is immediate, appearing as follows:
If $H\in M_{M\times N}(\mathbb T)$ is a PHM, with rows denoted $H_1,\ldots,H_M\in\mathbb T^N$, then the following matrix of rank one projections is submagic: $$P_{ij}=Proj\left(\frac{H_i}{H_j}\right)$$ Thus $H$ produces a representation $\pi_H:C(\widetilde{S}_M^+)\to M_N(\mathbb C)$, given by $u_{ij}\to P_{ij}$, that we can factorize through $C(G)$, with the quantum semigroup $G\subset\widetilde{S}_M^+$ chosen minimal.
We have indeed the following computation, for the rows: $$\Big\langle\frac{H_i}{H_j},\frac{H_i}{H_k}\Big\rangle=\sum_l\frac{H_{il}}{H_{jl}}\cdot\frac{H_{kl}}{H_{il}}=\sum_l\frac{H_{kl}}{H_{jl}}=\langle H_k,H_j\rangle=N\delta_{jk}$$
The verification for the columns is similar, as follows: $$\left<\frac{H_i}{H_j},\frac{H_k}{H_j}\right>=\sum_l\frac{H_{il}}{H_{jl}}\cdot\frac{H_{jl}}{H_{kl}}=\sum_l\frac{H_{il}}{H_{kl}}=N\delta_{ik}$$
Regarding now the last assertion, we can indeed factorize our representation as indicated, with the existence and uniqueness of the bialgebra $C(G)$, with the minimality property as above, being obtained by dividing $C(\widetilde{S}_M^+)$ by a suitable ideal. See [@bsk].
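Here is a small numerical sketch of this, assuming numpy, with the upper $2\times3$ submatrix of $F_3$ in the role of our PHM; the row and column sums are now projections, rather than being equal to $1$:

```python
# Sketch: the submagic matrix P_ij = Proj(H_i/H_j), for the upper 2x3 part of F_3.
import numpy as np

N, M = 3, 2
w = np.exp(2j * np.pi / N)
F = w ** np.outer(np.arange(N), np.arange(N))
H = F[:M, :]                                        # a 2x3 partial Hadamard matrix

def proj(v):
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

P = [[proj(H[i] / H[j]) for j in range(M)] for i in range(M)]

def is_projection(T):
    return np.allclose(T, T.conj().T) and np.allclose(T @ T, T)

print(all(is_projection(P[i][j]) for i in range(M) for j in range(M)))
print(all(np.allclose(P[i][j] @ P[i][k], 0)                 # orthogonality on rows
          for i in range(M) for j in range(M) for k in range(M) if j != k))
print(all(is_projection(sum(P[i][j] for j in range(M))) for i in range(M)))
```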
The very first problem is that of deciding under which exact assumptions our construction is in fact “classical”. In order to explain the answer here, we will need:
A pre-Latin square is a matrix $L\in M_M(1,\ldots,N)$ having the property that its entries are distinct, on each row and each column.
Given such a matrix $L$, to any $x\in\{1,\ldots,N\}$ we can associate the partial permutation $\sigma_x\in\widetilde{S}_M$ given by $\sigma_x(j)=i\iff L_{ij}=x$. We denote by $G\subset\widetilde{S}_M$ the semigroup generated by $\sigma_1,\ldots,\sigma_N$, and call it semigroup associated to $L$.
Also, given an orthogonal basis $\xi=(\xi_1,\ldots,\xi_N)$ of $\mathbb C^N$, we can construct a submagic matrix $P\in M_M(M_N(\mathbb C))$, according to the formula $P_{ij}=Proj(\xi_{L_{ij}})$.
With these notations, we have the following result, from [@bsk]:
If $H\in M_{M\times N}(\mathbb T)$ is a PHM, the following are equivalent:
1. The semigroup $G\subset\widetilde{S}_M^+$ is classical, i.e. $G\subset\widetilde{S}_M$.
2. The projections $P_{ij}=Proj(H_i/H_j)$ pairwise commute.
3. The vectors $H_i/H_j\in\mathbb T^N$ are pairwise proportional, or orthogonal.
4. The submagic matrix $P=(P_{ij})$ comes for a pre-Latin square $L$.
In addition, if so is the case, $G$ is the semigroup associated to $L$.
Here $(1)\iff(2)$ is clear, $(2)\iff(3)$ comes from the fact that two rank 1 projections commute precisely when their images coincide, or are orthogonal, $(3)\iff(4)$ is clear again, and the last assertion comes from Gelfand duality. See [@bsk].
We call “classical” the matrices in Theorem 10.26. There are many examples here, the most basic ones being the upper $M\times N$ submatrices of the Fourier matrices $F_N$, denoted $F_{M,N}$. If we denote by $G_{M,N}\subset\widetilde{S}_M$ the associated semigroups, we have:
In the $N>2M-2$ regime, $G_{M,N}\subset\widetilde{S}_M$ is formed by the maps $$\begin{matrix}\\ \\ \\ \sigma=\ \ \\ \end{matrix}\xymatrix@R=10mm@C=2mm{
\bullet&\bullet&\bullet&\bullet\ar[dll]&\bullet\ar[dll]&\bullet\ar[dll]&\bullet\\
\bullet&\bullet&\bullet&\bullet&\bullet&\bullet&\bullet}$$ that is, $\sigma:I\simeq J$, $\sigma(j)=j-x$, with $I,J\subset\{1,\ldots,M\}$ intervals, independently of $N$.
Since for $\widetilde{H}=F_N$ the associated Latin square is circulant, $\widetilde{L}_{ij}=j-i$, the pre-Latin square that we are interested in is: $$L=\begin{pmatrix}
0&1&2&\ldots&M-1\\
N-1&0&1&\ldots&M-2\\
N-2&N-1&0&\ldots&M-3\\
\ldots\\
N-M+1&N-M+2&N-M+3&\ldots&0
\end{pmatrix}$$
Observe that, due to our $N>2M-2$ assumption, we have $N-M+1>M-1$, and so the entries above the diagonal are distinct from those below the diagonal.
With this remark in hand, the computation is quite standard. See [@bop].
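For orientation, here is a small computational sketch, in plain Python with $0$-based indices, building the pre-Latin square of $F_{M,N}$ and the generators $\sigma_1,\ldots,\sigma_N$; each generator is indeed a shift $j\to j-x$, defined on an interval, in agreement with the above description, the full semigroup being the closure of these generators under composition:

```python
# Sketch: the generators sigma_x of G_{M,N}, in the regime N > 2M-2 (here M=3, N=7).
M, N = 3, 7
L = [[(j - i) % N for j in range(M)] for i in range(M)]   # the pre-Latin square

def sigma(x):
    """The partial permutation sigma_x, given by sigma_x(j) = i when L[i][j] = x."""
    return {j: i for i in range(M) for j in range(M) if L[i][j] == x}

for x in range(N):
    s = sigma(x)
    if s:                                 # sigma_x is empty for M-1 < x < N-M+1
        dom = sorted(s)
        is_interval = dom == list(range(dom[0], dom[-1] + 1))
        shifts = {(j - s[j]) % N for j in dom}            # should be just {x}
        print(x, s, is_interval, shifts)
```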
In the remaining regime, $M<N\leq2M-2$, the semigroup $G_{M,N}\subset\widetilde{S}_M$ looks quite hard to compute, and for the moment we only have some partial results regarding it.
For a partial permutation $\sigma:I\simeq J$ with $|I|=|J|=k$, set $\kappa(\sigma)=k$. We have:
The components $G_{M,N}^{(k)}=\{\sigma\in G_{M,N}|\kappa(\sigma)=k\}$ with $k>2M-N$ are, in the $M<N\leq2M-2$ regime, the same as those in the $N>2M-2$ regime.
The pre-Latin square that we are interested in has as usual 0 on the diagonal, and then takes its entries from the set $S=\{1,\ldots,N-M\}\cup\{N-M+1,\ldots,M-1\}\cup\{M,\ldots,N-1\}$, in a uniform way from each of the 3 components of $S$.
With this remark in hand, the proof is quite standard. See [@bop].
There are many interesting questions regarding the construction $H\to G$, in this generalized PHM/partial permutation setting, and we refer here to [@bop], [@bsk].
Subfactor theory
================
We discuss here some potential applications of the construction $H\to G$, and of the Hadamard matrices in general, to certain questions from mathematical physics.
Generally speaking, all this is related to statistical mechanics. The idea indeed, which is old folklore, is that associated to any 2D spin model should be a quantum permutation group $G\subset S_N^+$, which appears by factorizing the flat representation $C(S_N^+)\to M_N(\mathbb C)$ associated to the $N\times N$ matrix of the Boltzmann weights of the model, and whose representation theory computes the partition function of the model.
All this comes from the work of Jones in subfactor theory [@jo1], [@jo2], [@jo3], and from various general correspondences between quantum groups and subfactors. There are some direct computations as well, supporting this idea, such as those in [@bn2].
However, all this is not axiomatized yet. So, as a more modest goal here, we will explain the relation between the Hadamard matrices and the von Neumann algebras, the commuting squares, the subfactor theory, and Jones’ planar algebra work in [@jo3], and then we will comment on some further possible developments of all this.
In order to start, we will need some basic von Neumann algebra theory, coming as a complement to the basic $C^*$-algebra theory explained in section 10 above:
The von Neumann algebras, which are the $C^*$-algebras $A\subset B(H)$ closed under the weak operator topology, the one making each of the maps $T\to<Tx,y>$ continuous, are as follows:
1. They are exactly the $*$-algebras of operators $A\subset B(H)$ which are equal to their bicommutant, $A=A''$.
2. In the commutative case, these are the algebras of type $A=L^\infty(X)$, with $X$ measure space, represented on $H=L^2(X)$, up to a multiplicity.
3. If we write the center as $Z(A)=L^\infty(X)$, then we have a decomposition of type $A=\int_XA_x\,dx$, with the fibers $A_x$ having trivial center, $Z(A_x)=\mathbb C$.
4. The factors, $Z(A)=\mathbb C$, can be fully classified in terms of ${\rm II}_1$ factors, which are those satisfying $\dim A=\infty$, and having a faithful trace $tr:A\to\mathbb C$.
5. The ${\rm II}_1$ factors enjoy the “continuous dimension geometry” property, in the sense that the traces of their projections can take any values in $[0,1]$.
6. Among the ${\rm II}_1$ factors, the most important one is the Murray-von Neumann hyperfinite factor $R$, obtained as an inductive limit of matrix algebras.
This is something quite heavy, the idea being as follows:
\(1) This is von Neumann’s bicommutant theorem, which is well-known in finite dimensions, and whose proof in general is not that complicated, either.
\(2) It is clear, via basic measure theory, that $L^\infty(X)$ is indeed a von Neumann algebra on $H=L^2(X)$. The converse can be proved as well, by using spectral theory.
\(3) This is von Neumann’s reduction theory main result, whose statement is already quite hard to understand, and whose proof uses advanced functional analysis.
\(4) This is something heavy, due to Murray-von Neumann and Connes, the idea being that the other factors can be basically obtained via crossed product constructions.
\(5) This is a jewel of functional analysis, with the rational traces being relatively easy to obtain, and with the irrational ones coming from limiting arguments.
\(6) Once again, heavy results, due to Murray-von Neumann and Connes, the idea being that any finite dimensional construction always leads to the same factor, called $R$.
All the above is of course very brief. We recommend here the original papers of von Neumann and Connes, starting for instance with [@mvo], and then [@co1].
As a philosophical comment, observe the huge technical difference between the basic $C^*$-algebra theory, more or less explained in section 10 above, and the basic von Neumann algebra theory, barely discussed above. Some theories are much more advanced than others, perhaps because they are more interesting, or more beautiful, or both.
As a side remark here, in view of all this, it would have been of course desirable to introduce the compact quantum groups $G$ by talking directly about the associated von Neumann algebras $L^\infty(G)$. Unfortunately this is not possible, because the underlying Hilbert spaces $H=L^2(G)$ do not come “by definition”, but by theorem. In addition, we cannot really talk about von Neumann algebras with generators and relations.
In short, there is some philosophical clash here, between $C^*$-algebra theory and von Neumann algebra theory. This is a bit like a dispute between topology and probability. You don’t really need open and closed sets in order to do interesting mathematics, but if you have the occasion of learning some, why not have them in your bag of tricks.
From a more relaxed perspective, all this can be traced back to the Bohr-Einstein debates, the main question being whether God plays dice or not.
In relation now with our questions, variations of von Neumann’s reduction theory idea, basically using the abelian subalgebra $Z(A)\subset A$, include the use of maximal abelian subalgebras $B\subset A$, called MASA. In the finite von Neumann algebra case, where we have a trace, the use of orthogonal MASA is a standard method as well:
A pair of orthogonal MASA is a pair of maximal abelian subalgebras $$B,C\subset A$$ which are orthogonal with respect to the trace, $(B\ominus\mathbb C1)\perp(C\ominus\mathbb C1)$.
Here the scalar product is by definition $<b,c>=tr(bc^*)$, and by taking into account the multiples of the identity, the orthogonality condition reformulates as follows: $$tr(bc)=tr(b)tr(c)\quad,\quad\forall b\in B,c\in C$$
This notion is potentially useful in the infinite dimensional context, in relation with various structure and classification problems for the ${\rm II}_1$ factors. However, as a “toy example”, we can try and see what happens for the simplest factor that we know, namely the matrix algebra $M_N(\mathbb C)$, with $N\in\mathbb N$, endowed with its usual matrix trace.
In this context, we have the following surprising observation of Popa [@po1]:
Up to a conjugation by a unitary, the pairs of orthogonal MASA in the simplest factor, namely the matrix algebra $M_N(\mathbb C)$, are as follows, $$A=\Delta\quad,\quad B=H\Delta H^*$$ with $\Delta\subset M_N(\mathbb C)$ being the diagonal matrices, and with $H\in M_N(\mathbb C)$ being Hadamard.
Any MASA in $M_N(\mathbb C)$ being conjugated to $\Delta$, we can assume, up to conjugation by a unitary, that we have $A=\Delta$ and $B=U\Delta U^*$, with $U\in U_N$.
Now observe that given two diagonal matrices $D,E\in\Delta$, we have: $$\begin{aligned}
tr(D\cdot UEU^*)
&=&\frac{1}{N}\sum_i(DUEU^*)_{ii}\\
&=&\frac{1}{N}\sum_{ij}D_{ii}U_{ij}E_{jj}\bar{U}_{ij}\\
&=&\frac{1}{N}\sum_{ij}D_{ii}E_{jj}|U_{ij}|^2\end{aligned}$$
Thus, the orthogonality condition $A\perp B$ reformulates as follows: $$\frac{1}{N}\sum_{ij}D_{ii}E_{jj}|U_{ij}|^2=\frac{1}{N^2}\sum_{ij}D_{ii}E_{jj}$$
But this tells us precisely that the entries $U_{ij}$ must all have the same absolute value, namely $|U_{ij}|=\frac{1}{\sqrt{N}}$, and so that the rescaled matrix $H=\sqrt{N}U$ must be Hadamard.
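Numerically, the orthogonality criterion in this proof is easy to test; here is a sketch, assuming numpy, with the normalized trace, comparing $U=F_N/\sqrt{N}$ with a generic unitary:

```python
# Sketch: tr(D UEU*) = tr(D) tr(E), for diagonal D, E, holds when |U_ij| = 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
N = 4
tr = lambda A: np.trace(A) / N                     # normalized trace

w = np.exp(2j * np.pi / N)
U_had = w ** np.outer(np.arange(N), np.arange(N)) / np.sqrt(N)   # rescaled Fourier matrix
U_gen, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

for U in (U_had, U_gen):
    errs = []
    for _ in range(20):
        D = np.diag(rng.standard_normal(N))
        E = np.diag(rng.standard_normal(N))
        errs.append(abs(tr(D @ U @ E @ U.conj().T) - tr(D) * tr(E)))
    print(max(errs))      # ~0 in the first case, typically not in the second
```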
The above result is something quite fascinating, and in stark contrast with the mathematical solidity and beauty of Theorem 11.1. Bluntly put, there is a “black hole” in the foundations of modern von Neumann algebra theory, produced by the complex Hadamard matrices, and their wild structure, geometry and combinatorics.
Whether this black hole must be studied a bit, run away from, or simply ignored, is one of the main philosophical questions in modern von Neumann algebra theory. Generally speaking, subfactor theory and related areas are in favor of the “study a bit” direction, while free probability and related areas opt for the “run away from” solution.
In relation with this, the present text is based on a bit of a hybrid philosophy, namely study the complex Hadamard matrices, of course and definitely, but with the idea in mind of eventually reaching to tools coming from Voiculescu’s free probability theory.
Getting back now to work, and to Theorem 11.3 above, as it is, along the same lines, but at a more advanced level, we have the following result:
Given a complex Hadamard matrix $H\in M_N(\mathbb C)$, the diagram formed by the associated pair of orthogonal MASA, namely $$\xymatrix@R=35pt@C35pt{
\Delta\ar[r]&M_N(\mathbb C)\\
\mathbb C\ar[u]\ar[r]&H\Delta H^*\ar[u] }$$ is a commuting square in the sense of subfactor theory, meaning that the expectations onto $\Delta,H\Delta H^*$ commute, and that their product is the expectation onto $\mathbb C$.
It follows from definitions that the expectation $E_\Delta:M_N(\mathbb C)\to\Delta$ is the operation $M\to M_\Delta$ which consists in keeping the diagonal, and erasing the rest.
Regarding now the other expectation, $E_{H\Delta H^*}:M_N(\mathbb C)\to H\Delta H^*$, it is better to identify it with the expectation $E_{U\Delta U^*}:M_N(\mathbb C)\to U\Delta U^*$, with $U=H/\sqrt{N}$. This latter expectation must be given by a formula of type $M\to UX_\Delta U^*$, with $X$ satisfying: $$<M,UDU^*>=<UX_\Delta U^*,UDU^*>\quad,\quad\forall D\in\Delta$$
The scalar products being given by $<a,b>=tr(ab^*)$, this condition reads: $$tr(MUD^*U^*)=tr(X_\Delta D^*)\quad,\quad\forall D\in\Delta$$
Thus $X=U^*MU$, and the formulae of our two expectations are as follows: $$\begin{aligned}
E_\Delta(M)&=&M_\Delta\\
E_{U\Delta U^*}(M)&=&U(U^*MU)_\Delta U^*\end{aligned}$$
With these formulae in hand, we have the following computation: $$\begin{aligned}
(E_\Delta E_{U\Delta U^*}M)_{ij}
&=&\delta_{ij}(U(U^*MU)_\Delta U^*)_{ii}\\
&=&\delta_{ij}\sum_kU_{ik}(U^*MU)_{kk}\bar{U}_{ik}\\
&=&\delta_{ij}\sum_k\frac{1}{N}\cdot(U^*MU)_{kk}\\
&=&\delta_{ij}tr(U^*MU)\\
&=&\delta_{ij}tr(M)\\
&=&(E_\mathbb CM)_{ij}\end{aligned}$$
As for the other composition, the computation here is similar, as follows: $$\begin{aligned}
(E_{U\Delta U^*}E_\Delta M)_{ij}
&=&(U(U^*M_\Delta U)_\Delta U^*)_{ij}\\
&=&\sum_kU_{ik}(U^*M_\Delta U)_{kk}\bar{U}_{jk}\\
&=&\sum_{kl}U_{ik}\bar{U}_{lk}M_{ll}U_{lk}\bar{U}_{jk}\\
&=&\frac{1}{N}\sum_{kl}U_{ik}M_{ll}\bar{U}_{jk}\\
&=&\delta_{ij}tr(M)\\
&=&(E_\mathbb CM)_{ij}\end{aligned}$$
Thus, we have indeed a commuting square, as claimed.
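Here is a numerical sketch of this commuting square relation, assuming numpy, with the rescaled Fourier matrix $F_4/2$ in the role of $U$:

```python
# Sketch: E_Delta E_{U Delta U*} = E_{U Delta U*} E_Delta = E_C, checked numerically.
import numpy as np

rng = np.random.default_rng(1)
N = 4
w = np.exp(2j * np.pi / N)
U = w ** np.outer(np.arange(N), np.arange(N)) / np.sqrt(N)     # rescaled Fourier matrix

E_delta = lambda A: np.diag(np.diag(A))                         # keep the diagonal
E_U     = lambda A: U @ np.diag(np.diag(U.conj().T @ A @ U)) @ U.conj().T
E_C     = lambda A: (np.trace(A) / N) * np.eye(N)               # expectation onto C

A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
print(np.allclose(E_delta(E_U(A)), E_C(A)))
print(np.allclose(E_U(E_delta(A)), E_C(A)))
```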
We should mention that the notion of commuting square, which was heavily used by Popa in his classification work for the subfactors [@po2], is a bit more complicated than what was said above. However, the other axioms are trivially satisfied for the class of commuting squares from Theorem 11.4, so we will not get into this. See [@po2].
As a conclusion, all this leads us into subfactor theory. So, let us explain now, following Jones [@jo1], the basic theory here. Given an inclusion of ${\rm II}_1$ factors $A_0\subset A_1$, which is actually something quite natural in physics, we can consider the orthogonal projection $e_1:A_1\to A_0$, and set $A_2=<A_1,e_1>$. This procedure, called “basic construction”, can be iterated, and we obtain in this way a whole tower of ${\rm II}_1$ factors, as follows: $$A_0\subset_{e_1}A_1\subset_{e_2}A_2\subset_{e_3}A_3\subset\ldots\ldots$$
The basic construction is something quite subtle, making deep connections with advanced mathematics and physics. All this was discovered by Jones in 1983, and his main result from [@jo1], which came as a big surprise at that time, along with some supplementary fundamental work, done later, in [@jo3], can be summarized as follows:
Let $A_0\subset A_1$ be an inclusion of ${\rm II}_1$ factors.
1. The sequence of projections $e_1,e_2,e_3,\ldots\in B(H)$ produces a representation of the Temperley-Lieb algebra $TL_N\subset B(H)$, where $N=[A_1:A_0]$.
2. The collection $P=(P_k)$ of the linear spaces $P_k=A_0'\cap A_k$, which contains the image of $TL_N$, has a planar algebra structure.
3. The index $N=[A_1:A_0]$, which is a Murray-von Neumann continuous quantity $N\in[1,\infty]$, must satisfy $N\in\{4\cos^2(\frac{\pi}{n})|n\in\mathbb N\}\cup[4,\infty]$.
This is something quite heavy, the idea being as follows:
\(1) The idea here is that the functional analytic study of the basic construction leads to the conclusion that the sequence of projections $e_1,e_2,e_3,\ldots\in B(H)$ behaves algebraically exactly as the sequence of diagrams $\varepsilon_1,\varepsilon_2,\varepsilon_3,\ldots\in TL_N$ given by $\varepsilon_1={\ }^\cup_\cap$, $\varepsilon_2=|\!{\ }^\cup_\cap$, $\varepsilon_3=||\!{\ }^\cup_\cap$, and so on, with the parameter being the index, $N=[A_1:A_0]$.
\(2) Since the orthogonal projection $e_1:A_1\to A_0$ commutes with $A_0$ we have $e_1\in P_2$, and by translation we obtain $e_1,\ldots,e_{k-1}\in P_k$ for any $k$, and so $TL_N\subset P$. The point now is that the planar algebra structure of $TL_N$, obtained by composing diagrams, can be shown to extend into an abstract planar algebra structure of $P$.
\(3) This is something quite surprising, which follows from (1), via some clever positivity considerations, involving the Perron-Frobenius theorem. In fact, the subfactors having index $N\in[1,4]$ can be classified by ADE diagrams, and the obstruction $N=4\cos^2(\frac{\pi}{n})$ itself comes from the fact that $N$ must be the squared norm of such a graph.
As it was the case with Theorem 11.1 above, our explanations here were very brief. For all this, and more, we recommend Jones’ papers [@jo1], [@jo2], [@jo3], and [@tli].
Getting back now to the commuting squares, the idea is that any such square $C$ produces a subfactor of the hyperfinite ${\rm II}_1$ factor $R$. Indeed, under suitable assumptions on the inclusions $C_{00}\subset C_{10},C_{01}\subset C_{11}$, we can perform the basic construction for them, in finite dimensions, and we obtain a whole array of commuting squares, as follows: $$\xymatrix@R=35pt@C35pt{
A_0&A_1&A_2&\\
C_{02}\ar[r]\ar@.[u]&C_{12}\ar[r]\ar@.[u]&C_{22}\ar@.[r]\ar@.[u]&B_2\\
C_{01}\ar[r]\ar[u]&C_{11}\ar[r]\ar[u]&C_{21}\ar@.[r]\ar[u]&B_1\\
C_{00}\ar[u]\ar[r]&C_{10}\ar[u]\ar[r]&C_{20}\ar[u]\ar@.[r]&B_0}$$
Here the various $A,B$ letters stand for the von Neumann algebras obtained in the limit, which are all isomorphic to the hyperfinite ${\rm II}_1$ factor $R$, and we have:
In the context of the above diagram, the following happen:
1. $A_0\subset A_1$ is a subfactor, and $\{A_i\}$ is the Jones tower for it.
2. The corresponding planar algebra is given by $A_0'\cap A_k=C_{01}'\cap C_{k0}$.
3. A similar result holds for the “horizontal” subfactor $B_0\subset B_1$.
Here (1) is something quite routine, (2) is a subtle result, called Ocneanu compactness theorem [@ocn], and (3) follows from (1,2), by flipping the diagram.
Getting back now to the Hadamard matrices, we can extend our lineup of results, namely Theorem 11.3 and Theorem 11.4, with an advanced statement, as follows:
Given a complex Hadamard matrix $H\in M_N(\mathbb C)$, the diagram formed by the associated pair of orthogonal MASA, namely $$\xymatrix@R=35pt@C35pt{
\Delta\ar[r]&M_N(\mathbb C)\\
\mathbb C\ar[u]\ar[r]&H\Delta H^*\ar[u] }$$ is a commuting square in the sense of subfactor theory, and the planar algebra $P=(P_k)$ of the corresponding subfactor can be explicitly computed in terms of $H$.
The fact that we have a commuting square is from Theorem 11.4, and the computation of the planar algebra is possible thanks to the formula in Theorem 11.6.
As for the precise formula of the planar algebra, which is something quite complicated, this can be found in [@jo3], and we will be back to it later, in Theorem 11.9 below.
Let us discuss now the relation with the quantum groups. We will need the following result, valid in the general context of the Hopf image construction:
Given a matrix model $\pi:C(G)\to M_K(C(T))$, the fundamental corepresentation $v$ of its Hopf image is subject to the Tannakian conditions $$Hom(v^{\otimes k},v^{\otimes l})=Hom(U^{\otimes k},U^{\otimes l})$$ where $U_{ij}=\pi(u_{ij})$, and where the spaces on the right are taken in a formal sense.
Since the morphisms increase the intertwining spaces, when defined either in a representation theory sense, or just formally, we have inclusions as follows: $$Hom(u^{\otimes k},u^{\otimes l})\subset Hom(U^{\otimes k},U^{\otimes l})$$
More generally, we have such inclusions when replacing $(G,u)$ with any pair producing a factorization of $\pi$. Thus, by Tannakian duality [@wo2], the Hopf image must be the quantum group whose intertwining spaces are the biggest possible, subject to these inclusions.
On the other hand, since $u$ is biunitary, so is $U$, and it follows that the spaces on the right form a Tannakian category. Thus, we have a quantum group $(H,v)$ given by: $$Hom(v^{\otimes k},v^{\otimes l})=Hom(U^{\otimes k},U^{\otimes l})$$
By the above discussion, it follows that $C(H)$ is the Hopf image of $\pi$, as claimed.
With the above result in hand, we can compute the Tannakian category of the Hopf image, in the Hadamard matrix case, and we are led in this way to:
The Tannakian category of the quantum group $G\subset S_N^+$ associated to a complex Hadamard matrix $H\in M_N(\mathbb C)$ is given by $$T\in Hom(u^{\otimes k},u^{\otimes l})\iff T^\circ G^{k+2}=G^{l+2}T^\circ$$ where the objects on the right are constructed as follows:
1. $T^\circ=id\otimes T\otimes id$.
2. $G_{ia}^{jb}=\sum_kH_{ik}\bar{H}_{jk}\bar{H}_{ak}H_{bk}$.
3. $G^k_{i_1\ldots i_k,j_1\ldots j_k}=G_{i_ki_{k-1}}^{j_kj_{k-1}}\ldots G_{i_2i_1}^{j_2j_1}$.
With the notations in Theorem 11.8, we have the following formula: $$Hom(u^{\otimes k},u^{\otimes l})=Hom(U^{\otimes k},U^{\otimes l})$$
The vector space on the right consists by definition of the complex $N^l\times N^k$ matrices $T$, satisfying the following relation: $$TU^{\otimes k}=U^{\otimes l}T$$
If we denote this equality by $L=R$, the left term $L$ is given by: $$\begin{aligned}
L_{ij}
&=&(TU^{\otimes k})_{ij}\\
&=&\sum_aT_{ia}U^{\otimes k}_{aj}\\
&=&\sum_aT_{ia}U_{a_1j_1}\ldots U_{a_kj_k}\end{aligned}$$
As for the right term $R$, this is given by: $$\begin{aligned}
R_{ij}
&=&(U^{\otimes l}T)_{ij}\\
&=&\sum_bU^{\otimes l}_{ib}T_{bj}\\
&=&\sum_bU_{i_1b_1}\ldots U_{i_lb_l}T_{bj}\end{aligned}$$
Consider now the vectors $\xi_{ij}=H_i/H_j$. Since these vectors span the ambient Hilbert space, the equality $L=R$ is equivalent to the following equality: $$<L_{ij}\xi_{pq},\xi_{rs}>=<R_{ij}\xi_{pq},\xi_{rs}>$$
We use now the following well-known formula, expressing a product of rank one projections $P_1,\ldots,P_k$ in terms of the corresponding image vectors $\xi_1,\ldots,\xi_k$: $$<P_1\ldots P_kx,y>=<x,\xi_k><\xi_k,\xi_{k-1}>\ldots\ldots<\xi_2,\xi_1><\xi_1,y>$$
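As a quick check of this formula, not needed in what follows, at $k=1$ and for a norm one vector $\xi_1$ it reduces to $$<P_1x,y>=<x,\xi_1><\xi_1,y>$$ which is simply the formula $P_1x=<x,\xi_1>\xi_1$ for the rank one projection onto $\mathbb C\xi_1$, and the general case follows from this, by induction.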
This gives the following formula for $L$: $$\begin{aligned}
<L_{ij}\xi_{pq},\xi_{rs}>
&=&\sum_aT_{ia}<P_{a_1j_1}\ldots P_{a_kj_k}\xi_{pq},\xi_{rs}>\\
&=&\sum_aT_{ia}<\xi_{pq},\xi_{a_kj_k}>\ldots<\xi_{a_1j_1},\xi_{rs}>\\
&=&\sum_aT_{ia}G_{pa_k}^{qj_k}G_{a_ka_{k-1}}^{j_kj_{k-1}}\ldots G_{a_2a_1}^{j_2j_1}G_{a_1r}^{j_1s}\\
&=&\sum_aT_{ia}G^{k+2}_{rap,sjq}\\
&=&(T^\circ G^{k+2})_{rip,sjq}\end{aligned}$$
As for the right term $R$, this is given by: $$\begin{aligned}
<R_{ij}\xi_{pq},\xi_{rs}>
&=&\sum_b<P_{i_1b_1}\ldots P_{i_lb_l}\xi_{pq},\xi_{rs}>T_{bj}\\
&=&\sum_b<\xi_{pq},\xi_{i_lb_l}>\ldots<\xi_{i_1b_1},\xi_{rs}>T_{bj}\\
&=&\sum_bG_{pi_l}^{qb_l}G_{i_li_{l-1}}^{b_lb_{l-1}}\ldots G_{i_2i_1}^{b_2b_1}G_{i_1r}^{b_1s}T_{bj}\\
&=&\sum_bG^{l+2}_{rip,sbq}T_{bj}\\
&=&(G^{l+2}T^\circ)_{rip,sjq}\end{aligned}$$
Thus, we obtain the formula in the statement. See [@bbs].
The point now is that, with $k=0$, we obtain in this way precisely the spaces $P_l$ computed by Jones in [@jo3]. Thus, we are led to the following result:
Let $H\in M_N(\mathbb C)$ be a complex Hadamard matrix.
1. The planar algebra associated to $H$ is given by $P_k=Fix(u^{\otimes k})$, where $G\subset S_N^+$ is the associated quantum permutation group.
2. The corresponding Poincaré series $f(z)=\sum_k\dim(P_k)z^k$ equals the Stieltjes transform $\int_G\frac{1}{1-z\chi}$ of the law of the main character $\chi=\sum_iu_{ii}$.
This follows by comparing the quantum group and subfactor results:
\(1) As already mentioned above, this simply follows by comparing Theorem 11.9 with the subfactor computation in [@jo3]. For full details here, we refer to [@bbs].
\(2) This is a consequence of (1), and of the Peter-Weyl type results from [@wo1], which tell us that fixed points can be counted by integrating characters.
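As an illustration, in the simplest case $H=F_G$, where the associated quantum group is the abelian group $G$ itself, acting on itself by translations, the space $Fix(u^{\otimes k})$ consists of the functions on $G^k$ which are constant on the orbits of the diagonal translation action of $G$. Since this action is free, we have $\dim(P_k)=N^{k-1}$ for any $k\geq1$, with $N=|G|$, and so the Poincaré series in this case is given by:
$$f(z)=1+\sum_{k\geq1}N^{k-1}z^k=\frac{1-(N-1)z}{1-Nz}$$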
Regarding now the subfactor itself, the result here is as follows:
The subfactor associated to $H\in M_N(\mathbb C)$ is of the form $$A^G\subset(\mathbb C^N\otimes A)^G$$ with $A=R\rtimes\widehat{G}$, where $G\subset S_N^+$ is the associated quantum permutation group.
This is something more technical, the idea being that the basic construction procedure for the commuting squares, explained before Theorem 11.6, can be performed in an “equivariant setting”, for commuting squares having components as follows: $$D\otimes_GE=(D\otimes(E\rtimes\widehat{G}))^G$$
To be more precise, starting with a commuting square formed by such algebras, we obtain by basic construction a whole array of commuting squares as follows, with $\{D_i\},\{E_i\}$ being by definition Jones towers, and with $D_\infty,E_\infty$ being their inductive limits: $$\xymatrix@R=35pt@C35pt{
D_0\otimes_GE_\infty&D_1\otimes_GE_\infty&D_2\otimes_GE_\infty\\
D_0\otimes_GE_2\ar@.[u]\ar[r]&D_1\otimes_GE_2\ar@.[u]\ar[r]&D_2\otimes_GE_2\ar@.[u]\ar@.[r]&D_\infty\otimes_GE_2\\
D_0\otimes_GE_1\ar[u]\ar[r]&D_1\otimes_GE_1\ar[u]\ar[r]&D_2\otimes_GE_1\ar[u]\ar@.[r]&D_\infty\otimes_GE_1\\
D_0\otimes_GE_0\ar[u]\ar[r]&D_1\otimes_GE_0\ar[u]\ar[r]&D_2\otimes_GE_0\ar[u]\ar@.[r]&D_\infty\otimes_GE_0}$$
The point now is that this quantum group picture works in fact for any commuting square having $\mathbb C$ in the lower left corner. In the Hadamard matrix case, that we are interested in here, the corresponding commuting square is as follows: $$\xymatrix@R=35pt@C35pt{
\mathbb C\otimes_G\mathbb C^N\ar[r]&\mathbb C^N\otimes_G\mathbb C^N\\
\mathbb C\otimes_G\mathbb C\ar[u]\ar[r]&\mathbb C^N\otimes_G\mathbb C\ar[u] }$$
Thus, the subfactor obtained by vertical basic construction appears as follows: $$\mathbb C\otimes_GE_\infty\subset\mathbb C^N\otimes_GE_\infty$$
But this gives the conclusion in the statement, with the ${\rm II}_1$ factor appearing there being by definition $A=E_\infty\rtimes\widehat{G}$, and with the remark that we have $E_\infty\simeq R$. See [@ba1].
All this is of course quite heavy, with the above results being subject to several extensions, and with all this involving several general correspondences between quantum groups, planar algebras, commuting squares and subfactors, that we will not get into.
As a technical comment here, it is possible to deduce Theorem 11.10 directly from Theorem 11.11, via some quantum group computations. However, Theorem 11.11 and its proof involve some heavy algebra and functional analysis, coming on top of the heavy algebra and functional analysis required for the general theory of the commuting squares, and this makes the whole thing quite unusable, in practice. Thus, while being technically weaker, Theorem 11.10 above remains the main result on the subject.
We refer to [@ba1], [@bbs], [@bni] and related papers for the full story of all this.
As already mentioned in the beginning of this section, all this is conjecturally related to statistical mechanics. Indeed, the Tannakian category/planar algebra formula from Theorem 11.9 has many similarities with the transfer matrix computations for the spin models, and this is explained in Jones’ paper [@jo3], and known for long before that, from his 1989 paper [@jo2]. However, the precise significance of the Hadamard matrices in statistical mechanics, or in related areas such as link invariants, remains a bit unclear.
From a quantum group perspective, the same questions make sense. The idea here, which is old folklore, going back to the 1998 discovery by Wang [@wan] of the quantum permutation group $S_N^+$, is that associated to any 2D spin model should be a quantum permutation group $G\subset S_N^+$, which appears by factorizing the flat representation $C(S_N^+)\to M_N(\mathbb C)$ associated to the $N\times N$ matrix of the Boltzmann weights of the model, and whose representation theory computes the partition function of the model.
This is supported on one hand by Jones’ theory in [@jo2], [@jo3], via the connecting results presented above, and on the other hand by a number of more recent results, such as those in [@bn2], having similarities with the computations for the Ising and Potts models. However, the whole thing remains not axiomatized, at least for the moment, and in what regards the Hadamard matrices, their precise physical significance remains unclear.
Getting back to work now, the above discussion suggests heavily investing time and energy into the computation of integrals over Hopf images, because it is via such integrals that the mathematics of the corresponding lattice model is supposed to appear.
To be more precise, we would like for instance to have advanced representation theory results, of probabilistic flavor, in the spirit of [@csn], [@dif], [@dsh].
Let us begin with some generalities. Our claim is that the “good” problem, about any compact quantum group, is that of computing the law of the main character.
This claim, which is something well-known, and generally agreed upon, is supported by a wealth of interesting results, which can be summarized as follows:
Given a Woronowicz algebra $(A,u)$, the law of the main character $$\chi=\sum_{i=1}^Nu_{ii}$$ with respect to the Haar integration has the following properties:
1. The moments of $\chi$ are the numbers $M_k=\dim(Fix(u^{\otimes k}))$.
2. $M_k$ counts as well the length $k$ loops based at $1$, on the Cayley graph of $A$.
3. $law(\chi)$ is the Kesten measure of the associated discrete quantum group.
4. When $u\sim\bar{u}$ the law of $\chi$ is a usual measure, supported on $[-N,N]$.
5. The algebra $A$ is amenable precisely when $N\in supp(law(Re(\chi)))$.
6. Any morphism $f:(A,u)\to (B,v)$ must increase the numbers $M_k$.
7. Such a morphism $f$ is an isomorphism when $law(\chi_u)=law(\chi_v)$.
All this is quite advanced, the idea being as follows:
\(1) This comes from the Peter-Weyl type theory in [@wo1], which tells us that the number of fixed points of $v=u^{\otimes k}$ can be recovered by integrating the character $\chi_v=\chi_u^k$.
\(2) This is something true, and well-known, for $A=C^*(\Gamma)$, with $\Gamma=<g_1,\ldots,g_N>$ being a discrete group. In general, the proof is quite similar.
\(3) This is actually the definition of the Kesten measure, in the case $A=C^*(\Gamma)$, with $\Gamma=<g_1,\ldots,g_N>$ being a discrete group. In general, this follows from (2).
\(4) The equivalence $u\sim\bar{u}$ translates into $\chi_u=\chi_u^*$, and this gives the first assertion. As for the support claim, this follows from $uu^*=1\implies||u_{ii}||\leq1$, for any $i$.
\(5) This is the Kesten amenability criterion, which can be established as in the classical case, $A=C^*(\Gamma)$, with $\Gamma=<g_1,\ldots,g_N>$ being a discrete group.
\(6) This is something elementary, which follows from (1) above, and from the fact that the morphisms of Woronowicz algebras increase the spaces of fixed points.
\(7) This follows by using (6), and the Peter-Weyl type theory from [@wo1], the idea being that if $f$ is not injective, then it must strictly increase one of the spaces $Fix(u^{\otimes k})$.
As a conclusion, computing $\mu=law(\chi)$ is indeed the main question to be solved, from a massive number of mathematical viewpoints. In addition to all this, in view of the above, the measure $\mu=law(\chi)$ is expected to have an interesting physical meaning.
More concretely now, let us first investigate the quantum groups $S_N,S_N^+$. For the symmetric group $S_N$, the standard coordinates are given by $u_{ij}=\chi(\sigma|\sigma(j)=i)$, and so the main character counts the number of fixed points: $$\chi(\sigma)=\sum_i\delta_{\sigma(i),i}=\#\left\{i\in\{1,\ldots,N\}\Big|\sigma(i)=i\right\}$$
A well-known computation, based on the inclusion-exclusion principle, shows that with $N\to\infty$ the probability for a random permutation $\sigma\in S_N$ to have no fixed points is $\simeq\frac{1}{e}$. More generally, one can show that the probability for $\sigma\in S_N$ to have exactly $k$ fixed points is $\simeq\frac{1}{k!e}$. Thus, $\mu=law(\chi)$ becomes with $N\to\infty$ a Poisson variable.
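To be more precise here, the inclusion-exclusion computation gives the following formula for the probability of having no fixed points, with the sum on the right converging to $e^{-1}$ with $N\to\infty$:
$$\frac{D_N}{N!}=\sum_{k=0}^N\frac{(-1)^k}{k!}$$
Here $D_N$ is the number of derangements, that is, of permutations having no fixed points. More generally, by first choosing the $k$ fixed points, and then using this formula for the remaining $N-k$ points, the probability of having exactly $k$ fixed points is $\frac{1}{k!}\sum_{j=0}^{N-k}\frac{(-1)^j}{j!}\simeq\frac{1}{k!e}$, as stated above.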
Regarding now $S_N^+$, the computation here is a bit more complicated, leading to a “free version” of the Poisson law. In order to explain this, we will need the following result, with $*$ being the classical convolution, and $\boxplus$ being Voiculescu’s free convolution:
The following Poisson type limits converge, for any $t>0$, $$p_t=\lim_{n\to\infty}\left(\left(1-\frac{1}{n}\right)\delta_0+\frac{1}{n}\delta_t\right)^{* n}\quad,\quad
\pi_t=\lim_{n\to\infty}\left(\left(1-\frac{1}{n}\right)\delta_0+\frac{1}{n}\delta_t\right)^{\boxplus n}$$ the limiting measures being the Poisson law $p_t$, and the Marchenko-Pastur law $\pi_t$, $$p_t=\frac{1}{e^t}\sum_{k=0}^\infty\frac{t^k\delta_k}{k!}\quad,\quad \pi_t=\max(1-t,0)\delta_0+\frac{\sqrt{4t-(x-1-t)^2}}{2\pi x}\,dx$$ whose moments are given by the following formulae: $$M_k(p_t)=\sum_{\pi\in P(k)}t^{|\pi|}\quad,\quad M_k(\pi_t)=\sum_{\pi\in NC(k)}t^{|\pi|}$$ The Marchenko-Pastur measure $\pi_t$ is also called free Poisson law.
This is something quite advanced, related to probability theory, free probability theory, and random matrices, the idea being as follows:
\(1) The first step is that of finding suitable functional transforms, which linearize the convolution operations in the statement. In the classical case this is the logarithm of the Fourier transform $\log F$, and in the free case this is Voiculescu’s $R$-transform.
\(2) With these tools in hand, the above limiting theorems can be proved in a standard way, a bit as when proving the Central Limit Theorem. The computations give the moment formulae in the statement, and the density computations are standard as well.
\(3) Finally, in order for the discussion to be complete, what still remains to be explained is the precise nature of the “liberation” operation $p_t\to\pi_t$, as well as the random matrix occurrence of $\pi_t$. This is more technical, and we refer here to [@bpa], [@mpa], [@vdn].
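As an illustration, at $t=1$ the above moment formulae show that the moments of $p_1$ are the Bell numbers, counting all the partitions of $\{1,\ldots,k\}$, namely $1,2,5,15,52,\ldots$, while the moments of $\pi_1$ are the Catalan numbers, counting the noncrossing partitions, namely $1,2,5,14,42,\ldots$, the two sequences starting to differ at $k=4$.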
Getting back now to quantum groups, the results here are as follows:
The law of $\chi=\sum_iu_{ii}$ is as follows:
1. For $S_N$ with $N\to\infty$ we obtain the Poisson law $p_1$.
2. For $S_N^+$ with $N\geq4$ we obtain the free Poisson law $\pi_1$.
Also, the law of $\chi_t=\sum_{i=1}^{[tN]}u_{ii}$ for $S_N/S_N^+$, with $t\in(0,1]$, becomes $p_t/\pi_t$ with $N\to\infty$.
This is something quite technical, the idea being as follows:
\(1) In the classical case this is well-known, and follows for instance by using the inclusion-exclusion principle, and then letting $N\to\infty$, as explained above.
\(2) In the free case it is known that $P_k=Fix(u^{\otimes k})$ equals $TL_N(k)$ at $N\geq4$, and at the probabilistic level, this leads to the formulae in the statement. See [@bbc].
Let us go back now to the Hadamard matrices, and do some computations here. In the general matrix model context, from Definition 10.9 above, we have the following formula for the Haar integration functional of the Hopf image, coming from [@wo1]:
Given an inner faithful model $\pi:C(G)\to M_K(C(T))$, we have $$\int_G=\lim_{k\to\infty}\frac{1}{k}\sum_{r=1}^k\int_G^r$$ where $\int_G^r=(\varphi\circ\pi)^{*r}$, with $\varphi=tr\otimes\int_T$ being the random matrix trace.
We must prove that the limit in the statement, say $\int_G'$, converges, and that we have $\int_G'=\int_G$. It is enough to check this on the coefficients of corepresentations: $$\left(id\otimes\int_G'\right)v=\left(id\otimes\int_G\right)v$$
We know from Theorem 10.8 that the matrix on the right is the orthogonal projection onto $Fix(v)$. As for the matrix on the left, this is the orthogonal projection onto the $1$-eigenspace of $(id\otimes\varphi\pi)v$. Now observe that, if we set $V_{ij}=\pi(v_{ij})$, we have: $$(id\otimes\varphi\pi)v=(id\otimes\varphi)V$$
Thus, as in the proof of Theorem 10.8, we conclude that the $1$-eigenspace that we are interested in equals $Fix(V)$. But, according to Theorem 11.8, we have: $$Fix(V)=Fix(v)$$
Thus, we have proved that we have $\int_G'=\int_G$, as desired.
In practice, we are led to the computation of the truncated integrals $\int_G^r$ appearing in the above result, and the formula of these truncated integrals is as follows:
The truncated integrals $\int_G^r=(\varphi\circ\pi)^{*r}$ are given by $$\int_G^ru_{a_1b_1}^{\varepsilon_1}\ldots u_{a_pb_p}^{\varepsilon_p}=(T_\varepsilon^r)_{a_1\ldots a_p,b_1\ldots b_p}$$ for any exponents $\varepsilon_i\in\{1,*\}$, with the matrix on the right being given by $$(T_\varepsilon)_{i_1\ldots i_p,j_1\ldots j_p}=\left(tr\otimes\int_T\right)(U_{i_1j_1}^{\varepsilon_1}\ldots U_{i_pj_p}^{\varepsilon_p})$$ where $U_{ij}=\pi(u_{ij})$ are the images of the standard coordinates in the model.
This is something straightforward, which comes from the definition of the truncated integrals, namely $\int_G^r=(\varphi\circ\pi)^{*r}$, with $\varphi=tr\otimes\int_T$ being the random matrix trace.
Regarding now the main character, the result here is as follows:
Let $\mu^r$ be the law of $\chi=Tr(u)$ with respect to $\int_G^r=(\varphi\circ\pi)^{*r}$.
1. We have the convergence formula $\mu=\lim_{k\to\infty}\frac{1}{k}\sum_{r=0}^k\mu^r$, in moments.
2. The $*$-moments of the truncated measure $\mu^r$ are the numbers $c_\varepsilon^r=Tr(T_\varepsilon^r)$.
These results are both elementary, the proof being as follows:
\(1) This follows from the general limiting formula in Theorem 11.15.
\(2) This follows from the formula in Proposition 11.16, by summing over $a_i=b_i$.
In connection with the complex Hadamard matrices, we can use this technology in order to discuss the behavior of the construction $H\to G$ with respect to the operations $H\to H^t,\bar{H},H^*$. Let us first introduce the following abstract duality:
Let $\pi:C(G)\to M_N(\mathbb C)$ be inner faithful, mapping $u_{ij}\to U_{ij}$.
1. We set $(U'_{kl})_{ij}=(U_{ij})_{kl}$, and define $\widetilde{\rho}:C(U_N^+)\to M_N(\mathbb C)$ by $v_{kl}\to U_{kl}'$.
2. We perform the Hopf image construction, as to get a model $\rho:C(G')\to M_N(\mathbb C)$.
In this definition $U_N^+$ is Wang’s quantum unitary group, whose standard coordinates are subject to the biunitarity condition $u^*=u^{-1},u^t=\bar{u}^{-1}$. Observe that the matrix $U'$ constructed in (1) is given by $U'=\Sigma U$, where $\Sigma$ is the flip. Thus this matrix is indeed biunitary, and produces a representation $\widetilde{\rho}$ as in (1), and then a factorization as in (2).
The operation $A\to A'$ is a duality, in the sense that we have $A''=A$, and in the Hadamard matrix case, this comes from the operation $H\to H^t$. See [@bbi].
We denote by $D$ the dilation operation for probability measures, or for general $*$-distributions, given by the formula $D_r(law(X))=law(rX)$. We have then:
Consider the rescaled measure $\eta^r=D_{1/N}(\mu^r)$.
1. The moments $\gamma_p^r=c_p^r/N^p$ of $\eta^r$ satisfy $\gamma_p^r(G)=\gamma_r^p(G')$.
2. $\eta^r$ has the same moments as the matrix $T_r'=T_r(G')$.
3. In the real case $u=\bar{u}$ we have $\eta^r=law(T_r')$.
All the results follow from Theorem 11.17, as follows:
\(1) We have the following computation: $$\begin{aligned}
c_p^r(A)
&=&\sum_i(T_p)_{i_1^1\ldots i_p^1,i_1^2\ldots i_p^2}\ldots\ldots(T_p)_{i_1^r\ldots i_p^r,i_1^1\ldots i_p^1}\\
&=&\sum_itr(U_{i_1^1i_1^2}\ldots U_{i_p^1i_p^2})\ldots\ldots tr(U_{i_1^ri_1^1}\ldots U_{i_p^ri_p^1})\\
&=&\frac{1}{N^r}\sum_i\sum_j(U_{i_1^1i_1^2})_{j_1^1j_2^1}\ldots(U_{i_p^1i_p^2})_{j_p^1j_1^1}\ldots\ldots(U_{i_1^ri_1^1})_{j_1^rj_2^r}\ldots(U_{i_p^ri_p^1})_{j_p^rj_1^r}\end{aligned}$$
In terms of the matrix $(U'_{kl})_{ij}=(U_{ij})_{kl}$, by permuting the terms in the product on the right, and finally with the changes $i_a^b\leftrightarrow i_b^a,j_a^b\leftrightarrow j_b^a$, we obtain: $$\begin{aligned}
c_p^r(A)
&=&\frac{1}{N^r}\sum_i\sum_j(U'_{j_1^1j_2^1})_{i_1^1i_1^2}\ldots(U'_{j_p^1j_1^1})_{i_p^1i_p^2}\ldots\ldots(U'_{j_1^rj_2^r})_{i_1^ri_1^1}\ldots(U'_{j_p^rj_1^r})_{i_p^ri_p^1}\\
&=&\frac{1}{N^r}\sum_i\sum_j(U'_{j_1^1j_2^1})_{i_1^1i_1^2}\ldots(U'_{j_1^rj_2^r})_{i_1^ri_1^1}\ldots\ldots(U'_{j_p^1j_1^1})_{i_p^1i_p^2}\ldots(U'_{j_p^rj_1^r})_{i_p^ri_p^1}\\
&=&\frac{1}{N^r}\sum_i\sum_j(U'_{j_1^1j_1^2})_{i_1^1i_2^1}\ldots(U'_{j_r^1j_r^2})_{i_r^1i_1^1}\ldots\ldots(U'_{j_1^pj_1^1})_{i_1^pi_2^p}\ldots(U'_{j_r^pj_r^1})_{i_r^pi_1^p}\end{aligned}$$
On the other hand, if we use again the above formula of $c_p^r(A)$, but this time for the matrix $U'$, and with the changes $r\leftrightarrow p$ and $i\leftrightarrow j$, we obtain: $$c_r^p(A')=\frac{1}{N^p}\sum_i\sum_j(U'_{j_1^1j_1^2})_{i_1^1i_2^1}\ldots(U'_{j_r^1j_r^2})_{i_r^1i_1^1}\ldots\ldots(U'_{j_1^pj_1^1})_{i_1^pi_2^p}\ldots(U'_{j_r^pj_r^1})_{i_r^pi_1^p}$$
Now by comparing this with the previous formula, we obtain $N^rc_p^r(A)=N^pc_r^p(A')$. Thus we have $c_p^r(A)/N^p=c_r^p(A')/N^r$, and this gives the result.
\(2) By using (1) and the formula in Theorem 11.17, we obtain: $$\frac{c_p^r(A)}{N^p}=\frac{c_r^p(A')}{N^r}=\frac{Tr((T'_r)^p)}{N^r}=tr((T'_r)^p)$$
But this gives the equality of moments in the statement.
\(3) This follows from the moment equality in (2), and from the standard fact that for self-adjoint variables, the moments uniquely determine the distribution.
All this is interesting in connection with the transposition operation $H\to H^t$ for the complex Hadamard matrices, and its relation with the lattice model problematics. Indeed, in the context of the classical spin models, the matrix of Boltzmann weights must be symmetric, and the precise meaning of the “Hadamard matrix models” depends on this. For more on this, and other related issues, we refer to [@bbi] and related papers.
Fourier models
==============
We have seen that associated to any Hadamard matrix $H\in M_N(\mathbb C)$ is a quantum permutation group $G\subset S_N^+$. The construction $H\to G$ is something very simple, obtained by factorizing the representation $\pi:C(S_N^+)\to M_N(\mathbb C)$ given by $u_{ij}\to Proj(H_i/H_j)$, where $H_1,\ldots,H_N\in\mathbb T^N$ are the rows of $H$. As a basic example, a Fourier matrix $H=F_G$ produces in this way the group $G$ itself, acting on itself.
Generally speaking, the quantum group $G\subset S_N^+$ is expected to capture the mathematics and physics of the Hadamard matrix $H\in M_N(\mathbb C)$, via its representation theory.
All this is, unfortunately, a bit conjectural for the moment. In what concerns von Neumann algebra, orthogonal MASA, commuting square, subfactor and planar algebra aspects, this is definitely the case, thanks to the results explained in section 11.
Getting beyond this level, however, so as to reach some clear statistical mechanical results, in the spirit of [@jo1], [@jo2], [@jo3] and beyond, remains an open problem.
From a purely mathematical perspective, there are many interesting questions which are open as well. We would like for instance to know if the defect, and the other geometric invariants of $H$, are captured or not by the representation theory of $G$. Similar questions make sense for the glow. Nothing much is known here, and for some comments and speculations on this subject, we refer to [@ba1], [@ba2], [@ba3], [@ba4], [@ba5], [@ba6].
In the absence of an answer to these questions, let us go back to the construction $H\to G$, as it is, and try to have more examples worked out. Generally speaking, going beyond Theorem 10.16 is a difficult task, and only one computation is available so far.
This computation, performed in [@bur] by using the subfactor formalism, and in [@bbi] by using the quantum group formalism, regards the Diţă deformations of the tensor products $F_{G\times H}=F_G\otimes F_H$ of Fourier matrices. Besides [@bbi], [@bur], some further results on all this are available from [@ba6] and from [@bic]. We will follow here the approach in [@bbi].
Before starting, we should mention that, in view of the results from section 4 above, the natural question regarding the deformed Fourier matrices would be that of computing the quantum groups associated to the Nicoara-White deformations of $F_G$.
However, no result is available here so far, the point being that the above-mentioned papers [@ba6], [@bbi], [@bic], [@bur] were all written before the Nicoara-White discovery in [@nwh].
In short, we have to be modest here as well. In what follows we will explain the material from the paper [@bbi], which is somehow central to the subject. This will consist in the computation for the Diţă deformations of the tensor products $F_{G\times H}=F_G\otimes F_H$, at generic values of the parameters, and of some related probabilistic work.
Let us begin by recalling, following [@dit], the definition of the deformations:
The matrix $\mathcal F_{G\times H}\in M_{G\times H}(\mathbb T^{G\times H})$ given by $$(\mathcal F_{G\times H})_{ia,jb}(Q)=Q_{ib}(F_G)_{ij}(F_H)_{ab}$$ is complex Hadamard, and its fiber at $Q=(1_{ib})$ is the Fourier matrix $F_{G\times H}$.
The fact that the rows of $F_G\otimes_QF_H=\mathcal F_{G\times H}(Q)$ are pairwise orthogonal follows from definitions. With $1=(1_{ij})$ we have $(F_G\otimes_1F_H)_{ia,jb}=(F_G)_{ij}(F_H)_{ab}$, and we recognize here the formula of $F_{G\times H}=F_G\otimes F_H$, in double index notation.
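As the simplest illustration, consider the case $G=H=\mathbb Z_2$, with a dephased parameter matrix $Q$, having therefore a single free entry $q\in\mathbb T$. With the double indices ordered lexicographically, the above formula gives the following matrix:
$$F_2\otimes_QF_2=\begin{pmatrix}1&1&1&1\\1&-1&1&-1\\1&q&-1&-q\\1&-q&-1&q\end{pmatrix}$$
The orthogonality of the rows is clear on this example, with for instance the scalar product of the last two rows being $1-|q|^2+1-|q|^2=0$, and at $q=1$ we recover the Fourier matrix $F_2\otimes F_2$.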
As in [@bbi], it is convenient to take an abstract approach to all this:
Associated to a finite abelian group $X$ is the Fourier model $$\pi:C(X)\to M_{|X|}(\mathbb C)$$ coming from the matrix $(U_{ij})_{kl}=\frac{1}{N}F_{i-j,k-l}$, where $F=F_X$.
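Observe that, with $H=F_X$, and with $H_i$ being the rows of $H$, the vectors $\xi_{ij}=H_i/H_j$ have entries $F_{i-j,k}$, so the associated rank one projections are given by:
$$(Proj(H_i/H_j))_{kl}=\frac{1}{N}F_{i-j,k}\bar{F}_{i-j,l}=\frac{1}{N}F_{i-j,k-l}$$
Thus the matrix $(U_{ij})$ in the above definition is indeed the magic matrix associated to the Fourier matrix $F_X$, as in the general construction $H\to G$.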
Now let $X,Y$ be finite abelian groups, and let us try to understand the model constructed by deforming the tensor product of the corresponding Fourier models:
Given two finite abelian groups $X,Y$, we consider the corresponding Fourier models $U,V$, we construct the deformation $W=U\otimes_QV$, and we factorize $$\xymatrix{C(S_{X\times Y}^+)\ar[rr]^{\pi_Q}\ar[rd]&&M_{X\times Y}(\mathbb C)\\&C(G_Q)\ar[ur]_\pi&}$$ with $C(G_Q)$ being the Hopf image of $\pi_Q$.
Explicitly computing the compact quantum group $G_Q$, as function of the parameter matrix $Q\in M_{X\times Y}(\mathbb T)$, will be our main purpose, in what follows.
In order to do so, we use the following notion:
Let $C(S_M^+)\to A$ and $C(S_N^+)\to B$ be Hopf algebra quotients, with fundamental corepresentations denoted $u,v$. We let $$A*_wB=A^{*N}*B/<[u_{ab}^{(i)},v_{ij}]=0>$$ with the Hopf algebra structure making $w_{ia,jb}=u_{ab}^{(i)}v_{ij}$ a corepresentation.
The fact that we have indeed a Hopf algebra follows from the fact that $w$ is magic. In terms of quantum groups, if $A=C(G)$, $B=C(H)$, we write $A*_wB=C(G\wr_*H)$: $$C(G)*_wC(H)=C(G\wr_*H)$$
The $\wr_*$ operation is then the free analogue of $\wr$, the usual wreath product. See [@bbi].
We will need as well the following elementary result:
If $X$ is a finite abelian group then $$C(X)=C(S_X^+)/<u_{ij}=u_{kl}|\forall i-j=k-l>$$ with all the indices taken inside $X$.
Observe first that $C(Y)=C(S_X^+)/<u_{ij}=u_{kl}|\forall i-j=k-l>$ is commutative, with $Y$ denoting here an auxiliary group, not to be confused with the second abelian group from our general setting, because $u_{ij}u_{kl}=u_{ij}u_{i,l-k+i}=\delta_{j,l-k+i}u_{ij}$ and $u_{kl}u_{ij}=u_{i,l-k+i}u_{ij}=\delta_{j,l-k+i}u_{ij}$. Thus we have $Y\subset S_X$, and since $u_{ij}(\sigma)=\delta_{i\sigma(j)}$ for any $\sigma\in Y$, we obtain: $$i-j=k-l\implies(\sigma(j)=i\iff\sigma(l)=k)$$
But this condition tells us precisely that $\sigma(i)-i$ must be independent of $i$, and so $\sigma(i)=i+x$ for some $x\in X$, and so $\sigma\in X$, as desired.
We can now factorize the representation $\pi_Q$ from Definition 12.3, as follows:
We have a factorization $$\xymatrix{C(S_{X\times Y}^+)\ar[rr]^{\pi_Q}\ar[rd]&&M_{X\times Y}(\mathbb C)\\&C(Y\wr_*X)\ar[ur]_\pi&}$$ given by $U_{ab}^{(i)}=\sum_jW_{ia,jb}$ and by $V_{ij}=\sum_aW_{ia,jb}$, independently of $b$.
With $K=F_X,L=F_Y$ and $M=|X|,N=|Y|$, the formula of the magic matrix $W\in M_{X\times Y}(M_{X\times Y}(\mathbb C))$ associated to $H=K\otimes_QL$ is: $$\begin{aligned}
(W_{ia,jb})_{kc,ld}
&=&\frac{1}{MN}\cdot\frac{Q_{ic}Q_{jd}}{Q_{id}Q_{jc}}\cdot\frac{K_{ik}K_{jl}}{K_{il}K_{jk}}\cdot\frac{L_{ac}L_{bd}}{L_{ad}L_{bc}}\\
&=&\frac{1}{MN}\cdot\frac{Q_{ic}Q_{jd}}{Q_{id}Q_{jc}}\cdot K_{i-j,k-l}L_{a-b,c-d}\end{aligned}$$
Our claim is that the representation $\pi_Q$ constructed in Definition 12.3 can be factorized in three steps, leading to the factorization in the statement, as follows: $$\xymatrix@R=40pt@C=40pt{C(S_{X\times Y}^+)\ar[rr]^{\pi_Q}\ar[d]&&M_{X\times Y}(\mathbb C)\\C(S_Y^+\wr_*S_X^+)\ar[r]\ar@{.>}[rru]&C(S_Y^+\wr_*X)\ar[r]\ar@{.>}[ur]&C(Y\wr_*X)\ar@{.>}[u]}$$
Indeed, the construction of the map on the left is standard, and this produces the first factorization. Regarding the second factorization, this comes from the fact that since the elements $V_{ij}$ depend on $i-j$, they satisfy the defining relations for the quotient algebra $C(S_X^+)\to C(X)$, coming from Proposition 12.5. Finally, regarding the third factorization, observe that the above matrix $W_{ia,jb}$ depends only on $a-b$. By summing over $j$ we obtain that $U_{ab}^{(i)}$ depends only on $a-b$, and we are done.
In order to further factorize the above representation, we use:
If $H\curvearrowright\Gamma$ is a finite group acting by automorphisms on a discrete group, the corresponding crossed coproduct Hopf algebra is $$C^*(\Gamma)\rtimes C(H)=C^*(\Gamma)\otimes C(H)$$ with comultiplication given by the following formula, $$\Delta(r\otimes\delta_k)=\sum_{h\in H}(r\otimes\delta_h)\otimes(h^{-1}\cdot r\otimes\delta_{h^{-1}k})$$ for $r\in\Gamma$ and $k\in H$.
Observe that $C(H)$ is a subcoalgebra, and that $C^*(\Gamma)$ is not a subcoalgebra. The quantum group corresponding to $C^*(\Gamma)\rtimes C(H)$ is denoted $\widehat{\Gamma}\rtimes H$.
Now back to the factorization in Theorem 12.6, the point is that we have:
With $L=F_Y,N=|Y|$ we have an isomorphism $$C(Y\wr_*X)\simeq C^*(Y)^{*X}\rtimes C(X)$$ given by $v_{ij}\to1\otimes v_{ij}$ and $u_{ab}^{(i)}=\frac{1}{N}\sum_cL_{b-a,c}c^{(i)}\otimes 1$.
We know that $C(Y\wr_*X)$ is the quotient of $C(Y)^{*X}*C(X)$ by the relations $[u_{ab}^{(i)},v_{ij}]=0$. Now since $v_{ij}$ depends only on $j-i$, we obtain: $$[u_{ab}^{(i)},v_{kl}]=[u_{ab}^{(i)},v_{i,l-k+i}]=0$$
Thus, we are in a usual tensor product situation, and we have: $$C(Y\wr_*X)=C(Y)^{*X}\otimes C(X)$$
Let us compose now this identification with $\Phi^{*X}\otimes id$, where $\Phi:C(Y)\to C^*(Y)$ is the Fourier transform. We obtain an isomorphism as in the statement, and since $\Phi(u_{ab})=\frac{1}{N}\sum_cL_{b-a,c}c$, the formula for the image of $u_{ab}^{(i)}$ is indeed the one in the statement.
Here is now our key result, which will lead to further factorizations:
With $c^{(i)}=\sum_aL_{ac}u_{a0}^{(i)}$ and $\varepsilon_{ke}=\sum_iK_{ik}e_{ie}$ we have: $$\pi(c^{(i)})(\varepsilon_{ke})=\frac{Q_{i,e-c}Q_{i-k,e}}{Q_{ie}Q_{i-k,e-c}}\varepsilon_{k,e-c}$$ In particular if $c_1+\ldots+c_s=0$ then $\pi(c_1^{(i_1)}\ldots c_s^{(i_s)})$ is diagonal, for any $i_1,\ldots,i_s$.
We have the following formula: $$\pi(c^{(i)})=\sum_aL_{ac}\pi(u_{a0}^{(i)})=\sum_{aj}L_{ac}W_{ia,j0}$$
On the other hand, in terms of the basis in the statement, we have: $$W_{ia,jb}(\varepsilon_{ke})=\frac{1}{N}\delta_{i-j,k}\sum_d\frac{Q_{id}Q_{je}}{Q_{ie}Q_{jd}}L_{a-b,d-e}\varepsilon_{kd}$$
We therefore obtain, as desired: $$\begin{aligned}
\pi(c^{(i)})(\varepsilon_{ke})
&=&\frac{1}{N}\sum_{ad}L_{ac}\frac{Q_{id}Q_{i-k,e}}{Q_{ie}Q_{i-k,d}}L_{a,d-e}\varepsilon_{kd}\\
&=&\frac{1}{N}\sum_d\frac{Q_{id}Q_{i-k,e}}{Q_{ie}Q_{i-k,d}}\varepsilon_{kd}\sum_aL_{a,d-e+c}\\
&=&\sum_d\frac{Q_{id}Q_{i-k,e}}{Q_{ie}Q_{i-k,d}}\varepsilon_{kd}\delta_{d,e-c}\\
&=&\frac{Q_{i,e-c}Q_{i-k,e}}{Q_{ie}Q_{i-k,e-c}}\varepsilon_{k,e-c}\end{aligned}$$
Regarding now the last assertion, this follows from the fact that each matrix of type $\pi(c_r^{(i_r)})$ acts on the standard basis elements $\varepsilon_{ke}$ by preserving the left index $k$, and by rotating by $c_r$ the right index $e$. Thus when we assume $c_1+\ldots+c_s=0$ all these rotations compose up to the identity, and we obtain indeed a diagonal matrix.
We have now all the needed ingredients for refining Theorem 12.6, as follows:
We have a factorization as follows, $$\xymatrix{C(S_{X\times Y}^+)\ar[rr]^{\pi_Q}\ar[rd]&&M_{X\times Y}(\mathbb C)\\&C^*(\Gamma_{X,Y})\rtimes C(X)\ar[ur]_\rho&}$$ where $\Gamma_{X,Y}=Y^{*X}/<[c_1^{(i_1)}\ldots c_s^{(i_s)},d_1^{(j_1)}\ldots d_s^{(j_s)}]=1|\sum_rc_r=\sum_rd_r=0>$.
Assume that we have a representation $\pi:C^*(\Gamma)\rtimes C(X) \rightarrow M_L(\mathbb C)$, and let $\Lambda$ be an $X$-stable normal subgroup of $\Gamma$, so that $X$ acts on $\Gamma/\Lambda$, and we can form the crossed coproduct $C^*(\Gamma/\Lambda)\rtimes C(X)$. If $\pi$ is trivial on $\Lambda$, then $\pi$ factorizes as: $$\xymatrix{C^*(\Gamma)\rtimes C(X)\ar[rr]^\pi\ar[rd]&&M_L(\mathbb C)\\&C^*(\Gamma/\Lambda)\rtimes C(X)\ar[ur]_\rho}$$
With $\Gamma=Y^{*X}$, and with $\Lambda$ being the normal subgroup generated by the commutators in the statement, the representation coming from the above results is trivial on $\Lambda$, because the last assertion in the key result above shows that the elements to be commuted are mapped to diagonal matrices, which commute with each other. Thus the general principle applies, and this gives the result.
In general, further factorizing the representation found in Theorem 12.10 is a quite complicated task. In what follows we restrict attention to the case where the parameter matrix $Q$ is generic, in the sense that its entries are as algebraically independent as possible, and we prove that the representation in Theorem 12.10 is the minimal one.
Our starting point is the group $\Gamma_{X,Y}$ found above:
Associated to two finite abelian groups $X,Y$ is the discrete group $$\Gamma_{X,Y}=Y^{*X}\Big/\left<[c_1^{(i_1)}\ldots c_s^{(i_s)},d_1^{(j_1)}\ldots d_s^{(j_s)}]=1\Big|\sum_rc_r=\sum_rd_r=0\right>$$ where the superscripts refer to the $X$ copies of $Y$, inside the free product.
We will need a more convenient description of this group. The idea here is that the above commutation relations can be realized inside a suitable semidirect product.
Given a group acting on another group, $H\curvearrowright G$, we denote as usual by $G\rtimes H$ the semidirect product of $G$ by $H$, which is the set $G\times H$, with multiplication: $$(a,s)(b,t)=(as(b),st)$$
Now given a group $G$, and a finite abelian group $Y$, we can make $Y$ act on $G^Y$, and form the product $G^Y\rtimes Y$.
Since the elements of type $(g,\ldots,g)$ are invariant, we can form as well the product $(G^Y/G)\rtimes Y$, and by identifying $G^Y/G\simeq G^{|Y|-1}$ via the map $(1,g_1,\ldots,g_{|Y|-1})\to(g_1,\ldots,g_{|Y|-1})$, we obtain a product $G^{|Y|-1}\rtimes Y$.
With these notations, we have the following result:
The group $\Gamma_{X,Y}$ has the following properties:
1. $\Gamma_{X,Y}\simeq\mathbb Z^{(|X|-1)(|Y|-1)}\rtimes Y$.
2. $\Gamma_{X,Y}\subset\mathbb Z^{(|X|-1)|Y|}\rtimes Y$ via $c^{(0)}\to(0,c)$ and $c^{(i)}\to(b_{i0}-b_{ic},c)$ for $i\neq 0$, where $b_{ic}$ are the standard generators of $\mathbb Z^{(|X|-1)|Y|}$.
We prove these assertions at the same time. We must prove that we have group morphisms, given by the formulae in the statement, as follows: $$\Gamma_{X,Y}\simeq\mathbb Z^{(|X|-1)(|Y|-1)}\rtimes Y\subset \mathbb Z^{(|X|-1)|Y|}\rtimes Y$$
Our first claim is that the formula in (2) defines a morphism as follows: $$\Gamma_{X,Y}\to\mathbb Z^{(|X|-1)|Y|}\rtimes Y$$
Indeed, the elements $(0,c)$ produce a copy of $Y$, and since we have a group embedding $Y\subset\mathbb Z^{|Y|}\rtimes Y$ given by $c\to(b_0-b_c,c)$, the elements $C^{(i)}=(b_{i0}-b_{ic},c)$ produce a copy of $Y$, for any $i\neq 0$. In order to check now the commutation relations, observe that: $$C_1^{(i_1)}\ldots C_s^{(i_s)}=\left(b_{i_10}-b_{i_1c_1}+b_{i_2c_1}-b_{i_2,c_1+c_2}+\ldots+b_{i_s,c_1+\ldots+c_{s-1}}-b_{i_s,c_1+\ldots+c_s},\sum_rc_r\right)$$
Thus $\sum_rc_r=0$ implies $C_1^{(i_1)}\ldots C_s^{(i_s)}\in\mathbb Z^{(|X|-1)|Y|}$, and since we are now inside an abelian group, we have the commutation relations, and our claim is proved.
Using the considerations before the statement of the proposition, it is routine to construct an embedding $\mathbb Z^{(|X|-1)(|Y|-1)}\rtimes Y\subset \mathbb Z^{(|X|-1)|Y|}\rtimes Y$ such that we have group morphisms whose composition is the group morphism just constructed, as follows: $$\Gamma_{X,Y}\to\mathbb Z^{(|X|-1)(|Y|-1)}\rtimes Y\subset \mathbb Z^{(|X|-1)|Y|}\rtimes Y$$
It remains to prove that the map on the left is injective. For this purpose, consider the morphism $\Gamma_{X,Y}\to Y$ given by $c^{(i)}\to c$, whose kernel $T$ is formed by the elements of type $c_1^{(i_1)} \ldots c_s^{(i_s)}$, with $\sum_rc_r=0$. We get an exact sequence, as follows: $$1\to T\to\Gamma_{X,Y}\to Y\to1$$
This sequence splits by $c\to c^{(0)}$, so we have $\Gamma_{X,Y}\simeq T \rtimes Y$. Now by the definition of $\Gamma_{X,Y}$, the subgroup $T$ constructed above is abelian, and is moreover generated by the elements $(-c)^{(0)}c^{(i)}$, $i,c \not=0$.
Finally, the fact that $T$ is freely generated by these elements follows from the computation in the proof of Proposition 12.14 below.
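As an illustration, consider the case $|X|=|Y|=2$. Here $T\simeq\mathbb Z$, generated by $t=1^{(0)}1^{(1)}$, and conjugating by the generator $1^{(0)}$ of the copy of $Y=\mathbb Z_2$ maps $t\to t^{-1}$, so that we obtain the infinite dihedral group:
$$\Gamma_{\mathbb Z_2,\mathbb Z_2}=\mathbb Z\rtimes\mathbb Z_2=D_\infty$$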
Let us specify now what our genericity assumptions are:
We use the following notions:
1. We call $p_1,\ldots,p_m\in\mathbb T$ root independent if for any $r_1,\ldots, r_m\in\mathbb Z$ we have $p_1^{r_1}\ldots p_m^{r_m}=1\implies r_1=\ldots=r_m=0$.
2. A matrix $Q\in M_{X\times Y}(\mathbb T)$, taken to be dephased $(Q_{0c}=Q_{i0}=1)$, is called generic if the elements $Q_{ic}$, with $i,c\neq0$, are root independent.
We will need the following technical result:
Assume that $Q\in M_{X\times Y}(\mathbb T)$ is generic, and put $$\theta_{ic}^{ke}=\frac{Q_{i,e-c}Q_{i-k,e}}{Q_{ie}Q_{i-k,e-c}}$$ For every $k \in X$, we have a representation $\pi^k : \Gamma_{X,Y}\rightarrow U_{|Y|}$ given by: $$\pi^k(c^{(i)})\epsilon_e=\theta_{ic}^{ke}\epsilon_{e-c}$$ The family of representations $(\pi^k)_{k \in X}$ is projectively faithful in the sense that if for some $t \in \Gamma_{X,Y}$, we have that $\pi^k(t)$ is a scalar matrix for any $k$, then $t=1$.
The representations $\pi^k$ arise from the action formula established above. With $\Gamma_{X,Y}=T\rtimes Y$, as in the proof of Proposition 12.12, we see that if $t \in \Gamma_{X,Y}$ is such that $\pi^k(t)$ is a scalar matrix for any $k$, then $t \in T$, since the elements of $T$ are the only ones having their image by $\pi^k$ formed by diagonal matrices. Now write $t$ as follows, with the generators of $T$ being as in the proof of Proposition 12.12 above, and with $R_{ic}\in\mathbb Z$ being certain integers: $$t=\prod_{i \not=0, c\not=0} ((-c)^{(0)}(c)^{(i)})^{R_{ic}}$$
Consider now the following quantities: $$\begin{aligned}
A(k,e)&=&\prod_{i\neq0}\prod_{c\neq0}(\theta_{ic}^{ke}(\theta_{0c}^{ke})^{-1})^{R_{ic}}\\
&=&\prod_{i\neq0}\prod_{c\neq0} (\theta_{ic}^{ke})^{R_{ic}}(\theta_{0c}^{ke})^{-R_{ic}}\\
&=&\prod_{i\neq0}\prod_{c\neq0}(\theta_{ic}^{ke})^{R_{ic}}
\cdot\prod_{c\neq0}(\theta_{0c}^{ke})^{-\sum_{i\neq0}R_{ic}}\\
&=&\prod_{j\neq0}\prod_{c\neq0} (\theta_{jc}^{ke})^{R_{jc}}
\cdot\prod_{c\neq0}\prod_{j\neq0}(\theta_{jc}^{ke})^{\sum_{i\neq0}R_{ic}}\\
&=&\prod_{j\neq0}\prod_{c\neq0}(\theta_{jc}^{ke})^{R_{jc}+\sum_{i\neq0}R_{ic}}\end{aligned}$$
Here we have used the identity $\prod_{j\in X}\theta_{jc}^{ke}=1$, which follows from the definition of the quantities $\theta_{ic}^{ke}$, and which gives $(\theta_{0c}^{ke})^{-1}=\prod_{j\neq0}\theta_{jc}^{ke}$. Now we have $\pi^k(t)(\epsilon_e)= A(k,e)\epsilon_e$ for any $k,e$, and our assumption is that, for any $k$, we have $A(k,e)=A(k,f)$ for any $e,f$. Using the root independence of the elements $Q_{ic}$, $i,c \not=0$, we see that this implies $R_{ic}=0$ for any $i,c$, and this proves our assertion.
We will need as well the following technical result:
Let $\pi:C^*(\Gamma)\rtimes C(H) \rightarrow L$ be a surjective Hopf algebra map, such that $\pi_{|C(H)}$ is injective, and such that for $r \in \Gamma$ and $f \in C(H)$, we have: $$\pi(r \otimes 1)=\pi(1 \otimes f) \implies r=1$$ Then $\pi$ is an isomorphism.
We use here various Hopf algebra tools. Put $A=C^*(\Gamma)\rtimes C(H)$. We start with the following Hopf algebra exact sequence, where $i(f)=1\otimes f$ and $p=\varepsilon\otimes 1$: $$\mathbb C\to C(H)\overset{i}\to A \overset{p}\to C^*(\Gamma)\to
\mathbb C$$
Since $\pi\circ i$ is injective, and the Hopf subalgebra $\pi\circ i(C(H))$ is central in $L$, we can form the quotient Hopf algebra $\overline{L} = L/(\pi\circ i(C(H)))^+L$, and we get another exact sequence: $$\mathbb C\to C(H)\xrightarrow{\pi \circ i} L \overset{q}\to \overline{L} \to
\mathbb C$$
Note that this sequence is indeed exact, e.g. by centrality. So we get the following diagram with exact rows, with the Hopf algebra map on the right surjective: $$\xymatrix{\mathbb C\ar[r]&C(H)\ar@2@{-}[d]\ar[r]^i&A\ar[d]^\pi\ar[r]^p&C^*(\Gamma)\ar[r]\ar[d]&\mathbb C\\
\mathbb C\ar[r]&C(H)\ar[r]^{\pi\circ i}&L\ar[r]^q&\overline{L}\ar[r]&\mathbb C}$$
Since a quotient of a group algebra is still a group algebra, we get a commutative diagram with exact rows as follows: $$\xymatrix{\mathbb C\ar[r]&C(H)\ar@2@{-}[d]\ar[r]^i&A\ar[d]^\pi\ar[r]^p&C^*(\Gamma)\ar[r]\ar[d]&\mathbb C\\
\mathbb C\ar[r]&C(H)\ar[r]^{\pi\circ i}&L\ar[r]^{q'}&C^*(\overline{\Gamma})\ar[r]&\mathbb C}$$
Here the Hopf algebra map on the right is induced by a surjective morphism $u : \Gamma \rightarrow \overline{\Gamma}$, $g \mapsto \overline{g}$. By the five lemma we just have to show that $u$ is injective. So, let $g \in \Gamma$ be such that $u(g)=1$. Then $q' \pi(g \otimes 1) = u p(g\otimes 1)=u(g)=\overline{g}=1$. For $g \in \Gamma$, put: $$_gA= \{a \in A \ | \ p(a_1) \otimes a_2= g \otimes a\}$$ $$_{\overline{g}}L= \{l \in L \ | \ q'(l_1) \otimes l_2= \overline{g} \otimes l\}$$
The commutativity of the right square ensures that $\pi(_gA) \subset {_{\overline{g}}L}$. Then with the previous $g$, we have $\pi(g \otimes 1) \in {_{\overline{1}}L} = \pi i (C(H))$ (exactness of the sequence), so $\pi(g \otimes 1)= \pi(1 \otimes f)$ for some $f \in C(H)$. We conclude by our assumption that $g=1$.
We have now all the needed ingredients for proving a main result, as follows:
When $Q$ is generic, the minimal factorization for $\pi_Q$ is $$\xymatrix{C(S_{X\times Y}^+)\ar[rr]^{\pi_Q}\ar[rd]&&M_{X\times Y}(\mathbb C)\\&C^*(\Gamma_{X,Y})\rtimes C(X)\ar[ur]_\pi&}$$ where $\Gamma_{X,Y}\simeq\mathbb Z^{(|X|-1)(|Y|-1)}\rtimes Y$ is the discrete group constructed above.
We want to apply the above isomorphism criterion to the morphism $\theta : C^*(\Gamma_{X,Y})\rtimes C(X)\to L$ arising from the factorization in Theorem 12.10, where $L$ denotes the Hopf image of $\pi_Q$, which produces the following commutative diagram: $$\xymatrix{
C(S_{X \times Y}^+) \ar[rr]^{\pi_Q} \ar[dr]_{} \ar@/_/[ddr]_{}& & M_{X \times Y}(\mathbb C) \\
& L \ar[ur]_{} & \\
& C^*(\Gamma_{X,Y})\rtimes C(X) \ar@{-->}[u]_\theta \ar@/_/[uur]_{\pi}&
}$$
The first observation is that the injectivity assumption on $C(X)$ holds by construction, and that for $f \in C(X)$, the matrix $\pi(f)$ is “block scalar”, the blocks corresponding to the indices $k$ in the basis $\varepsilon_{ke}$ considered above.
Now for $r \in \Gamma_{X,Y}$ with $\theta(r\otimes 1)=\theta(1 \otimes f)$ for some $f \in C(X)$, we see, by using the commutative diagram, that $\pi(r \otimes 1)$ must be block scalar.
By the projective faithfulness, established above, of the family of representations $(\pi^k)$ of $\Gamma_{X,Y}$, corresponding to the blocks $k$, we obtain $r=1$.
Thus the isomorphism criterion above indeed applies, and we are done.
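As an illustration, in the case $|X|=|Y|=2$, corresponding to the generic deformations of the Fourier matrix $F_2\otimes F_2$, the above result, combined with the computation $\Gamma_{\mathbb Z_2,\mathbb Z_2}=D_\infty$ made after Proposition 12.12, tells us that the Hopf image is $C^*(D_\infty)\rtimes C(\mathbb Z_2)$, for any generic value of the parameter $q\in\mathbb T$.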
Let us try now to compute the Kesten measure $\mu=law(\chi)$. Our results here will be a combinatorial moment formula, a geometric interpretation of it, and an asymptotic result.
Let us begin with the moment formula, which is as follows:
We have the moment formula $$\int_G\chi^p
=\frac{1}{|X|\cdot|Y|}\#\left\{\begin{matrix}i_1,\ldots,i_p\in X\\ d_1,\ldots,d_p\in Y\end{matrix}\Big|\begin{matrix}[(i_1,d_1),(i_2,d_2),\ldots,(i_p,d_p)]\ \ \ \ \\=[(i_1,d_p),(i_2,d_1),\ldots,(i_p,d_{p-1})]\end{matrix}\right\}$$ where the sets between square brackets are by definition sets with repetition.
According to the various formulae above, the factorization found in Theorem 12.16 is, at the level of standard generators, as follows: $$\begin{matrix}
C(S_{X\times Y}^+)&\to&C^*(\Gamma_{X,Y})\otimes C(X)&\to&M_{X\times Y}(\mathbb C)\\
u_{ia,jb}&\to&\frac{1}{|Y|}\sum_cF_{b-a,c}c^{(i)}\otimes v_{ij}&\to&W_{ia,jb}
\end{matrix}$$
Thus, the main character is given by: $$\chi=\frac{1}{|Y|}\sum_{iac}c^{(i)}\otimes v_{ii}=\sum_{ic}c^{(i)}\otimes v_{ii}=\left(\sum_{ic}c^{(i)}\right)\otimes\delta_1$$
Now since the Haar functional of $C^*(\Gamma)\rtimes C(H)$ is the tensor product of the Haar functionals of $C^*(\Gamma),C(H)$, this gives the following formula, valid for any $p\geq1$: $$\int_G\chi^p=\frac{1}{|X|}\int_{\widehat{\Gamma}_{X,Y}}\left(\sum_{ic}c^{(i)}\right)^p$$
Let $S_i=\sum_cc^{(i)}$. By using the embedding in Proposition 12.12 (2), with the notations there we have $S_i=\sum_c(b_{i0}-b_{ic},c)$, and these elements multiply as follows: $$S_{i_1}\ldots S_{i_p}=\sum_{c_1\ldots c_p}
\begin{pmatrix}
b_{i_10}-b_{i_1c_1}+b_{i_2c_1}-b_{i_2,c_1+c_2}&&\\
+b_{i_3,c_1+c_2}-b_{i_3,c_1+c_2+c_3}+\ldots\ldots&,&c_1+\ldots+c_p&\\
\ldots\ldots+b_{i_p,c_1+\ldots+c_{p-1}}-b_{i_p,c_1+\ldots+c_p}&&
\end{pmatrix}$$
In terms of the new indices $d_r=c_1+\ldots+c_r$, this formula becomes: $$S_{i_1}\ldots S_{i_p}=\sum_{d_1\ldots d_p}
\begin{pmatrix}
b_{i_10}-b_{i_1d_1}+b_{i_2d_1}-b_{i_2d_2}&&\\
+b_{i_3d_2}-b_{i_3d_3}+\ldots\ldots&,&d_p&\\
\ldots\ldots+b_{i_pd_{p-1}}-b_{i_pd_p}&&
\end{pmatrix}$$
Now by integrating, we must have $d_p=0$ on one hand, and on the other hand: $$[(i_1,0),(i_2,d_1),\ldots,(i_p,d_{p-1})]=[(i_1,d_1),(i_2,d_2),\ldots,(i_p,d_p)]$$
Equivalently, we must have $d_p=0$ on one hand, and on the other hand: $$[(i_1,d_p),(i_2,d_1),\ldots,(i_p,d_{p-1})]=[(i_1,d_1),(i_2,d_2),\ldots,(i_p,d_p)]$$
Thus, by translation invariance with respect to $d_p$, we obtain: $$\int_{\widehat{\Gamma}_{X,Y}}S_{i_1}\ldots S_{i_p}
=\frac{1}{|Y|}\#\left\{d_1,\ldots,d_p\in Y\Big|\begin{matrix}[(i_1,d_1),(i_2,d_2),\ldots,(i_p,d_p)]\ \ \ \ \\=[(i_1,d_p),(i_2,d_1),\ldots,(i_p,d_{p-1})]\end{matrix}\right\}$$
It follows that we have the following moment formula: $$\int_{\widehat{\Gamma}_{X,Y}}\left(\sum_iS_i\right)^p
=\frac{1}{|Y|}\#\left\{\begin{matrix}i_1,\ldots,i_p\in X\\ d_1,\ldots,d_p\in Y\end{matrix}\Big|\begin{matrix}[(i_1,d_1),(i_2,d_2),\ldots,(i_p,d_p)]\ \ \ \ \\=[(i_1,d_p),(i_2,d_1),\ldots,(i_p,d_{p-1})]\end{matrix}\right\}$$
Now by dividing by $|X|$, we obtain the formula in the statement.
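As a quick check of this formula, at $p=1$ the condition on the right is automatic, all the $|X|\cdot|Y|$ choices of the indices contribute, and we obtain $\int_G\chi=1$. At $p=2$ the condition holds precisely when $i_1=i_2$ or $d_1=d_2$, and the count gives:
$$\int_G\chi^2=\frac{|X|^2|Y|+|X||Y|^2-|X||Y|}{|X|\cdot|Y|}=|X|+|Y|-1$$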
The formula in Theorem 12.17 can be interpreted as follows:
With $M=|X|,N=|Y|$ we have the formula $$law(\chi)=\left(1-\frac{1}{N}\right)\delta_0+\frac{1}{N}law(A)$$ where $A\in C(\mathbb T^{MN},M_M(\mathbb C))$ is given by $A(q)=$ Gram matrix of the rows of $q$.
According to Theorem 12.17, we have the following formula: $$\begin{aligned}
\int_G\chi^p
&=&\frac{1}{MN}\sum_{i_1\ldots i_p}\sum_{d_1\ldots d_p}\delta_{[i_1d_1,\ldots,i_pd_p],[i_1d_p,\ldots,i_pd_{p-1}]}\\
&=&\frac{1}{MN}\int_{\mathbb T^{MN}}\sum_{i_1\ldots i_p}\sum_{d_1\ldots d_p}\frac{q_{i_1d_1}\ldots q_{i_pd_p}}{q_{i_1d_p}\ldots q_{i_pd_{p-1}}}\,dq\\
&=&\frac{1}{MN}\int_{\mathbb T^{MN}}\sum_{i_1\ldots i_p}\left(\sum_{d_1}\frac{q_{i_1d_1}}{q_{i_2d_1}}\right)\left(\sum_{d_2}\frac{q_{i_2d_2}}{q_{i_3d_2}}\right)\ldots\left(\sum_{d_p}\frac{q_{i_pd_p}}{q_{i_1d_p}}\right)dq\end{aligned}$$
Consider now the Gram matrix in the statement, $A(q)_{ij}=<R_i,R_j>$, where $R_1,\ldots,R_M$ are the rows of $q\in \mathbb T^{MN}\simeq M_{M\times N}(\mathbb T)$. We have then: $$\begin{aligned}
\int_G\chi^p
&=&\frac{1}{MN}\int_{\mathbb T^{MN}}\sum_{i_1\ldots i_p}<R_{i_1},R_{i_2}><R_{i_2},R_{i_3}>\ldots<R_{i_p},R_{i_1}>dq\\
&=&\frac{1}{MN}\int_{\mathbb T^{MN}}\sum_{i_1\ldots i_p}A(q)_{i_1i_2}A(q)_{i_2i_3}\ldots A(q)_{i_pi_1}\,dq\\
&=&\frac{1}{MN}\int_{\mathbb T^{MN}}Tr(A(q)^p)dq\\
&=&\frac{1}{N}\int_{\mathbb T^{MN}}tr(A(q)^p)dq\end{aligned}$$
But this gives the formula in the statement, and we are done.
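Observe that, with $q\in M_{M\times N}(\mathbb T)$ as above, we have $A(q)=qq^*$. Thus, the above result expresses the law of the main character in terms of a Gram matrix of Wishart type, with the entries of $q$ being independent, uniformly distributed over $\mathbb T$, and this is what will make the Marchenko-Pastur law appear, in the $K\to\infty$ limit, in the results below.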
The problem now is that of finding the good regime, $M=f(K),N=g(K),K\to\infty$, where the measure in Theorem 12.18 converges, after some suitable manipulations.
We denote by $NC(p)$ the set of noncrossing partitions of $\{1,\ldots,p\}$, and for $\pi\in P(p)$ we denote by $|\pi|\in\{1,\ldots,p\}$ the number of blocks. We will need:
With $M=\alpha K,N=\beta K$, $K\to\infty$ we have: $$\frac{c_p}{K^{p-1}}\simeq\sum_{r=1}^p\#\left\{\pi\in NC(p)\Big||\pi|=r\right\}\alpha^{r-1}\beta^{p-r}$$ In particular, with $\alpha=\beta$ we have $c_p\simeq\frac{1}{p+1}\binom{2p}{p}(\alpha K)^{p-1}$.
We use the combinatorial formula in Theorem 12.17 above. Our claim is that, with $\pi=\ker(i_1,\ldots,i_p)$, the corresponding contribution to $c_p$ is: $$C_\pi\simeq
\begin{cases}
\alpha^{|\pi|-1}\beta^{p-|\pi|}K^{p-1}&{\rm if}\ \pi\in NC(p)\\
O(K^{p-2})&{\rm if}\ \pi\notin NC(p)
\end{cases}$$
As a first observation, since there are $M(M-1)\ldots (M-|\pi|+1)\simeq M^{|\pi|}$ choices for a multi-index $(i_1,\ldots,i_p)\in X^p$ satisfying $\ker i=\pi$, we have: $$C_\pi\simeq M^{|\pi|-1}N^{-1}\#\left\{d_1,\ldots,d_p\in Y\Big|[d_\alpha|\alpha\in b]=[d_{\alpha-1}|\alpha\in b],\forall b\in\pi\right\}$$
Consider now the partition $\sigma=\ker d$. The contribution of $\sigma$ to the above quantity $C_\pi$ is then given by $\Delta(\pi,\sigma)N(N-1)\ldots(N-|\sigma|+1)\simeq\Delta(\pi,\sigma)N^{|\sigma|}$, where: $$\Delta(\pi,\sigma)=\begin{cases}
1&{\rm if}\ |b\cap c|=|(b-1)\cap c|,\forall b\in\pi,\forall c\in\sigma\\
0&{\rm otherwise}
\end{cases}$$
We use now the standard fact that for $\pi,\sigma\in P(p)$ satisfying $\Delta(\pi,\sigma)=1$ we have: $$|\pi|+|\sigma|\leq p+1$$
In addition, the equality case happens when $\pi,\sigma\in NC(p)$ are inverse to each other, via Kreweras complementation. This shows that for $\pi\notin NC(p)$ we have $C_\pi=O(K^{p-2})$, and that for $\pi\in NC(p)$ we have: $$\begin{aligned}
C_\pi
&\simeq&M^{|\pi|-1}N^{-1}N^{p-|\pi|+1}\\
&=&\alpha^{|\pi|-1}\beta^{p-|\pi|}K^{p-1}\end{aligned}$$
Thus, we have obtained the result.
We denote by $D$ the dilation operation, $D_r(law(X))=law(rX)$. We have:
With $M=\alpha K,N=\beta K$, $K\to\infty$ we have: $$\mu=\left(1-\frac{1}{\alpha\beta K^2}\right)\delta_0+\frac{1}{\alpha\beta K^2}D_{\frac{1}{\beta K}}(\pi_{\alpha/\beta})$$ In particular with $\alpha=\beta$ we have $\mu=\left(1-\frac{1}{\alpha^2K^2}\right)\delta_0+\frac{1}{\alpha^2K^2}D_{\frac{1}{\alpha K}}(\pi_1)$.
At $\alpha=\beta$, this follows from Proposition 12.19. In general now, we have: $$\begin{aligned}
\frac{c_p}{K^{p-1}}
&\simeq&\sum_{\pi\in NC(p)}\alpha^{|\pi|-1}\beta^{p-|\pi|}\\
&=&\frac{\beta^p}{\alpha}\sum_{\pi\in NC(p)}\left(\frac{\alpha}{\beta}\right)^{|\pi|}\\
&=&\frac{\beta^p}{\alpha}\int x^pd\pi_{\alpha/\beta}(x)\end{aligned}$$
When $\alpha\geq\beta$, where $d\pi_{\alpha/\beta}(x)=\varphi_{\alpha/\beta}(x)dx$ is continuous, we obtain: $$\begin{aligned}
c_p
&=&\frac{1}{\alpha K}\int(\beta Kx)^p\varphi_{\alpha/\beta}(x)dx\\
&=&\frac{1}{\alpha\beta K^2}\int x^p\varphi_{\alpha/\beta}\left(\frac{x}{\beta K}\right)dx\end{aligned}$$
But this gives the formula in the statement. When $\alpha\leq\beta$ the computation is similar, with a Dirac mass at $0$ disappearing and reappearing, and gives the same result.
As a first comment, when interchanging $\alpha,\beta$ we obtain $D_{\frac{1}{\beta K}}(\pi_{\alpha/\beta})=D_{\frac{1}{\alpha K}}(\pi_{\beta/\alpha})$, which is a consequence of the well-known formula $\pi_{t^{-1}}=D_t(\pi_t)$. This latter formula is best understood by using Kreweras complementation, which gives indeed: $$\begin{aligned}
\int x^pd\pi_t(x)
&=&\sum_{\pi\in NC(p)}t^{|\pi|}\\
&=&t^{p+1}\sum_{\pi\in NC(p)}t^{-|\pi|}\\
&=&t\int(tx)^pd\pi_{t^{-1}}(x)\end{aligned}$$
Let us state as well an explicit result, regarding densities:
With $M=\alpha K,N=\beta K$, $K\to\infty$ we have: $$\mu=\left(1-\frac{1}{\alpha\beta K^2}\right)\delta_0+\frac{1}{\alpha\beta K^2}\cdot\frac{\sqrt{4\alpha\beta K^2-(x-\alpha K-\beta K)^2}}{2\pi x}\,dx$$ In particular with $\alpha=\beta$ we have $\mu=\left(1-\frac{1}{\alpha^2K^2}\right)\delta_0+\frac{1}{\alpha^2K^2}\cdot\frac{\sqrt{\frac{4\alpha K}{x}-1}}{2\pi}\,dx$.
According to the formula for the density of the free Poisson law, the density of the continuous part $D_{\frac{1}{\beta K}}(\pi_{\alpha/\beta})$ is indeed given by: $$\frac{\sqrt{4\frac{\alpha}{\beta}-(\frac{x}{\beta K}-1-\frac{\alpha}{\beta})^2}}
{2\pi\cdot\frac{x}{\beta K}}=\frac{\sqrt{4\alpha\beta K^2-(x-\alpha K-\beta K)^2}}{2\pi x}$$
With $\alpha=\beta$ now, we obtain the second formula in the statement, and we are done.
Observe that at $\alpha=\beta=1$, where $M=N=K\to\infty$, the measure in Theorem 12.21, namely $\mu=\left(1-\frac{1}{K^2}\right)\delta_0+\frac{1}{K^2}D_{\frac{1}{K}}(\pi_1)$, is supported by $[0,4K]$. On the other hand, since the groups $\Gamma_{M,N}$ are all amenable, the corresponding measures are supported on $[0,MN]$, and so on $[0,K^2]$ in the $M=N=K$ situation. The fact that we don’t have a convergence of supports is not surprising, because our convergence is in moments.
We have as well the following result, which includes computations from [@ba6]:
Given two finite abelian groups $G,H$, with $|G|=M,|H|=N$, consider the main character $\chi$ of the quantum group associated to $\mathcal F_{G\times H}$. We have then $$law\left(\frac{\chi}{N}\right)=\left(1-\frac{1}{M}\right)\delta_0+\frac{1}{M}\,\pi_t$$ in moments, with $M=tN\to\infty$, where $\pi_t$ is the free Poisson law of parameter $t>0$. In addition, this formula holds for any generic fiber of $\mathcal F_{G\times H}$.
We already know that the second assertion holds, as explained in Theorem 12.21.
Regarding now the first assertion, which is from [@ba6], our first claim is that for the representation coming from the parametric matrix $\mathcal F_{G\times H}$ we have the following formula, where $M=|G|,N=|H|$, and the sets between brackets are sets with repetitions: $$c_p^r=\frac{1}{M^{r+1}N}\#\left\{\begin{matrix}i_1,\ldots,i_r,a_1,\ldots,a_p\in\{0,\ldots,M-1\},\\
b_1,\ldots,b_p\in\{0,\ldots,N-1\},\\
[(i_x+a_y,b_y),(i_{x+1}+a_y,b_{y+1})|y=1,\ldots,p]\\
=[(i_x+a_y,b_{y+1}),(i_{x+1}+a_y,b_y)|y=1,\ldots,p], \forall x
\end{matrix}\right\}$$
Indeed, by using the general moment formula with $K=F_G$, $L=F_H$, we have: $$\begin{aligned}
c_p^r
&=&\frac{1}{(MN)^r}\int_{T^r}\sum_{i_1^1\ldots i_p^r}\sum_{b_1^1\ldots b_p^r}\frac{Q^1_{i_1^1b_1^1}Q^1_{i_1^2b_2^1}}{Q^1_{i_1^1b_2^1}Q^1_{i_1^2b_1^1}}\ldots\frac{Q^1_{i_p^1b_p^1}Q^1_{i_p^2b_1^1}}{Q^1_{i_p^1b_1^1}Q^1_{i_p^2b_p^1}}\ldots\ldots\frac{Q^r_{i_1^rb_1^r}Q^r_{i_1^1b_2^r}}{Q^r_{i_1^rb_2^r}Q^r_{i_1^1b_1^r}}\ldots\frac{Q^r_{i_p^rb_p^r}Q^r_{i_p^1b_1^r}}{Q^r_{i_p^rb_1^r}Q^r_{i_p^1b_p^r}}\\
&&\hskip15mm\frac{1}{M^{pr}}\sum_{j_1^1\ldots j_p^r}\frac{K_{i_1^1j_1^1}K_{i_1^2j_2^1}}{K_{i_1^1j_2^1}K_{i_1^2j_1^1}}\ldots\frac{K_{i_p^1j_p^1}K_{i_p^2j_1^1}}{K_{i_p^1j_1^1}K_{i_p^2j_p^1}}\ldots\ldots\frac{K_{i_1^rj_1^r}K_{i_1^1j_2^r}}{K_{i_1^rj_2^r}K_{i_1^1j_1^r}}\ldots\frac{K_{i_p^rj_p^r}K_{i_p^1j_1^r}}{K_{i_p^rj_1^r}K_{i_p^1j_p^r}}\\
&&\hskip15mm\frac{1}{N^{pr}}\sum_{a_1^1\ldots a_p^r}\frac{L_{a_1^1b_1^1}L_{a_1^2b_2^1}}{L_{a_1^1b_2^1}L_{a_1^2b_1^1}}\ldots\frac{L_{a_p^1b_p^1}L_{a_p^2b_1^1}}{L_{a_p^1b_1^1}L_{a_p^2b_p^1}}\ldots\ldots\frac{L_{a_1^rb_1^r}L_{a_1^1b_2^r}}{L_{a_1^rb_2^r}L_{a_1^1b_1^r}}\ldots\frac{L_{a_p^rb_p^r}L_{a_p^1b_1^r}}{L_{a_p^rb_1^r}L_{a_p^1b_p^r}}\,dQ\\\end{aligned}$$
Since we are in the Fourier matrix case, $K=F_G,L=F_H$, we can perform the sums over $j,a$. To be more precise, the last two averages appearing above are respectively: $$\begin{aligned}
\Delta(i)&=&\prod_x\prod_y\delta(i^x_y+i^{x+1}_{y-1},i^{x+1}_y+i^x_{y-1})\\
\Delta(b)&=&\prod_x\prod_y\delta(b^x_y+b^{x+1}_{y-1},b^{x+1}_y+b^x_{y-1})\end{aligned}$$
We therefore obtain the following formula for the truncated moments of the main character, where $\Delta$ is the product of Kronecker symbols constructed above: $$c_p^r=\frac{1}{(MN)^r}\int_{T^r}\sum_{\Delta(i)=\Delta(b)=1}\frac{Q^1_{i_1^1b_1^1}Q^1_{i_1^2b_2^1}}{Q^1_{i_1^1b_2^1}Q^1_{i_1^2b_1^1}}\ldots\frac{Q^1_{i_p^1b_p^1}Q^1_{i_p^2b_1^1}}{Q^1_{i_p^1b_1^1}Q^1_{i_p^2b_p^1}}\ldots\ldots\frac{Q^r_{i_1^rb_1^r}Q^r_{i_1^1b_2^r}}{Q^r_{i_1^rb_2^r}Q^r_{i_1^1b_1^r}}\ldots\frac{Q^r_{i_p^rb_p^r}Q^r_{i_p^1b_1^r}}{Q^r_{i_p^rb_1^r}Q^r_{i_p^1b_p^r}}\,dQ$$
Now by integrating with respect to $Q\in(\mathbb T^{G\times H})^r$, we are led to counting the multi-indices $i,b$ satisfying the condition $\Delta(i)=\Delta(b)=1$, along with the following conditions, where the sets between brackets are by definition sets with repetitions: $$\begin{bmatrix}
i_1^1b_1^1&\ldots&i_p^1b_p^1&i_1^2b_2^1&\ldots&i_p^2b_1^1
\end{bmatrix}=
\begin{bmatrix}
i_1^1b_2^1&\ldots&i_p^1b_1^1&i_1^2b_1^1&\ldots&i_p^2b_p^1
\end{bmatrix}$$ $$\vdots$$ $$\begin{bmatrix}
i_1^rb_1^r&\ldots&i_p^rb_p^r&i_1^1b_2^r&\ldots&i_p^1b_1^r
\end{bmatrix}
=\begin{bmatrix}
i_1^rb_2^r&\ldots&i_p^rb_1^r&i_1^1b_1^r&\ldots&i_p^1b_p^r
\end{bmatrix}$$
In a more compact notation, the moment formula is therefore as follows: $$c_p^r=\frac{1}{(MN)^r}\#\left\{i,b\Big|\Delta(i)=\Delta(b)=1,\ [i^x_yb^x_y,i^{x+1}_yb^x_{y+1}]=[i^x_yb^x_{y+1},i^{x+1}_yb^x_y],\forall x\right\}$$
Now observe that the above Kronecker type conditions $\Delta(i)=\Delta(b)=1$ tell us that the arrays of indices $i=(i^x_y),b=(b^x_y)$ must be of the following special form: $$\begin{pmatrix}i^1_1&\ldots&i^1_p\\&\ldots\\ i^r_1&\ldots&i^r_p\end{pmatrix}=\begin{pmatrix}i_1+a_1&\ldots&i_1+a_p\\&\ldots\\ i_r+a_1&\ldots&i_r+a_p\end{pmatrix}\ ,\
\begin{pmatrix}b^1_1&\ldots&b^1_p\\&\ldots\\ b^r_1&\ldots&b^r_p\end{pmatrix}=\begin{pmatrix}j_1+b_1&\ldots&j_1+b_p\\&\ldots\\ j_r+b_1&\ldots&j_r+b_p\end{pmatrix}$$
Here all the new indices $i_x,j_x,a_y,b_y$ are uniquely determined, up to a choice of $i_1,j_1$. Now by replacing $i^x_y,b^x_y$ with these new indices $i_x,j_x,a_y,b_y$, with a $MN$ factor added, which accounts for the choice of $i_1,j_1$, we obtain the following formula: $$c_p^r=\frac{1}{(MN)^{r+1}}\#\left\{i,j,a,b\Big|\begin{matrix}[(i_x+a_y,j_x+b_y),(i_{x+1}+a_y,j_x+b_{y+1})]\\
=[(i_x+a_y,j_x+b_{y+1}),(i_{x+1}+a_y,j_x+b_y)],\forall x\end{matrix}\right\}$$
Now observe that the $j_x$ indices are irrelevant, and can therefore be deleted. Thus, we obtain the announced formula. The continuation is via combinatorics, see [@ba6].
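For readers who want to experiment with the above counting formula, here is a small brute-force sketch in Python. It is only an illustration of the combinatorics, not part of the proof, and it assumes $G=\mathbb Z_M$, $H=\mathbb Z_N$, cyclic index conventions (with $x+1$ taken modulo $r$ and $y+1$ modulo $p$) and additions modulo $M$, which is how the conditions above are to be read:

```python
from itertools import product
from collections import Counter

def c_pr(M, N, p, r):
    """Brute-force evaluation of the counting formula for c_p^r, with G = Z_M,
    H = Z_N, indices x, y cyclic, and additions taken modulo M."""
    def condition(i, a, b):
        for x in range(r):
            xx = (x + 1) % r
            left, right = Counter(), Counter()
            for y in range(p):
                yy = (y + 1) % p
                left[((i[x] + a[y]) % M, b[y])] += 1
                left[((i[xx] + a[y]) % M, b[yy])] += 1
                right[((i[x] + a[y]) % M, b[yy])] += 1
                right[((i[xx] + a[y]) % M, b[y])] += 1
            if left != right:
                return False
        return True

    count = sum(
        condition(i, a, b)
        for i in product(range(M), repeat=r)
        for a in product(range(M), repeat=p)
        for b in product(range(N), repeat=p)
    )
    return count / (M ** (r + 1) * N)
```

For anything beyond very small values of $M,N,p,r$ the enumeration quickly becomes infeasible, and one has to use the combinatorial methods of [@ba6].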
As already mentioned on several occasions, such computations are expected to have applications in statistical mechanics, but this remains to be worked out.
There are many other possible applications of the complex Hadamard matrices. Let us mention for instance some potential relations with noncommutative geometry, particle physics, and CKM type matrices, in the spirit of the paper of Connes [@co2].
Some other well-known applications of the Hadamard matrices concern questions in quantum information theory. We refer here to the MUB literature [@bbe], [@deb], [@tz2], with the remark that the relation of this with the above still remains to be understood.
This list, which adds to the previous considerations in this book, is certainly incomplete, the Fourier type matrices being potentially useful a bit everywhere.
[99]{}
S. Agaian, Hadamard matrices and their applications, Springer (1985).
J. Avan, T. Fonseca, L. Frappat, P. Kulish, E. Ragoucy and G. Rollet, Temperley-Lieb R-matrices from generalized Hadamard matrices, [*Theor. Math. Phys.*]{} [**178**]{} (2014), 223–240.
J. Backelin, Square multiples $n$ give infinitely many cyclic $n$-roots, preprint 1989.
T. Banica, Compact Kac algebras and commuting squares, [*J. Funct. Anal.*]{} [**176**]{} (2000), 80–99.
T. Banica, The defect of generalized Fourier matrices, [*Linear Algebra Appl.*]{} [**438**]{} (2013), 3667–3688.
T. Banica, First order deformations of the Fourier matrix, [*J. Math. Phys.*]{} [**55**]{} (2014), 1–22.
T. Banica, Counting results for thin Butson matrices, [*Electron. J. Combin.*]{} [**21**]{} (2014), 1–14.
T. Banica, The glow of Fourier matrices: universality and fluctuations, [*Oper. Matrices*]{} [**9**]{} (2015), 457–474.
T. Banica, Deformed Fourier models with formal parameters, [*Studia Math.*]{} [**239**]{} (2017), 201–224.
T. Banica, Complex Hadamard matrices with noncommutative entries, [*Ann. Funct. Anal.*]{} [**9**]{} (2018), 354–368.
T. Banica and J. Bichon, Random walk questions for linear quantum groups, [*Int. Math. Res. Not.*]{} [**24**]{} (2015), 13406–13436.
T. Banica, J. Bichon and B. Collins, Quantum permutation groups: a survey, [*Banach Center Publ.*]{} [**78**]{} (2007), 13–34.
T. Banica, J. Bichon and J.-M. Schlenker, Representations of quantum permutation algebras, [*J. Funct. Anal.*]{} [**257**]{} (2009), 2864–2910.
T. Banica, B. Collins and J.-M. Schlenker, On orthogonal matrices maximizing the 1-norm, [*Indiana Univ. Math. J.*]{} [**59**]{} (2010), 839–856.
T. Banica and I. Nechita, Almost Hadamard matrices: the case of arbitrary exponents, [*Discrete Appl. Math.*]{} [**161**]{} (2013), 2367–2379.
T. Banica and I. Nechita, Flat matrix models for quantum permutation groups, [*Adv. Appl. Math.*]{} [**83**]{} (2017), 24–46.
T. Banica and I. Nechita, Almost Hadamard matrices with complex entries, [*Adv. Oper. Theory*]{} [**3**]{} (2018), 149–189.
T. Banica, I. Nechita and J.-M. Schlenker, Analytic aspects of the circulant Hadamard conjecture, [*Ann. Math. Blaise Pascal*]{} [**21**]{} (2014), 25–59.
T. Banica, I. Nechita and J.-M. Schlenker, Submatrices of Hadamard matrices: complementation results, [*Electron. J. Linear Algebra*]{} [**27**]{} (2014), 197–212.
T. Banica, I. Nechita and K. Życzkowski, Almost Hadamard matrices: general theory and examples, [*Open Syst. Inf. Dyn.*]{} [**19**]{} (2012), 1–26.
T. Banica and R. Nicoara, Quantum groups and Hadamard matrices, [*Panamer. Math. J.*]{} [**17**]{} (2007), 1–24.
T. Banica, D. Özteke and L. Pittau, Isolated partial Hadamard matrices, and related topics, [*Open Syst. Inf. Dyn.*]{} [**25**]{} (2018), 1–27.
T. Banica and A. Skalski, The quantum algebra of partial Hadamard matrices, [*Linear Algebra Appl.*]{} [**469**]{} (2015), 364–380.
L.D. Baumert, S.W. Golomb and M. Hall, Discovery of an Hadamard matrix of order 92, [*Bull. Amer. Math. Soc.*]{} [**68**]{} (1962), 237–238.
K. Beauchamp and R. Nicoara, Orthogonal maximal abelian $*$-subalgebras of the $6\times 6$ matrices, [*Linear Algebra Appl.*]{} [**428**]{} (2008), 1833–1853.
I. Bengtsson, W. Bruzda, Å. Ericsson, J.-Å. Larsson, W. Tadej and K. Życzkowski, Mutually unbiased bases and Hadamard matrices of order six, [*J. Math. Phys.*]{} [**48**]{} (2007), 1–33.
H. Bercovici and V. Pata, Stable laws and domains of attraction in free probability theory, [*Ann. of Math.*]{} [**149**]{} (1999), 1023–1060.
J. Bichon, Quotients and Hopf images of a smash coproduct, [*Tsukuba J. Math.*]{} [**39**]{} (2015), 285–310.
P. Biran, M. Entov and L. Polterovich, Calabi quasimorphisms for the symplectic ball, [*Commun. Contemp. Math.*]{} [**6**]{} (2004), 793–802.
G. Björck, Functions of modulus $1$ on ${\rm Z}_n$ whose Fourier transforms have constant modulus, and cyclic $n$-roots, [*NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci.*]{} [**315**]{} (1990), 131–140.
G. Björck and R. Fröberg, A faster way to count the solutions of inhomogeneous systems of algebraic equations, with applications to cyclic $n$-roots, [*J. Symbolic Comput.*]{} [**12**]{} (1991), 329–336.
G. Björck and U. Haagerup, All cyclic $p$-roots of index 3 found by symmetry-preserving calculations, preprint 2008.
R. Burstein, Group-type subfactors and Hadamard matrices, [*Trans. Amer. Math. Soc.*]{} [**367**]{} (2015), 6783–6807.
A.T. Butson, Generalized Hadamard matrices, [*Proc. Amer. Math. Soc.*]{} [**13**]{} (1962), 894–898.
C.-H. Cho, Holomorphic discs, spin structures, and Floer cohomology of the Clifford torus, [*Int. Math. Res. Not.*]{} [**35**]{} (2004), 1803–1843.
B. Collins and P. Śniady, Integration with respect to the Haar measure on unitary, orthogonal and symplectic groups, [*Comm. Math. Phys.*]{} [**264**]{} (2006), 773–795.
A. Connes, Une classification des facteurs de type ${\rm III}$, [*Ann. Sci. Ec. Norm. Sup.*]{} [**6**]{} (1973), 133–252.
A. Connes, A unitary invariant in Riemannian geometry, [*Int. J. Geom. Methods Mod. Phys.*]{} [**5**]{} (2008), 1215–1242.
R. Craigen and H. Kharaghani, On the nonexistence of Hermitian circulant complex Hadamard matrices, [*Australas. J. Combin.*]{} [**7**]{} (1993), 225–227.
W. de Launey, On the non-existence of generalized weighing matrices, [*Ars Combin.*]{} [**17**]{} (1984), 117–132.
W. de Launey and J.E. Dawson, An asymptotic result on the existence of generalised Hadamard matrices, [*J. Combin. Theory Ser. A*]{} [**65**]{} (1994), 158–163.
W. de Launey, D.L. Flannery and K.J. Horadam, Cocyclic Hadamard matrices and difference sets, [*Discrete Appl. Math.*]{} [**102**]{} (2000), 47–61.
W. de Launey and D.M. Gordon, A comment on the Hadamard conjecture, [*J. Combin. Theory Ser. A*]{} [**95**]{} (2001), 180–184.
W. de Launey and D.A. Levin, A Fourier-analytic approach to counting partial Hadamard matrices, [*Cryptogr. Commun.*]{} [**2**]{} (2010), 307–334.
P. Di Francesco, Meander determinants, [*Comm. Math. Phys.*]{} [**191**]{} (1998), 543–583.
P. Diaconis and M. Shahshahani, On the eigenvalues of random matrices, [*J. Applied Probab.*]{} [**31**]{} (1994), 49–62.
P. Diţă, Some results on the parametrization of complex Hadamard matrices, [*J. Phys. A*]{} [**37**]{} (2004), 5355–5374.
T. Durt, B.-G. Englert, I. Bengtsson and K. Życzkowski, On mutually unbiased bases, [*Int. J. Quantum Inf.*]{} [**8**]{} (2010), 535–640.
J.-C. Faugère, Finding all the solutions of Cyclic 9 using Gröbner basis techniques, [*Lecture Notes Ser. Comput.*]{} [**9**]{} (2001), 1–12.
P.C. Fishburn and N.J.A. Sloane, The solution to Berlekamp’s switching game, [*Discrete Math.*]{} [**74**]{} (1989), 263–290.
U. Haagerup, Orthogonal maximal abelian $*$-subalgebras of the $n\times n$ matrices and cyclic $n$-roots, in “Operator algebras and quantum field theory”, International Press (1997), 296–323.
U. Haagerup, Cyclic $p$-roots of prime lengths $p$ and related complex Hadamard matrices, preprint 2008.
J. Hadamard, Résolution d’une question relative aux déterminants, [*Bull. Sci. Math.*]{} [**2**]{} (1893), 240–246.
M. Hall, Integral matrices $A$ for which $AA^T=mI$, in “Number Theory and Algebra”, Academic Press (1977), 119–134.
G. Hiranandani and J.-M. Schlenker, Small circulant complex Hadamard matrices of Butson type, [*European J. Combin.*]{} [**51**]{} (2016), 306–314.
K.J. Horadam, Hadamard matrices and their applications, Princeton Univ. Press (2007).
M. Idel and M.M. Wolf, Sinkhorn normal form for unitary matrices, [*Linear Algebra Appl.*]{} [**471**]{} (2015), 76–84.
N. Ito, Hadamard Graphs I, [*Graphs Combin.*]{} [**1**]{} (1985), 57–64.
V.F.R. Jones, Index for subfactors, [*Invent. Math.*]{} [**72**]{} (1983), 1–25.
V.F.R. Jones, On knot invariants related to some statistical mechanical models, [*Pacific J. Math.*]{} [**137**]{} (1989), 311–334.
V.F.R. Jones, Planar algebras I, preprint 1999.
A. Karabegov, The reconstruction of a unitary matrix from the moduli of its elements and symbols on a finite phase space, preprint 1989.
H. Kharaghani and J. Seberry, The excess of complex Hadamard matrices, [*Graphs Combin.*]{} [**9**]{} (1993), 47–56.
H. Kharaghani and B. Tayfeh-Rezaie, A Hadamard matrix of order 428, [*J. Combin. Des.*]{} [**13**]{} (2005), 435–440.
C. Koukouvinos, M. Mitrouli and J. Seberry, An algorithm to find formulae and values of minors for Hadamard matrices, [*Linear Algebra Appl.*]{} [**330**]{} (2001), 129–147.
T.Y. Lam and K.H. Leung, On vanishing sums of roots of unity, [*J. Algebra*]{} [**224**]{} (2000), 91–109.
V.A. Marchenko and L.A. Pastur, Distribution of eigenvalues in certain sets of random matrices, [*Mat. Sb.*]{} [**72**]{} (1967), 507–536.
M. Matolcsi, J. Réffy and F. Szöllősi, Constructions of complex Hadamard matrices via tiling abelian groups, [*Open Syst. Inf. Dyn.*]{} [**14**]{} (2007), 247–263.
D. McNulty and S. Weigert, Isolated Hadamard matrices from mutually unbiased product bases, [*J. Math. Phys.*]{} [**53**]{} (2012), 1–21.
M.T. Mohan, On some p-almost Hadamard matrices, [*Oper. Matrices*]{} [**13**]{} (2019), 253–281.
F.J. Murray and J. von Neumann, On rings of operators. IV, [*Ann. of Math.*]{} [**44**]{} (1943), 716–808.
R. Nicoara, A finiteness result for commuting squares of matrix algebras, [*J. Operator Theory*]{} [**55**]{} (2006), 295–310.
R. Nicoara and J. White, Analytic deformations of group commuting squares and complex Hadamard matrices, [*J. Funct. Anal.*]{} [**272**]{} (2017), 3486–3505.
A. Ocneanu, Quantum symmetry, differential geometry of finite graphs, and classification of subfactors, Univ. of Tokyo Seminary Notes (1990).
R. Paley, On orthogonal matrices, [*J. Math. Phys.*]{} [**12**]{} (1933), 311–320.
K.-H. Park and H.-Y. Song, Quasi-Hadamard matrices, [*Proc. ISIT 2010*]{}, Austin, TX (2010).
M. Petrescu, Existence of continuous families of complex Hadamard matrices of certain prime dimensions and related results, Ph.D. Thesis, UCLA (1997).
G. Pólya, Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Strassennetz, [*Math. Ann.*]{} [**84**]{} (1921), 149–160.
S. Popa, Orthogonal pairs of $*$-subalgebras in finite von Neumann algebras, [*J. Operator Theory*]{} [**9**]{} (1983), 253–268.
S. Popa, Classification of amenable subfactors of type II, [*Acta Math.*]{} [**172**]{} (1994), 163–255.
L.B. Richmond and J. Shallit, Counting abelian squares, [*Electron. J. Combin.*]{} [**16**]{} (2009), 1–9.
R. Roth and K. Viswanathan, On the hardness of decoding the Gale-Berlekamp code, [*IEEE Trans. Inform. Theory*]{} [**54**]{} (2008), 1050–1060.
H.J. Ryser, Combinatorial mathematics, Wiley (1963).
J. Seberry and M. Yamada, Hadamard matrices, sequences, and block designs, Wiley (1992).
J.J. Sylvester, Thoughts on inverse orthogonal matrices, simultaneous sign-successions, and tesselated pavements in two or more colours, with applications to Newton’s rule, ornamental tile-work, and the theory of numbers, [*Phil. Mag.*]{} [**34**]{} (1867), 461–475.
F. Szöllősi, Parametrizing complex Hadamard matrices, [*European J. Combin.*]{} [**29**]{} (2008), 1219–1234.
F. Szöllősi, Exotic complex Hadamard matrices and their equivalence, [*Cryptogr. Commun.*]{} [**2**]{} (2010), 187–198.
W. Tadej and K. Życzkowski, A concise guide to complex Hadamard matrices, [*Open Syst. Inf. Dyn.*]{} [**13**]{} (2006), 133–177.
W. Tadej and K. Życzkowski, Defect of a unitary matrix, [*Linear Algebra Appl.*]{} [**429**]{} (2008), 447–481.
T. Tao, Fuglede’s conjecture is false in 5 and higher dimensions, [*Math. Res. Lett.*]{} [**11**]{} (2004), 251–258.
T. Tao and V. Vu, On random $\pm 1$ matrices: singularity and determinant, [*Random Structures Algorithms*]{} [**28**]{} (2006), 1–23.
N.H. Temperley and E.H. Lieb, Relations between the “percolation” and “colouring” problem and other graph-theoretical problems associated with regular planar lattices: some exact results for the “percolation” problem, [*Proc. Roy. Soc. London*]{} [**322**]{} (1971), 251–280.
R.J. Turyn, Character sums and difference sets, [*Pacific J. Math.*]{} [**15**]{} (1965), 319–346.
E. Verheiden, Integral and rational completions of combinatorial matrices, [*J. Combin. Theory Ser. A*]{} [**25**]{} (1978) 267–276.
D.V. Voiculescu, K.J. Dykema and A. Nica, Free random variables, AMS (1992).
S. Wang, Quantum symmetry groups of finite spaces, [*Comm. Math. Phys.*]{} [**195**]{} (1998), 195–211.
J. Williamson, Hadamard’s determinant theorem and the sum of four squares, [*Duke Math. J.*]{} [**11**]{} (1944), 65–81.
A. Winterhof, On the non-existence of generalized Hadamard matrices, [*J. Statist. Plann. Inference*]{} [**84**]{} (2000), 337–342.
S.L. Woronowicz, Compact matrix pseudogroups, [*Comm. Math. Phys.*]{} [**111**]{} (1987), 613–665.
S.L. Woronowicz, Tannaka-Krein duality for compact matrix pseudogroups. Twisted SU(N) groups, [*Invent. Math.*]{} [**93**]{} (1988), 35–76.
---
abstract: 'Systemic risk arises as a multi-layer network phenomenon. Layers represent direct financial exposures of various types, including interbank liabilities, derivative- or foreign exchange exposures. Another network layer of systemic risk emerges through common asset holdings of financial institutions. Strongly overlapping portfolios lead to similar exposures that are caused by price movements of the underlying financial assets. Based on the knowledge of portfolio holdings of financial agents we quantify systemic risk of overlapping portfolios. We present an optimization procedure, where we minimize the systemic risk in a given financial market by optimally rearranging overlapping portfolio networks, under the constraints that the expected returns and risks of the individual portfolios are unchanged. We explicitly demonstrate the power of the method on the overlapping portfolio network of sovereign exposure between major European banks by using data from the European Banking Authority stress test of 2016. We show that systemic-risk-efficient allocations are accessible by the optimization. In the case of sovereign exposure, systemic risk can be reduced by more than a factor of two, without any detrimental effects for the individual banks. These results are confirmed by a simple simulation of fire sales in the government bond market. In particular we show that the contagion probability is reduced dramatically in the optimized network.'
address:
- 'Institute for New Economic Thinking, University of Oxford, Walton Well Road, Oxford OX2 6ED, UK'
- 'Smith School of Enterprise and the Environment, University of Oxford, South Parks Road, Oxford OX1 3QY, UK'
- 'Complexity Science Hub Vienna, Josefstädter Straße 39, A-1080, Austria'
- 'IIASA, Schlossplatz 1, A-2361 Laxenburg, Austria'
- 'Section for Science of Complex Systems, Medical University of Vienna, Spitalgasse 23, A-1090, Austria'
- 'Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA'
author:
- Anton Pichler
- Sebastian Poledna
- Stefan Thurner
bibliography:
- 'ref.bib'
title: 'Systemic-risk-efficient asset allocation: Minimization of systemic risk as a network optimization problem'
---
systemic risk, systemic-risk-efficient, overlapping portfolios, financial networks, contagion, network optimization, quadratic programming, government bonds, DebtRank
Introduction {#intro}
============
Modern economies rely heavily on financial markets as they exercise important functions such as capital provision to the real economy sector. Facing the high costs associated with financial crises, there is a strong societal need for understanding financial systems and ensuring their systemic stability. When financial institutions enter into contracts they usually only consider their individual risk position and neglect their impact on the overall financial system. In this sense, systemic risk, the risk that a significant fraction of the financial system will stop functioning, can be viewed as an externality [@thurner13; @acharya17]. Systemic risk can be characterized on three different levels: a total market level [@markose12], an individual institution level [@battiston12; @thurner13] and a transaction level [@poledna16]. For the purpose of the following analysis we define the adverse impact of a single institution on the entire system as the systemic risk level associated with that institution.
Economic interactions between institutions are manifold and happen on different markets (layers). Therefore, in case of defaults, financial contagion can unfold through many different channels [@upper11]. Network models are frequently used to capture these interdependencies, where every node represents an institution and a link corresponds to a financial exposure [@allen00], [@freixas00], [@eisenberg01], [@upper04] or [@boss04]. Since these links can represent various types of financial exposures, a natural representation of such systems is a multi-layer network, where every layer is associated with a different class of financial exposure [@bargigli15; @montagna16]. For example, [@poledna15] analyze four layers of financial exposures in the Mexican banking network, including derivatives exposures and security cross-holdings. They find that focusing solely on a single layer can drastically underestimate systemic risk by more than $90\%$. Systemic risk in financial markets clearly is a multi-layer network phenomenon.
While these network layers represent direct financial exposures, another essential source of systemic risk arises through the overlap between the portfolios of different institutions. Financial contagion in this channel can appear in the following way: an institution under stress is forced to sell substantial amounts of a particular asset, such that it is devaluated due to the market impact of the sale. If the same asset is held by other firms, their portfolios suffer losses. This in turn, could trigger further sales and subsequently lead to a fire-sale cascade which devaluates the institutions’ portfolios significantly. In case of large losses, this can deteriorate the equity positions of the institutions [@cifuentes05; @thurner12; @cont17]. Systemic risk arising from common asset holdings is different from the examples of direct exposures discussed above, since here the risk is not manifested in direct exposures between the institutions. Systemic risk is generated indirectly by selling not perfectly liquid assets. [@caccioli14] demonstrate that the layer of overlapping portfolios can amplify financial contagion significantly and [@cont13] show that in times of financial distress, fundamentally uncorrelated assets exhibit positive realized correlations. This can reduce positive effects of diversification for individual financial institutions. On the market level, the impact of asset diversification on systemic risk is non-trivial [@battiston12c; @battiston12b; @caccioli15]. It is not straightforward to decide, which market allocations yield the most resilient systems. In the context of systemically optimal interbank networks, agent-based approaches have been introduced by [@thurner13] and [@poledna16] that show how systemically risk-free financial networks can evolve in a self-organized way under appropriate systemic-risk-based incentive schemes.
This paper studies the contagious channel of overlapping portfolios as an important layer of the financial multi-layer system and presents a general theoretical approach that allows us to think of systemic-risk-efficient portfolio allocations. The procedure provides a best-case benchmark which could be a centerpiece for actively monitoring systemic risk on a regularly basis, by observing divergence (convergence) of the market from (to) the theoretical benchmark.
In Section 2 we introduce a general model, which allows us to quantify systemic risk in an overlapping portfolio framework. The data used is briefly discussed in Section 3. In Section 4 we present a method to reduce systemic risk as a generic network optimization problem and discuss the results in Section 5. Both networks, the original and the optimized, are compared in a fire-sale simulation in Section 6. Section 7 highlights practical implications and possible extensions of this work.
Systemic risk of overlapping portfolios {#model}
=======================================
In this section we present a simple general model to quantify systemic risk for overlapping portfolios, which will be used to minimize systemic risk in the market. First, we discuss the bipartite nature of overlapping portfolios. Then we introduce a simple linear price impact model which is needed for projecting the common asset exposure onto the set of banks. In a final step we show how the systemic-risk measure DebtRank [@battiston12; @thurner13] can be applied to this network.
Network of overlapping portfolios
---------------------------------
Let us consider two sets of nodes, one representing $N$ financial institutions (for simplicity called banks), labeled by $i= 1,...,N$, and the other $K$ different assets, labeled by $k = 1,...,K$. If bank $i$ is invested in asset $k$, a weighted link is drawn between $i$ and $k$. The weight $V_{ki}$ represents the amount of the investment in monetary units. A schematic bipartite bank-asset network is shown in Figure \[fig:bipnet\]A.
Although banks are not directly linked, bank $i$ can have an effective risk exposure towards another bank $j$, if they hold the same, not perfectly liquid asset. If $j$ sells the commonly held asset $k$, the price $p_k$ might decrease to $p_k'$ due to market impact and the value of $i$’s portfolio decreases correspondingly. A naive one-mode projection of the bipartite network onto the set of banks cannot quantify the risk exposure between the banks, since the appropriate price effects due to market impact must be explicitly taken into account.
Price impact
------------
We assume a linear price impact model [@kyle85; @bouchaud10]. The price change $\Delta p_k$ is a linear function of trading volume and is independent of time, $$\label{pimpact}
\Delta p_k(z) = \alpha \frac{z}{D_k} \quad,$$ where $z$ denotes the signed volume in monetary units, $D_k$ is a market depth parameter and $\alpha = 1$, if the volume of buys exceeds the volume of sells and $\alpha=-1$ in the other case. Market depth $D_k$ is a measure of liquidity of a particular security and is defined such that selling (buying) the value $\frac{D_k}{100}$ of security $k$ moves the price down (up) by 1%. Following the approach of [@braverman14], [@guo16] and [@cont17], market depth is estimated by $$\label{Depth}
D_k = c \frac{ADV_k}{\sigma_k} \quad,$$ where $c$ is a scaling parameter larger than zero, $ADV_k$ the average traded daily volume in monetary units and $\sigma_k$ the empirical volatility (not the implied) of a particular security measured as the standard deviation of the daily log-returns. We set $c=0.4$ as suggested in [@cont17]. Note that $D_k$ is related to the frequently used ‘Amihud measure’ [@amihud02].
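As an illustration, a minimal sketch of this estimator is given below; the function name and inputs are ours and not part of the original study. The inputs are a series of daily closing prices of a single bond and its average traded daily volume in monetary units.

```python
import numpy as np

def market_depth(prices, adv, c=0.4):
    """Market depth estimate D_k = c * ADV_k / sigma_k, where sigma_k is the
    standard deviation of daily log-returns and ADV_k the average traded
    daily volume in monetary units."""
    log_returns = np.diff(np.log(np.asarray(prices, dtype=float)))
    sigma = log_returns.std(ddof=1)
    return c * adv / sigma
```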
Price impact adjusted one-mode projection
-----------------------------------------
Given the price impact, a proper one-mode projection of the bipartite network $V_{ki}$ can be constructed, which models the exposure of asset holdings between banks. The value of asset $k$ in the portfolio of bank $i$ is $V_{ki}=\beta_{ki}p_k$, where $\beta_{ki}$ is the number of units of asset $k$ held by $i$ and $p_k$ is the corresponding price. The total portfolio value of bank $i$ is $V_i = \sum_{k} \beta_{ki} p_k$. Consider a bank $j$, which holds the same asset $k$. The maximum loss that $j$ can experience from sales of $k$ by bank $i$ is $V_{kj} \frac{V_{ki}}{D_k}$. The overall exposure from $i$ to $j$, i.e. the maximum impact of $i$ on $j$, is $$\label{ol_exp}
w_{ij} = \sum_{k=1}^{K} V_{kj} V_{ki} \frac{1}{D_k} \quad.$$ In matrix form, the weighted $N \times N$ adjacency matrix is given by $$\label{expmat}
w = V^\top D^{-1} V \quad,$$ where $V$ is the $K \times N$ matrix of asset values in the portfolios containing $V_{ki}$ and $D$ is a $K \times K$ diagonal matrix with diagonal elements $D_{kk} = D_k$. The weighted adjacency matrix $w$ has elements on its diagonal and thus contains self-loops which represent the exposure towards sales from the own portfolio. Equation (\[expmat\]) corresponds to a simple one-mode projection of a bipartite network that is corrected for limited liquidity of the assets. It is closely related to the model studied in [@cont17], where the linear price impact is also a function of monetary units. In contrast, [@braverman14] and [@guo16] base their definition of the price impact on units of assets. Note that ‘liquidity is an elusive concept’ [@amihud02] and different concepts of liquidity and associated price impacts do exist [@bouchaud09; @bouchaud10]. The reason for our choice of Eq. (\[pimpact\]) is simplicity only. It can be generalized easily to more refined price impact functions.
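The projection itself is a one-line computation. The following sketch, which is our own illustration of Eq. (\[expmat\]) and not code used for the results below, assumes the holdings are stored as a $K \times N$ array and the market depths as a length-$K$ vector.

```python
import numpy as np

def exposure_matrix(V, D):
    """Liquidity-adjusted one-mode projection w = V^T D^{-1} V.
    V : (K, N) array of asset values V_{ki}; D : (K,) array of market depths."""
    return V.T @ (V / D[:, None])   # w_{ij} = sum_k V_{ki} V_{kj} / D_k
```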
DebtRank for overlapping portfolio networks
-------------------------------------------
A measure of systemic risk introduced by [@battiston12] and applied to the interbank market [@thurner13] is the so-called DebtRank (DR). DR is a feedback centrality measure for financial networks that ascribes to every bank a systemic risk level between zero and one, where one means that the entire network will default in case of the bank’s bankruptcy (for the detailed definition see \[dr\]). DR was constructed for direct financial exposures between nodes, such as networks of interbank liabilities, but can be adapted for indirect exposures $w_{ij}$ of common asset holdings. A central element for applying DR to a financial network is the impact matrix $$\label{impact_matrix_text}
W_{ij} = \min\left\{1, \frac{A_{ij}}{E_j} \right\} \quad,$$ where $A_{ij}$ is the direct exposure in monetary units from $j$ to $i$ and $E_j$ the (Tier 1) equity of $j$. By defining the relative economic value of node $j$ as $$\label{DRvorig_text}
v_j = \frac{\sum_{i} A_{ij} } {\sum_{i} \sum_{j} A_{ij}} \quad,$$ the DR of bank $i$ can be represented as $$\label{DR_text}
R_i=\sum_j h_j (T)v_j - \sum_j h_j (1) v_j \quad,$$ where $h_j$ is a state variable which sums up the financial distress in the whole network based on the impact matrix $W_{ij}.$ The state variable is necessary, since DR cannot be represented in closed form (see \[dr\] for details).
Equation (\[expmat\]) allows us to derive an impact matrix for the overlapping portfolio network model. The DR impact matrix for overlapping portfolios is $$\label{DRimp}
\tilde{W}_{ij} = \min\left\{1, \frac{w_{ij}}{E_j} \right\} \quad ,$$ which is the total impact of $i$ on $j$ if $i$ sells its entire portfolio. The impact $\tilde{W}_{ij}$ is bounded between zero and one, where one means that the total equity buffer of $j$ is ‘destructed’ due to the sales of $i$. The relative economic value in an overlapping portfolio setting is given by $$\label{DRv}
\tilde{v_i} = \frac{V_i}{\sum_k \sum_j V_{kj}} \quad.$$
By replacing Eq. (\[impact\_matrix\_text\]) with Eq. (\[DRimp\]) and Eq. (\[DRvorig\_text\]) with Eq. (\[DRv\]), Eq. (\[DR\_text\]) can be applied to financial networks of asset holdings and a systemic risk assessment can be carried out in the usual way. To characterize the systemic risk level of the entire market, we compute the average DR of all $N$ banks $$\label{meanDR}
\bar{R} = \frac{1}{N}\sum_{i=1}^{N} R_i \quad.$$
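The two ingredients needed to run DR on the overlapping portfolio network, the impact matrix $\tilde{W}$ and the relative economic values $\tilde{v}$, can be assembled as in the following sketch (our own illustration with hypothetical function names); a sketch of the DR recursion itself is given after its description in \[dr\].

```python
import numpy as np

def overlapping_impact(V, D, E):
    """Impact matrix W~_{ij} = min(1, w_{ij} / E_j) and relative economic
    values v~_i for the overlapping portfolio network.
    V : (K, N) holdings, D : (K,) market depths, E : (N,) Tier 1 equity."""
    w = V.T @ (V / D[:, None])             # liquidity-adjusted exposures w_{ij}
    W = np.minimum(1.0, w / E[None, :])    # each column j is scaled by 1/E_j
    v = V.sum(axis=0) / V.sum()            # v~_i = V_i / total market volume
    return W, v
```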
Data
====
We compute exposures from common asset holdings for the government bond portfolios of European banks that were used in the EU-wide stress test 2016. The data is publicly available and provided by the European Banking Authority (EBA)[^1]. In our analysis we include 49 major European banks that are invested in 36 different sovereign bonds. The obtained bipartite network $V_{ki}$ represents investments of European banks in government bonds, see Figure \[fig:bipnet\]B. We refer to this network as the European government bond market in the remainder of this text. The total market volume amounts to EUR $2,617.39$ billion and corresponds to roughly $10\%$ of the banks’ total assets. The investment in government debt as a share of total assets varies substantially. While for some banks government bonds account only for a few percent of the total asset size, others spend a large fraction of up to $47\%$ of total assets in government debt.
To estimate the market depth of the bonds we pool market price data with reported data on trading activity and outstanding volume. A detailed description of the data and the estimation procedure is found in the Supplementary Information. The summary statistics of the market depth estimates are displayed in Table \[resulttable\].
Optimizing systemic risk {#optimization}
========================
In this section we show that we can use DR to derive a mathematical optimization problem that allows us to compute systemic-risk-efficient portfolio allocations for the European government bond market. By rewiring the bipartite bank-bond network, we can obtain a different impact matrix $\tilde{W}_{ij}$, which leads to a lower level of systemic risk in the market $\bar{R}$. We must ensure that after the rewiring of the bipartite bank-bond network no institution is economically worse off than before. We characterize the quality of the banks’ portfolios within the classical mean-variance framework of [@markowitz52]. A difficulty arising when optimizing a network with respect to its average DR $\bar{R}$ is the fact that DR is not representable in closed form. A reasonable approximation is to focus on the direct impacts $\tilde{W} \tilde{v}$ instead. By doing so, a quadratic optimization problem can be formulated.
Let $\sigma_{kl}^2$ be the covariance of bond $k$ and $l$, and let $r_k$ denote the expected return of bond $k$. The expected return and variance of portfolio $i$ – the risk profile– are given by $\tilde{r_i} = \sum_k V_{ki} r_k$ and $\tilde{\sigma_i}^2 = \sum_k \sum_l V_{ki} V_{li} \sigma_{kl}^2,$ respectively. The total value of bond $k$ in the market is denoted as $S_k$. Consider the following optimization problem, $$\label{optim}
\begin{aligned}
& \underset{x_{ki} \ge 0 \; \forall k,i}{\text{min}}
& & \sum_i \sum_j \frac{\tilde{v_j}}{E_j} \sum_{k} x_{ki} x_{kj} \frac{1}{D_k} \\
& \text{subject to} & & V_i = \sum_k x_{ki}, \quad \forall i,\\
& & & S_k = \sum_i x_{ki}, \quad \forall k,\\
& & &\tilde{r_i} \leq \sum_k x_{ki} r_k, \quad \forall i, \\
& & & \tilde{\sigma_i}^2 \ge \sum_k \sum_l x_{ki} x_{li} \sigma_{kl}^2, \quad \forall i,
\end{aligned}$$ where the variable $x_{ki}$ denotes the investments that can be reallocated. Problem (\[optim\]) minimizes the total direct impacts in case of defaults without deteriorating the banks’ risk profiles. By doing so, the total portfolio volumes and the total outstanding volumes are kept constant, i.e. the network is only rewired. The minimum operators in the impact matrix $\tilde{W}$ are dropped in order to ensure smoothness of the objective function, which simplifies the optimization. Problem (\[optim\]) can now be reformulated as a general quadratically constrained quadratic program (QCQP) of the form $$\label{qcqp}
\begin{aligned}
& \underset{y \ge 0}{\text{min}} & &\smash{\frac{1}{2} y^\top (P_0^\top + P_0) y} \\
& \text{subject to} & & y^\top P_i y - \tilde{\sigma_i}^2 &\le 0, & \quad i = 1,...,N,\\
& & & A_1y + c_1 &\le 0, & \\
& & & A_2y + c_2 &= 0. &
\end{aligned}$$ Here, $P_0$ and $P_i$ are $KN \times KN$ matrices, $A_1$ is a $N \times KN$ matrix, $A_2$ a $(K+N) \times KN$-matrix and $c_1$ and $c_2$ are vectors of corresponding dimensions. We let $y=vec(X)$ be the vectorization of the $K \times N$ matrix $X$ with elements $x_{ki}$. The exact specifications of the vectors and matrices are given in \[app.qcqp\]. The quadratic constraint ensures that new portfolio allocations do not increase the portfolio variances and the linear inequality constraint prevents a decrease of the portfolio returns. The linear equality constraint controls for the basic market structure, such that the portfolios are only reshuffled, but not changed in total size and no assets are added or removed from the market.
Solving the optimization problem
--------------------------------
The described dataset consists of $36$ bonds and $49$ banks ($1,764$ variables). We are bound to $85$ linear equality constraints, $49$ linear inequality constraints and $49$ quadratic constraints. Expected returns are estimated from historical returns. The portfolio variances are calculated from historical price data, see Supplementary Information. The symmetric matrix $P_i$ is positive semidefinite since it is a block diagonal matrix with the covariance matrix on its diagonal. However, it turns out that in our case the matrix $\frac{1}{2}(P_0^\top + P_0)$ is indefinite, which turns the problem into a non-convex QCQP problem. Its solutions are in general NP-hard to find [@anstreicher09]. Nevertheless, there are solvers available that can handle this type of problem, for instance by implementing branch-and-bound algorithms. To solve Problem (\[qcqp\]), we run it on four different solvers: KNITRO [@byrd06], BARON [@sahinidis96], MINOS [@murtagh83] and Couenne [@belotti09]. We formulated the problem in AMPL [@fourer90] and made use of the NEOS-server [@czyzyk98], where we submitted it to the four solvers. In the following we show the results from the Couenne solver, which provides the minimal objective values.
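For readers who want to reproduce the structure of the problem without access to these solvers, the following sketch sets up the original formulation, Problem (\[optim\]), directly with a general-purpose local solver. This is only an illustration under our own naming conventions: SLSQP is a local method, so on this non-convex problem it returns at best a local optimum, and for the full $1,764$-variable instance the dedicated solvers listed above are clearly preferable.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_allocations(V, D, E, Q, r):
    """Local solution of Problem (optim). V : (K, N) holdings, D : (K,) market
    depths, E : (N,) equity, Q : (K, K) bond covariance matrix, r : (K,)
    expected bond returns."""
    K, N = V.shape
    v = V.sum(axis=0) / V.sum()                  # relative economic values
    S, P = V.sum(axis=1), V.sum(axis=0)          # outstanding volumes, portfolio sizes
    r0 = V.T @ r                                 # original portfolio returns
    s0 = np.einsum('ki,kl,li->i', V, Q, V)       # original portfolio variances

    def unpack(y):
        return y.reshape(K, N)

    def objective(y):
        X = unpack(y)
        w = X.T @ (X / D[:, None])               # liquidity-adjusted exposures
        return float((w * (v / E)[None, :]).sum())

    cons = [
        {'type': 'eq',   'fun': lambda y: unpack(y).sum(axis=0) - P},  # portfolio sizes fixed
        {'type': 'eq',   'fun': lambda y: unpack(y).sum(axis=1) - S},  # outstanding volumes fixed
        {'type': 'ineq', 'fun': lambda y: unpack(y).T @ r - r0},       # returns not reduced
        {'type': 'ineq', 'fun': lambda y: s0 - np.einsum('ki,kl,li->i', unpack(y), Q, unpack(y))},
    ]
    res = minimize(objective, V.flatten(), method='SLSQP',
                   bounds=[(0, None)] * (K * N), constraints=cons)
    return unpack(res.x), res
```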
Results
=======
We first compute the DR for every bank and estimate the average total overlapping portfolio systemic risk $\bar{R}_{orig} = 6.66\%$ in the European government bond market. We then optimize the portfolio holdings according to Problem (\[optim\]) and compute $\bar{R}_{opt} = 2.89 \%$[^2]. We see that systemic risk of the market is reduced by more than a half (factor of 2.27) and the maximum DR in the financial network decreases from $0.22$ to $0.09$, see Table \[resulttable\]. In particular banks with originally high systemic risk levels lose systemic relevance in the market. For some of the least systemic banks the DR levels increase, and overall, a more systemic-risk-efficient allocation is achieved (A).
The optimization also changes the order of systemic relevance of banks, i.e. a bank that was considered riskier than another particular bank in the original network can be relatively less risky in the optimized network. Overall, the order of systemic relevance changes in the optimized network, but is still positively correlated with the original order, see Table \[resulttable\].
Intuitively, a positive relationship between banks’ systemic relevance and market share can be expected. B shows that this positive relationship (slope) is reduced in the optimized network. From a systemic risk management perspective, particularly problematic banks are those banks which are relatively small in size, but take on very central positions in the network (upper left corner (red) in B), meaning that the default of a small bank has adverse effects for large fractions of the total market. The optimization has led to a substantial reduction of systemic risk especially in the group of small, but originally very systemic banks.
Network topology
----------------
![Overlapping portfolio networks before (A) and after (B) the optimization. The size of the nodes corresponds to the total investments in the bond market, the strength of the links is based on the level of exposure between the banks. Banks are colored according to their DR. Self-loops are not shown.[]{data-label="fig:OLPnet"}](netlegend){width=".15\textwidth"}
The optimization of systemic risk changes the network topology of the market as can be seen by looking at basic network statistics. The density of a network is given by dividing the number of present links by the number of potential links. In a bipartite network the number of potential links is $NK$. In the given sovereign exposure bipartite network the density is $0.51$, i.e. about half of all possible links of the network are actually present. The average number of bonds in a portfolio (average degree of bank nodes) is roughly $18.27$ and the average number of banks holding a particular bond (average degree of bond nodes) is about $24.86$. The liquidity adjusted bank projection shown in Figure \[fig:OLPnet\]A yields a dense network (density $= 0.967$) with an average degree larger than $46$. Given that the maximum number of neighbors is $48$, we see that most banks are directly connected to each other by holding the same government bonds. The (unweighted) diameter of this network is $2$, meaning that financial contagion originating from any node theoretically can spread over the whole market in just two steps. The severity of such a contagious process, however, depends on the exposure weights.
It is not trivial to answer in advance whether the described optimization procedure leads to a higher or lower connectedness of the network. As argued by [@gandy17] and [@battiston12c], the stability of a financial network is not a monotonic function of its degrees. Financial contagion in a highly connected network will spread more evenly, but will reach nodes with a higher probability. In a sparse network, in contrast, the probability of contagion is generally less, but this positive effect can be outweighed by a higher severity of the financial loss due to the more uneven spread of contagion. Thus, the optimal network topology with respect to its completeness will depend on the financial conditions of the nodes (e.g. capital buffers) and the type of interlinkages (e.g. high exposures between relevant banks). In our case, the optimization leads to a denser network, see Figure \[fig:OLPnet\]B. In fact, the optimized bipartite network is fully connected; every bank is invested in every asset in the market. To examine in which way the links are reshuffled, we can rank the links according to their weight and check which nodes they connect. The results indicate a tendency to wire links of high exposure between less systemic banks compared to the original network. For example, in the optimized network the largest exposure is between two banks which belong to the ten least systemic banks, whereas in the original network the largest weight is on the edge between banks with the third and fourth highest DR values. This qualitative pattern can be observed for most of the largest weighted links. In that sense, the optimization produces a network that takes the systemic relevance of the banks into account. Table \[resulttable\] gives an overview of some basic network statistics. For exact definitions consult \[app\_e\].
To quantify how diversified the portfolios are before and after the optimization we compute the Herfindahl-Hirschman index (HHI) for every portfolio. See \[app\_d\] for details on the measures. The HHI is a measure of diversification, where values close to one indicate highly concentrated portfolios. Values close to zero indicate a high level of diversification. The average diversification increases after optimization, see Table \[resulttable\]. The results indicate that the number of small investments increases with the optimization.
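A sketch of the computation, assuming the HHI of a portfolio is the sum of its squared portfolio weight shares (the standard definition; the exact variant used here is specified in \[app\_d\]):

```python
import numpy as np

def herfindahl(V):
    """HHI of each bank's portfolio, computed on portfolio weight shares.
    V : (K, N) holdings matrix; returns an (N,) array of HHI values."""
    shares = V / V.sum(axis=0, keepdims=True)
    return (shares ** 2).sum(axis=0)
```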
                                                           [**original**]{}   [**optimized**]{}
------------------------------------ --------------------- ------------------ -------------------
[**Market depth**]{}                 Min.                  3.65E7
                                     1st Qu.               2.91E9
                                     Median                3.20E10
                                     Mean                  1.52E12
                                     3rd Qu.               2.59E11
                                     Max.                  3.34E13
[**DebtRank**]{}                     Min.                  0.001              0.003
                                     1st Qu.               0.018              0.010
                                     Median                0.046              0.020
                                     Mean                  0.067              0.029
                                     3rd Qu.               0.090              0.052
                                     Max.                  0.215              0.087
[**Average degree**]{}               weighted              3704.98E6          3616.37E6
                                     unweighted            46.41              48
[**Clustering coefficient**]{}       weighted              0.992              1
                                     unweighted            0.975              1
[**Nearest-neighbor degree**]{}      weighted              197.57E6           119.09E6
                                     unweighted            46.67              48
[**Spearman’s $\rho$**]{}                                   0.70
[**Kendall’s $\tau$**]{}                                    0.55
[**Herfindahl-Hirschman index**]{}                          0.49               0.43
[**Contagion probability**]{}        moderate fire sales   16.7$\%$           0$\%$
                                     extreme fire sales    100$\%$            0$\%$
------------------------------------ --------------------- ------------------ -------------------

: Results table[]{data-label="resulttable"}
SR of original vs. optimized network – a fire-sale simulation \[simulation\]
============================================================================
We now test the efficiency of the optimized network in a fire-sale simulation and compare it with the original network. The assets considered for fire sales are the banks’ bond holdings only. This is a major simplification of a real setting, where other liquid securities such as stocks or derivatives can also be sold. The aim of this simulation is not to present a realistic model of banks in financial distress, but to show the difference between both networks in the general case of fire-sale cascades in the market. Nothing prevents an extension of the model to include other assets if the corresponding data is available.
Fire-sale dynamics
------------------
The basic decision rules for banks in the fire-sale simulation are inspired by the approach of [@cont17] and [@greenwood15]. Let us consider a simple model for balance sheets. The bond portfolio value of bank $i$ at time step $t$ is denoted by $V_i(t)$. The value of all other assets of bank $i$ is denoted by the constant $O_i$. $E_i(t)$ is the equity of $i$. The balance sheet identity must hold for all $i$ and every $t$, $$\label{bsident}
V_i(t) + O_i \overset{!}{=} \text{Debt}_i(t) + E_i(t) \quad.$$ The leverage ratio of bank $i$ is defined as $$\label{lev}
L_i(t) = \frac{V_i(t) + O_i}{E_i(t)} \quad .$$ In this framework, the only possibility to delever is by selling government bonds. Let us introduce an exogenously specified benchmark leverage ratio $L_i'$, which bank $i$ must not exceed, i.e. $L_i(t) \le L_i'$ for all $t$[^3]. Should $L_i(t) > L_i'$, the bank needs to sell a fraction $\gamma_i$ of its bonds to fulfill the maximum leverage $L_i'$. Then a $\gamma_i \in [0,1]$ must be determined such that $$\label{leverage}
(1-\epsilon_i) L_i' = \frac{ \left(1- \gamma_i(t) \right) V_i(t) + O_i}{E_i(t)} \quad ,$$ with a small $\epsilon_i>0$ that takes into account self-triggered price effects emerging from reducing the balance sheet. Thus, every bank evaluates at every time step $$\label{gamma}
\gamma_i (t) = \begin{cases}
\min \left\{\frac{V_i(t) + O_i - (1-\epsilon_i) L_i' E_i(t)}{V_i(t)}, 1 \right\} &\text{if } L_i(t)>L_i' \\
0 & \text{if } L_i(t) \le L_i' \quad .
\end{cases}$$ If the whole portfolio must be liquidated ($\gamma_i(t)=1$), then there is no possibility left to delever. $\gamma_i$ is set equal to zero for all subsequent times. Note that represents a simplified case, where banks sell bonds proportionately and do not sell more liquid assets first. The sale of government bonds leads to a linear decrease in the price according to , $$p_k(t+1) = \max\left\{p_k (t) \left(1-\frac{\sum_i \gamma_i V_{ki}(t)}{D_k}\right), 0 \right\} \quad ,$$ and the new bond portfolio value is $$V_i(t+1) = \max\left\{ (1- \gamma_i) \sum_k V_{ki}(t) \left(1-\frac{\sum_i \gamma_i V_{ki}(t)}{D_k}\right) , 0 \right\} \quad,$$ where the maximum operators ensure non-negative prices and non-negative portfolio values. A bank $i$ experiences a price effect on the bonds for sale $\gamma_iV_i(t)$ as well as on the remaining bonds $(1-\gamma_i)V_i(t)$ in the portfolio. The total loss of bank $i$ is then given by $$C_i(t) = \sum_k V_{ki}(t) \frac{\sum_i \gamma_i V_{ki}(t)}{D_k} \quad .$$ This loss changes the equity at $t+1$ to $$E_i(t+1) = \max \left\{ E_i(t) - C_i(t), 0 \right\} \quad.$$ If $E_i(t)=0$, the bank defaults. In this case its portfolio is liquidated at the current price and the bank is excluded in further rounds. At $t+1$ all solvent banks examine again, whether the leverage condition holds and the dynamics is repeated. The algorithm stops, once no more selling takes place in the market.
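A compact sketch of these dynamics is given below. It is a simplified illustration under our own naming conventions, not the exact code behind the results that follow: banks delever simultaneously in each round, and the portfolio of a defaulted bank is liquidated in the round after its default rather than instantaneously.

```python
import numpy as np

def fire_sale(V, D, E, O, L_max, shocked, eps=0.01, max_steps=200):
    """Stylized fire-sale cascade on the overlapping portfolio network.
    V : (K, N) bond holdings, D : (K,) market depths, E : (N,) equity,
    O : (N,) other assets, L_max : (N,) leverage caps L_i',
    shocked : index of the bank whose initial default triggers the cascade."""
    V, E = V.astype(float).copy(), E.astype(float).copy()
    N = V.shape[1]
    alive = np.ones(N, dtype=bool)
    gamma = np.zeros(N)
    gamma[shocked], alive[shocked] = 1.0, False        # initial default: full liquidation
    for _ in range(max_steps):
        sold = V @ gamma                               # monetary volume sold per bond
        if not sold.any():
            break
        impact = np.minimum(sold / D, 1.0)             # relative price drop per bond
        loss = (V * impact[:, None]).sum(axis=0)       # loss C_i on the pre-sale portfolio
        V = V * (1.0 - gamma)[None, :] * (1.0 - impact)[:, None]
        E = np.maximum(E - loss, 0.0)
        gamma = np.zeros(N)
        newly_dead = alive & (E <= 0)
        gamma[newly_dead], alive[newly_dead] = 1.0, False   # defaulted banks liquidate
        port = V.sum(axis=0)
        lev = np.where(E > 0, (port + O) / np.where(E > 0, E, 1.0), np.inf)
        breach = alive & (lev > L_max) & (port > 0)
        target = (port + O - (1.0 - eps) * L_max * E) / np.where(port > 0, port, 1.0)
        gamma[breach] = np.clip(target[breach], 0.0, 1.0)  # deleveraging fraction gamma_i
    n_defaults = int((~alive).sum()) - 1               # excluding the initially shocked bank
    return V, E, n_defaults
```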
Results {#results}
-------
To induce a fire-sale scenario, one bank is selected and exogenously declared to be bankrupt. Its entire portfolio is sold, which triggers a price impact on the assets. This devaluates the portfolios of other banks and the fire-sale dynamics described above starts. We repeat it for every single bank and compare the results from the both networks. We define the situation, where at least one bank goes bankrupt as a response to the initial perturbation, as a *contagion event*. The *contagion probability* is the probability of observing a contagion event in a fire-sale simulation. We run two different scenarios, a moderate and an extreme fire-sale scenario. Motivated by Basel III [@bis14], we use a maximum leverage threshold $L_i' = 33$ for all $i$ in the moderate scenario. To induce extreme fire sales we require $L_i' = L_i(0)$ for all $i$. Here, banks want to delever to their initial leverage in case of shocks. This is a more drastic scenario, since every bank that experiences a price impact will violate the leverage constraint in the first time step and is forced to sell bonds. Obviously, this behavior is maybe more drastic than a realistic setting, where banks will typically not try to get below an initially declared leverage target by all means. The scenario is designed to investigate the resilience of both network types in extreme cases, where large fractions of the market are involved.
### Scenario 1 – moderate fire sales
A shows the histogram of the total bond market value after the fire-sale cascade in percent of the original market value. We see a clear difference for the original (red) and the optimized (green) network. In some cases the portfolio values are reduced by about $7\%$ in the original network. In the optimized network the portfolio values never decrease by more than $2\%$. The left panel of B shows boxplots of destroyed equity as a consequence of the initial default. Although there is only a minor impact on the equity levels in this scenario, one can still observe that the equity is less affected in the optimized network. The boxplots in the right panel of B show the average leverage ratios of the banks after the fire sales. The horizontal black line is the average leverage ratio before the simulation. Note that if a bank is close to default, its equity approaches zero, which may lead to high leverage ratios. The average leverage ratios in the optimized network after the simulation remain close to the initial levels. This shows that banks can delever successfully. In the original network, however, we see that for some simulations leverage ratios increase significantly, pointing to an increased vulnerability of the banks. The improved resilience in the optimized network is particularly visible in the lowered contagion probability, see . In the original network the contagion probability is $16.7\%$, in the optimized network not a single bank defaults.
\
### Scenario 2 – extreme fire sales
The impact of initial defaults on the market is much stronger in this scenario than in the moderate scenario, see C. In the original network on average only $3.1\%$ of the total portfolio value remains on the balance sheets after the fire sales. There is also a strong impact in the optimized network, however, their values are systematically higher, $6.7\%$. The impact on the banks’ equity D is higher in the optimized network compared to those in the original network. This might be surprising at first sight. However, when looking at the number of bankruptcies in , as a consequence of the initial perturbation, we find that the equity buffers are used very differently in the optimized network. The contagion probability in the original network is $100\%$, i.e. in every single simulation banks are defaulting. In contrast, in the optimized network there is not a single default happening, the contagion probability is zero. Thus, equity buffers fulfill their intended function much more efficiently in the optimized network than in the empirical network. The fire sales use up more equity in the optimized network, but this is done in a way such that the shocks are absorbed. In the original network, however, some banks hold too little equity and will default while others hold ‘excess’ equity, which absorptive capacity remains untouched. While banking regulation based on the Basel accords is stipulating fixed equity levels as a ratio of risk-weighted and total assets to all banks, this result shows that the buffer performance of equity is highly network-dependent. Improving the resilience of a financial network efficiently would mean that the centrality of the institutions position in the network is taken into account when defining capital requirements. This would relax capital requirements for less systemically risky banks. This will incentivize banks to become less systemic in a given financial network. No such incentives are present in the current regulation scheme. The right panel of D confirms that the optimized network is much more resilient than the original. Average leverage ratios increase sharply in the original network. In the optimized network the leverage ratios, even after extreme fire sales, remain similar to the initial levels[^4].
It could be argued that restricting the simulation to bond holdings only will artificially increase financial contagion in the market since other liquid assets cannot be sold. Note however, that this is not necessarily the case. Recall that for some banks government bonds do only account for a small fraction of the balance sheet. For these banks a devaluation of bonds has only a minor impact on equity and leverage.
Discussion \[discussion\]
=========================
We quantified the systemic risk arising through overlapping portfolios in the European government bond market. We then proposed a general network optimization problem, which is formulated as a standard quadratically constrained quadratic programming problem. Network optimization allows us to compute the optimal systemic-risk-efficient asset allocations. When looking for the optimal allocations, we control for the expected return and the standard deviation of the individual portfolios, such that the principal investment strategies of the banks are untouched. We then compared the resilience of the original financial network with the optimized network.
We showed that systemic risk can be reduced substantially, by more than $50\%$, for sovereign exposures between important European banks without changing the risk profiles of the banks’ portfolios. A simple fire-sale simulation confirms that the resilience is indeed increased significantly by the optimization: in case of financial distress, leverage levels and default probabilities are much lower in the optimized network than in the original network. The essence of the approach is that in the optimally rearranged network the equity values absorb economic shocks much more efficiently.
The knowledge of the optimal network topology could be useful to derive optimal benchmark networks for regulatory purposes. For example, the optimal network could serve as a benchmark to monitor, whether empirical markets are diverging (converging) from (to) the optimum. It could also be used as a benchmark in testing various incentive schemes to reduce systemic risk. For example, the effect of different measures like systemic risk taxes can be studied with agent-based models. The benchmark model then can be used to calculate the effectiveness of the applied measures.
The method proposed here can be extended to markets other than government debt. In particular, for assets traded mostly on standardized exchanges such as stocks, reliable liquidity estimates can be obtained. By extending the model to other asset classes and/or to financial institutions other than banks, the ‘curse of dimensionality’ must be considered. Every additional asset or institution increases the number of variables by $N$ or $K$, respectively, and increases the computational cost of the optimization disproportionately. A practically viable remedy could be to exclude less risky institutions in the optimization or to segment markets according to different asset categories and to apply the approach to every market segment individually.
The proposed optimization uses constraints on the standard mean-variance characteristics of the portfolios. However, this is only one way of defining economically reasonable constraints and other constraints can be considered that are more appropriate for specific applications. For instance, risk-weights for each asset can be derived and a condition imposed such that risk-weighted investments remain below the total capital level. By using asset haircuts, a liquidity-based constraint could be introduced, which would ensure that the investments in a portfolio do not decrease below a certain liquidity threshold. Other constraints could be designed to limit the concentration in the portfolios, by defining a maximum proportion of assets per portfolio.
Another interesting extension of the proposed optimization problem would be to optimize financial networks that represent direct exposures, such as interbank liability networks. Here, the mean-variance condition needs to be substituted by constraints on the default risk of the individual banks.
References {#references .unnumbered}
==========
DebtRank \[dr\]
===============
The DR introduced by [@battiston12] measures the systemic relevance of banks in a financial network where links between the institutions represent interbank investments. These interbank relations can be represented in a matrix $A$ with elements $A_{ij}$ denoting the exposure in monetary units of $j$ toward $i$ (e.g. interbank liabilities from $j$ to $i$). Let $E_i$ be the (Tier 1) capital of $i$. A bank $i$ defaults, if $E_i \le 0$. No recovery is assumed in the short run, and therefore, bank $j$ faces a loss of $A_{ij}$, if bank $i$ defaults. In that case, bank $j$ defaults if $A_{ij} > E_j$. The impact matrix $W$ contains elements representing the direct impact of bank $i$ on $j$ in the case of a default of $i$ defined by $$\label{DRimporig}
W_{ij} = \min\left\{1, \frac{A_{ij}}{E_j} \right\} \quad.$$ The relative economic value of node $j$ is defined as $$\label{DRvorig}
v_j = \frac{\sum_{i} A_{ij} } {\sum_{i} \sum_{j} A_{ij} } \quad.$$ Clearly, we have $\sum_j v_j = 1$. The relative value of the impact of $i$ on its neighbors is given by $I_i = \sum_j W_{ij} v_j$. In order to take effects on nodes at distance larger $1$ into account, a PageRank alike feedback centrality measure could be defined as $$I_i = \sum_j W_{ij} v_j + \alpha \sum_{j} W_{ij} I_j \quad,$$ where $\alpha < 1$ is a dampening factor. The problem with this definition in a financial context is that the impact can exceed one in the presence of cycles. [@battiston12] suggest a different method which limits the maximum number of reverberations to one. Consider two state variables for each node, $h_i(t)$ and $s_i(t)$. $h_i(t)$ is a continuous variable between zero and one and $s_i(t)$ can take on three different states, undistressed, distressed and inactive, i.e. $s_i(t) \in \{U,D,I\}$. Let $S$ denote the set of banks which are in distress at time $t=1$ and $\psi \in [0,1]$ be the initial level of distress where $\psi=1$ means default. Then the initial conditions are given by $$h_i(1) = \begin{cases}
\psi, & \forall i \in S\\
0, & \forall i \notin S
\end{cases}
\qquad
\text{and}
\qquad
s_i(1) = \begin{cases}
D, & \forall i \in S\\
U, & \forall i \notin S \quad .
\end{cases}$$ The dynamics for $t\ge 2$ is then characterized by $$h_i(t) = \min\left\{1, \quad h_i(t-1) + \sum_{j|s_j(t-1) = D} W_{ji} h_j(t-1)\right\} \quad,$$ and $$s_i(t) = \begin{cases}
D, & h_i(t) > 0; \quad s_i(t-1) \neq I\\
I, & s_i(t-1) = D\\
s_i(t-1), & \text{else} \quad.
\end{cases}$$ The DR is then defined as $R_S=\sum_j h_j (T)v_j - \sum_j h_j (1) v_j$, which is the total induced financial distress (excluding the initial distress) in the network given the default of a set of nodes $S$. By taking $S=\{i\}$, the systemic relevance of a single bank for the overall network can be measured.
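For illustration, the DR dynamics described above can be implemented in a few lines; the following Python sketch is our own transcription of the update rules (the array orientation, function and variable names are illustrative choices, not taken from [@battiston12]).

``` python
import numpy as np

def debt_rank(A, E, S, psi=1.0):
    """DebtRank of an initially distressed set S (sketch of the dynamics above).

    A[i, j] : exposure of bank j towards bank i (interbank liability from j to i)
    E[j]    : (Tier 1) capital of bank j
    S       : indices of the banks that are distressed at t = 1
    psi     : initial level of distress (psi = 1 means default)
    """
    A = np.asarray(A, dtype=float)
    E = np.asarray(E, dtype=float)
    N = len(E)
    W = np.minimum(1.0, A / E[np.newaxis, :])        # impact matrix, Eq. (DRimporig)
    v = A.sum(axis=0) / A.sum()                      # relative economic value, Eq. (DRvorig)

    h = np.zeros(N)                                  # distress levels h_i(1)
    s = np.full(N, 'U', dtype=object)                # states: U, D or I
    h[list(S)] = psi
    s[list(S)] = 'D'
    h1 = h.copy()

    # Each node propagates distress exactly once (D -> I), so the loop terminates.
    while np.any(s == 'D'):
        d = (s == 'D')
        h_new = np.minimum(1.0, h + W[d, :].T @ h[d])
        s_new = s.copy()
        s_new[(h_new > 0) & (s != 'I')] = 'D'
        s_new[d] = 'I'
        h, s = h_new, s_new

    return h @ v - h1 @ v       # R_S = sum_j h_j(T) v_j - sum_j h_j(1) v_j
```

Calling `debt_rank(A, E, S=[i])` then returns the systemic relevance of a single bank $i$.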
QCQP Parameters \[app.qcqp\]
============================
In order to satisfy the equivalence between Problem \[optim\] and Problem \[qcqp\], the rows of the $KN \times KN$ matrix $P_0$ are specified as follows: $$\begin{aligned}
\setlength{\arraycolsep}{1.5pt}
P_0^{r(1)} &= \begin{bmatrix}
\frac{1}{D_1}\frac{\tilde{v}_1}{E_1}, & \smash{\underbrace{0, 0, ..., 0,}_{(K-1)-\text{times}} } &\frac{1}{D_1}\frac{\tilde{v}_1}{E_1}, &\smash{\underbrace{0, 0, ..., 0,}_{(K-1)-\text{times}}} &...
\end{bmatrix}
\\
\\
P_0^{r(2)} &= \begin{bmatrix}
0,& \frac{1}{D_2}\frac{\tilde{v}_1}{E_1}, & \smash{\underbrace{0, 0, ..., 0,}_{(K-2)-\text{times}} } &0, &\frac{1}{D_2}\frac{\tilde{v}_1}{E_1}, &\smash{\underbrace{0, 0, ..., 0,}_{(K-2)-\text{times}}} &...
\end{bmatrix}
\\
&\vdots
\\
P_0^{r(K+1)} &= \begin{bmatrix}
\frac{1}{D_1}\frac{\tilde{v}_2}{E_2}, & \smash{\underbrace{0, 0, ..., 0,}_{(K-1)-\text{times}} } &\frac{1}{D_1}\frac{\tilde{v}_2}{E_2}, &\smash{\underbrace{0, 0, ..., 0,}_{(K-1)-\text{times}}} &...
\end{bmatrix}
\\
\\
P_0^{r(K+2)} &= \begin{bmatrix}
0,& \frac{1}{D_2}\frac{\tilde{v}_2}{E_2}, & \smash{\underbrace{0, 0, ..., 0,}_{(K-2)-\text{times}} } &0, &\frac{1}{D_2}\frac{\tilde{v}_2}{E_2}, &\smash{\underbrace{0, 0, ..., 0,}_{(K-2)-\text{times}}} &...
\end{bmatrix}
\\
&\vdots
\\
P_0^{r(K+N)} &= \begin{bmatrix}
\smash{\underbrace{0, 0, ..., 0,}_{(K-1)-\text{times}} } &\frac{1}{D_K}\frac{\tilde{v}_N}{E_N}, \smash{\underbrace{0, 0, ..., 0,}_{(K-1)-\text{times}}} &\frac{1}{D_K}\frac{\tilde{v}_N}{E_N}, &...
\end{bmatrix}.\end{aligned}$$ $\{P_i\}_{i=1}^N$ is a sequence of $KN \times KN$ block diagonal matrices of the following form: $$P_1 =
\left[
\begin{array}{c|c|c|c}
Q & 0 &... &0 \\
\hline
0 & 0 &... &0\\
\hline
\vdots & \vdots &\ddots &\vdots\\
\hline
0 & 0 &... &0\\
\end{array}
\right],
\quad
P_2 =
\left[
\begin{array}{c|c|c|c}
0 & 0 &... &0 \\
\hline
0 & Q &... &0\\
\hline
\vdots & \vdots &\ddots &\vdots\\
\hline
0 & 0 &... &0\\
\end{array}
\right],
\quad ..., \quad$$
$$P_N =
\left[
\begin{array}{c|c|c|c}
0 & 0 &... &0 \\
\hline
0 & 0 &... &0\\
\hline
\vdots & \vdots &\ddots &\vdots\\
\hline
0 & 0 &... &Q\\
\end{array}
\right],$$
where $Q$ is the $K \times K$ covariance matrix of assets. Furthermore, let $r= (r_1, ..., r_K)^\top$, then $A_1$ is an $N \times KN$ matrix given by $$A_1 =
\left[
\begin{array}{cccc}
-r^\top & 0 &... &0 \\
0 & -r^\top &... &0\\
\vdots & \vdots &\ddots &\vdots\\
0 & 0 &... &-r^\top\\
\end{array}
\right]$$ and $c_1 = (\tilde{r}_1, ..., \tilde{r}_N)^\top.$ By denoting the $K$-dimensional vector of ones as $\mathbf{1}_K$ and the $K \times K$ identity matrix by $\mathbf{I}_K$, we can write $A_2$ as the $(K+N) \times KN$ block matrix
$$A_2 =
\left[
\begin{array}{ccccc}
A^\prime \\
A^{\prime\prime}\\
\end{array}
\right],$$
with the $K \times KN$ matrix
$$A^\prime =
\left[
\begin{array}{ccccc}
\mathbf{I}_K & \mathbf{I}_K &... & \mathbf{I}_K \\
\end{array}
\right]$$
and the $N \times KN$ matrix $$A^{\prime\prime} =
\left[
\begin{array}{ccccc}
\mathbf{1}^\top_K & 0 &... &0 \\
0 & \mathbf{1}^\top_K &... &0\\
\vdots &\vdots &\ddots &\vdots\\
0 & 0 &... &\mathbf{1}^\top_K\\
\end{array}
\right].$$ Finally, $c_2 = -(S_1, ..., S_K, V_1, ..., V_N)^\top.$
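As a concrete illustration of these definitions, the following Python sketch assembles the matrices from given problem data. The ordering of the decision variables (the $K$ asset positions of each bank, stacked bank by bank), the interpretation of $\tilde{v}_i$ and of the constraint rows, and all variable names are our own reading of the notation above and should be checked against Problem \[qcqp\].

``` python
import numpy as np

def qcqp_parameters(D, v_tilde, E, Q, r, r_tilde, S, V):
    """Assemble P_0, {P_i}, A_1, c_1, A_2, c_2 from the definitions above (sketch).

    D (K,), v_tilde (N,), E (N,), Q (K, K), r (K,), r_tilde (N,), S (K,), V (N,)
    are array-like; the unknowns are ordered as K asset positions per bank,
    stacked bank by bank (assumed).
    """
    r = np.asarray(r, dtype=float)
    Q = np.asarray(Q, dtype=float)
    K, N = len(D), len(E)

    # P_0: row (n*K + k) holds (1/D_k)(v_tilde_n / E_n) in columns k, K+k, 2K+k, ...
    P0 = np.zeros((K * N, K * N))
    for n in range(N):
        for k in range(K):
            P0[n * K + k, k::K] = v_tilde[n] / (D[k] * E[n])

    # P_i: the covariance block Q sits in the i-th diagonal block, zeros elsewhere
    P = [np.kron(np.diag(np.eye(N)[i]), Q) for i in range(N)]

    # A_1: block diagonal with -r^T, one row per bank; c_1 = (r_tilde_1, ..., r_tilde_N)
    A1 = np.kron(np.eye(N), -r.reshape(1, K))
    c1 = np.asarray(r_tilde, dtype=float)

    # A_2: K rows [I_K ... I_K] (per-asset totals) stacked on N rows of ones (per-bank totals)
    A_prime = np.tile(np.eye(K), (1, N))
    A_dprime = np.kron(np.eye(N), np.ones((1, K)))
    A2 = np.vstack([A_prime, A_dprime])
    c2 = -np.concatenate([S, V]).astype(float)

    return P0, P, A1, c1, A2, c2
```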
Impact of market depth scaling parameter $c$ \[app\_c\]
=======================================================
The parameter $c$ scales the market depth of the whole market. An increase (decrease) in $c$ increases (decreases) the level of liquidity for all securities by the same factor. As the strong assumption of a constant market depth is imposed, $c$ allows one to adjust the systemic risk analysis to different liquidity conditions. For example, $c$ close to zero could be used to approximate market conditions in times of financial distress in the entire market. In the absence of extreme market events, the parameter should be close to one half [@cont17]. Figure \[fig:MDscale\] shows that systemic risk is inversely related to the level of liquidity in the market. We can see that for extreme liquidity conditions, i.e. regions of low and high $c$, the systemic risk of both networks converges. This indicates that systemic risk can hardly be reduced in cases of extreme (il)liquidity.
![Impact of market depth scaling factor on systemic risk. The plot shows systemic risk as a function of the market depth scaling parameter $c$. The squares at the lines indicate the value used for the actual analysis $c=0.4$.[]{data-label="fig:MDscale"}](MDscale){width="50.00000%"}
Network measures \[app\_e\]
===========================
The degree of a node is the number of its links (neighbors). The weighted degree (also called strength) is the sum of the weights of all links attached to a node. Since the overlapping portfolio network is symmetric, we can abstract from the direction of the links without loss of information. Let $w$ be the weighted adjacency matrix and $w^{\prime}$ the unweighted adjacency matrix, i.e. $w_{ij}^{\prime} = 1$ if there is a positive weight between $i$ and $j$ and $w_{ij}^{\prime} = 0$ otherwise.
*Degree.* The unweighted degree of node $i$ is $d_i^{u} = \sum_{j}^N w_{ij}^{\prime}$, and the *average unweighted degree* is given by $d^{u} = \frac{1}{N} \sum_{i}^N d_i^u$. Similarly, the weighted degree (strength) of node $i$ is defined as $d_i^{w} = \sum_{j}^N w_{ij}$, and the *weighted average degree* is $d^{w} = \frac{1}{N}\sum_{i}^N d_i^w$.
*Clustering coefficient.* The clustering coefficient measures the fraction of connected triples in the network that are closed into triangles. The *unweighted clustering coefficient* is defined as $$C^u = \frac{\text{number of triangles}\times 3}{\text{number of connected triples}}$$ and the weighted clustering coefficient for node $i$ can be defined as [@barrat04] $$C_i^w = \frac{1}{2 d_i^{w} (d_i^{u} - 1)} \sum_{j,h} (w_{ij} + w_{ih}) w_{ij}^{\prime} w_{ih}^{\prime} w_{jh}^{\prime} \quad .$$ The *weighted clustering coefficient* of the network is then $C^w = \frac{1}{N} \sum_i^N C_i^w$, which accounts for the relative weights of the links forming closed triplets.
*Average nearest-neighbors degree.* The average nearest-neighbors degree indicates how closely related degrees of connected nodes are. The unweighted average nearest-neighbors degree of node $i$ can be expressed as $$d_{nn,i}^u = \sum_{(d^u)^\prime} (d^u)^\prime P\left((d^u)^\prime|d^u\right) \quad,$$ [@pastor01] and the *unweighted average nearest-neighbors degree* is $d_{nn}^u = \frac{1}{N} \sum_{i}^{N} d_{nn,i}^u$. The weighted average nearest-neighbors degree of node $i$ is $$d_{nn,i}^w = \frac{1}{d_i^w} \sum_{j}^{N} w_{ij} d_j^u \quad,$$ [@barrat04] and the *weighted average nearest-neighbors degree* is $d_{nn}^w = \frac{1}{N} \sum_{i}^{N} d_{nn,i}^w$.
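The following Python sketch evaluates these measures directly from a weighted adjacency matrix; it is a plain transcription of the formulas above (assuming a symmetric matrix with zero diagonal) rather than the code used for the reported analysis.

``` python
import numpy as np

def network_measures(w):
    """Average degrees, Barrat weighted clustering and weighted ANND (sketch).

    w : (N, N) symmetric weighted adjacency matrix with zero diagonal.
    """
    w = np.asarray(w, dtype=float)
    N = w.shape[0]
    wp = (w > 0).astype(float)                  # unweighted adjacency w'
    d_u = wp.sum(axis=1)                        # unweighted degrees
    d_w = w.sum(axis=1)                         # weighted degrees (strengths)

    # Weighted clustering coefficient of each node [barrat04]
    C_w = np.zeros(N)
    for i in range(N):
        if d_u[i] > 1:
            tri = sum((w[i, j] + w[i, h]) * wp[i, j] * wp[i, h] * wp[j, h]
                      for j in range(N) for h in range(N))
            C_w[i] = tri / (2.0 * d_w[i] * (d_u[i] - 1))

    # Weighted average nearest-neighbours degree of each node
    d_nn_w = np.divide(w @ d_u, d_w, out=np.zeros(N), where=d_w > 0)

    return d_u.mean(), d_w.mean(), C_w.mean(), d_nn_w.mean()
```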
Measuring concentration \[app\_d\]
==================================
The Herfindahl-Hirschman index (HHI) is used to measure the concentration of a portfolio and is defined as $$H_i = \sum_{k}^K \left(\frac{V_{ki}}{V_i} \right)^2 \quad.$$ The index ranges from $1/K$ for a portfolio that is balanced equally over the $K$ assets to one for a portfolio concentrated in a single asset. Note the similarity to the definition of a sample variance.
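A minimal sketch of the computation, with our own function name, is:

``` python
import numpy as np

def herfindahl(V_i):
    """HHI of one portfolio; V_i holds the K investment volumes of bank i."""
    shares = np.asarray(V_i, dtype=float) / np.sum(V_i)
    return float(np.sum(shares ** 2))   # 1/K for a balanced portfolio, 1 if fully concentrated
```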
[^1]: <http://www.eba.europa.eu/risk-analysis-and-data/eu-wide-stress-testing/2016>
[^2]: Note that the scaling parameter $c$ affects the liquidity of the market and therefore also the exposure between the banks. \[app\_c\] discusses how the choice of $c$ affects systemic risk in the European government bond market and its implications for the optimization.
[^3]: The condition could be imposed by a regulatory authority or by the bank itself as an internal business guideline.
[^4]: The model was run with three values for $\epsilon=0.01,0.025$ and $0.05$. Results are not sensitive to the different parameter values. Results are only shown for $\epsilon = 0.025$.
---
abstract: 'Periodically-driven flows are known to generate non-zero, time-averaged fluxes of heat or solute species, due to the interactions of out-of-phase velocity and temperature/concentration fields, respectively. Herein, we investigate such transport (a form of the well-known Taylor–Aris dispersion) in the gap between two parallel plates, one of which oscillates vertically, generating a time-periodic squeeze flow of either a newtonian or Maxwellian fluid. Using the method of multiple time-scale homogenization, the mass/heat balance equation describing transport in this flow is reduced to a one-dimensional advection–diffusion–reaction equation. This result indicates three effective mechanisms in the mass/heat transfer in the system: an effective diffusion that spreads mass/heat along the concentration/temperature gradient, an effective advective flux, and an effective reaction that releases or absorbs mass/heat - in the time-averaged frame. Our results demonstrate that there exist resonant modes under which the velocity peaks when the dimensionless plate oscillation frequency (embodied by the Womersley number, the ratio of the transient inertia to viscous forces) approaches specific values. As a result, transport in this flow is significantly influenced by the dimensionless frequency. On the one hand, the effective, time-averaged dispersion coefficient is always larger than the molecular diffusivity, and is sharply enhanced near resonance. The interaction between fluid elasticity and the oscillatory forcing enhances the efficiency of transport in the system. On the other hand, the identified effective advection and reaction mechanisms may transport mass/heat from regions of high concentration/temperature to those of low concentration/temperature, or vice versa, depending on the value of dimensionless frequency. Ultimately, it is shown that the oscillatory squeeze flow can either enhance or diminish transport, depending on the interplay of these three effective (homogenized) mechanisms.'
author:
- Rui Yang
- 'Ivan C. Christov'
- 'Ian M. Griffiths'
- 'Guy Z. Ramon'
bibliography:
- 'Dispersion4.bib'
title: 'Time-averaged transport in oscillatory squeeze flow of a viscoelastic fluid'
---
[^1]
Introduction
============
The spreading of a scalar in a flow due to the combined action of diffusion, advection and reaction is widely known as Taylor–Aris (shear) dispersion [@taylor_dispersion_1953; @aris_dispersion_1960; @sankarasubramanian1973unsteady; @PhysRevFluids.4.034501; @Young19991]. Shear dispersion has been studied extensively, in particular, for flows that are oscillatory in time [@chatwin_longitudinal_1975-1; @watson_diffusion_1983; @joshi_experimental_1983; @vedel_hovad_bruus_2014; @gill1971dispersion; @smith1982contaminant], due to their relevance to transport in arteries [@gentile_transport_2008], pulmonary airways [@fredberg_augmented_1980; @eckmann_grotberg_1988; @Grotberg1994], the cerebrospinal fluid [@PhysRevFluids.5.043102], through bones [@SCHMIDT20052337], liquid membranes [@Leighton1988] and wave boundary layers [@mei_dispersion_1994], actuation of oscillatory flow via electro-osmotic forces [@ramon_solute_2011; @Ramon2018], and so on. The analogous heat-transfer problem – generating a considerable augmentation of the heat flux through flow oscillations – has also been observed (see, for example, [@kurzweg_heat_1984; @kurzweg_enhanced_1985-1; @lambert_heat_2009]).
In what follows, we consider an oscillatory squeeze flow (OSF), in which a viscoelastic fluid is driven periodically by the motion of one of the confining, parallel planes (see Fig. \[fig:schematic\] for a schematic of the studied configuration). This type of setting exists in some devices using magnetorheological fluids [@li_benchmark_2018] or electrorheological fluids [@wingstrand_oscillatory_2016-1], where the fluid is squeezed periodically under a variable magnetic or electric field as a means of achieving continuously variable control of mechanical vibrations; such flows are also envisioned as a mode of achieving control over transport in confined systems.
![Schematic of the oscillatory squeeze flow: two parallel discs of radius $\hat{R}$, with the upper disc at $\hat{z}=\hat{h}(\hat{t})=\hat{h}_{0}+\hat{a} \cos (\hat{\omega} \hat{t})$ oscillating with amplitude $\hat{a}$ about the mean gap $\hat{h}_0$; the coordinates and velocity components are $(\hat{r},\hat{u})$ and $(\hat{z},\hat{v})$.[]{data-label="fig:schematic"}](Figs/Fig1_schematic_of_problem_2)
An early theoretical analysis of OSF was conducted by Phan-Thien and Tanner [@phan-thien_small_1980; @phan-thien_viscoelastic_1983], who provided analytical solutions for the velocity profile and the normal force required to drive the plate. They proposed that the dynamic properties of polymeric liquids could be measured using a plastometer operating in the OSF mode. Since then, OSF equipment has been proposed and widely used in the rheology community for evaluating the processability of polymer melts, and for possible macromolecular characterization [@field_experimental_1996; @engmann_squeeze_2005]. Furthermore, its ability to augment transport of mass and/or heat serves as a motivation for understanding the characteristics of this system.
Studies by @phan-thien_small_1980 and @bell_oscillatory_2006 demonstrated that a periodic flow in the streamwise direction (perpendicular to the oscillation direction of the plates) of an OSF exhibits a time dependence similar to an oscillatory pipe flow. Hence, there is reason to believe that shear-induced dispersion should also exist in the OSF, and dispersion would augment the mixing of the solute and the bulk flow along the streamwise direction. In this vein, @stone_dispersion_1999 investigated the shear dispersion of a solute in a steady radial flow between two parallel plates, and discovered a radially-dependent effective diffusivity in the streamwise direction. @creech_dispersive_2001 proposed a model to analyze dispersion in a specific case of the OSF, which occurs in the tear layer of the eye, sandwiched between the cornea and a soft contact lens. They found that the dispersion coefficient is orders of magnitude higher than the molecular diffusivity, permitting mixing between the cornea and a soft contact lens. Specifically, the dispersion-augmented mixing is much faster than that which can be achieved by diffusion alone. However, this model assumes that the radial velocity is independent of the streamwise direction, which limits the applicability of the study. To the best of our knowledge, Taylor–Aris dispersion in a viscoelastic OSF has not been studied, and this is the goal of the present work.
In this paper, in view of the mathematical and physical analogy between mass diffusion and heat conduction, we investigate the dispersion in both contexts via a unified approach. We consider a domain confined by two parallel plates that enclose a Newtonian or a viscoelastic Maxwell fluid. One of the plates oscillates vertically in a sinusoidal fashion, squeezing the enclosed fluid. We determine the fluid velocity field and reduce the mass/heat balance equation into a one-dimensional advection–diffusion–reaction equation via the asymptotic method of multiple-scales homogenization. Interestingly, our results show that the OSF can either enhance or diminish mass/heat transfer, depending on the interplay of the effective diffusion, advection and reaction mechanisms. To this end, this paper is organized as follows: We begin by formulating the problem of the viscoelastic OSF between two plates. First, we obtain an analytical solution for the velocity profile in \[sec:Velocity-field\]. Then, in \[sec:Diffusion\], we combine the latter with the effective advection–diffusion-reaction equation for transport in the OSF. We present the results and discussion related to the velocity field in \[sec:dis\_velocity\] and the transport characteristics in \[sec:dis\_diffusion\]. Concluding remarks are given in \[sec:Conclusion\]. Appendices are included with a detailed derivation of the homogenized transport equation, as well as a calculation of the power required to drive the system.
Model formulation\[sec:Problem-description\]
============================================
We consider the flow and scalar transport of a viscoelastic fluid confined between two concentric discs of the same radius $\hat{R}$, separated by a distance $\hat{h}_{0}$ (see schematic in Fig. \[fig:schematic\]). The upper disc oscillates sinusoidally with amplitude $\hat{a}$ about a mean position $\hat{h}_0$, such that its position in time is $\hat{z} = \hat{h}(\hat{t})$, and the distance between the two discs is $$\hat{h}(\hat{t})=\hat{h}_{0}+\hat{a}\cos(\hat{\omega} \hat{t}),$$ where $\hat{\omega}$ is the angular frequency, and $\hat{t}$ is time. The vertical velocity of the top disc is therefore $$\hat{h}'(\hat{t})=-\hat{a}\hat{\omega}\sin(\hat{\omega} \hat{t})=\Re\left\{ \hat{a}\hat{\omega}\mathrm{i}\mathrm{e}^{\mathrm{i}\hat{\omega} \hat{t}}\right\},
\label{eq:velocity of plate}$$ where $\Re\{\,\cdot\,\}$ denotes the real part of a complex quantity, and primes denote differentiation.
Velocity field {#sec:Velocity-field}
--------------
### Governing equations
We begin with the equations of motion for an incompressible flow. The continuity equation is $$\frac{1}{\hat{r}}\frac{\partial}{\partial \hat{r}}\left(\hat{r}\hat{u}\right)+\frac{\partial \hat{v}}{\partial \hat{z}}=0,
\label{eq:continuity}$$ where the two-dimensional velocity field is $\hat{\bm{u}}=\hat{u}\bm{e}_r+\hat{v}{\bm{e}_z}$, with $\bm{e}_r$ and ${\bm{e}_z}$ the unit vectors in the $\hat{r}$ and $\hat{z}$ directions, respectively (see Fig. \[fig:schematic\]), and the momentum equation, neglecting body forces, is $$\hat{\rho}\frac{\mathrm{D}\hat{\bm{u}}}{\mathrm{D} \hat{t}}=-\nabla \hat{p}+\nabla\cdot\hat{\boldsymbol{\tau}},
\label{eq:momentum_0}$$ where $\hat{\boldsymbol{\tau}}$ is the (deviatoric) viscous stress tensor, $\hat{\rho}$ is the density of the fluid, and $\hat{p}$ is the pressure. The constitutive equation for the incompressible upper-convected Maxwell fluid (see, for example, [@phan-thien_viscoelastic_1983]) is $$\hat{\boldsymbol{\tau}} + \hat{\lambda}_0\left(\frac{\mathrm{D}\hat{\boldsymbol{\tau}}}{\mathrm{D}\hat{t}}-\left[\hat{\boldsymbol{\tau}}\cdot\left(\nabla\hat{\bm{u}}\right)+\left(\nabla\hat{\bm{u}}\right)^{T}\cdot\hat{\boldsymbol{\tau}}\right]\right)=\hat{\mu}\left(\nabla\hat{\bm{u}}+\nabla\hat{\bm{u}}^{T}\right),
\label{eq:maxwell}$$ where $\hat{\lambda}_0$ is the viscoelastic relaxation time, and $\hat{\mu}$ is the usual, Newtonian, dynamic shear viscosity.
In what follows we shall assume that the amplitude of the disc oscillations is small, i.e., the dimensionless displacement amplitude of the upper plate $\delta\equiv{\hat{a}}/{\hat{h}_{0}}\ll1$, while the frequency is unrestricted. With this assumption, combining Eq. (\[eq:maxwell\]) and Eq. (\[eq:momentum\_0\]) results in (see Appendix \[app:DDt\] for derivation)
$$\begin{aligned}
\hat{\lambda}_0\frac{\partial^{2}\hat{u}}{\partial \hat{t}^{2}} + \frac{\hat{\lambda}_0}{\hat{\rho}}\frac{\partial^{2}\hat{p}}{\partial \hat{t}\partial \hat{r}}+\frac{\partial \hat{u}}{\partial \hat{t}}&=-\frac{1}{\hat{\rho}}\frac{\partial \hat{p}}{\partial \hat{r}}+\frac{\hat{\mu}}{\hat{\rho}}\left(\frac{\partial^{2}\hat{u}}{\partial \hat{r}^{2}}+\frac{1}{\hat{r}}\frac{\partial \hat{u}}{\partial \hat{r}}+\frac{\partial^{2}\hat{u}}{\partial \hat{z}^{2}}-\frac{\hat{u}}{\hat{r}^{2}}\right),
\label{eq:momentum1_dim}
\\
\hat{\lambda}_0\frac{\partial^{2}\hat{v}}{\partial \hat{t}^{2}} + \frac{\hat{\lambda}_0}{\hat{\rho}}\frac{\partial^{2}\hat{p}}{\partial \hat{t}\partial \hat{z}}+\frac{\partial \hat{v}}{\partial \hat{t}}&=-\frac{1}{\hat{\rho}}\frac{\partial \hat{p}}{\partial \hat{z}}+\frac{\hat{\mu}}{\hat{\rho}}\left(\frac{\partial^{2}\hat{v}}{\partial \hat{r}^{2}}+\frac{1}{\hat{r}}\frac{\partial \hat{v}}{\partial \hat{r}}+\frac{\partial^{2}\hat{v}}{\partial \hat{z}^{2}}\right).
\label{eq:momentum2_dim}\end{aligned}$$
We impose no slip and no penetration at the solid walls, which correspond to $$\hat{u}(0)=0,\qquad \hat{u}(\hat{h})=0,\qquad \hat{v}(\hat{h})=\Re\left\{ \hat{a}\hat{\omega}\mathrm{i}\mathrm{e}^{\mathrm{i}\hat{\omega} \hat{t}}\right\} ,\qquad \hat{v}(0)=0.\label{eq:original BC_1palte0}$$ In principle, we must also specify conditions at $\hat{r}=\hat{R}$ to close the problem. We discuss this in more detail in \[Section:Solution velocity field\] below.
### Non-dimensionalization
We posit lubrication-type scalings with the pressure made dimensionless using the viscous stress scale: $$\hat{u}= \hat{U}{u},\quad \hat{v}=\epsilon \hat{U}{v},\quad \hat{t}= \hat{T}{t},\quad \hat{r}= \hat{R}{r},\quad \hat{z}=\epsilon \hat{R}{z},\quad \hat{p}=\frac{\hat{\mu} \hat{U}}{\epsilon^{2}\hat{R}}{p},\quad \hat{h}=\hat{h}_0 h.
\label{eq:nond_notation}$$ Here, $\epsilon\equiv \hat{h}_0/\hat{R}$. The vertical velocity scale $\epsilon \hat{U}$ is set by the oscillations of the wall, hence we have $\hat{U}=\hat{a}\hat{\omega}/\epsilon$. The natural time scale of the problem is the inverse of the oscillation frequency of the wall, i.e., $\hat{T}$ = 1/$\hat{\omega}$. The dimensionless versions of the momentum equations and are then
$$\begin{aligned}
{\textrm{De}}\,\alpha^{2}\frac{\partial^{2}{u}}{\partial{t}^{2}} + {\textrm{De}}\frac{\partial^{2}{p}}{\partial{t}\partial{r}} + \alpha^{2}\frac{\partial{u}}{\partial{t}}&=-\frac{\partial{p}}{\partial{r}}+\epsilon^{2}\frac{\partial^{2}{u}}{\partial{r}^{2}}+\epsilon^{2}\frac{1}{{r}}\frac{\partial{u}}{\partial{r}}+\frac{\partial^{2}{u}}{\partial{z}^{2}}-\epsilon^{2}\frac{{u}}{{r}^{2}},\label{eq:dimless_cont(a)}\\
\epsilon^{2}{\textrm{De}}\,\alpha^{2}\frac{\partial^{2}{v}}{\partial{t}^{2}} + {\textrm{De}}\frac{\partial^{2}{p}}{\partial{t}\partial{z}}+\epsilon^{2}\alpha^{2}\frac{\partial{v}}{\partial{t}}&=-\frac{\partial{p}}{\partial{z}}+\epsilon^{2}\left(\epsilon^{2}\frac{\partial^{2}{v}}{\partial{r}^{2}}+\epsilon^{2}\frac{1}{{r}}\frac{\partial{v}}{\partial{r}}+\frac{\partial^{2}{v}}{\partial{z}^{2}}\right),\label{eq:dimless_cont(b)}\end{aligned}$$
while the dimensionless form of the boundary conditions (\[eq:original BC\_1palte0\]) is $${u}(0)=0,\qquad{u}(1)=0,\qquad {v}(1)=\Re\left\{ \mathrm{i}\mathrm{e}^{\mathrm{i}{t}}\right\},\qquad {v}(0)=0.
\label{eq:original BC_1palte-1}$$ Here, we define the Womersley number [@womersley_method_1955], $\alpha^{2}=\hat{\rho}\hat{\omega} \hat{h}_{0}^{2}/\hat{\mu}$, the ratio of transient inertial forces to viscous forces, and the Deborah number, ${{\textrm{De}}}=\hat{\lambda}_0\hat{\omega}$, which represents the ratio of the elastic relaxation time scale to the oscillation time scale.
Combining Eqs. (\[eq:dimless\_cont(a)\]) and (\[eq:dimless\_cont(b)\]) at leading order in $\epsilon$, we obtain $${\textrm{De}}\,\alpha^{2}\frac{\partial^{3}{u}}{\partial{t}^{2}\partial{z}}+\alpha^{2}\frac{\partial^{2}{u}}{\partial{t}\partial{z}}=\frac{\partial^{3}{u}}{\partial{z}^{3}}.
\label{eq:combined contr}$$
### Solution {#Section:Solution velocity field}
Assuming the following form of the solutions,
\[eq:solution structure\] $$\begin{aligned}
{u}({r},{z},{t})&
=\Re\left\{ {\frac{r}{2}}{f}^{\prime}({z})\mathrm{e}^{\mathrm{i}{t}}\right\} ,\\
{v}({z},{t})&=-\Re\left\{ {f}({z})\mathrm{e}^{\mathrm{i}{t}}\right\} ,\end{aligned}$$\[eq:velocity form\]
and substituting them into Eqs. (\[eq:combined contr\]), we find that $f$ satisfies the following ordinary differential equation: $$(\mathrm{i}-{\textrm{De}})\alpha^{2}{f}^{\prime\prime}({z})-{f}^{(4)}({z})=0
\label{eq:ODE for f}$$ subject to the boundary conditions from Eq. (\[eq:original BC\_1palte-1\]): $${f}^{\prime}(0)=0,\qquad {f}^{\prime}(1)=0,\qquad {f}(1)=-\mathrm{i},\qquad {f}(0)=0.
\label{eq:BC of 1wall}$$
We note that the solution structure imposes a particular behaviour of the fluid at $\hat{r}=\hat{R}$. In practice one should apply appropriate conditions, which specify, for example, how the free surface moves, or conditions on a reservoir into which the fluid enters beyond $r=1$. However, these edge effects play a small role in the overall behaviour as a result of the lubrication approximation that we have made, along with the assumption of small wall oscillations, and thus the assumed solution structure provides a suitable representation of the fluid flow.
The solution to Eq. (\[eq:ODE for f\]) subject to is $${f}({z})=-\frac{\mathrm{ie}^{\mathrm{\Gamma}(1-{z})}\left(\mathrm{e}^{\mathrm{\Gamma}({z}-1)}\left(\Gamma{z}-\mathrm{e}^{\Gamma{z}}+\mathrm{e}^{\mathrm{\Gamma}}(\Gamma{z}-1)+1\right)+1\right)}{\mathrm{e}^{\mathrm{\Gamma}}(\Gamma-2)+\Gamma+2},$$ where $\Gamma=\alpha\sqrt{\mathrm{i}-{\textrm{De}}}$. Thus, the dimensionless velocity components are
\[eq:u\_1\_v\_1wall\] $$\begin{aligned}
{u}({r},{z},{t})&=-\Re\left\{ {\frac{r}{2}}\mathrm{e}^{\mathrm{i}{t}}\frac{\mathrm{i}\Gamma\left(\text{e}^{\Gamma}-\text{e}^{\Gamma{z}}-\text{e}^{\Gamma-\Gamma{z}}+1\right)}{\text{e}^{\Gamma}(\Gamma-2)+\Gamma+2}\right\} ,\label{eq:u_1wall}\\
{v}({z},{t})&=\Re\left\{ \frac{\mathrm{ie}^{\Gamma(1-{z})}\left(\mathrm{e}^{\Gamma({z}-1)}\left(\Gamma{z}-\mathrm{e}^{\mathrm{\Gamma}{z}}+\mathrm{e}^{\Gamma}(\Gamma{z}-1)+1\right)+1\right)}{\mathrm{e}^{\Gamma}(\Gamma-2)+\Gamma+2}\mathrm{e}^{\mathrm{i}{t}}\right\} .\label{eq:v_1wall}\end{aligned}$$
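For reference, the profile $f(z)$ and the velocity components above can be evaluated numerically in a few lines; the Python sketch below uses arbitrary illustrative values of $\alpha$ and ${\textrm{De}}$ and simply checks the boundary conditions (\[eq:original BC\_1palte-1\]).

``` python
import numpy as np

# Illustrative parameter values (not tied to any particular experiment)
alpha, De = 4.0, 5.0
Gamma = alpha * np.sqrt(1j - De)
denom = np.exp(Gamma) * (Gamma - 2) + Gamma + 2

def f(z):
    """Profile function solving the ODE for f with the stated boundary conditions."""
    return (-1j * np.exp(Gamma * (1 - z))
            * (np.exp(Gamma * (z - 1)) * (Gamma * z - np.exp(Gamma * z)
               + np.exp(Gamma) * (Gamma * z - 1) + 1) + 1) / denom)

def u(r, z, t):
    """Radial velocity component, Eq. (u_1wall)."""
    coef = 1j * Gamma * (np.exp(Gamma) - np.exp(Gamma * z) - np.exp(Gamma * (1 - z)) + 1) / denom
    return -np.real(0.5 * r * np.exp(1j * t) * coef)

def v(z, t):
    """Axial velocity component, v = -Re{f(z) exp(i t)}, Eq. (v_1wall)."""
    return -np.real(f(z) * np.exp(1j * t))

# Sanity checks: no slip/no penetration at z = 0 and the plate velocity at z = 1
t = 0.7
assert abs(v(0.0, t)) < 1e-10 and abs(v(1.0, t) - np.real(1j * np.exp(1j * t))) < 1e-10
assert abs(u(1.0, 0.0, t)) < 1e-10 and abs(u(1.0, 1.0, t)) < 1e-10
z = np.linspace(0.0, 1.0, 201)
profile = u(1.0, z, t)          # velocity profile across the gap at r = 1
```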
Time-averaged scalar dispersion – mass and heat transfer {#sec:Diffusion}
--------------------------------------------------------
### Governing equations\[subsec:Equations\]
For the flow sketched in Fig. \[fig:schematic\] given by the solution in Eq. , the advection–diffusion equation governing the mass and heat transfer (in the absence of mass/heat sources or sinks) can be expressed as: $$\frac{\partial\hat{\Lambda}}{\partial \hat{t}} + \left(\frac{1}{\hat{r}}\frac{\partial(\hat{r}\hat{u}\hat{\Lambda})}{\partial \hat{r}}+\frac{\partial(\hat{v}\hat{\Lambda})}{\partial \hat{z}}\right)=\hat{D}\frac{1}{\hat{r}}\frac{\partial}{\partial \hat{r}}\left(\hat{r}\frac{\partial\hat{\Lambda}}{\partial \hat{r}}\right)+\hat{D}\frac{\partial^{2}\hat{\Lambda}}{\partial \hat{z}^{2}}.
\label{eq:heat tran. eq.2}$$ where $\hat{D}$ is the mass or thermal diffusivity, and $\hat{\Lambda}$ represents either the concentration or the temperature for the cases of mass and heat transfer, respectively. Assuming that the walls of the plates are impermeable and thermally insulated, we have the following boundary conditions $$\left.\frac{\partial\hat{\Lambda}}{\partial\hat{z}}\right|_{\hat{z}=0}=\left.\frac{\partial\hat{\Lambda}}{\partial\hat{z}}\right|_{\hat{z}=\hat{h}(\hat{t})}=0.
\label{eq:heat tran BC}$$ To close the problem we also require boundary conditions at a fixed radial position, however, these are not necessary to determine the radially averaged governing equation; we consider a particular case in \[Section:Energy considerations\].
By introducing the dimensionless variables from Eq. , and the dimensionless concentration/temperature $\Lambda=(\hat{\Lambda}-\hat{\Lambda}_{\mathrm{min}})/(\hat{\Lambda}_{\mathrm{max}}-\hat{\Lambda}_{\mathrm{min}})$, where $\hat{\Lambda}_{\mathrm{max}}$ and $\hat{\Lambda}_{\mathrm{min}}$ represent the maximal and minimal values of $\hat{\Lambda}$, Eqs. and become
$$\begin{aligned}
\alpha^{2}\sigma\frac{\partial\Lambda}{\partial{t}} + \epsilon {\textrm{Pe}}\left(\frac{1}{{r}}\frac{\partial\left({r}{u}\Lambda\right)}{\partial{r}}+\frac{\partial\left({v}\Lambda\right)}{\partial{z}}\right)&=\frac{\epsilon^{2}}{{r}}\frac{\partial}{\partial{r}}\left({r}\frac{\partial\Lambda}{\partial{r}}\right)+\frac{\partial^{2}\Lambda}{\partial{z}^{2}},\label{eq:dispersion cont}\\[3mm]
\left.\frac{\partial\Lambda}{\partial{z}}\right|_{{z}=0}=\left.\frac{\partial\Lambda}{\partial{z}}\right|_{{z}=h}&=0.\end{aligned}$$
\[eq:dispersion cont and bc\]
where ${\textrm{Pe}}=\hat{h}_0 \hat{U}/\hat{D}$ is the Péclet number, and $\sigma=\hat{\mu}/\hat{\rho}\hat{D}$ is the Schmidt number in mass transfer or the Prandtl number in heat transfer.
### Multiple-time-scales analysis
We now turn to the derivation of the time-averaged transport equation, for which we employ the technique of multiple time-scale homogenization [@kevorkian_multiple_1996]. There are three disparate time scales in this problem; first, the characteristic time for mass/heat diffusion in the transverse direction is: $$\hat{t}_{0}=\mathcal{O} \left (\frac{\hat{h}_0^{2}}{\hat{D}}\right ).\label{eq:assumption of t0}$$ We assume this time scale is comparable to the oscillation period, i.e., $\hat{t}_{0}=\mathcal{O}\left({1}/{\hat\omega}\right)$, which means the mass/heat diffusion in the transverse direction should equilibrate within several oscillation periods [@mei_homogenization_2010; @ng_dispersion_2006]. The second time scale is the characteristic time for advection in the streamwise direction: $$\hat{t}_{1}=\frac{\hat{t}_{0}}{\epsilon}=\mathcal{O} \left (\frac{\hat{R}}{\hat{U}}\right ).
\label{eq:assumption of t1}$$ The third, and longest, time scale is for streamwise diffusion: $$\hat{t}_{2}=\frac{\hat{t}_{0}}{\epsilon^{2}}=\mathcal{O} \left (\frac{\hat{R}^{2}}{\hat{D}} \right).
\label{eq:assumption of t2}$$
In the multiple-time-scales analysis, we assume that all variables are dependent on these three time scales *independently* [@pagitsas_multiple_1986; @kevorkian_multiple_1996]. Then, using the dimensionless versions of these time scales, the time derivative transforms as $$\frac{\partial}{\partial{t}}=\frac{\partial}{\partial{t}_{0}}+\frac{\text{d}{t}_{1}}{\text{d}{t}}\frac{\partial}{\partial{t}_{1}}+\frac{\text{d}{t}_{2}}{\text{d}{t}}\frac{\partial}{\partial{t}_{2}}=\frac{\partial}{\partial{t}_{0}}+\epsilon\frac{\partial}{\partial{t}_{1}}+\epsilon^{2}\frac{\partial}{\partial{t}_{2}}.
\label{eq:time_exp.}$$
We follow @fife_dispersion_1975 and expand $\Lambda$ as follows: $$\Lambda\sim\Lambda_{0}({r},{z},{t}_{1},{t}_{2})+\sum_{n=1}^{\infty}\epsilon^{n}\Lambda_{n}({r},{z},{t}_{0},{t}_{1},{t}_{2})+\sum_{n=0}^{\infty}\epsilon^{n}W_{n}({r},{z},{t}_{0}),$$ where the $\Lambda_{n}$ are assumed to be fully developed terms that are harmonic functions of ${t}_{0}$, while the $W_{n}$ terms are assumed to be transient in ${t}_{0}$ and vanish as ${t}_{0}\to\infty$.
Next, the homogenization [@mei_applications_1996; @mei_homogenization_2010] is performed (see details in Appendix \[app:MMS\]), which transforms Eq. into a homogenized advection–diffusion–reaction equation for transport in the viscoelastic OSF: $$\alpha^{2}\sigma\frac{\partial\Lambda_{0}}{\partial{t}_{2}}=\frac{1}{{r}}\frac{\partial}{\partial{r}}\left(\left(1+{D}_{{\rm eff}}\right){r}\frac{\partial\Lambda_{0}}{\partial{r}}-r{U}_{{\rm eff}}\Lambda_{0}\right)+{S}_{{\rm eff}}\Lambda_{0},
\label{eq:final_PDE_t2(2)}$$ where
\[eq:DUSeff\] $$\begin{aligned}
{D}_\mathrm{eff}(r)&=-\frac{{r}^{2}}{4}{\textrm{Pe}}\, \Re\left\langle {f}'^{*}B_{\text{w}}\right\rangle ,
\label{eq:Deff}\\
{U}_\mathrm{eff}(r)&=-\frac{{r}}{2}{\textrm{Pe}}\, \Re\left\{ \mathrm{i}B_{\text{w}}(1)\right\} ,\label{eq:Ueff}\\
{S}_\mathrm{eff}&=-{\textrm{Pe}}\, \Re\left\{ \mathrm{i}B_{\text{w}}(1)\right\}.
\label{eq:Seff}\end{aligned}$$
Here, the superscript $*$ represents the conjugate of a complex number, and we have further defined $$\begin{aligned}
B_{\text{w}}(z) & =\frac{\Gamma {\textrm{Pe}}\,\mathrm{e}^{-{z}(\Gamma+2A_{1})}\left[A_{2}\left(-e^{A_{3}}+e^{\Gamma+A_{1}(2{z}+1)+\Gamma {z}}+e^{A_{1}(2{z}+1)+\Gamma {z}}-e^{{z}(\Gamma+2A_{1})}\right)+A_{4}\right]}{2\alpha^{2}\sigma A_{2}\left[e^{\Gamma}(\Gamma-2)+\Gamma+2\right]\left(e^{A_{1}}-1\right)},\\
A_{1} &= (-1)^{1/4}\alpha\sqrt{\sigma},\\
A_{2} &= \alpha^{2}\sigma+\mathrm{i}\Gamma^{2},\\
A_{3} &= \Gamma+2A_{1}{z}+\Gamma {z},\\
A_{4} &= \mathrm{i}A_{1}^{2}\left(e^{\Gamma+2A_{1}{z}+A_{1}}+e^{2A_{1}{z}+A_{1}+2\Gamma {z}}-e^{2{z}(\Gamma+A_{1})}-e^{\Gamma+2A_{1}{z}}\right)\nonumber\\
&\hspace{1cm}-\mathrm{i}A_{1}\Gamma\left(e^{({z}+1)(\Gamma+A_{1})}-e^{A_{1}{z}+A_{1}+\Gamma {z}}+e^{\Gamma+3A_{1}{z}+\Gamma {z}}-e^{\left(\Gamma+3A_{1}\right)z}\right).\end{aligned}$$
Using Eq. (\[eq:time\_exp.\]), ${t}_2$ in Eq. (\[eq:final\_PDE\_t2(2)\]) can be replaced by the general time ${t}$, resulting in $$\frac{\alpha^{2}\sigma}{\epsilon^{2}}\frac{\partial\Lambda_{0}}{\partial{t}}=\frac{1}{{r}}\frac{\partial}{\partial{r}}\left(\left(1+{D}_{{\rm eff}}\right){r}\frac{\partial\Lambda_{0}}{\partial{r}}-r{U}_{{\rm eff}}\Lambda_{0}\right)+{S}_{{\rm eff}}\Lambda_{0}.
\label{eq:final_PDE_generalt}$$ Note that the above equation is valid on the long time scale, i.e., ${t}\gtrsim\mathcal{O}({t}_2)$. By recalling Eqs. (\[eq:assumption of t0\]) to (\[eq:assumption of t2\]), we deduce the range of ${\textrm{Pe}}$ for which the separation of time scales assumption is valid, i.e., $\epsilon\ll{\textrm{Pe}}\ll {1}/{\epsilon}$.
Equation (\[eq:final\_PDE\_generalt\]) is the homogenized [effective]{} advection–diffusion–reaction equation for transport in the viscoelastic OSF, which embodies three effective mechanisms of mass/heat transfer comprising the dispersion process. The first one is the [effective]{} diffusion, with ${D}_{{\rm eff}}$ representing the [effective]{} streamwise diffusivity. As with molecular diffusion (Fick’s law), the effective diffusive flux is also driven by the concentration gradient. The second is the [effective]{} advection, which indicates that mass/heat is carried along by an [effective]{} streamwise advection velocity ${U}_{{\rm eff}}$, representing a time-averaged drift velocity. When ${U}_{{\rm eff}}>0$, it is directed radially outwards; when ${U}_{{\rm eff}}<0$, it is directed radially inwards. The third mechanism of mass/heat transfer is an [effective]{} reaction term, releasing/absorbing solute or heat when ${S}_{{\rm eff}}>0$, or ${S}_{{\rm eff}}<0$, respectively. This reaction term arises as a result of the wall motion; we recall that no mass or heat is absorbed or emitted by the walls. The ratio $U_{\mathrm{eff}}/S_{\mathrm{eff}}={r}/{2}$, indicating that the effect of effective advection is much less significant than effective reaction when $r\ll1$, which is expected given the structure of the velocity field. When the effective advection and reaction work against the concentration/temperature gradient, and their effects on transport exceeds that of effective diffusion, the spreading of mass/heat will be inhibited. This is similar to the finding of a negative time-averaged dispersion coefficient [@chu2019dispersion], with which the solute cloud will be compressed in the oscillatory flow. Additionally, these three effective mechanisms are not independent, as they all depend on the dimensions of the device, motion of the plate, and properties of the fluid.
Comparing Eq. (\[eq:final\_PDE\_generalt\]) with the long-time equations obtained in similar treatments of transport in oscillatory pipe flow [@mei_homogenization_2010; @ng_dispersion_2006; @ramon_solute_2011; @chu2019dispersion], we find that the dispersion process in an OSF is different, in two aspects. First, the occurrence of the effective advection and reaction terms, beyond the usual dispersion term; these appear due to the first-order term at the boundary, which is embodied in $B_w(1)$. Second, ${D}_{{\rm eff}}$ varies in the streamwise (i.e., radial) direction, as well as ${U}_{{\rm eff}}$. The main reason is that the radial velocity component varies in the streamwise direction, which is also a feature of steady radial flow with converging and diverging streamlines [@stone_dispersion_1999].
Results and discussion
======================
Velocity field {#sec:dis_velocity}
--------------
In Fig. \[fig: distr.of\_u\_1walls\], we present the profiles of the horizontal velocity ${u}$ at different times at the fixed cross-section ${r}=1$. Note that the value of $r$ is chosen arbitrarily for illustrative purposes; the shape of the velocity profile is unchanged by varying $r$. Comparing Figs. \[fig: distr.of\_u\_1walls\](a) and (c), we observe that the Newtonian fluid’s velocity profile ${u}$ becomes flatter near the centerline when $\alpha$ is relatively large. In this case, the forcing time scale is smaller than the viscous time scale, so the effects of the oscillation of the top plate are confined to the vicinity of the wall during one oscillation period, and the fluid near the centerline is not influenced. The negligible pressure gradient in the ${z}$ direction for ${\textrm{De}}=0$ (see Eq. (\[eq:dimless\_cont(b)\])) ensures that the profile near the centerline remains flat. However, when $\alpha$ is small, the oscillations are slow enough so that viscosity affects the entire cross-section, as shown in (a). The velocity profiles shown in Figs. \[fig: distr.of\_u\_1walls\](a) to (c) exhibit similar variation with $\alpha$ as the oscillatory flow in a pipe [@womersley_elastic_1957; @loudon_use_1998; @ramon_solute_2011].
![Profiles of the horizontal velocity ${u}$ across the gap at the cross-section ${r}=1$ at different times, for different values of $\alpha$ and ${\textrm{De}}$ (axes: ${u}$, ${z}$).[]{data-label="fig: distr.of_u_1walls"}](Figs/Fig2_velocity_1wall_2)
Next, comparing Figs. \[fig: distr.of\_u\_1walls\](d) and (f), we observe that the velocity has multiple local maxima, across the gap, when ${\textrm{De}}\gg1$. The Deborah number ${\textrm{De}}$ is the ratio of the elastic relaxation time to the oscillation time scale of the OSF, which is set by the plate motion. For a Newtonian fluid (${\textrm{De}}=0$), the flow generated by the movement of the plate is felt instantaneously throughout the entire gap. For a viscoelastic fluid (${\textrm{De}}>0$), on the other hand, the elasticity of the fluid acts as a restoring force that inhibits the transfer of momentum from the oscillatory wall. Thus, for larger values of ${\textrm{De}}$, the influence of the motion of the plate lags behind the forcing, i.e., it takes “longer” to be felt throughout the domain. As a result, the fluid near the center lags behind the fluid near the wall, leading to the multiple peaks in the velocity distribution across the gap.
The nature of the fluid flow also changes dramatically at some specific values of ${\textrm{De}}$ and $\alpha$. For instance, the phase of ${u}$ when ${\textrm{De}}=5$ and $\alpha=4$ (Fig. \[fig: distr.of\_u\_1walls\](e)) is very different from the other cases shown. This is attributed to the triggering of a ‘resonant’ mode. To understand and quantify this effect, we introduce the spatio-temporal average of the longitudinal velocity $${u}_\mathrm{a}\left(r\right) =\frac{1}{2\pi}\int_{0}^{1}\int_{{0}}^{2\pi}\left|{u}\left(r,z,t\right)\right|\mathrm{d}{t}\mathrm{d}{z}.
\label{eq:uavg}$$ The dependence of ${u}_\mathrm{a}$ on $\alpha$, for different values of ${\textrm{De}}$ at radial position ${r}= 1$ is shown in Fig. \[fig: u\_vs\_alpha,resonance\]. It can be seen that, for a viscoelastic fluid (${\textrm{De}}>0$), there are peaks of ${u}_\mathrm{a}$ at specific values of $\alpha$. This phenomenon is caused by the elasticity of the fluid, hence we term it a *viscoelastic resonance*. The corresponding value of $\alpha$ can be considered as the dimensionless resonant frequency. This effect has also been observed in the oscillatory pipe flow of a Maxwell fluid both experimentally [@castrejon-pita_experimental_2003] and numerically [@lambert_heat_2009].
We find that the variation of $u_\mathrm{a}$ is mainly determined by the denominator of $u_\mathrm{a}$, while its numerator is much less significant. We therefore introduce a parameter $$\Theta(\alpha,{\textrm{De}})=\mathrm{e}^{\Gamma}\left(\Gamma-2\right)+\Gamma+2,\label{eq:denominator_of_u}$$ representing the denominator of $u_\mathrm{a}$. As shown in Fig. \[fig: u\_vs\_alpha,resonance\], the peaks of ${u}_\mathrm{a}$ coincide with the local minima of $\left|\Theta\right|$ when ${\textrm{De}}=10$. We further compared the match between the peaks of ${u}_\mathrm{a}$ and minima of $\left|\Theta\right|$ for ${\textrm{De}}\leq100$, observing they match without exception for this range. Hence, it can be deduced that the resonance is governed by $\left|\Theta\right|$, which is a manifestation of the elasticity of the fluid. For a Maxwell fluid, as $\alpha$ increases, the amplitude of the oscillations of $\left|\Theta\right|$ decreases, while $\left|\Theta\right|$ continues to increase, leading to the decay of the oscillations. This effect can be interpreted as viscous dissipation overcoming the elastic restoring force. Consequently, the peaks of ${u}_\mathrm{a}$ gradually reduce with increasing $\alpha$ and eventually disappear beyond some critical value.
![Spatio-temporal average velocity ${u}_\mathrm{a}$ at ${r}=1$ as a function of $\alpha$, for ${\textrm{De}}=0$, ${\textrm{De}}=3$ and ${\textrm{De}}=10$, together with $\left|\Theta\right|$ for ${\textrm{De}}=10$.[]{data-label="fig: u_vs_alpha,resonance"}](Figs/Fig_u_vs_alpha_2)
The resonant values of $\alpha$ satisfy $\partial |\Theta|/\partial \alpha = 0$, corresponding to local minima of $\left|\Theta\right|$. Specifically, beginning with the expression for $\Theta$ in Eq. (\[eq:denominator\_of\_u\]), and aided by numerical experimentation, we find the following approximation for the $\alpha$ value corresponding to the $n^{\textrm{th}}$ peak, $$\label{eq:Predict_resonance}
\alpha^\ast_n \approx \frac{\pi \left(2n+1\right)}{\Im\left\{ \sqrt{\mathrm{i}-{\textrm{De}}}\right\}},$$ where $\Im\{\,\cdot\,\}$ denotes the imaginary part of a complex number, and $n \in \mathbb{N}^+$, such that the $\alpha$ corresponding to the first peak of ${u}_\mathrm{a}$ can be approximated with $n=1$, the second peak with $n=2$, and so on. Comparing the estimated values of $\alpha^\ast_n$ obtained from Eq. (\[eq:Predict\_resonance\]), and the corresponding exact values in the range of ${\textrm{De}}\leq100$ and $n\leq10$, we find that the deviations are quite small and gradually decrease as ${\textrm{De}}$ increases. For instance, the error in $\alpha^\ast_1$ is 0.32 $(6.3\%)$, 0.21 $(5.4\%)$ and 0.04 $(4.7\%)$ for ${\textrm{De}}=3$, ${\textrm{De}}=5$ and ${\textrm{De}}=100$, respectively.
Equation (\[eq:Predict\_resonance\]) also indicates that, as ${\textrm{De}}$ is increased, the value of $\alpha^\ast_n$ decreases for fixed $n$, as does the interval between neighboring resonant peaks, as seen in Fig. \[fig: u\_vs\_alpha,resonance\]. Note that there is no visible peak for a Newtonian fluid when the value of $\alpha$ reaches $\alpha^\ast_1$, which is 13.33 according to Eq. (\[eq:Predict\_resonance\]), in Fig. \[fig: u\_vs\_alpha,resonance\], because the amplitude of the fluctuations of $\left|\Theta\right|$ is too small compared with its overall magnitude. As ${\textrm{De}}$ increases, the height of the peaks, especially the first peak, increases, presumably because for larger ${\textrm{De}}$, the value of $\alpha^\ast_1$ decreases (see Eq. (\[eq:Predict\_resonance\])), thus $\left|\Theta\right|$ becomes very small, leading to a very high peak of ${u}_\mathrm{a}$. This viscoelastic resonance is significant for the mass and heat transfer in the OSF, which is discussed in the following section.
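As an illustration of Eq. (\[eq:Predict\_resonance\]), the short Python sketch below locates the local minima of $\left|\Theta\right|$ on a grid in $\alpha$ and compares them with the approximation; the choice ${\textrm{De}}=10$ and the grid parameters are arbitrary.

``` python
import numpy as np

De = 10.0
root = np.sqrt(1j - De)

def Theta_abs(alpha):
    Gamma = alpha * root
    return np.abs(np.exp(Gamma) * (Gamma - 2) + Gamma + 2)

alphas = np.linspace(0.05, 12.0, 20001)
vals = Theta_abs(alphas)
# interior local minima of |Theta(alpha)|
idx = np.where((vals[1:-1] < vals[:-2]) & (vals[1:-1] < vals[2:]))[0] + 1
alpha_minima = alphas[idx]

n = np.arange(1, len(alpha_minima) + 1)
alpha_approx = np.pi * (2 * n + 1) / np.imag(root)    # Eq. (Predict_resonance)

print(np.round(alpha_minima, 2))
print(np.round(alpha_approx, 2))
```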
Time-averaged transport characteristics {#sec:dis_diffusion}
---------------------------------------
### Effective dispersion, advection and reaction
In this section, we first discuss how ${D}_\mathrm{eff}$, ${U}_\mathrm{eff}$ and ${S}_\mathrm{eff}$, at the cross-section of ${r}=1$, vary with the Womersley $\alpha$, Deborah ${\textrm{De}}$ and Schmidt (or Prandtl) $\sigma$ numbers. As with the discussion of velocity field in Sec. \[sec:dis\_velocity\], the value of $r$ is chosen for illustrative purposes, and this choice does not affect the general characteristics identified. To facilitate the discussion, we introduce the tidal displacement, which represents the cross-sectionally averaged longitudinal distance traversed by a fluid particle under the OSF velocity field over half a period. This quantity is calculated through the spatio-temporal average of the velocity magnitude: $$\Delta r\left(r\right)=\frac{\delta}{2\epsilon}\int_{0}^{1}\int_{0}^{2\pi}\left|u\left(r,z,t\right)\right|\mathrm{d}t\mathrm{d}z=\frac{\pi\delta}{\epsilon}u_{\textrm{a}}\left(r\right),
\label{eq:Deltar}$$ where $u_\textrm{a}$ is defined by Eq. (\[eq:uavg\]) and must be evaluated numerically.
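One simple way to do so is a uniform-grid average of $|u|$ over one period and across the gap, as in the following sketch (the values of $\alpha$, ${\textrm{De}}$, $\delta$ and $\epsilon$ are arbitrary examples):

``` python
import numpy as np

alpha, De, r = 4.0, 5.0, 1.0
delta, eps = 0.05, 0.1
Gamma = alpha * np.sqrt(1j - De)
denom = np.exp(Gamma) * (Gamma - 2) + Gamma + 2

def u(z, t):
    coef = 1j * Gamma * (np.exp(Gamma) - np.exp(Gamma * z) - np.exp(Gamma * (1 - z)) + 1) / denom
    return -np.real(0.5 * r * np.exp(1j * t) * coef)

z = np.linspace(0.0, 1.0, 201)
t = np.linspace(0.0, 2.0 * np.pi, 401)
Z, T = np.meshgrid(z, t, indexing='ij')
u_a = np.mean(np.abs(u(Z, T)))               # uniform-grid estimate of Eq. (uavg) at r = 1
delta_r = np.pi * delta / eps * u_a          # tidal displacement, Eq. (Deltar)
print(u_a, delta_r)
```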
In addition, following @kurzweg_tuning_1986, who scaled the effective diffusivity (the dispersion coefficient) for an oscillatory pipe flow by the product of the frequency and the square of the tidal displacement, we define the re-scaled diffusivity, advection velocity and reaction coefficients for the OSF: $$\lambda_{\mathrm{D}}=\frac{{D}_\mathrm{eff}}{\Delta{r}^{2}},\qquad \lambda_{\mathrm{U}}=\frac{{U}_\mathrm{eff}}{\Delta{r}^{2}},\qquad \lambda_{\mathrm{S}}=\frac{{S}_\mathrm{eff}}{\Delta{r}^{2}}.
\label{eq:lambda_D,U,S}$$ After this re-scaling, $\lambda_{\mathrm{D}}$, $\lambda_{\mathrm{U}}$ and $\lambda_{\mathrm{S}}$ are independent of the dimensionless displacement of the upper plate, i.e., $\delta$. Since $\Delta r$ depends on $\delta$, $\lambda_{\mathrm{D}}$, $\lambda_{\mathrm{U}}$ and $\lambda_{\mathrm{S}}$ can also be independent of $\Delta r$ if $u_\mathrm{a}$ and $\epsilon$ stays fixed according to Eq. (\[eq:Deltar\]).
![Re-scaled coefficients $\lambda_\mathrm{D}$ (a) and $\lambda_\mathrm{U}$ or ${\lambda_\mathrm{S}}/{2}$ (b), and the tidal displacement $\Delta r$ (c), at ${r}=1$ as functions of $\alpha$, for different values of ${\textrm{De}}$.[]{data-label="fig:Deff_vs_De_1wall"}](Figs/Fig_dispersion_vs_alpha_diffDe_1wall_4)
It is important to note that $\lambda_\mathrm{D}$, $\lambda_\mathrm{U}$ and $\lambda_\mathrm{S}$ strongly depend on the values of $\alpha$ and ${\textrm{De}}$, as shown in Fig. \[fig:Deff\_vs\_De\_1wall\]. Specifically, there are optimal values of $\alpha$ at which $\lambda_\mathrm{D}$ attains local maxima, and these values depend on ${\textrm{De}}$. For a given range of $\alpha$, increasing ${\textrm{De}}$ will generate more peaks, but not necessarily a higher value of $\lambda_\mathrm{D}$. As an example, $\lambda_\mathrm{D}$ has a maximum at $\alpha=4$ when ${\textrm{De}}=30$, but this maximum value is not higher than the one for ${\textrm{De}}=0$. For a Maxwell fluid (${\textrm{De}}>0$), the maximum values of $\lambda_\mathrm{D}$ increase with $\alpha$, in accordance with the oscillatory pipe flow, in which the effective diffusivity is proportional to the frequency [@kurzweg_tuning_1986].
The peaks in Fig. \[fig:Deff\_vs\_De\_1wall\](a) are the resonant values of $\lambda_\mathrm{D}$, because the corresponding values of $\alpha$ appear to closely match the resonant frequencies, i.e., the values of $\alpha$ corresponding to the peaks in Fig. \[fig:Deff\_vs\_De\_1wall\](c). In other words, the resonance sharply increases both the values of $\lambda_\mathrm{D}$ and $\Delta{r}$. Hence, the locations of peaks in Fig. \[fig:Deff\_vs\_De\_1wall\](a) and (c) can be approximated by Eq. (\[eq:Predict\_resonance\]), although with slight deviations. From Figs. \[fig:Deff\_vs\_De\_1wall\](a) and (c), we observe opposite trends of the height of peaks in $\lambda_\mathrm{D}$ and $\Delta{r}$ with increasing $\alpha$. Accordingly, the value of $D_\mathrm{eff}$ might be dominated by $\Delta{r}$ for a Maxwell fluid with relatively large ${\textrm{De}}$ at small $\alpha$ or, conversely, by $\lambda_\mathrm{D}$ for a weakly viscoelastic fluid (small ${\textrm{De}}$) at large $\alpha$. This interplay determines whether the OSF leads to enhanced diffusion (dispersion) or diminished transport. For instance, when $\alpha=1.64$ with ${\textrm{De}}=30$, the value of ${D}_\mathrm{eff}$ is two orders of magnitude larger than that in the absence of resonance, which is mainly attributed to $\Delta{r}$, which is increased by a factor of eight by resonance.
The ratio $\lambda_{\mathrm{U}}/\lambda_{\mathrm{S}}=r/2$ indicates that transport due to the effective advection is much less significant than that due to the effective reaction when $r\ll1$. In Fig. \[fig:Deff\_vs\_De\_1wall\](b), for a Newtonian fluid (${\textrm{De}}=0$), $\lambda_{\mathrm{U}}$ and $\lambda_{\mathrm{S}}$ are negative, indicating that the effective advection is always directed from the edge of the plate towards the center, and the effective reaction absorbs mass/heat. For a Maxwell fluid, $\lambda_{\mathrm{U}}$ and $\lambda_{\mathrm{S}}$ fluctuate around 0, with the amplitude increasing with $\alpha$, indicating that the effect of the effective advection and reaction, i.e., carrying mass/heat from high-concentration regions to low-concentration regions or in the opposite direction, is very sensitive to the oscillatory forcing of the system (quantified by $\alpha$).
![${D}_\mathrm{eff}$ (left) and ${U}_\mathrm{eff}$ or ${{S}_\mathrm{eff}}/{2}$ (right) as functions of ${\textrm{Pe}}$, for $\alpha=1$, $\alpha=\alpha^\ast_1$ and $\alpha=\alpha^\ast_2$.[]{data-label="fig:eta_vs_Pe"}](Figs/Fig_dispersion_eta_vs_Pe2)
Figure \[fig:eta\_vs\_Pe\] shows the dependence of ${D}_\mathrm{eff}$, ${U}_\mathrm{eff}$ and ${S}_\mathrm{eff}$ on ${\textrm{Pe}}$, at the first resonant mode, at the second resonant mode and in the absence of resonance. Importantly, ${D}_\mathrm{eff}$ is significantly increased by the existence of a viscoelastic resonance, because both $\Delta{r}$ and $\lambda_\mathrm{D}$ reach their peak values according to Fig. \[fig:Deff\_vs\_De\_1wall\]. The strongest enhancement exists at the first resonant mode ($\alpha^\ast_1=1.64$), where the value of ${D}_\mathrm{eff}$ is around 200 times larger than the corresponding value (at the same Péclet number ${\textrm{Pe}}$) in the absence of a resonance. Similarly, ${U}_\mathrm{eff}$ and ${S}_\mathrm{eff}$ also increase, although less significantly. Moreover, we can achieve larger ${D}_\mathrm{eff}$, ${U}_\mathrm{eff}$ and ${S}_\mathrm{eff}$ by increasing the displacement amplitude of the upper plate, which is embodied in ${\textrm{Pe}}$.
![Re-scaled coefficients $\lambda_\mathrm{D}$ (a) and $\lambda_\mathrm{U}$ or ${\lambda_\mathrm{S}}/{2}$ (b), and the tidal displacement $\Delta r$ (c), as functions of $\alpha$, for different values of $\sigma$.[]{data-label="fig:Deff_vs_Sc_1wall-1"}](Figs/Fig_dispersion_vs_alpha_diffSc_1wall_4)
It is important to note, however, that large values of ${D}_\mathrm{eff}$, ${U}_\mathrm{eff}$ and ${S}_\mathrm{eff}$ do not necessarily lead to the enhancement of mass/heat transfer: the transport in the OSF is controlled jointly by the interaction of these three effective (homogenized) transport mechanisms. The effective diffusion (dispersion) always enhances transport, but whether the effective advection or reaction enhance or impede transport depends on whether they are directed along or against the concentration gradient, which is determined by the signs of $\lambda_{\mathrm{U}}$ (or ${U}_\mathrm{eff}$) and $\lambda_{\mathrm{S}}$ (or ${S}_\mathrm{eff}$) and by the boundary conditions. If the effective advection and reaction work against the concentration gradient (so that the corresponding mass flux exceeds that given by effective diffusion), then the transport is diminished. This indicates that, to enhance the mass/heat transfer, the values of $\alpha$ and ${\textrm{De}}$ should be carefully chosen, so that the effective advection and reaction assist in carrying the mass/heat from regions of high temperature or concentration to regions of low temperature or concentration.
From Fig. \[fig:Deff\_vs\_Sc\_1wall-1\], we see that $\lambda_\mathrm{D}$, $\lambda_\mathrm{U}$ and $\lambda_\mathrm{S}$ grow with $\sigma$. However, this does not necessarily lead to the enhancement of effective mass/heat transfer because the effective advection and reaction might inhibit the transport if mass/heat is carried against the temperature or concentration gradient. Moreover, the increase of $\sigma=\hat{\mu}/\hat{\rho}\hat{D}$ requires a decrease of the molecular diffusivity $\hat{D}$ (for fixed $\hat{\mu}/\hat{\rho}$) or an increase of the kinematic viscosity $\hat{\nu}=\hat{\mu}/\hat{\rho}$ (for fixed $\hat{D}$), with the former generally reducing mixing, while the latter increases the energy needed to drive the OSF. Note that $\Delta r$ is a kinematic quantity independent of $\sigma$. Therefore, all the curves in Fig. \[fig:Deff\_vs\_Sc\_1wall-1\](c) overlap.
### Energy considerations for transport enhancement {#Section:Energy considerations}
We now turn to consider the energy consumption associated with the OSF proposed herein for enhancing heat and/or mass transfer. To this end, in this section, we discuss the mass/heat flux, consumed power to drive the upper plate, and the efficiency in a specific example.
Consider that a mass/heat source with a dimensionless radius of 0.1 is immersed in the center of an OSF device, as shown in Fig. \[fig: osf\_example\]. We assume it has no influence on the local velocity field. This source ensures $\Lambda_{0}=1$ at ${r}=0.1$. Suppose also that there is a mass/heat sink at $r=1$, which we assume not to affect the fluid flow but which forces $\Lambda_{0}=0$ at ${r}=1$. The top and bottom plates are impermeable and insulated walls; thus, the amount of mass/heat injected from the source must equal that absorbed by the sink at ${r}=1$. With the boundary condition at $r=1$, Eq. (\[eq:final\_PDE\_generalt\]) is reduced to a diffusion equation, and thus the mass/heat flux transported from the source to the sink can be calculated by $$\dot{m}^\ast=-\left(1+D_{{\rm eff}}\right)\left.\frac{\partial\Lambda_{0}}{\partial r}\right|_{r=1}. \label{eq:m_dot}$$
![An example of an OSF configuration for enhancing mass or heat transfer.[]{data-label="fig: osf_example"}](Figs//mass_flux_example.eps){width="60.00000%"}
The power required to drive the upper plate is found to be $$\Tilde{W}_\mathrm{T}=\frac{1}{2\pi}\int_{0}^{2\pi}\left.v\right|_{z=1}\tilde{F}_{T}\,\mathrm{d}t=-\Re\left\{ \frac{{\mathrm{i}}\pi f^{\prime\prime\prime}(1)}{16\left(1+{\mathrm{i}}\,{\textrm{De}}\right)}\right\} ,\label{eq:W_T}$$ where $\tilde{F}_{T}$ is the excess normal force on the top plate (see Appendix \[app:energy\] for details). We can then define a metric for the energy consumption per unit mass/heat flux, or efficiency, $$\eta=\frac{\dot{m}^\ast}{\Tilde{W}_\mathrm{T}}.$$
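For completeness, $\Tilde{W}_\mathrm{T}$ can be evaluated by differentiating the profile $f(z)$ found in \[Section:Solution velocity field\] symbolically; the following sketch does so with SymPy for arbitrarily chosen values of $\alpha$ and ${\textrm{De}}$.

``` python
import math
import sympy as sp

alpha, De = 1.64, 30.0                      # example parameter values
z = sp.symbols('z')
Gamma = alpha * sp.sqrt(sp.I - De)
f = (-sp.I * sp.exp(Gamma * (1 - z))
     * (sp.exp(Gamma * (z - 1)) * (Gamma * z - sp.exp(Gamma * z)
        + sp.exp(Gamma) * (Gamma * z - 1) + 1) + 1)
     / (sp.exp(Gamma) * (Gamma - 2) + Gamma + 2))

f3_at_1 = complex(sp.N(sp.diff(f, z, 3).subs(z, 1)))            # f'''(1)
W_T = -(1j * math.pi * f3_at_1 / (16 * (1 + 1j * De))).real     # Eq. (W_T)
print(W_T)
```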
The dependence of $\dot{m}^\ast$, $\Tilde{W}_\mathrm{T}$ and $\eta$ on $\alpha$, for a Maxwell fluid with ${\textrm{De}}=30$, is shown in Fig. \[fig: mass\_flux\]. For $\alpha<1$, $\dot{m}^\ast$ is approximately constant (around 0.44) because $D_\mathrm{eff} \ll 1$ (see Fig. \[fig:Deff\_vs\_De\_1wall\]), indicating that the transport is mainly driven by molecular diffusion (or conduction), which is independent of $\alpha$. For $\alpha>1$, $\dot{m}^\ast$ oscillates about the value corresponding to molecular diffusion (or pure conduction). According to Eq. (\[eq:m\_dot\]), $\dot{m}^\ast$ is influenced by both $D_\mathrm{eff}$ and ${\partial\Lambda_{0}}/{\partial r}$ at $r=1$. The value of $D_\mathrm{eff}$ is always positive and reaches a near-peak value at the resonant mode (see Fig. \[fig:Deff\_vs\_De\_1wall\]), indicating that the effective diffusion always enhances the mass or heat transfer, especially at resonance. However, the dependence of ${\partial\Lambda_{0}}/{\partial r}$ on $\alpha$ is more complex, as this strongly depends on $U_\mathrm{eff}$, $S_\mathrm{eff}$, and the boundary conditions.
![Mass/heat flux $\dot{m}^\ast$, driving power $\Tilde{W}_\mathrm{T}$ and efficiency $\eta$ as functions of $\alpha$, for a Maxwell fluid with ${\textrm{De}}=30$.[]{data-label="fig: mass_flux"}](Figs/Fig_mass_flux2.eps)
In this example, when the values of $U_\mathrm{eff}$ and $S_\mathrm{eff}$ are positive, the value of ${\partial\Lambda_{0}}/{\partial r}$ is increased and thus the mass flux is enhanced; otherwise, the mass flux is diminished. For instance, the value of $\dot{m}^\ast$ is 30 times larger than that under pure molecular diffusion (which happens when $\alpha\rightarrow0$) when $\alpha\approx$4.1, where both $U_\mathrm{eff}$ and $S_\mathrm{eff}$ are positive. On the other hand, the value of $\dot{m}^\ast$ is only $6\%$ of that under pure molecular diffusion when $\alpha\approx3.7$, because both $U_\mathrm{eff}$ and $S_\mathrm{eff}$ are negative and near the local minimum values. The value of $\Tilde{W}_\mathrm{T}$ rises as $\alpha$ is increased, and reaches the local maximum exactly at the resonant mode, because the abrupt increase of velocity results in a significantly higher viscous loss. The efficiency $\eta$ is large when $\alpha<1$ because molecular diffusion (or pure conduction) does not consume energy; $\eta$ drops when $\alpha$ approaches the first resonant frequency because of the increased $\Tilde{W}_\mathrm{T}$. In this example, we see that the mass/heat transfer in such OSF configurations strongly depends on the Womersley number $\alpha$, which quantifies the oscillatory forcing of the flow. Thus, by choosing $\alpha$ suitably, we can either significantly increase the mass/heat flux and improve transport, or diminish the flux of mass/heat transfer to a value even lower than that of the pure molecular diffusion or pure conduction.
Conclusion {#sec:Conclusion}
==========
In this work, we have investigated Taylor–Aris dispersion related to mass and/or heat transfer in an oscillatory squeeze flow of a viscoelastic (Maxwell) fluid. First, we derived the expression for the post-transient velocity for both Newtonian and Maxwell fluids. Then, we used the method of homogenization (a multiple-time-scale analysis) to determine the effective mass and/or heat transport equation on the long time scale. Specifically, we showed that transport is governed by an effective one-dimensional advection–diffusion–reaction equation. In doing so, we identified the three effective transport mechanisms: the effective diffusion spreading mass/heat along the concentration gradient, the effective advection carrying mass/heat along or against the concentration gradient, and the effective reaction releasing or absorbing solute.
Importantly, we found resonances in the viscoelastic oscillatory squeeze flow, which lead to an abrupt rise in the velocity, when the Womersley number (a dimensionless measure of the forcing frequency and unsteady inertia of the fluid) reaches specific values. We showed that these resonant values can be estimated by a simple expression involving the Deborah number. For a linearly viscoelastic Maxwell fluid, we found that there are multiple values of the Womersley number that may trigger the resonance, but the effect of resonance diminishes gradually with the Womersley number. We also observed that the mass/heat transfer in this viscoelastic fluid flow can be either enhanced or inhibited, depending on the value of the Womersley number. On the one hand, when the dimensionless frequency, i.e., the Womersley number, reaches the resonant frequency, the effective diffusion can be sharply enhanced, which accelerates the spread of mass/heat in the fluid. For instance, for a viscoelastic Maxwell fluid, we found that the effective diffusivity at the first resonant frequency can be 200 times larger than the corresponding one in the absence of a resonance. On the other hand, we found that the values of the effective advection and reaction coefficients oscillate about zero with increasing amplitudes as the Womersley number increases. Notably, these two effective mechanisms may enhance or inhibit mass/heat transfer in this oscillatory viscoelastic flow, depending on their signs, which are determined by the Womersley number and the fluid properties (including the Deborah number and the Schmidt or Prandtl numbers). Therefore, to enhance the mass transfer, the value of the Womersley number should be carefully chosen, so that the effective advection and reaction help to carry solute from regions of high temperature or solute concentration to regions of low concentration.
The results presented in this work suggest a new approach to enhance the mass/heat transfer using oscillatory squeeze flow. Further investigation could focus on the transport problem for large oscillation amplitudes, going beyond the assumptions inherent to the asymptotic nature of the homogenization theory employed here.
Acknowledgements {#acknowledgements .unnumbered}
================
This research was partially supported by grant 2018/17 from the Israel Science Foundation (ISF). R.Y. was supported, in part, by a fellowship from the Israel Council for Higher Education. I.C.C. acknowledges the donors of the American Chemical Society Petroleum Research Fund for partial support under ACS PRF award \# 57371-DNI9. I.M.G. gratefully acknowledges support from the Royal Society through a University Research Fellowship. I.C.C., I.M.G., and G.Z.R. acknowledge the Collaborative Workshop Initiative (CWI) for providing a platform to instigate this research, as well as H.A. Stone for many insightful discussions on Taylor dispersion.
Appendix {#appendix .unnumbered}
========
Derivation of the effective advection–diffusion equation using the homogenization method {#app:MMS}
========================================================================================
At $\mathcal{O}(1)$, Eqs. (\[eq:dispersion cont and bc\]) can be simplified as
\[eq:appenddix1\]$$\begin{aligned}
\alpha^{2}\sigma\frac{\partial\Lambda_{0}}{\partial{t}_{0}}&=\frac{\partial^{2}\Lambda_{0}}{\partial{z}^{2}},\\[2mm]
\left.\frac{\partial\Lambda_{0}}{\partial{z}}\right|_{{z}=0}&=\left.\frac{\partial\Lambda_{0}}{\partial{z}}\right|_{{z}=h}=0.\end{aligned}$$
If we seek the developed periodic solution, then $\Lambda_{0}$ is independent of the short time scale ${t}_{0}$. From the system we find $\Lambda_{0}$ is independent of ${z}$. Hence, $\Lambda_{0}=\Lambda_{0}({t}_{1},{t}_{2},{r})$.
At $\mathcal{O}(\epsilon)$, Eq. (\[eq:dispersion cont and bc\]) gives
$$\begin{aligned}
\alpha^{2}\sigma\left(\frac{\partial\Lambda_{0}}{\partial{t}_{1}}+\frac{\partial\Lambda_{1}}{\partial{t}_{0}}\right)+{\textrm{Pe}}\left[\frac{1}{{r}}\frac{\partial}{\partial{r}}({r}{u}\Lambda_{0})+\frac{\partial}{\partial{z}}({v}\Lambda_{0})\right] &= \frac{\partial^{2}\Lambda_{1}}{\partial{z}^{2}},\label{eq:dispersion_order_epsilon-1}\\[2mm]
\left.\frac{\partial\Lambda_{1}}{\partial{z}}\right|_{{z}=0}=\left.\frac{\partial\Lambda_{1}}{\partial{z}}\right|_{{z}=h} &= 0.
\label{eq:dispersion_order_epsilon-1_B}\end{aligned}$$
Next, we introduce the temporal and spatial averaging operators $$\overline{(\cdot)}=\frac{1}{2\pi}\int_{{t}}^{{t}+2\pi}(\cdot)\,\mathrm{d}{t}_{0},\qquad \langle\cdot\rangle=\frac{1}{h}\int_{0}^{h}(\cdot)\,\mathrm{d}{z} \simeq \int_{0}^{1}(\cdot)\,\mathrm{d}{z}.$$ Note that $h\simeq1$ since $\hat{a}/\hat{h}_{0}\ll1$. Time averaging Eqs. (\[eq:dispersion\_order\_epsilon-1\]) and (\[eq:dispersion\_order\_epsilon-1\_B\]) gives
$$\begin{aligned}
\alpha^{2}\sigma\frac{\partial\Lambda_{0}}{\partial{t}_{1}} &= \frac{\partial^{2}\overline{\Lambda_{1}}}{\partial{z}^{2}}\label{eq:order_epsilon_2},\\
\left.\frac{\partial\overline{\Lambda_{1}}}{\partial{z}}\right|_{{z}=0}=\left.\frac{\partial\overline{\Lambda_{1}}}{\partial{z}}\right|_{{z}=h} &= 0\label{eq:order_epsilon_2_BC},\end{aligned}$$
where we have used the fact that $\overline{{u}}=\overline{{v}}=0$ because $u$ and $v$ are time-harmonic functions (see Eq. ) and $\overline{\Lambda_{0}}=\Lambda_{0}$ and $\langle\Lambda_{0}\rangle=\Lambda_{0}$ because $\Lambda_{0}$ is independent of ${t}_{0}$ and ${z}$, while $\overline{\partial\Lambda_{1}/\partial{t}_{0}}=0$.
Applying spatial average to Eq. (\[eq:order\_epsilon\_2\]) (and from the related boundary conditions shown in Eq. (\[eq:order\_epsilon\_2\_BC\])), we have $$\frac{\partial\Lambda_{0}}{\partial{t}_{1}}=0.
\label{eq:dC0dt1-1}$$ With Eq. (\[eq:dC0dt1-1\]), Eq. (\[eq:dispersion\_order\_epsilon-1\]) can be simplified to $$\begin{aligned}
\alpha^{2}\sigma\frac{\partial\Lambda_{1}}{\partial{t}_{0}}+{\textrm{Pe}}\left[\frac{1}{{r}}\frac{\partial}{\partial{r}}({r}{u}\Lambda_{0})+\frac{\partial}{\partial{z}}({v}\Lambda_{0})\right] &= \frac{\partial^{2}\Lambda_{1}}{\partial{z}^{2}}.
\label{eq:dispersion_order_epsilon2-1}\end{aligned}$$ According to the simplification $\Lambda_{0}=\Lambda_{0}\left({t}_{1},{t}_{2},{r}\right)$ and Eq. (\[eq:velocity form\]), we have $$\begin{aligned}
\frac{\partial}{\partial{z}}\left({v}\Lambda_{0}\right)=-\Lambda_{0}\Re\left\{ {f}^{\prime}({z})\mathrm{e}^{\mathrm{i}{t}}\right\}, \label{eq:dispersion_order_epsilon_dvCdz-1}\\
\frac{1}{{r}}\frac{\partial}{\partial{r}}({r}{u}\Lambda_{0})=\Lambda_{0}\Re\left\{ {f}^{\prime}({z})\mathrm{e}^{\mathrm{i}{t}}\right\} +\frac{r}{2}\Re\left\{ {f}^{\prime}({z})\mathrm{e}^{\mathrm{i}{t}}\right\} \frac{\partial\Lambda_{0}}{\partial{r}}.
\label{eq:order_epsilon_1/rd(ruC0)-1}\end{aligned}$$ Substituting Eqs. (\[eq:dispersion\_order\_epsilon\_dvCdz-1\]) and (\[eq:order\_epsilon\_1/rd(ruC0)-1\]) into Eq. (\[eq:dispersion\_order\_epsilon2-1\]), we have $$\alpha^{2}\sigma\frac{\partial\Lambda_{1}}{\partial{t}_{0}}+{\textrm{Pe}}\,\Re\left\{ {f}^{\prime}({z})\mathrm{e}^{\mathrm{i}{t}}\right\} \frac{{r}}{2}\frac{\partial\Lambda_{0}}{\partial{r}}=\frac{\partial^{2}\Lambda_{1}}{\partial{z}^{2}},
\label{eq:ODE of C1-1}$$ which indicates that the solution of $\Lambda_{1}$ is of the form $$\Lambda_{1}={r}\frac{\partial\Lambda_{0}}{\partial{r}}\Re\left\{ B_{\mathrm{w}}({z})e^{\mathrm{i}{t}}\right\}.
\label{eq:form of C1-1}$$ Substituting Eq. (\[eq:form of C1-1\]) into Eq. (\[eq:ODE of C1-1\]), we have $$\begin{aligned}
\frac{{\mathrm{d}}^{2}B_{w}({z})}{{\mathrm{d}}{z}^{2}}-\frac{1}{2}{\textrm{Pe}}{f}^{\prime}({z})-\mathrm{i}\alpha^{2}\sigma B_{w}({z})&=0,\\
\left.\frac{{\mathrm{d}}B_{w}}{{\mathrm{d}}z}\right|_{{z}=0}=\left.\frac{{\mathrm{d}}B_{w}}{{\mathrm{d}}z}\right|_{{z}=h}&=0.\label{eq:ODE for Bw-1}\end{aligned}$$
At $\mathcal{O}(\epsilon^{2})$, Eqs. (\[eq:dispersion cont and bc\]) give
$$\begin{aligned}
\alpha^{2}\mathrm{\sigma}\left(\frac{\partial\Lambda_{2}}{\partial{t}_{0}}+\frac{\partial\Lambda_{1}}{\partial{t}_{1}}+\frac{\partial\Lambda_{0}}{\partial{t}_{2}}\right)+{\textrm{Pe}}\left[\frac{1}{{r}}\frac{\partial}{\partial{r}}({r}{u}\Lambda_{1})+\frac{\partial}{\partial{z}}\left({v}\Lambda_{1}\right)\right]&=\frac{1}{{r}}\frac{\partial}{\partial{r}}\left({r}\frac{\partial\Lambda_{0}}{\partial{r}}\right)+\frac{\partial^{2}\Lambda_{2}}{\partial{z}^{2}},
\label{eq:dispersion_order_epsilon^2-1}\\
\left.\frac{\partial\Lambda_{2}}{\partial{z}}\right|_{{z}=0}=\left.\frac{\partial\Lambda_{2}}{\partial{z}}\right|_{{z}=h}&=0.\end{aligned}$$
Equations (\[eq:form of C1-1\]) and (\[eq:dC0dt1-1\]) give $$\frac{\partial\Lambda_{1}}{\partial{t}_{1}}=0.$$ Taking the cross-sectional average of Eq. (\[eq:dispersion\_order\_epsilon\^2-1\]), we have $$\begin{gathered}
\alpha^{2}\sigma\left(\frac{\partial\left\langle \Lambda_{2}\right\rangle }{\partial{t}_{0}}+\frac{\partial\Lambda_{0}}{\partial{t}_{2}}\right)+{\textrm{Pe}}\left[\left\langle \frac{1}{{r}}\frac{\partial}{\partial{r}}\left({r}{u}\Lambda_{1}\right)\right\rangle + \left({v}\Lambda_{1}\right)\Big|_{{z}=0}^{{z}=1}\right]
=\frac{1}{{r}}\frac{\partial}{\partial{r}}\left({r}\frac{\partial\Lambda_{0}}{\partial{r}}\right)+{\left\langle \frac{\partial^{2}\Lambda_{2}}{\partial{z}^{2}}\right\rangle }.
\label{eq:dispersion_order_epsilon^22-1}\end{gathered}$$
It can be shown that $$\begin{aligned}
\left\langle \frac{1}{{r}}\frac{\partial}{\partial{r}}\left({r}{u}\Lambda_{1}\right)\right\rangle =\frac{\partial}{\partial{r}}\left\langle {u}\Lambda_{1}\right\rangle +\frac{1}{{r}}\left\langle {u}\Lambda_{1}\right\rangle, \\
\left\langle \frac{\partial^{2}\Lambda_{2}}{\partial{z}^{2}}\right\rangle =\left.\frac{\partial\Lambda_{2}}{\partial{z}}\right|_{{z}=0}^{{z}=1}=0,\end{aligned}$$ then Eq. (\[eq:dispersion\_order\_epsilon\^22-1\]) can be written as $$\alpha^{2}\sigma\left(\frac{\partial\left\langle \Lambda_{2}\right\rangle }{\partial{t}_{0}}+\frac{\partial\Lambda_{0}}{\partial{t}_{2}}\right)+{\textrm{Pe}}\left[\frac{\partial}{\partial{r}}\left\langle {u}\Lambda_{1}\right\rangle +\frac{1}{{r}}\left\langle {u}\Lambda_{1}\right\rangle +\left({v}\Lambda_{1}\right)\Big|_{{z}=0}^{{z}=1}\right]=\frac{1}{{r}}\frac{\partial}{\partial{r}}\left({r}\frac{\partial\Lambda_{0}}{\partial{r}}\right).
\label{eq:dispersion_order_epsilon^22-2}$$ Taking the time average of Eq. (\[eq:dispersion\_order\_epsilon\^22-2\]), we have $$\alpha^{2}\sigma\frac{\partial\Lambda_{0}}{\partial{t}_{2}}+{\textrm{Pe}}\left[\frac{\partial}{\partial{r}}\left\langle \overline{{u}\Lambda_{1}}\right\rangle +\frac{1}{{r}}\left\langle \overline{{u}\Lambda_{1}}\right\rangle +\overline{{v}\Lambda_{1}}\Big|_{{z}=0}^{{z}=1}\right]=\frac{1}{{r}}\frac{\partial}{\partial{r}}\left({r}\frac{\partial\Lambda_{0}}{\partial{r}}\right)\label{eq:epsilon^2_stavg-1}.$$ Additionally, from Eqs. (\[eq:velocity form\]) and (\[eq:form of C1-1\]), we reach $$\begin{aligned}
\left\langle \overline{{u}\Lambda_{1}}\right\rangle &=\frac{{r}^{2}}{4}\frac{\partial\Lambda_{0}}{\partial{r}}\Re\left\langle {f}'({z})^{*}B_{{\rm w}}({z})\right\rangle, \label{eq:epsilon^2_vC1_avg-1}\\
\left.\overline{{v}\Lambda_{1}}\right|_{{z}=0}^{{z}=1}&=-\frac{{r}}{2}\frac{\partial\Lambda_{0}}{\partial{r}}\Re\left\{ {\rm i}B_{{\rm w}}\left(1\right)\right\}. \label{epsilon^2_uC1_avg-1}\end{aligned}$$ Substituting Eqs. (\[eq:epsilon\^2\_vC1\_avg-1\]) and (\[epsilon\^2\_uC1\_avg-1\]) into Eq. (\[eq:epsilon\^2\_stavg-1\]), we have $$\begin{gathered}
\alpha^{2}\sigma\frac{\partial\Lambda_{0}}{\partial{t}_{2}}=\frac{\partial^{2}\Lambda_{0}}{\partial{r}^{2}}\left[1-\frac{{r}^{2}}{4}\Re\left\langle {f}'({z})^{*}B_{{\rm w}}({z})\right\rangle {\textrm{Pe}}\right] +
\frac{\partial\Lambda_{0}}{\partial{r}}\left[
\frac{1}{{r}}\left(1-\frac{{r}^{2}}{4}\Re\left\langle {f}'({z})^{*}B_{{\rm w}}({z})\right\rangle {\textrm{Pe}}\right) \right.\\ \left. -\frac{{r}}{2}\Re\left\langle {f}'({z})^{*}B_{{\rm w}}({z})\right\rangle {\textrm{Pe}}+\frac{{r}}{2}\Re\left\{ {\rm i}B_{{\rm w}}(1)\right\} {\textrm{Pe}}\right].
\label{eq:dispersion_final-1}\end{gathered}$$
Introducing the dimensionless effective diffusivity, velocity and reaction terms shown in Eq. (\[eq:DUSeff\]), Eq. (\[eq:dispersion\_final-1\]) can be arranged into Eq. (\[eq:final\_PDE\_t2(2)\]).
Derivation of the component-wise momentum equations {#app:DDt}
===================================================
The material derivative, assuming axisymmetry, is $$\frac{\mathrm{D}}{\mathrm{D}\hat{t}}=\frac{\partial}{\partial \hat{t}}+\hat{u}\frac{\partial}{\partial \hat{r}}+\hat{v}\frac{\partial}{\partial \hat{z}}.$$ Using the dimensionless variables from Eq. (\[eq:nond\_notation\]), we have $$\ensuremath{\frac{\mathrm{D}}{\mathrm{D}t}=\frac{\partial}{\partial t}+\delta u\frac{\partial}{\partial r}+\delta v\frac{\partial}{\partial z}}.$$ Neglecting the small terms of $\mathcal{O}(\delta)$, we obtain $$\frac{\mathrm{D}}{\mathrm{D}\hat{t}} \simeq \frac{\partial}{\partial \hat{t}}\,.
\label{eq:materialD}$$ Next, combining Eq. (\[eq:maxwell\]) and Eq. (\[eq:momentum\_0\]) and acknowledging Eq. (\[eq:materialD\]), we have $$\hat{\lambda}_0\hat{\rho}\frac{\partial^{2}\hat{\bm{u}}}{\partial \hat{t}^{2}}+\hat{\lambda}_0\frac{\partial}{\partial \hat{t}}\left(\nabla \hat{p}\right)+\hat{\rho}\frac{\partial\hat{\bm{u}}}{\partial \hat{t}}=-\nabla \hat{p}+\hat{\mu}\nabla^{2}\hat{\bm{u}}.
\label{eq:momentum_dim}$$ From Eq. (\[eq:momentum\_dim\]), we obtain the component-wise momentum equations given in Eqs. (\[eq:momentum1\_dim\]) and (\[eq:momentum2\_dim\]).
Energy functional {#app:energy}
=================
In the fully periodic regime, the dimensionless pressure is of the form $${p}=\Re\left\{ {p}_{0}({r},{z})\mathrm{e}^{\mathrm{i}{t}}\right\}. \label{eq:dim_p}$$ Substituting Eqs. (\[eq:velocity form\]) and (\[eq:dim\_p\]) into Eqs. (\[eq:dimless\_cont(a)\]) at leading order in $\epsilon$, we have $$\left(1+{\mathrm{i}}\,{\textrm{De}}\right)\frac{\partial{p}_{0}}{\partial{r}}=\left({\textrm{De}}-{\mathrm{i}}\right)\alpha^{2}{\frac{r}{2}}{f}^{\prime}({z})+{\frac{r}{2}}{f}^{\prime\prime\prime}({z}),\label{eq:3}$$ Integrating Eq. (\[eq:3\]), we obtain $${p}_{0}({r},{z})=\frac{{r}^{2}}{4\left(1+{\mathrm{i}}\,{\textrm{De}}\right)}\left[\left({\textrm{De}}-{\mathrm{i}}\right)\alpha^{2}{f}^{\prime}({z})+{f}^{\prime\prime\prime}({z})\right]+G(z),\label{eq:p_0_0}$$ where $G\left(z\right)$ is an arbitrary function of integration. We define an *excess* pressure as $$\tilde{p}=\Re\left\{ \tilde{p}_{0}({r},{z})\mathrm{e}^{\mathrm{i}{t}}\right\}, \label{eq:dim_p4}$$ where the amplitude $\tilde{p}_0$ is relative to the background pressure amplitude $\left.p_0\right|_{r=1}$, i.e., $$\tilde{p}_0=p_0-\left.p_0\right|_{r=1}.\label{eq:excess_p}$$ Substituting Eq. (\[eq:p\_0\_0\]) into Eq. (\[eq:excess\_p\]), we obtain $$\tilde{p}_{0}({r},{z})=\frac{{r}^{2}}{4\left(1+{\mathrm{i}}\,{\textrm{De}}\right)}\left[\left({\textrm{De}}-{\mathrm{i}}\right)\alpha^{2}{f}^{\prime}({z})+{f}^{\prime\prime\prime}({z})\right]+G_2(z)\label{eq:p_0},$$ where $G_2=G-\left.p_0\right|_{r=1}$. Solving for $G_2$ with the boundary condition of $\left.\tilde{p}_0\right|_{r=1}=0$, we obtain $$\tilde{p}_0(r,z)=\frac{r^{2}-1}{4\left(1+{\mathrm{i}}\,{\textrm{De}}\right)}\left[\left({\textrm{De}}-{\mathrm{i}}\right)\alpha^{2}f^{\prime}(z)+f^{\prime\prime\prime}(z)\right].
\label{eq:ex_p}$$
From Eq. (\[eq:maxwell\]), the component-wise equation for the stress tensor $\hat{\tau}_{zz}$ can be expressed as $$\hat{\lambda}_{0}\left(\frac{\mathrm{\partial}\hat{\tau}_{zz}}{\mathrm{\partial}\hat{t}}\right)=\hat{\mu}\left(\nabla\hat{v}+\nabla\hat{v}^{T}\right)-\hat{\tau}_{zz}=2\hat{\mu}\frac{\partial\hat{v}}{\partial\hat{z}}-\hat{\tau}_{zz}.$$ With the non-dimensionalization in Eq. (\[eq:nond\_notation\]) and $\hat{\tau}_{zz}={\tau}_{zz}{\hat{\mu} \hat{a}\hat{\omega}}/{\epsilon^{3}\hat{R}}$, we have $$\frac{{\textrm{De}}}{\epsilon^{2}}\frac{\mathrm{\partial}{\tau}_{zz}}{\mathrm{\partial}{t}}=2\frac{\partial{v}}{\partial{z}}-\frac{1}{\epsilon^{2}}{\tau}_{zz}.$$ By writing ${\tau}_{zz}=\Re\left\{ {\tau}_{0,zz}(z)\mathrm{e}^{\mathrm{i}{t}}\right\} $, we obtain $${\tau}_{zz}=-\Re\left\{ \frac{2\epsilon^{2}}{1+{\mathrm{i}}\,{\textrm{De}}}{f}^{\prime}({z})\mathrm{e}^{\mathrm{i}{t}}\right\}. \label{eq:tau}$$
The scaled normal force on the top plate is found to be (see [@phan-thien_viscoelastic_1983]): $${F}_\mathrm{T}=2\pi\int_{0}^{1}{\sigma}_{zz}\,{r}\,\mathrm{d}{r}=2\pi\int_{0}^{1}\left.\left({\tau}_{zz}-{p}\right)\right|_{{z}=1}{r}\,\mathrm{d}{r}.
\label{eq:dim_F}$$ Recalling that ${f}^{\prime}(1)=0$ from Eq. (\[eq:BC of 1wall\]), Eq. (\[eq:dim\_F\]) can be simplified as $${F}_\mathrm{T}=-2\pi\int_{0}^{1}\left.{p}\right|_{{z}=1}{r}\,\mathrm{d}{r}.
\label{eq:dim_F2}$$ We define an excess normal force $$\Tilde{F}_\mathrm{T}=-2\pi\int_{0}^{1}\left.\Tilde{p}\right|_{{z}=1}{r}\,\mathrm{d}{r}.
\label{eq:dim_F3}$$ Taking $\Tilde{F}_\mathrm{T}=\Re\left\{ \Tilde{F}_\mathrm{0,T}\mathrm{e}^{\mathrm{i}{t}}\right\}$ and substituting Eq. (\[eq:dim\_p4\]) into Eq. (\[eq:dim\_F3\]), we obtain $$\Tilde{F}_\mathrm{0,T}=\frac{\pi f^{\prime\prime\prime}(1)}{8\left(1+{\mathrm{i}}\,{\textrm{De}}\right)}.
\label{eq:F_0}$$
Hence, the power needed to drive the upper plate is $$\tilde{W}_\mathrm{T}=\frac{1}{2\pi}\int_{0}^{2\pi}\left.v\right|_{z=1}\tilde{F}_{T}\,\mathrm{d}t=-\Re\left\{ \frac{{\mathrm{i}}\pi f^{\prime\prime\prime}(1)}{16\left(1+{\mathrm{i}}\,{\textrm{De}}\right)}\right\}.$$
[^1]: Author to whom correspondence should be addressed.
---
abstract: 'In this article, we present [FastLinAlg.m2]{}, a package in *Macaulay2* designed to introduce new methods focused on computations in function field linear algebra. Some key functionality that our package offers includes: finding a submatrix of a given rank in a provided matrix (when present), verifying that a ring is regular in codimension n, recursively computing the ideals of minors in a matrix, and finding an upper bound of the projective dimension of a module.'
address: 'Department of Mathematics, University of Utah, 155 S 1400 E Room 233, Salt Lake City, UT, 84112'
author:
- Boyana Martinova
- Marcus Robinson
- Karl Schwede
- Yuhui Yao
bibliography:
- 'MainBib.bib'
title: '[FastLinAlg]{} package for *Macaulay2*'
---
[^1]
Introduction
============
We start with some motivation. Suppose that $I = (f_1, \dots, f_m) \subseteq k[x_1, \dots, x_n]$ is a prime ideal. The corresponding variety $X := V(I)$ is nonsingular if and only if $I$ plus the ideal generated by the determinants of the $(n-\dim X) \times (n-\dim X)$ submatrices of the Jacobian matrix $$\mathrm{Jac}(X) = \left( \begin{array}{c} {\partial f_i \over \partial x_j}\end{array}\right),$$ generate the unit ideal. Unfortunately, even for relatively small values of $m$ and $n$, the number of such submatrices is prohibitive. Suppose for instance that $n = 10, m = 15$ and $\dim X = 5$. Then there are $${10 \choose 5} \cdot {15 \choose 5} = 756756$$ such submatrices.
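(As a quick sanity check, this count can be reproduced with a one-line computation, written here in Python.)

    from math import comb

    n, m, dimX = 10, 15, 5
    size = n - dimX                       # size of the minors needed
    print(comb(m, size) * comb(n, size))  # 756756 submatrices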
We cannot reasonably compute all of their determinants (this computation is often time intensive). This package attempts to fix this in several ways.
1. We try to compute just a portion of the determinants, in a relatively smart way.
2. We offer some tools for computing determinants which are sometimes faster.
Of course, computing the singular locus is not the only potential application. This technique has been applied to the related problem of showing that the singular locus has a certain codimension (for example, checking that a variety is R1 in order to prove normality). We provide a function for giving a better upper bound on projective dimension of a non-homogeneous module. Finally, this package has also been applied in the [RationalMaps]{} Macaulay2 package.
We provide the following functions:
- [getSubmatrixOfRank]{}, this tries to find a submatrix of a given rank.
- [[isRankAtLeast]{}]{}, this uses [getSubmatrixOfRank]{} to try to find lower bounds for the rank of a matrix.
- [Rn]{}, tries to verify if an integral domain is regular in codimension n.
- [projDim]{}, tries to find upper bounds for the projective dimension of a non-homogeneous module.
- [recursiveMinors]{}, computes the ideal of minors of a matrix via a recursive cofactor algorithm, as opposed to the included non-recursive cofactor algorithm.
The latest version of this package is available at:
[https://github.com/kschwede/M2/blob/master/M2/Macaulay2/packages/FastLinAlg.m2]{}
Acknowledgements: {#acknowledgements .unnumbered}
-----------------
The authors thank David Eisenbud, Eloisa Grifo and Zhuang He for valuable conversations.
Finding interesting submatrices {#sec.FindingInterestingSubmatrices}
===============================
A lot of the speedups available in the package come down to finding interesting square submatrices of a given matrix. For example, it is often useful to compute a square submatrix whose determinant has small degree as this is less likely to vanish.
How are the submatrices chosen?
-------------------------------
Consider the following matrix defined over $\mathbb{Q}[x,y]$. $$\left[\begin{array}{ccc}
x & xy & 0 \\
xy^2 & x^6 & 0 \\
0 & x^2 y^3 & xy^4
\end{array} \right]$$ Suppose we want to choose a submatrix of size $2 \times 2$. Consider the monomial order [Lex]{} where $x < y$. We find, in the matrix, the nonzero element of smallest order. In this case, that is $x$. We choose this element to be in part of our submatrix. $$\left[\begin{array}{>{\columncolor{red!20}}ccc}
\rowcolor{red!20}
\cellcolor{red!40}{\bf x} & xy & 0 \\
xy^2 & x^6 & 0 \\
0 & x^2 y^3 & xy^4
\end{array}\right]$$ Hence we discard that row and column containing this term and find that the next smallest element with respect to our monomial order is $xy^4$. $$\left[\begin{array}{cc}
x^6 & 0 \\
x^2 y^3 & x y^4
\end{array}\right]
\;\;\;\;
\left[\begin{array}{c>{\columncolor{red!20}}c}
x^6 & 0 \\
\rowcolor{red!20}
x^2 y^3 & \cellcolor{red!40} {\bf x y^4}
\end{array}\right]$$ Since we are only looking for a $2 \times 2$ submatrix, we stop here. We have selected the submatrix with rows $0$ and $2$ and columns $0$ and $2$. $$\left[\begin{array}{ccc}
x & 0 \\
0 & x y^4 \\
\end{array} \right]$$ Its determinant is $x^2 y^4$. This happens to be the smallest $2 \times 2$ minor with respect to the given monomial order (which frequently happens, although it is certainly not always the case).
If we chose a different monomial order, we get a different submatrix, with a different determinant.
For example,
[Lex]{}, $x > y$
: This yields the submatrix with rows $0$ and $1$ and columns $0$ and $1$, whose determinant is $x^7 - x^2 y^3$
[GRevLex]{}, $x < y$
: This yields the submatrix with rows $0$ and $2$ and columns $0$ and $1$, whose determinant is $x^3 y^3$
For any of these strategies, in this package we randomize the order of the variables before choosing a submatrix.
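The following short Python sketch (not part of the package, which is written in Macaulay2) illustrates the greedy selection loop just described: repeatedly pick the smallest remaining nonzero entry under a chosen ordering, then discard its row and column. The ordering function passed to it is a stand-in for the monomial order, and ties are broken randomly. In the toy run the entries of the $3 \times 3$ example above are encoded as pairs of exponents and compared by total degree, which is only a crude substitute for the orders used in the package.

    import random

    def greedy_submatrix(M, size, key):
        rows, cols = list(range(len(M))), list(range(len(M[0])))
        chosen_rows, chosen_cols = [], []
        for _ in range(size):
            candidates = [(i, j) for i in rows for j in cols if M[i][j] != 0]
            if not candidates:
                return None                     # no nonzero entry left
            random.shuffle(candidates)          # ties broken randomly
            i, j = min(candidates, key=lambda ij: key(M[ij[0]][ij[1]]))
            chosen_rows.append(i); chosen_cols.append(j)
            rows.remove(i); cols.remove(j)      # discard the chosen row and column
        return chosen_rows, chosen_cols

    # entries of the example matrix encoded as (deg_x, deg_y); 0 denotes a zero entry
    M = [[(1, 0), (1, 1), 0],
         [(1, 2), (6, 0), 0],
         [0, (2, 3), (1, 4)]]
    print(greedy_submatrix(M, 2, key=lambda e: e[0] + e[1]))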
The strategies we implement in this package are a bit more complicated, however. If we have a matrix whose entries are not monomial, then we could reasonably either pick the submatrix of smallest entries with respect to our monomial order ([LexSmallest]{} or [GRevLexSmallest]{}), or the submatrix whose entries have the smallest terms ([LexSmallestTerm]{} or [GRevLexSmallestTerm]{}). Both of these strategies are implemented.
For example, consider the matrix $$\left[\begin{array}{ccc}
x^2 + y^2 & 0 & xy + 2x \\
y^4 - x & 0 & 3 x^5 \\
x^3 & x^4 y^5 - y^8 & 0
\end{array} \right]$$ In this case, if we are choosing the entries with smallest terms, we first replace each entry in the matrix with the smallest term. For example, if we are using [Lex]{} with $x < y$ we would obtain: $$\left[\begin{array}{ccc}
x^2 & 0 & 2x \\
-x & 0 & 3 x^5 \\
x^3 & x^4 y^5 & 0
\end{array} \right]$$ Then we proceed as before. Notice that if there is a tie, it is broken randomly. We also have the strategies [GRevLexLargest]{} and [LexLargest]{} available to the user.
Finally, we have two random strategies available. [Random]{} and [RandomNonzero]{}.
[Random]{}
: With this strategy, a random submatrix is chosen.
[RandomNonzero]{}
: With this strategy, a random nonzero element is chosen in each step, following the method used by the other strategies. This guarantees a submatrix where no row or column is zero, which can be very useful when dealing with relatively sparse matrices.
Different strategies work differently on different examples. When you have a non-homogeneous matrix, with some entries that have constant terms, those entries will always be chosen first when picking smallest terms, regardless of the monomial order. On the other hand, for homogeneous matrices, choosing the smallest term is frequently very effective.
### Modifying the underlying matrix when using [GRevLex\*]{}
Finally, when using [GRevLex\*]{} orders, we periodically change the underlying matrix by replacing terms of small order with terms of larger order in order to avoid computing the same submatrix. For example, in the following matrix, after a couple iterations, we might replace the $x^2$ term with $$x^2 \cdot (\text{a random degree 1 polynomial}).$$ It might look something like the following. $$\left[\begin{array}{ccc}
x^2 & 0 & xy \\
y^4 & 0 & x^5 \\
x^3 & x^4 y^5 & 0
\end{array} \right]
\rightarrow
\left[\begin{array}{ccc}
x^2 (2x-7y) & 0 & xy \\
y^4 & 0 & x^5 \\
x^3 & x^4 y^5 & 0
\end{array} \right].$$ This forces the algorithm to make different choices. After several iterations, the matrix is reset to its original form.
The strategy options
--------------------
The core features included in the package allow the user to choose which strategy should be used when selecting submatrices. This is done by setting a [Strategy]{} option to one of the following.
- [StrategyDefault]{}: This strategy uses [LexSmallest, LexSmallestTerm, GRevLexSmallest, GRevLexSmallestTerm, Random,]{} and [RandomNonzero]{} with equal probability.
- [StrategyDefaultNonRandom]{}: This uses [LexSmallest, LexSmallestTerm, GRevLexSmallest,]{} and [GRevLexSmallestTerm]{} with equal probability, and does not use the random strategies.
- [StrategyLexSmallest]{}: chooses 50% of the submatrices using [LexSmallest]{} and 50% using [LexSmallestTerm]{}.
- [StrategyGRevLexSmallest]{}: chooses 50% of the submatrices using [GRevLexSmallest]{} and 50% using [GRevLexSmallestTerm]{}.
- [StrategyRandom]{}: chooses submatrices by using 50% [Random]{} and 50% [RandomNonzero]{}.
The user can create their own custom strategy. One creates a [HashTable]{} which has the keys [LexLargest, LexSmallestTerm, LexSmallest, GRevLexSmallestTerm, GRevLexSmallest, GRevLexLargest, Random, RandomNonzero]{} each with value an integer (the values need not sum to 100). If one value is twice the size of another, that strategy will be employed twice as often. For example, [StrategyDefaultNonRandom]{} was created by the command:
StrategyDefaultNonRandom = new HashTable from {
LexLargest => 0,
LexSmallestTerm => 25,
LexSmallest => 25,
GRevLexSmallestTerm => 25,
GRevLexSmallest => 25,
GRevLexLargest => 0,
Random => 0,
RandomNonzero => 0
};
Find a submatrix of a given rank: [getSubmatrixOfRank]{} {#sec.GetSubmatrixOfRank}
========================================================
This method examines the submatrices of an input matrix and attempts to find one of a given rank. If a submatrix with the specified rank is found, a list of two lists is returned. The first is the list of rows, the second is the list of columns, which describe the desired submatrix of desired rank. If no such submatrix is found, the function will return [null]{}.
The option [MaxMinors]{} allows the user to control how many minors to consider. If left [null]{}, the number considered is based on the size of the matrix. This method will choose the indicated number of minors using one of the strategy options described above. If one of the chosen submatrices has the desired rank, the function will terminate and return its rows and columns. This process continues until a submatrix is found or [MaxMinors]{} submatrices have been unsuccessfully checked. The strategy can be controlled using the [Strategy]{} option as described above; the default value is [StrategyDefaultNonRandom]{}.
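Schematically, the search can be summarized by the following loop (a Python-style sketch with placeholder helpers, not the actual Macaulay2 implementation):

    def get_submatrix_of_rank(M, n, choose_submatrix, submatrix_rank, max_minors):
        # choose_submatrix / submatrix_rank are placeholders for a strategy from the
        # previous section and for the rank of the selected n x n block, respectively
        for _ in range(max_minors):
            rows, cols = choose_submatrix(M, n)       # pick one candidate submatrix
            if submatrix_rank(M, rows, cols) == n:    # only a small rank computation
                return [rows, cols]
        return None                                   # inconclusive: nothing found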
Examples of [getSubmatrixOfRank]{}
----------------------------------
In the following example, we first create a $3 \times 4$ rational matrix, $M$. We execute two calls to [getSubmatrixOfRank]{}; the first has no Strategy parameter and the second utilizes [StrategyGRevLexSmallest]{}. Note that these calls return different indices, but both find valid rank $3$ submatrices. We then create a larger $9 \times 10$ matrix over a polynomial ring in three variables with coefficients in the integers modulo $103$. We display the time needed for the [rank]{} function to return, followed by the time elapsed during a call to [getSubmatrixOfRank]{} when searching for a rank $7$ submatrix. We repeat these calculations on a new matrix with the same parameters, executing the call to [getSubmatrixOfRank]{} first, where we search for a rank $9$ submatrix, and then calling [rank]{}. In both trials we found that [getSubmatrixOfRank]{} significantly outperformed [rank]{}.
i1 : loadPackage "FastLinAlg";
i2 : R = QQ[x,y];
i3 : M = random(R^{2,2,2},R^4)
o3 = {-2} | x2+2/3xy+9/2y2 3/10x2+2/3xy+1/5y2 2x2+5/3xy+7/5y2 4/3x2+1/3xy+10/9y2 |
{-2} | 3/2x2+2/3xy+2y2 1/2x2+3/2xy+3/4y2 6x2+5xy+4y2 9/5x2+1/5xy+7/2y2 |
{-2} | 1/4x2+1/7xy+5/6y2 7/5x2+4xy+4/5y2 10/9x2+3/7xy+5/9y2 5/2x2+xy+7/6y2 |
3 4
o3 : Matrix R <--- R
i4 : getSubmatrixOfRank(3,M)
o4 = {{2, 0, 1}, {0, 1, 3}}
o4 : List
i5 : getSubmatrixOfRank(3, M, Strategy=>StrategyGRevLexSmallest)
o5 = {{0, 2, 1}, {1, 2, 0}}
o5 : List
i6 : Q = ZZ/103[x,y,z];
i7 : N = random(Q^{7,7,7,7,7,7,7,8,8},Q^10);
9 10
o7 : Matrix Q <--- Q
i8 : elapsedTime rank N;
-- 17.701 seconds elapsed
i9 : elapsedTime getSubmatrixOfRank(7,N);
-- 0.0373561 seconds elapsed
i10 : O = random(Q^{7,7,7,7,7,7,7,8,8},Q^10);
9 10
o10 : Matrix Q <--- Q
i11 : elapsedTime getSubmatrixOfRank(9,O);
-- 0.76357 seconds elapsed
i12 : elapsedTime rank O;
-- 14.6581 seconds elapsed
In one of the core examples from the [RationalMaps]{} package, before using this package a function would typically look at several thousand submatrices (chosen randomly) before finding a submatrix of the desired rank, whereas this package finds one after looking at fewer than half a dozen. Using this package sped up the computation of that example by more than one order of magnitude, see [@BottHassanzadehSchwedeSmolkinRationalMaps Page 7], the non-maximal linear rank example.
Finding lower bounds for matrix ranks: [isRankAtLeast]{} {#sec.IsRankAtLeast}
========================================================
This method is a direct application of [getSubmatrixOfRank]{}. This function returns a boolean indicating whether the rank of an input matrix, $M$, is greater than or equal to an input integer, $n$. In order to do so, the function first performs some basic checks to ensure a rank of $n$ is possible given $M$’s dimensions, then executes a call to [getSubmatrixOfRank]{}. If [getSubmatrixOfRank]{} returns a submatrix, then this function will return true. However, if [getSubmatrixOfRank]{} does not return a submatrix, a conclusive answer cannot be reached. As such, the method will then evaluate the rank of $M$ and return the appropriate boolean value.
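The control flow just described amounts to the following sketch (again with placeholder helpers rather than the package internals):

    def is_rank_at_least(M, n, get_submatrix_of_rank, full_rank):
        # get_submatrix_of_rank and full_rank are placeholders for the routines above
        nrows, ncols = len(M), len(M[0])
        if n > min(nrows, ncols):
            return False                       # a rank of n is impossible for this shape
        if get_submatrix_of_rank(M, n) is not None:
            return True                        # conclusive: a rank-n submatrix was found
        return full_rank(M) >= n               # otherwise fall back to the full rank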
[isRankAtLeast]{} is efficient when [getSubmatrixOfRank]{} returns quickly; however, it may be costly if the results are inconclusive and a rank evaluation is necessary. As such, the described implementation is not optimized. In order to lead to time improvements, we developed a multithreaded version of this function that concurrently evaluates the rank of $M$ and invokes [getSubmatrixOfRank]{}. Once a thread has terminated, the method cancels the other and returns the appropriate value. During the implementation of this functionality, we discovered that *Macaulay2* becomes unstable when cancelling threads and thus we do not currently allow users to invoke the multithreaded version. However, this functionality is included in the package and can be made easily accessible once the stability issue is resolved.
Example of [isRankAtLeast]{}
----------------------------
The following example first creates and displays a smaller $3 \times 3$ matrix, $M$. We call [isRankAtLeast]{} to determine if its rank is at least 3. We then create a larger $9 \times 9$ matrix, $N$, and call [isRankAtLeast]{} to determine if its rank is at least 7. Directly calling [rank N]{} on a matrix of this size would take multiple seconds, whereas [isRankAtLeast]{} returns in a fraction of the time.
i1 : loadPackage "FastLinAlg";
i2 : R = QQ[x,y,z];
i3 : M = random(R^{2,2,2},R^3)
o3 = {-2} | 7/2x2+2/3xy+y2+2/9xz+7/10yz+3/5z2 x2+5xy+1/2y2+2xz+7/9yz+7/5z2
{-2} | 8/7x2+1/10xy+y2+2/3xz+3yz+1/2z2 3x2+7/8xy+1/9y2+3xz+10yz+5z2
{-2} | 1/10x2+5/2xy+3/4y2+2/3xz+3/2yz+10/7z2 3x2+10/7xy+7/2y2+7/10xz+2yz+1/4z2
------------------------------------------------------------------------------
6/7x2+9/7xy+3/10y2+2/5xz+2/5yz+5/4z2 |
2/9x2+5/4xy+3y2+1/2xz+yz+7/8z2 |
8/9x2+3/4xy+y2+10xz+9yz+4/5z2 |
3 3
o3 : Matrix R <--- R
i4 : isRankAtLeast(3,M)
o4 = true
ii5 : rank M
oo5 = 3
i6 : N = random(R^{6,6,6,6,6,6,6,7,7},R^9);
9 9
o6 : Matrix R <--- R
i7 : elapsedTime isRankAtLeast(7,N)
-- 0.0654172 seconds elapsed
o7 = true
Regular in codimension $n$: [Rn]{} {#sec.Rn}
==================================
Using the [getSubmatrixOfRank]{} routines, we provide a function for checking if a variety is regular in codimension $n$, or $Rn$. The default strategy is [Strategy=>Default]{}.
The function [Rn(ZZ, Ring)]{} returns [true]{} if it verifies that the ring is regular in codimension $n$. Note, this only works if the ring is equidimensional, as it is using a Jacobian criterion. If it cannot make a determination, it returns [null]{}. If it ended up computing all minors of the matrix, and it still doesn’t have the desired codimension, it will return [false]{} (note this will likely only occur for small matrices).
Example of [Rn]{}
-----------------
We begin with an example of a 3 dimensional ring that is regular in codimension 1, but not in codimension 2. It is generated by 12 equations in 7 variables.
i3 : T = ZZ/101[x1,x2,x3,x4,x5,x6,x7];
i4 : I = ideal(x5*x6-x4*x7,x1*x6-x2*x7,x5^2-x1*x7,x4*x5-x2*x7,x4^2-x2*x6,x1*x4-x2*x5,
x2*x3^3*x5+3*x2*x3^2*x7+8*x2^2*x5+3*x3*x4*x7-8*x4*x7+x6*x7,x1*x3^3*x5+3*x1*x3^2*x7
+8*x1*x2*x5+3*x3*x5*x7-8*x5*x7+x7^2,x2*x3^3*x4+3*x2*x3^2*x6+8*x2^2*x4+3*x3*x4*x6
-8*x4*x6+x6^2,x2^2*x3^3+3*x2*x3^2*x4+8*x2^3+3*x2*x3*x6-8*x2*x6+x4*x6,x1*x2*x3^3
+3*x2*x3^2*x5+8*x1*x2^2+3*x2*x3*x7-8*x2*x7+x4*x7,x1^2*x3^3+3*x1*x3^2*x5+8*x1^2*x2
+3*x1*x3*x7-8*x1*x7+x5*x7);
o4 : Ideal of T
i5 : S = T/I; dim S
o6 = 3
i7 : time Rn(1, S)
-- used 0.150734 seconds
o7 = true
i8 : time Rn(2, S)
-- used 2.12777 seconds
i9 : time singularLocus S;
-- used 8.29746 seconds
i10 : time dim o9
-- used 23.2483 seconds
o10 = 1
You can see that the function [Rn]{} verified that $S$ was regular in codimension 1 in a fraction of a second. When we ran [Rn(2, S)]{}, nothing was returned, indicating that nothing could be determined. Computing the singular locus, however, took more than 8 seconds, and verifying that it has dimension 1 took more than 23 seconds.
The [Strategy]{} option
-----------------------
Let us look in slightly more detail at the same example but this time using some different strategies. For instance, you might think that it might be just as effective to choose random submatrices, and sometimes it is.
i11 : time Rn(1, S, Strategy=>StrategyRandom, Verbose=>true)
Rn: ring dimension =3, there are 17325 possible minors, we will compute up to 324 of them.
Rn: About to enter loop
Rn: Loop step, about to compute dimension. Submatrices considered: 7, and computed = 7
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 9, and computed = 9
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 11, and computed = 11
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 14, and computed = 14
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 18, and computed = 18
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 24, and computed = 24
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 31, and computed = 31
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 40, and computed = 40
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 52, and computed = 52
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 67, and computed = 67
Rn: partial singular locus dimension computed, = 2
Rn: Loop step, about to compute dimension. Submatrices considered: 87, and computed = 87
Rn: partial singular locus dimension computed, = 1
Rn: Loop completed, submatrices considered = 87, and computed = 87.
singular locus dimension appears to be = 1
-- used 0.758961 seconds
o11 = true
You can see that the [StrategyRandom]{} option looked at 87 submatrices in this particular example. Note that it does not check whether we have obtained the desired outcome after each submatrix considered; it does this periodically, with the space between checks increasing. The [considered]{} values on each line tell how many submatrices have been selected. Computed tells how many were not repeats (with a random strategy the number of repeats will be very low).
Running [Rn(1, S, Strategy=>StrategyRandom, Verbose=>true)]{} 50 times yielded:
1. 61.3 average number of submatrices considered.
2. a median value of 40 or 52 submatrices considered.
3. a minimum value of 7 submatrices considered (one time).
4. a maximum value of 248 submatrices considered (one time).
Note 7 is the minimum number of submatrices that will be considered when checking regular in codimension 1 for this variety (the minimum number depends on the dimension).
On the other hand, the default strategy [Rn(1, S, Strategy=>StrategyDefaultNonRandom, Verbose=>true)]{} run 50 times yields
1. 12.1 average number of submatrices considered.
2. a median value of 7 or 9 submatrices considered.
3. a minimum value of 7 submatrices considered (25 times).
4. a maximum value of 40 submatrices considered (one time).
In this particular example, [Strategy=>StrategyLexSmallest]{} appears to be optimal. Note that larger matrices tend to create even larger disparities.
Notes on implementation
-----------------------
As mentioned above, this function computes minors (based on the passed [Strategy]{} option) until either it finds that the singular locus has the desired dimension, or until it has considered too many minors. By default, it considers up to: $$10\cdot (\text{Minimum number of minors needed}) + 8 \cdot \log_{1.3}(\text{possible minors}).$$ This was simply chosen by experimentation. Note that if you are trying to show your singular locus has a certain codimension, you will need a minimum number of minors. The 10 multiplying it is because our default strategy uses multiple strategies, but only one might work well on a given matrix. You can provide an alternate function of $x = (\text{minors needed})$ and $y = (\text{possible minors})$ by passing it to the option [MaxMinors]{}. You can also pass [MaxMinors]{} a number.
These matrices are considered in a loop. We begin by computing a constant number of minors, by default $2 \cdot (\text{Minimum number of minors needed}) + 3$, and check whether the output has the right dimension. You can provide a different function of $x = (\text{minors needed})$ via the option [MinMinorsFunction]{}. After that, we compute additional minors, checking periodically (based on an exponential function, $1.3^k$ minors considered before the next check) whether our minors define a subset of the desired codimension. You can provide your own function via the option [CodimCheckFunction]{}. If, in this loop, a submatrix is considered again, it is not recomputed, but the counter is still increased.
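For reference, the default budget functions described above can be written out as follows (the function names below are ours, chosen for illustration; in the package these defaults are supplied via the corresponding options):

    import math

    def default_max_minors(minors_needed, possible_minors):
        # default bound on how many submatrices are considered in total
        return 10 * minors_needed + 8 * math.log(possible_minors, 1.3)

    def default_min_minors(minors_needed):
        # default size of the initial batch computed before the first check
        return 2 * minors_needed + 3

    # Between dimension checks, the number of considered submatrices grows roughly
    # geometrically with ratio 1.3 (cf. the Verbose output above: 7, 9, 11, 14, ...).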
Other options
-------------
This function also includes other options, including the option [ModP]{}, which switches the coefficient field to a field of characteristic $p > 0$ (which you specify with [ModP => p]{}).
One can also control how determinants are computed with the [DetStrategy]{} option, valid values are [Bareiss]{}, [Cofactor]{} and [Recursive]{}.
Projective dimension: [projDim]{} {#sec.ProjDim}
=================================
In April of 2019, it was pointed out in a thread on github
https://github.com/Macaulay2/M2/issues/936
that the command [pdim]{} sometimes provides an incorrect value (an overestimate) for projective dimension for non-homogeneous modules over polynomial rings. There it was also suggested that this could be addressed by looking at appropriate minors of the matrices in a possibly non-minimal resolution, but that in practice these matrices have too many minors to compute. We have implemented a function [projDim]{} that tries to address this by looking at only *some* minors. Our function does not solve the problem as it also gives only an upper bound on the projective dimension. However, this upper bound is more frequently correct.
The idea is as follows. Take a free resolution of a module $M$ over a polynomial ring $R$ $$\xymatrix{
0 & \ar[l] M & \ar[l] F_0 & \ar[l]_{d_1} F_1 & \ar[l] \dots & \ar[l]_{d_{n-1}} F_{n-1} & \ar[l]_{d_n} F_n & \ar[l] 0.
Each $d_i$ is given by a matrix. The term $F_n$ is unnecessary (i.e., $d_n$ splits) exactly when the $(\rank F_n)$-minors of $d_n$ generate the unit ideal. In that case, we know our projective dimension is at most $n-1$. However, we can continue in this way: we compute the $(\rank F_{n-1} - \rank F_n)$-minors of $d_{n-1}$, and see if they generate the unit ideal. Our algorithm of course only computes a subset of those minors.
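The resulting walk along the resolution can be sketched as follows; the helper that checks whether the chosen minors generate the unit ideal is a placeholder for the (partial) minor computation and ideal-membership check performed in Macaulay2.

    def proj_dim_upper_bound(matrices, minor_sizes, unit_ideal_check):
        # matrices = [d_1, ..., d_n]; minor_sizes[i] = size of the minors to test for
        # d_{i+1}, e.g. rank F_n for d_n and rank F_{n-1} - rank F_n for d_{n-1}.
        # unit_ideal_check(d, size) is a placeholder, not a package function.
        bound = len(matrices)
        for d, size in zip(reversed(matrices), reversed(minor_sizes)):
            if not unit_ideal_check(d, size):
                break                      # cannot certify that this map splits
            bound -= 1                     # the last free module was unnecessary
        return bound                       # an upper bound on the projective dimension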
Example of [projDim]{}
----------------------
In the below example, we take a monomial ideal of projective dimension 2, compute a non-homogeneous change of coordinates, and observe that [pdim]{} returns an incorrect answer that [projDim]{} corrects.
i1 : R = QQ[x,y,z,w];
i2 : I = ideal(x^4,x*y,w^3, y^4);
i3 : pdim module I
o3 = 2
i4 : f = map(R, R, {x+x^2+1, x+y+1, z+z^4+x-2, w+w^5+y+1});
i5 : pdim module f I
o5 = 3
i6 : time projDim module f I
-- used 3.43851 seconds
o6 = 2
i7 : time projDim(module f I, MinDimension=>2)
-- used 0.0503165 seconds
o7 = 2
Options
-------
As you can see in the previous example, setting [MinDimension]{} can substantially speed up the computation, as it won’t try to determine if the projective dimension is actually $1$.
The option [MaxMinors]{} can either be set to be a number (the number of minors computed at each step), or it can be set to be a list of numbers (one for each step in the above algorithm). Finally, it can be set to be a function of the dimension $d$ of the polynomial ring $R$ and the number $t$ of possible minors. This is the default option, and the function is: $5*d + 2*\log_{1.3}(t)$. The option [Strategy]{} is also available and it works as above with the default value being [StrategyDefault]{}.
Computing ideals of minors: [recursiveMinors]{} {#sec.RecursiveMinors}
===============================================
*Macaulay2* is unique because it contains a [minors]{} method that returns the ideal of minors of a certain size, $n$, in a given matrix, a necessary step in locating singularities. However, the current implementation’s default is to evaluate determinants using the Bareiss algorithm, which is efficient when the entries in the matrix have a low degree and few variables, but very slow otherwise. The current minors method also allows users to compute determinants using cofactor expansion, but this strategy performs some unnecessary calculations, causing it to be quite costly as well. We improved the current cofactor expansion method to find the determinants of minors by adding recursion and multithreading throughout. We also eliminated said unnecessary calculations by ensuring that only the required determinants are being computed at each step of the recursion, rather than all possible determinants of the given size.
In order to do so, we programmed a method in *Macaulay2*’s software that recursively finds all $n \times n$ minors by first computing the $2 \times 2$ minors and storing them in a hash table. Then we use the $2 \times 2$ minors to compute the necessary $3 \times 3$ minors, and so forth. This process is repeated recursively until the minors of size $n \times n$ are evaluated. At each step, we only compute the determinants that will be needed when performing a cofactor expansion on the following size minor.
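The following is a minimal Python sketch of this recursive scheme over the integers (the package works over polynomial rings and in parallel); for simplicity it builds *all* minors of each intermediate size, whereas the package only computes the ones needed at the next level.

    from itertools import combinations

    def recursive_minors(M, n):
        rows, cols = range(len(M)), range(len(M[0]))
        # size-1 "minors" are just the entries, keyed by their row and column sets
        table = {((i,), (j,)): M[i][j] for i in rows for j in cols}
        for k in range(2, n + 1):
            new_table = {}
            for rs in combinations(rows, k):
                for cs in combinations(cols, k):
                    # cofactor expansion along the first chosen row, reusing the
                    # stored (k-1)-minors from the previous level
                    det, sign = 0, 1
                    for t, c in enumerate(cs):
                        sub = cs[:t] + cs[t + 1:]
                        det += sign * M[rs[0]][c] * table[(rs[1:], sub)]
                        sign = -sign
                    new_table[(rs, cs)] = det
            table = new_table
        return list(table.values())

    print(recursive_minors([[1, 2, 3], [4, 5, 6], [7, 8, 10]], 3))  # prints [-3]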
To allow for further time improvements, we also utilized *Macaulay2*’s existing parallel programming methods to multithread our code so different computations at each step of the recursion can occur simultaneously in separate threads. We divide the list of all determinants to be evaluated into different available threads and wait for them to finish before consolidating the results in a hash table and proceeding with the recursion. In order to more effectively utilize *Macaulay2*’s multithreading methods, we also created a nanosleep method that waits a given number of nanoseconds, rather than full seconds. This function has already been incorporated into the software.
Example of [recursiveMinors]{}
------------------------------
Below, we first create a simple matrix, $M$, of univariate polynomials with rational coefficients and execute the [recursiveMinors]{} method to find the ideal of all $3 \times 3$ minors. As can be seen, the result is equivalent to the output of the [minors]{} method when called with the same parameters. We then create a new, larger matrix, $N$, whose entries are polynomials in two variables with rational coefficients, and return the computation time for [recursiveMinors]{} and [minors]{} utilizing both the Bareiss and Cofactor strategies. The [recursiveMinors]{} method finished executing approximately six times faster than the Bareiss algorithm and almost seven times faster than the Cofactor expansion, while yielding the same results.
i1 : loadPackage "FastLinAlg";
i2 : allowableThreads = 8
i3 : R = QQ[x];
i4 : M = random(R^{2,2,2}, R^4)
o4 = {-2} | x2 3x2 5/8x2 7/10x2 |
{-2} | 3/4x2 2x2 7/4x2 9x2 |
{-2} | x2 2/9x2 1/2x2 4/3x2 |
3 4
o4 : Matrix R <--- R
i5 : recursiveMinors(3,M)
1403 6 449 6 292 6 517 6
o5 = ideal (----x , ---x , - ---x , ---x )
60 240 45 144
o5 : Ideal of R
i6 : recursiveMinors(3,M) == minors(3,M)
o6 = true
i7 : Q = QQ[x,y];
i8 : N = random(Q^{5,5,5,5,5,5}, Q^7);
6 7
o8 : Matrix Q <--- Q
i9 : elapsedTime minors(5,N, Strategy => Bareiss);
-- 3.0094 seconds elapsed
o9 : Ideal of Q
i10 : elapsedTime minors(5,N, Strategy => Cofactor);
-- 3.76846 seconds elapsed
o10 : Ideal of Q
i11 : elapsedTime recursiveMinors(5,N);
-- 0.590152 seconds elapsed;
o11 : Ideal of Q
i12 : recursiveMinors(5,N) == minors(5,N)
o12 = true
[^1]: Martinova was supported by a University of Utah Mathematics REU fellowship and by the University of Utah ACCESS program. Robinson was supported by NSF RTG grant \#1840190. Schwede was supported by NSF CAREER grant \#1501102 and NSF grant \#1801849. Yao was supported by a University of Utah Mathematics REU fellowship.
---
author:
- Leonardo Gutiérrez Gómez
- 'Jean-Charles Delvenne'
bibliography:
- 'graph\_class.bib'
title: 'Unsupervised Network Embedding for Graph Visualization, Clustering and Classification'
---
Introduction {#introduction .unnumbered}
============
Numerous complex systems in social, medical, biological and engineering sciences can be studied under the framework of networks. Network models are often analyzed at the node/edge or substructure level, studying the interaction among entities, identifying groups of nodes behaving similarly or finding global and local connectivity patterns among a given network. Furthermore, many real life challenges might involve collections of networks representing instances of the system under study, e.g functional brain networks (connectomes) [@10.1371/journal.pbio.0060159], chemical compound graphs [@Srinivasan:1997:PTE:1624162.1624163], multilayer networks [@DBLP:journals/corr/abs-1212-2153], and so on. Other applications involve dynamic interactions between components, introducing an additional complexity in the time evolution of the system. For example, in a social mobile phone network, people are considered as nodes and the phone calls as edges. The dynamics of calls between users will systematically add and remove edges between them, describing a sequence of static graphs characterizing a dynamic evolution of the system.
With the increasing availability of manually labeled network data, many of these problems have recently raised the attention of the machine learning community. Machine learning applications seek to make predictions or discovering patterns in graph structured data. For example, in chemoinformatics [@doi:10.1021/jm00106a046], one might need to predict the toxicity or anti-cancer activity of proteins and molecules represented as graphs. In time-varying social networks, one might be interested in detecting unusual events [@Peel:2015:DCP:2888116.2888122], e.g points in time in which the network connectivity differs abruptly with respect to the evolution of the underlying process. Prediction of subjects having a neural disorder such as Alzheimer or Schizophrenia, based on their connectomes is crucial in neuroscience [@doi:10.1002/hbm.22633].
The cornerstone of this approach is the feature representation of the input data, e.g finding effective ways to encode graph structures in such a way that it can be used in traditional machine learning models. For example, in order to predict whether a molecule is toxic or not, one might build a feature vector representation of a molecule incorporating information about its atoms, as well as global and local properties of the graph structure itself [@Barnett2016; @doi:10.1093/comnet/cny034]. By doing so we can train a traditional machine learning model such as support vector machines, random forest, neural network, etc. so it will discriminate unseen toxic and non-toxic chemical compounds.
There exist many manners to extract features and comparing networks. For instance, graph distances [@2018arXiv180107351D; @Livi:2013:GMP:2737203.2737238] such as the Jaccard and Hamming distances compute differences between graphs by counting the number of edit operations to transform a graph into another one, focusing mainly in their local connectivity patterns. Other distances are spectral in nature based on the comparison between the eigenvalues of the reference matrices representing the networks. Another popular class of distance measures are the graph kernels [@Shervashidze:2011:WGK:1953048.2078187; @Yanardag:2015:DGK:2783258.2783417]. A kernel can often be seen as the scalar product between implicit high-dimensional feature representations of a network [@973]. The so-called kernel trick allows to compare networks without ever computing explicitly the coordinates of data points in the high-dimensional feature space, sometimes with a substantial gain in computational time over classical graph-distance approaches.
![Overview of the proposed method. Given a family of graphs, we train an unsupervised neural network in order to uncover dissimilar relationships between graphs. The graphs are embedded into a feature space and mapped to a Euclidean distance matrix reflecting the structural similarity between input examples.[]{data-label="general_pipeline"}](figure1.eps)
However, real life networks are complex structures involving heterogeneous connectivity patterns across domains, constraining the expressiveness of the aforementioned methods in multiple tasks. Therefore, the most relevant hand-crafted features tend to be task dependent and often require specialized domain expertise in order to incorporate the right properties to perform accurately on the target task.
Unlike previous approaches, in this work we propose a method to learn network embeddings from a collection of networks. It should not be confused with node embedding approaches, which aim to map the nodes of a single graph to vectors in a feature space (see [@DBLP:journals/corr/abs-1709-07604; @DBLP:journals/corr/GoyalF17] for a survey of those methods). Therefore, in this paper we refer to graph or network embedding as the outcome of mapping each network of a family to a vector in a Euclidean space (see Figure \[general\_pipeline\]). The unsupervised nature of the method allows it to learn the most relevant features from the data in order to produce a lower dimensional representation of the input graphs. This reduces the curse of dimensionality of high dimensional graphs, uncovering discriminative relationships in the underlying dataset. As a consequence, networks with similar structural properties will have neighboring embeddings in the feature space, and dissimilar graphs will be more distant. Our approach thus differs from the various definitions of graph distances or similarities mentioned previously in that we automatically learn a feature representation of graphs that assesses their similarity in a Euclidean space, instead of using a hand-crafted metric in the graph space. In addition, because many graphs created in real life applications rarely have exchangeable nodes, we focus on problems defined on networks that account for node identities, e.g time-varying networks, brain networks, multilayer networks, etc.
We evaluate our method empirically in three network mining tasks: graph clustering (grouping similar graphs together), graph classification (predicting the class to which unseen networks belong to) and visualization (plotting many networks in $\mathbb{R}^2$). We perform diverse experiments on synthetic and real life datasets such as time-varying networks (primary school network), multilayer networks (European airport network) and brain networks datasets.
This paper is structured as follows. First, we introduce some popular methods of the literature used to compare networks, as well as the development of the proposed approach. Then, we present some applications in graph visualization, clustering and classification performed on synthetic and real life datasets. Subsequently, a computational analysis of our method is presented, finalizing with a discussion and perspectives for future work.
Methods {#methods .unnumbered}
=======
**Graph distances** {#sec_graph_dist .unnumbered}
-------------------
Distinguishing among a class of networks requires a notion of distance or similarity between pairs of graphs [@2018arXiv180107351D]. These measures capture different aspects of the local and global structure of graphs having an impact in the outcome of different applications. We present some of the most representative graph distances of the literature.
The *Hamming* and *Jaccard* distances are special instances from the broader class of graph-edit distances. They measure the number edge insertion and deletion operations necessary to transform one graph to another one. Denoting $N$ the number of nodes of the undirected graphs $G_1=(V,E_1)$ and $G_2=(V,E_2)$ with adjacency matrices $A_1$ and $A_2$ respectively, the *Hamming* distance between them is defined as:
$$d_H(G_1,G_2) = \dfrac{1}{N(N-1)}\sum_{i,j}^N|A_1 - A_2|_{i,j}$$
which defines a scaled version of the $L_{1,1}$ norm between matrices bounded between $0$ and $1$. Similarly, the *Jaccard* distance is defined as:
$$d_J(G_1,G_2) = \dfrac{|E_1 \cup E_2| - |E_1 \cap E_2|}{|E_1 \cup E_2|}$$
where $E_1$ and $E_2$ are the set of edges for the graphs $G_1$ and $G_2$ respectively.\
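For undirected graphs given as 0/1 adjacency matrices on a common node set, both distances take only a few lines of NumPy (counting symmetric matrix entries instead of edges leaves the Jaccard ratio unchanged):

    import numpy as np

    def hamming(A1, A2):
        N = A1.shape[0]
        return np.abs(A1 - A2).sum() / (N * (N - 1))

    def jaccard(A1, A2):
        union = np.logical_or(A1, A2).sum()
        inter = np.logical_and(A1, A2).sum()
        return (union - inter) / union

    A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])   # path 1-0-2
    A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path 0-1-2
    print(hamming(A1, A2), jaccard(A1, A2))            # 0.666..., 0.666...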
*DeltaCon* [@Koutra:2016:DEC:2888412.2824443] is a popular graph similarity measure in connectomics. Like the edit distances, it also exploits node correspondence across graphs. The intuition behind the method is to first compute pairwise node similarities of the input graphs through a variant of a personalized PageRank algorithm [@Koutra:2016:DEC:2888412.2824443]. The pairwise node affinity matrices $(S_1, S_2)$ are then compared using the Matusita distance defined by:
$$d_{DC}(S_1,S_2) = \sqrt{\sum_{i,j=1}^n (\sqrt{S_1(i,j)} - \sqrt{S_2(i,j)})^2}$$
On the other hand, the *spectral distances* for graphs have proven to be very useful in many applications [@10.1038/s41598-018-37534-2; @Wilson:2008:SGS:1379924.1380381]. However, the spectral nature of the method makes it invariant to node permutations. Roughly speaking, these methods compare the spectrum of a matrix representing the input graph, generally the graph Laplacian. The combinatorial Laplacian matrix (CL) of an undirected graph $G$ is defined by $L = D - A$, where $D$ is the diagonal matrix whose $i$-th diagonal entry equals the degree of node $i$, and $A$ its adjacency matrix. The normalized Laplacian matrix (NL) is defined by $L^{'} = D^{-1/2}L D^{-1/2} = I - D^{-1/2}AD^{-1/2}$, with $I$ the corresponding identity matrix. We denote the eigenvalues of either Laplacian matrix as $0=\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_N$.
For any $L$ and $L^{'}$ we consider the following spectral distance. The spectral distance between two undirected graphs $G_1$ and $G_2$ is defined as [@10.1038/s41598-018-37534-2]:
$$d(G_1, G_2) = \sqrt{\sum_{i=1}^{n_{\lambda}} [\lambda_{N+1-i}(G_1) - \lambda_{N+1-i}(G_2)]^2}$$
where $n_{\lambda}$ is the number of eigenvalues considered for the computation of the distance (typically $n_{\lambda}=N$).
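As an illustration, a minimal implementation of this spectral distance could read as follows (a sketch; the function names are ours):

```python
import numpy as np

def laplacian_spectrum(A, normalized=False):
    """Ascending eigenvalues of the combinatorial or normalized Laplacian."""
    d = A.sum(axis=1).astype(float)
    L = np.diag(d) - A
    if normalized:
        d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        L = d_inv_sqrt @ L @ d_inv_sqrt
    return np.linalg.eigvalsh(L)

def spectral_distance(A1, A2, n_lambda=None, normalized=False):
    s1 = laplacian_spectrum(A1, normalized)
    s2 = laplacian_spectrum(A2, normalized)
    k = n_lambda or len(s1)
    # compare the k largest eigenvalues, as in the definition above
    return np.linalg.norm(s1[-k:] - s2[-k:])
```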
**Embedding distances** {#embedding-distances .unnumbered}
-----------------------
Unlike the previous distances, our approach performs network comparisons directly on a feature space through a learned non-linear mapping applied to input graphs (see Figure \[general\_pipeline\]). The building blocks of our method are explained in the following subsections.
### **Autoencoder** {#autoencoder .unnumbered}
Unsupervised learning approaches aim to uncover hidden patterns or learn representations from unlabeled data. The autoencoder (AE) [@Vincent:2008:ECR:1390156.1390294] is one of the most popular unsupervised neural network approaches. It has been widely used as an effective mechanism to pre-train neural networks and for general-purpose feature learning [@DBLP:journals/ijon/ChoiCB18]. It compresses the representation of the input data, disentangling the main factors of variability, removing redundancies and reducing the dimension of the input.
Given a set of data examples $D = \{\textbf{x}^{(1)},\textbf{x}^{(2)},\ldots , \textbf{x}^{(m)} \}$, the purpose of the traditional autoencoder is to learn a non-linear mapping which encodes an input example $\textbf{x} \in \mathbb{R}^n$ into a lower-dimensional latent vector $\textbf{y} \in \mathbb{R}^d$ with $n \gg d$. The encoding mapping has the form $f_\theta(\textbf{x}) = s(W\textbf{x} + b)=\textbf{y}$, generally through a non-linear function $s$ such as the sigmoid or $\tanh$ applied entrywise to the vector $W\textbf{x}+b$. A reverse mapping of $f$ is used to reconstruct the input from the feature space: $g_{\theta'}(\textbf{y}) = s(W' \textbf{y} + b')=\textbf{z}$. The parameters $\theta = \{W,b\}$ and $\theta' = \{W',b'\}$ are optimized by minimizing the average reconstruction error over the training set:
$$\label{loss_ae}
\theta^*, \theta'^* = \underset{\theta, \theta'}{arg \min} \hspace*{0.2cm} \dfrac{1}{m}\sum_{i=1}^m \| \textbf{x}^{(i)} - \textbf{z}^{(i)} \|_2^2$$
Note that when $s$ is the identity, the solution is equivalent to classical PCA (principal component analysis) with as many principal components as hidden units. One can therefore see autoencoders as a nonlinear extension of PCA.
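The architecture can be made concrete with a few lines of PyTorch. The following is only a sketch with a single-layer encoder and decoder and sigmoid non-linearities; the exact architecture and hyper-parameters used in the experiments are those reported in the appendix.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """One-layer encoder f_theta and decoder g_theta' with sigmoid activations."""
    def __init__(self, n, d):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n, d), nn.Sigmoid())   # f_theta
        self.decoder = nn.Sequential(nn.Linear(d, n), nn.Sigmoid())   # g_theta'

    def forward(self, x):
        y = self.encoder(x)   # latent code y in R^d
        z = self.decoder(y)   # reconstruction z in R^n
        return z, y

def train_step(model, optimizer, x):
    """One mini-batch step minimizing the mean squared reconstruction error."""
    z, _ = model(x)
    loss = torch.mean((x - z) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```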
![Denoising Autoencoder. A corrupted instance **$\tilde{x}$** of a graph **x** is fitted into the Autoencoder’s input. The Autoencoder is trained to recover a cleaned version of the input by compressing it through a non-linear mapping $f_{\theta}$ and mapping it back (through $g_{\theta'}$) to a reconstructed version of the original input graph **x**.[]{data-label="DAE"}](figure2.eps){width="2.5in"}
### **Denoising Autoencoder (DAE)** {#sec_DAE .unnumbered}
Minimizing the previous reconstruction criterion alone does not in general guarantee the extraction of meaningful features, as the model can potentially memorize the training data. We want the autoencoder to be sensitive enough to recreate the original observation, but not so closely fitted to the training data that it fails to learn a generalizable encoding and decoding mapping.
To avoid this limitation, the objective (Eq \[loss\_ae\]) is redefined in such a way that the autoencoder is able to clean a partially corrupted input or, simply put, to denoise it. This modification leads to a simple variant of the basic autoencoder described above. A denoising autoencoder (DAE) [@Vincent:2008:ECR:1390156.1390294] is trained to reconstruct a clean or repaired version from a corrupted input. This is done by transforming the original input $\textbf{x}$ into $\tilde{\textbf{x}}$ through a stochastic mapping $\tilde{\textbf{x}} \sim q_D(\tilde{\textbf{x}}|\textbf{x})$. By doing so the AE is forced to learn meaningful features that are robust under corruption of the input.
The corrupted version $\tilde{\textbf{x}}$ is mapped by the original autoencoder to a hidden representation $\textbf{y}=f_{\theta}(\tilde{\textbf{x}})$ from which we reconstruct a clean $\textbf{z} = g_{\theta'}(\textbf{y})$. An important observation is that $\textbf{z}$ is now a deterministic function of $\tilde{\textbf{x}}$ rather than $\textbf{x}$. See Figure \[DAE\] for a schematic representation of the model. Thus, we optimize the same objective as Eq \[loss\_ae\], but with $\textbf{z}^{(i)}$ computed from the corrupted input $\tilde{\textbf{x}}^{(i)}$ while the reconstruction target remains the clean $\textbf{x}^{(i)}$. Optimization is done with standard mini-batch gradient descent and backpropagation [@Lecun1998].
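Reusing the `AutoEncoder` sketch above, the denoising variant only changes the training step. This is again an illustrative sketch: the entry-flipping corruption assumes binary input vectors, and the corruption rate is a hypothetical value rather than the one used in our experiments.

```python
import torch

def corrupt(x, flip_prob=0.05):
    """Stochastic corruption q_D: flip a small random fraction of the entries."""
    mask = (torch.rand_like(x) < flip_prob).float()
    return x * (1 - mask) + (1 - x) * mask

def dae_train_step(model, optimizer, x):
    x_tilde = corrupt(x)             # corrupted input ~x
    z, _ = model(x_tilde)            # reconstruction computed from ~x
    loss = torch.mean((x - z) ** 2)  # compared against the clean x
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```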
### **Network embedding distances** {#network-embedding-distances .unnumbered}
The adjacency matrix $A$ of a graph is a simple network representation, but on its own it can be insufficient as an input for the DAE. It only captures first-order relationships between neighboring nodes. We extend this by computing higher powers of the adjacency matrix in order to capture relationships along multiple-step paths. Thus, we consider $A^r$ for some $r\geq 1$ as a more adequate input for the denoising autoencoder.
Note that as the class of problems we tackle are defined on a collection of networks having a node correspondence across graphs, our method remains invariant to the node ordering when the same node permutation is assigned to the graphs.
The vectorization of matrices is required to feed the graphs into the DAE input. Let $A^r$ be the $r$-th power of the $n \times n$ adjacency matrix $A$ of a graph. The vectorization of $A^r$ is an $n^2 \times 1$ column vector $\textbf{x}=vec(A^r)$ obtained by stacking the columns of $A^r$. Notice that when the graph is undirected, the input matrix can be described with a $\frac{n(n + 1)}{2} \times 1$ column vector $\textbf{x}$. We apply stochastic noise to the input by removing or adding a small fraction of edges at random; we then infer the parameters of the DAE using the noisy inputs $\tilde{\textbf{x}}$ as presented in the previous section.
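The construction of the DAE inputs can be sketched as follows (a hypothetical illustration for binary adjacency matrices; the fraction of perturbed edges is an assumption, not the value used in the experiments):

```python
import numpy as np

def perturb_edges(A, frac=0.05, seed=0):
    """Corrupt an undirected graph by flipping a small random fraction of node pairs."""
    rng = np.random.default_rng(seed)
    A_noisy = A.copy()
    i, j = np.triu_indices(A.shape[0], k=1)
    flip = rng.random(len(i)) < frac
    A_noisy[i[flip], j[flip]] = 1 - A_noisy[i[flip], j[flip]]
    A_noisy[j[flip], i[flip]] = A_noisy[i[flip], j[flip]]   # keep symmetry
    return A_noisy

def graph_to_input(A, r=3):
    """vec(A^r), reduced to the n(n+1)/2 upper-triangular entries for undirected graphs."""
    Ar = np.linalg.matrix_power(A, r)
    iu = np.triu_indices(A.shape[0])
    return Ar[iu].astype(np.float32)
```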
The optimal solution $\theta^* =\{W^*,b^*\}$ parametrizes an encoder mapping $f_{\theta^*}$ of the DAE. It embeds the input $\textbf{x} = vec(A^r)$ into a lower-dimensional vector $f_{\theta^*}(\textbf{x}) \in \mathbb{R}^d$. A main advantage of transforming graphs into feature vectors is that it allows us to easily compare networks by computing only Euclidean distances between their embeddings. Hence, the *network embedding distance* between two graphs $G_1$ and $G_2$ with power matrices $A_1^r$ and $A_2^r$ is defined as:
$$\label{emb_dist}
d(G_1, G_2) = \| f_{\theta^*}(vec(A_1^r)) - f_{\theta^*}(vec(A_2^r))\|_2.$$
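With the pieces above, the embedding distance reduces to a few lines (a sketch reusing the hypothetical `AutoEncoder` and `graph_to_input` helpers introduced earlier):

```python
import torch

def embedding_distance(model, x1, x2):
    """Euclidean distance between the learned embeddings of two vectorized graphs."""
    model.eval()
    with torch.no_grad():
        y1 = model.encoder(torch.as_tensor(x1))
        y2 = model.encoder(torch.as_tensor(x2))
    return torch.norm(y1 - y2, p=2).item()
```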
In the following sections we present some experimental results of our method in various synthetic and real life applications.
Experiments and Results {#experiments-and-results .unnumbered}
=======================
The experiments have three purposes. First, they assess the performance of our method in discriminating different types of networks, generated from different models, edge densities and heterogeneous community structures. Next, they show the use of graph embeddings on networks coming from diverse real-life applications such as time-varying networks, connectomes and multilayer networks. Finally, they highlight the runtime performance of feature computation and compare it against other techniques. It is worth mentioning that all our experiments were performed with $A^3$ as input for the DAE.
We evaluate our approach on three different but related tasks: graph visualization, graph clustering and classification. A detailed report of the parameters used in our experiments can be found in the appendix.
![Visualization of permuted Erdős-Rényi (ER) dataset. Each point corresponds to a network. Color of a point indicates the category of the network according with its average degree $\langle d \rangle$. For blue $\langle d \rangle=4$, green $\langle d \rangle$ = 6 and red color $\langle d \rangle=8$.[]{data-label="many_viz"}](figure3.eps){width="4.9in"}
**Graph visualization** {#graph-visualization .unnumbered}
-----------------------
A useful application of network embedding is graph visualization. It mainly consists in representing graphs as 2D points, i.e. an entire graph as one point, while maximizing a certain notion of similarity. Considerable research has been done on visualizing the nodes of graphs, based on the premise that nodes sharing common structures, e.g. neighboring nodes, structurally equivalent nodes, assortative nodes, etc., should be mapped to close points in the embedding space [@DBLP:journals/corr/abs-1709-07604; @DBLP:journals/corr/GoyalF17].
In contrast, we propose to visualize multiple graphs at once in a two-dimensional space in the following way. From a given family of graphs, their embeddings are learned and used to compute the embedding distance matrix (Eq. \[emb\_dist\]). A methodology is then needed to bring the embedding distances into a low-dimensional visualization. We choose the Multi-scale SNE tool [@Lee2015] as a standard method. This is a non-linear dimensionality reduction approach for data visualization which aims to reproduce in a low-dimensional space the local and global neighborhood similarities observed in any similarity matrix. In this way, we expect that networks with similar properties, as learned by the denoising autoencoder, are neighboring points in the two-dimensional visualization, while the gap between dissimilar groups of graphs is maximized.
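For readers who wish to reproduce this kind of plot without the Multi-scale SNE tool, the same workflow can be sketched with scikit-learn's t-SNE applied to the precomputed embedding-distance matrix; this is only a stand-in for the method actually used in the paper.

```python
from sklearn.manifold import TSNE

def visualize_networks(D, perplexity=30, seed=0):
    """Map an m x m embedding-distance matrix D to one 2D point per network."""
    tsne = TSNE(n_components=2, metric="precomputed", init="random",
                perplexity=perplexity, random_state=seed)
    return tsne.fit_transform(D)
```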
### **Visualizing synthetic networks** {#sec_viz .unnumbered}
To assess the relevance of the visualization, we generate synthetic random networks with a range of parameters, and assign to each point in the visualization a color that reflects the value of the parameter used to generate the network. In this way we expect that a good visualization will keep same-colored points as neighboring points in $\mathbb{R}^2$ while maximizing the gap between groups. We generate two synthetic datasets which are described in the following.
### **Datasets** {#datasets .unnumbered}
In the first synthetic dataset, we create three Erdős-Rényi networks (*ER*) with different parameters (Figure \[many\_viz\]). Then we generate $200$ copies of each graph by reordering the nodes with a different permutation of the original graph. In the second dataset, $1000$ power-law networks were generated using the Lancichinetti–Fortunato–Radicchi (*LFR*) benchmark [@Fortunato:2009:CDA:1698822.1698858]. This algorithm creates networks with heterogeneous structures and community sizes. The mixing parameter $\mu \in [0,1]$ controls the strength of the community arrangements, yielding well-defined communities for small $\mu$, meaningless community structure when $\mu$ is close to one, and $\mu=0.5$ as the border beyond which communities are no longer defined in the strong sense [@Radicchi2658]. Thus, we generate two groups of networks: one with mixing parameter $\mu=0.1$ and the other with $\mu=0.5$. The other parameters are common to both groups: number of nodes $N=81$, average node degree equal to $11$, community sizes varying between $6$ and $22$ nodes, exponent $2$ for the degree sequence and exponent $1$ for the community size distribution. Therefore, the two groups of networks differ only in the strength of their community structure and not in their degree distribution, which makes this a more challenging problem than the previous dataset.
### **Discussion** {#discussion .unnumbered}
Figure \[many\_viz\] shows the visualization of the *ER* dataset after applying our method and the aforementioned graph distances. As can be expected, results with the *Jaccard* and *Hamming* distances are not satisfactory because points from different groups overlap. Even though *DeltaCon* tries to separate the data, the boundary between groups is not clearly determined. Because spectral distances are permutation-invariant measures, they collapse all permuted graphs to the same point, showing a hard separation between classes. On the other hand, our embedding (*Emb*) shows three well-defined clouds of points grouping together isomorphic graphs. Our method exploits node correspondence across graphs when it is known, but even if we lose track of the node order we can retrieve networks that are essentially identical.
![Visualization of networks generated with the LFR benchmark. *(Left)* networks with different community strength: $\mu=0.1$ and $\mu=0.5$. *(Right)* Same left-hand side networks. Colors encode the number of planted communities within each network.[]{data-label="dataset_viz"}](figure4.png "fig:"){width="2.0in"} ![Visualization of networks generated with the LFR benchmark. *(Left)* networks with different community strength: $\mu=0.1$ and $\mu=0.5$. *(Right)* Same left-hand side networks. Colors encode the number of planted communities within each network.[]{data-label="dataset_viz"}](figure5.png "fig:"){width="2.1in"}
Figure \[dataset\_viz\] shows the visualization of the *LFR* dataset. The left-hand side plot shows two clouds of points encoding networks with different mesoscopic structure. As can be seen, the blue cluster tends to spread more than the red one, which is more compact. This illustrates the structural variability of networks having communities of heterogeneous number and size (blue cluster) against a group of networks with weak modularity (red cluster).
In the right-hand side plot of Figure \[dataset\_viz\], we keep the same networks as in the left-hand side plot, but we color them according to the number of ground-truth communities in each network. Inspecting the bottom cluster, we observe that even if there is no clear grouping of points, the data is distributed in a quasi-continuum manner, with networks having a similar number of communities appearing as neighboring points in the plane. On the other hand, the group of networks on the top is indistinguishable, which is expected because of their weak community strength. This visualization allows us to understand the notion of similarity captured by the autoencoder on the underlying dataset.
Once more we emphasize that although the embedding is in principle dependent on the order of the nodes, in this specific case different orderings lead to closely similar visualizations. This is expected since here, although all the networks are supported on the same number of nodes, there is no natural one-to-one correspondence between the nodes of two networks, and all nodes are treated symmetrically in the generation process.
### **Visualizing real life networks: temporal networks** {#visualizing-real-life-networks-temporal-networks .unnumbered}
The primary school network [@10.1371/journal.pone.0023176] is a dataset containing temporal face-to-face interactions between 232 children and 10 teachers in a primary school in Lyon, France. The data was collected over two days (Thursday, October 1 and Friday, October 2, 2009) spanning from 8:45 am to 5:20 pm the first day, and 8:30 am to 5:05 pm the second day.
The dynamic evolution of the network can be modeled as a time-varying network defined on a fixed set of nodes, with dynamic edges representing the physical interactions between children and teachers. It can be represented as a sequence of static graph snapshots over a time window $\tau$, where each snapshot aggregates all events or edge activations that occurred in the interval $[(t-1)\tau ,t\tau]$. For this experiment, we chose a time resolution $\tau = 20s$, yielding 1230 snapshots for Thursday, 01-October. Its visualization is shown in Figure \[school\].
![Visualization for primary school embeddings. Each point represents a network snapshot during the day of the 01-October-2009. Color of points encode a time frame of the day spanning from 8:45 until 17:20[]{data-label="school"}](figure6.eps){width="2.8in"}
The clusters in Figure \[school\] can be seen as groups of networks behaving similarly and correlated with external events; e.g. consecutive clusters are separated by an external event. For instance, lunch time is characterized by the clusters defined between 12:00 and 14:00. Class time is represented by a long cluster of dark and light blue points in the morning and by the yellow and orange groups in the afternoon. The end of the school day is highlighted by a brown group of points. Mixed group colors indicate smooth temporal transitions, e.g. the end of lunch time and the beginning of classes (green-yellow-light blue), and also the transition from the afternoon break back to classes (orange-red).
Note that unlike synthetic examples from the previous section, nodes have an individual identity, and different network snapshots take place on the same set of nodes.
**Graph clustering** {#graph-clustering .unnumbered}
--------------------
### **Clustering synthetic graphs** {#clustering-synthetic-graphs .unnumbered}
Another important application is the clustering of networks. Clustering aims to group together “similar” graphs and to put dissimilar ones in different groups. We proceed as in the previous section, but we do not perform dimensionality reduction to $\mathbb{R}^2$. Instead, clustering is performed directly in the embedding space with the standard spectral clustering algorithm [@NIPS2001_2092]. This technique makes use of the spectrum of a similarity matrix of the data to perform clustering in a low-dimensional space spanned by a few of its eigenvectors.
We create four different synthetic datasets, each composed of 600 networks of 81 nodes. We run our method on each dataset and compute a $600 \times 600$ network embedding distance matrix using Eq \[emb\_dist\]. In order to compare against other techniques, the graph distance matrices for the methods introduced in the first part of the manuscript are also computed. All matrices are normalized so that the most dissimilar pairs of graphs have value one and the most similar ones have value zero. Spectral clustering is then performed on the similarity matrices induced by these graph distance measures.
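As an illustration, this step can be sketched as follows, assuming `D` is one of the normalized distance matrices; the conversion to a similarity matrix via $1-D$ is an assumption on our side:

```python
from sklearn.cluster import SpectralClustering

def cluster_networks(D, n_clusters):
    """Spectral clustering on the similarity matrix induced by a distance matrix D."""
    S = 1.0 - D                  # distances in [0, 1] -> similarities in [0, 1]
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(S)
```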
The clustering performance is evaluated through the normalized mutual information (NMI) [@10.1007/978-3-642-14366-3_23] metric in the form:
$$NMI(C_t, C_p) = \dfrac{2I(C_t;C_p)}{H(C_t)+H(C_p)}$$
where $H$ is the entropy of a class distribution and $I$ the mutual information between the ground truth class distribution $C_t$, and the predicted cluster assignment $C_p$. It runs from zero when the algorithm fails to a value of one when the clustering is perfectly recovered. Details about the datasets and ground truth class generation are presented in the following.
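This score is available off the shelf; for instance, scikit-learn's `normalized_mutual_info_score` with arithmetic averaging of the entropies matches the definition above (toy labels shown for illustration):

```python
from sklearn.metrics import normalized_mutual_info_score

labels_true = [0, 0, 1, 1, 2, 2]   # ground-truth classes C_t (toy example)
labels_pred = [0, 0, 1, 2, 2, 2]   # predicted clusters  C_p (toy example)
nmi = normalized_mutual_info_score(labels_true, labels_pred,
                                   average_method="arithmetic")
print(nmi)
```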
  ------------- --------------------- ---------------------------------------------------------------- -------------------
  **DATASET**   **Type of network**   **Properties**                                                    **True clusters**
  ER            Erdős-Rényi           Different average degrees                                         4
  Mixed         ER - Power law        Different models, same average degree                             2
  LFR           Power law             Strong vs weak communities strength                               2
  Dynamic       Erdős-Rényi           Perturbation mechanism: rewiring, adding and removing % edges     3
  ------------- --------------------- ---------------------------------------------------------------- -------------------

  : Summary of synthetic datasets[]{data-label="datasets"}
### **Datasets** {#datasets-1 .unnumbered}
An overview of the generated datasets is shown in Table \[datasets\]. We generate Erdős-Rényi (*ER*) networks with four distinct parameters, producing random networks with different average degrees. The so-called *Mixed* dataset is a collection of power-law networks generated by the Barabási-Albert (*BA*) model and of Erdős-Rényi networks, all with the same average degree.
To simulate a dynamic network evolution, we generate a time-varying network (*Dynamic*) following [@2018arXiv180107351D], applying a perturbation mechanism to a starting ER network. At each time step, a fraction of the edges of the previous graph are rewired uniformly at random. At the same time, we apply a depletion/thickening process in which edges are deleted with probability $0.015$ and formerly absent edges are added with probability $0.015$. We introduce two perturbation points by raising the probabilities of adding and deleting edges to $0.2$ from time $t=200$ and to $0.6$ from time $t=400$, defining three ground-truth clusters of similarly behaving networks. Finally, the *LFR* dataset introduced previously for network visualization is also considered for clustering. For all datasets we generate balanced ground-truth classes.
In order to evaluate the sensitivity of the clustering to the node ordering, we perform clustering with different enumerations of the nodes by applying a fixed node permutation across the networks. We report the mean and standard deviation of the $NMI$ after running the experiment ten times.
### **Discussion** {#discussion-1 .unnumbered}
Regarding the clustering results in Table \[clustering\], we observe that our graph embeddings (*Emb*) provide better clustering than traditional graph distances. The method is capable of differentiating networks with different edge densities (*ER*). It is also able to discriminate networks with different degree distributions even if they have a similar average degree (*Mixed*). Discriminating power-law networks with strong versus weak community structure (*LFR*) is also well achieved. The time-evolving network (Dynamic) is a harder setting in which our method performs best compared to the graph distances. In this case the graph embeddings are able to capture the variations introduced by anomalous points in the underlying evolution of the network. This can be explained by the fact that the DAE was not designed for a target kind of graph. Instead, it learns the underlying distribution of the data, identifying the main factors of variability and adapting its parameters to discriminate heterogeneous networks. The quality of the embeddings remains almost the same after permuting the nodes, which is confirmed by the low variance of the NMI. Hence, in practice we fixed a node numbering for the learning procedure.
            **Hamming**   **Jaccard**   **DeltaCon**   **CLP**     **CLP normed**   **Emb**
  --------- ------------- ------------- -------------- ----------- ---------------- -----------------------
  ER        0.024         0.070         0.294          **0.933**   **0.914**        **0.918 $\pm$ 0.004**
  Mixed     **1.0**       **1.0**       **1.0**        **1.0**     0.374            **1.0 $\pm$ 0**
  LFR       0.219         0.603         0.265          0.035       **0.983**        **0.986 $\pm$ 0.014**
  Dynamic   0.389         0.255         0.198          0.216       0.172            **0.652 $\pm$ 0.085**

  : Clustering results for synthetic datasets (NMI)[]{data-label="clustering"}
### **Clustering real life networks: multilayer networks** {#clustering-real-life-networks-multilayer-networks .unnumbered}
The European Air Transportation Network (ATN) [@DBLP:journals/corr/abs-1212-2153] is a multilayer network with 37 layers, each representing a different European airline. Each layer has the same set of nodes, which represent 450 European airports. We learn graph embeddings for all layers and we cluster them by applying a standard hierarchical clustering algorithm to the network embedding distance matrix. The hierarchical clustering provides a partition of the layers according to their similarity in the embedding space, see Figure \[dendrogram\].
Our findings confirm those reported in [@DBLP:journals/corr/abs-1212-2153]. We can identify two main clusters representing major and low-cost airlines, as well as some regional airlines grouped together. Indeed, these airlines have developed according to different structural/commercial constraints. Low-cost companies tend to avoid being centralized and cover more than one country simultaneously. Major airlines have a hub-and-spoke network, connecting outlying airports to a few central ones and providing maximum coverage of their home country.
![Dendrogram of airlines for European airports[]{data-label="dendrogram"}](figure7.eps)
**Graph Classification** {#graph-classification .unnumbered}
------------------------
We evaluate graph classification in the context of supervised classification. It requires previously annotated reference samples (graphs) in order to train a classifier and subsequently classify unknown data.
### **Brain connectomes classification** {#brain-connectomes-classification .unnumbered}
In this experiment we apply our method to a brain network (connectome) dataset built from magnetic resonance imaging (MRI) [@B.Chiem2018]. Structural and diffusion MRI data of 91 healthy men and 113 healthy women are preprocessed in order to create undirected networks. All graphs have the same 84 nodes representing neural Regions of Interest (ROIs). Weighted edges correspond to the number of neural fibers linking two ROIs. The ROIs keep the same correspondence across graphs. The task is to classify connectomes according to gender, male or female.
### **Experimental setup** {#experimental-setup .unnumbered}
We assess the performance of our method against some well-known algorithms for graph classification, mainly graph kernels and feature-based approaches. We choose the Shortest Path (SP) and the Weisfeiler-Lehman (WL) subtree kernels [@Shervashidze:2011:WGK:1953048.2078187]. We also compare against the feature-based (FB) method [@Barnett2016] and Multi-hop assortativities features (MaF) for network classification [@doi:10.1093/comnet/cny034]. Such methods provide a pairwise similarity matrix between networks in the form of a Gram matrix, which is used to train the popular support vector machine classifier (SVM) [@Smola:2004:TSV:1011935.1011939]. Note that the graph distances considered in this work do not define proper positive semi-definite matrices. Therefore, following [@Wu05ananalysis], we shift the spectrum of their similarity matrices, providing a proper kernel consistent with the SVM setting.
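The spectrum-shifting trick and the precomputed-kernel SVM can be sketched as follows (an illustrative sketch of one common way to implement this, not the exact code used for the experiments):

```python
import numpy as np
from sklearn.svm import SVC

def shift_spectrum(S):
    """Make a symmetric similarity matrix positive semi-definite by shifting its spectrum."""
    S = 0.5 * (S + S.T)
    lam_min = np.linalg.eigvalsh(S)[0]
    return S - lam_min * np.eye(S.shape[0]) if lam_min < 0 else S

def fit_precomputed_svm(K_train, y_train, C=1.0):
    """Train an SVM on a precomputed kernel; prediction needs the (n_test, n_train) block."""
    clf = SVC(kernel="precomputed", C=C)
    clf.fit(K_train, y_train)
    return clf
```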
We follow the experimental setup of [@Shervashidze:2011:WGK:1953048.2078187; @Yanardag:2015:DGK:2783258.2783417]. The dataset is randomly split into training and testing sets. The best model is cross-validated over 10 folds, and the SVM parameters are optimized only on the training set. We then compute the generalization accuracy on the unseen test set. In order to exclude the random effect of the data splitting, we repeat the whole experiment 10 times. Finally, we report the average prediction accuracies and their standard deviations.
For each graph kernel we report the result for the parameter that gives the best classification accuracy. For the feature-based approach [@Barnett2016], feature vectors were built with the same network features reported in their paper: number of nodes, number of edges, average degree, degree assortativity, number of triangles and global clustering coefficient. Results are shown in Tables \[brains1\] and \[brains2\].
  **WL**             **SP**             **FB**             **CLaplacian**      **NLaplacian**      **Emb**
  ------------------ ------------------ ------------------ ------------------- ------------------- ------------------------------------
  $61.20 \pm 2.16$   $65.45 \pm 1.78$   $65.95 \pm 2.54$   $74.19 \pm 11.16$   $71.07 \pm 10.95$   $\textbf{87.20} \pm \textbf{7.60}$

  : Mean and standard deviation of classification accuracies on brain connectomes dataset.[]{data-label="brains1"}
  **Hamming**        **Jaccard**         **DeltaCon**                         **MaF**            **Emb**
  ------------------ ------------------- ------------------------------------ ------------------ ------------------------------------
  $84.37 \pm 9.26$   $84.34 \pm 10.11$   $\textbf{87.80} \pm\textbf{ 6.54}$   $84.26 \pm 5.81$   $\textbf{87.20} \pm \textbf{7.60}$

  : Mean and standard deviation of classification accuracies on brain connectomes dataset.[]{data-label="brains2"}
### **Discussion** {#discussion-2 .unnumbered}
As can be seen in Table \[brains1\], WL, SP and FB perform significantly worse than the spectral distances and the graph embedding. This is expected as they do not take the identity of the nodes into account. Here, all brains share the same anatomical regions, which makes the order of the nodes relevant. In Table \[brains2\] it can be seen that, among the approaches exploiting node correspondence, our method (Emb) outperforms the other approaches and remains competitive with DeltaCon.
**Computational cost** {#computational-cost .unnumbered}
----------------------
Our graph embedding approach involves two main steps: learning the graph embeddings through the DAE, followed by the computation of the pairwise Euclidean distance matrix (Eq \[emb\_dist\]). In order to make fair comparisons, for each method we measure the runtime for computing the distance matrix between pairs of graphs over the synthetic datasets of Table \[datasets\]. The running times reported for our approach include both learning the graph embeddings and computing the pairwise Euclidean distances. Results are shown below in Figure \[runtime\].
![Computational time for feature computation. Time is log scaled.[]{data-label="runtime"}](figure8.eps){width="3.6in"}
As can be seen, our method (Emb) outperforms all graph distances across all studied datasets. The competitor approaches compute their similarity scores by comparing examples directly in the graph domain, whereas we compare graphs in the embedding space. It is well known that spectral distances (CLP, NLP) are computationally heavy due to the eigenvalue calculation. The Hamming and Jaccard distances rely on computing common node/edge patterns and are slower on dense networks. Even though DeltaCon is a scalable graph similarity measure, it is outperformed by the edit distances, although it is more efficient than the spectral distances. Meanwhile, our graph embedding method remains the fastest. Indeed, mini-batch gradient descent converges quickly on relatively small datasets. For efficiency reasons, the Euclidean distance between two feature embeddings $x,y$ was computed as $d(x,y) = \sqrt{\langle x, x\rangle - 2\langle x,y \rangle + \langle y, y \rangle}$. This formulation has the advantage of being very efficient for sparse graphs, given that some terms can be pre-computed for an entire pairwise computation.
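The same trick in matrix form gives all pairwise distances at once from a single Gram matrix (a sketch; `X` holds one embedding per row):

```python
import numpy as np

def pairwise_embedding_distances(X):
    """All pairwise Euclidean distances between the rows of an (m, d) embedding matrix."""
    G = X @ X.T                               # <x_i, x_j> for every pair
    sq = np.diag(G)
    D2 = sq[:, None] - 2.0 * G + sq[None, :]  # squared distances
    return np.sqrt(np.maximum(D2, 0.0))       # clip tiny negatives from round-off
```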
All computations were done on a standard computer Intel(R) Core(TM) i7-4790 CPU, 3.60GHzI with 16G of RAM.
Discussion and concluding remarks {#discussion-and-concluding-remarks .unnumbered}
=================================
In the presented work we propose a method to learn graph embeddings for a collection of networks, i.e. a mapping of graphs to vectors in $\mathbb{R}^p$. Our method allows graphs to be compared by computing only Euclidean distances between their embeddings in a feature space. We evaluate our method on three different applications: graph clustering, visualization and classification. Across heterogeneous synthetic and real-life datasets, we compare our approach against well-known graph distances and graph-kernel methods from the literature.
It turns out that our approach extracts the most appropriate features for distinguishing different kinds of graphs. Indeed, clustering groups of similar networks provides good-quality partitions on the synthetic datasets (Table \[clustering\]), better discriminating heterogeneous structures among networks. Although there is no clear agreement about the use of the combinatorial or the normalized Laplacian in graph mining applications, spectral distances are highly competitive in graph clustering and visualization, but they are incapable of exploiting node correspondence. Nevertheless, our learned graph embeddings turn out to be computationally cheaper than all considered methods (Figure \[runtime\]), making them an attractive yet efficient approach for comparing networks.
The results in graph classification reveal that our approach performs better than graph kernels and graph spectral distances (Table \[brains1\]). Indeed, exploiting the node identities across graphs increases the accuracy of the method. Thus, this result suggests a promising research direction in the connectomics domain.
Note that in this work we were not focusing on the task of, for instance, differentiating random networks with different average degrees, which can be solved trivially without any machine learning tool. Instead, we aimed to show an automatic way to let the machine figure out the most relevant hidden patterns from the data, which is more general than designing tailored methods for particular applications.
The current study was limited by the assumption that all networks must have the same set of nodes. Even if this hypothesis holds in many real applications, a large number of complex systems involve graphs of heterogeneous sizes, e.g. chemical compounds, social networks, etc. This study has also only investigated the class of graphs without node/edge attributes, such as age or gender in social networks. Addressing these issues introduces additional challenges and new opportunities for further research.
Despite this limitation, our work has the potential of being extended in two directions. Because the DAE captures the underlying probability distribution of the data [@Vincent:2008:ECR:1390156.1390294], the decoding function could be used to generate artificial data, e.g generating brain networks, for mining purposes. Another possibility is to explore deeper neural network architectures such as the stacked autoencoders [@Vincent:2010:SDA:1756006.1953039] and its variants in order to learn hierarchical feature representation of the data for graph classification and clustering applications.
This work was supported by the Concerted Research Action (ARC) supported by the Federation Wallonia-Brussels, Contract ARC 14/19-060; the Fonds de la Recherche Scientifique; and the Flagship European Research Area Network (FLAG-ERA) Joint Transnational Call “FuturICT2.0”. We also thank Leto Peel and Michel Fanuel for helpful discussions and suggestions.
---
abstract: 'A fundamental two-fluid model for describing dynamics of a plasma is the Euler-Poisson system, in which compressible ion and electron fluids interact with their self-consistent electrostatic force. Global smooth electron dynamics were constructed in Guo [@Guo] due to dispersive effect of the electric field. In this paper, we construct global smooth irrotational solutions with small amplitude for ion dynamics in the Euler-Poisson system.'
address:
author:
- Yan Guo and Benoit Pausader
title: 'Global Smooth Ion Dynamics in the Euler-Poisson System'
---
Introduction and Formulation
============================
The ionic Euler-Poisson system
------------------------------
The two-fluid models in plasma physics describe dynamics of two separate compressible fluids of ions and electrons interacting with their self-consistent electromagnetic field. Many famous nonlinear dispersive PDE, such as Zakharov’s equation, nonlinear Schrödinger equations, as well as KdV equations, can be formally derived from two-fluid models under various asymptotic limits. In the absence of the magnetic effects, the fundamental two-fluid model for describing the dynamics of a plasma is given by the following Euler-Poisson system $$\begin{split}
\partial _{t}n_{\pm }+\nabla \cdot \left( n_{\pm }v\,_{\pm }\right) & =0 \\
n_{\pm }m_{\pm }(\partial _{t}v_{\pm }+v_{\pm }\cdot \nabla v_{\pm })+T_{\pm
}\nabla n_{\pm }& =en_{\pm }\nabla \phi \\
\Delta \phi & =4\pi e(n_{+}-n_{-}).
\end{split}
\label{2Fluids}$$ Here $n_{\pm }$ are the ion (+) and electron density (-), $v_{\pm }$ are the ion (+) and electron (-) velocity, $m_{\pm }$ are the masses of the ions (+) and electrons (-), $T_{\pm }$ are their effective temperatures, and $e$ is the charge of an electron. The self-consistent electric field $\nabla \phi $ satisfies the Poisson equation. The Euler-Poisson system describes rich dynamics of a plasma. Indeed, even at the linearized level, there are electron waves, ion acoustic waves in the Euler-Poisson system. Despite its importance, there has been few mathematical study of its global solutions in 3D. This stems from the fact that the Euler-Poisson system belongs to the general class of hyperbolic conservation laws with zero dissipation, for which no general mathematical framework for construction of global in-time solutions exists in 3D. In fact, as expected [@GuoTad], solutions of the Euler-Poisson system with large amplitude in general will develop shocks.
However, unlike the pure Euler equations, shock formation for solutions of the Euler-Poisson system with small amplitude has remained open. In Guo [@Guo], the first author studied a simplified model of the Euler-Poisson system for an electron fluid: $$\begin{split}
\partial _{t}n_{-}+\nabla \cdot \left( n_{-}v\,_{-}\right) & =0 \\
n_{-}m_{-}(\partial _{t}v_{-}+v_{-}\cdot \nabla v_{-})+T_{-}\nabla n_{-}&
=en_{-}\nabla \phi \\
\Delta \phi & =4\pi e(n_{-}-n_{0}).
\end{split}
\label{electronEP}$$ In this model, the ions are treated as immobile and only form a constant charged background $n_{0}$. Surprisingly, it was observed [@Guo] that the linearized Euler-Poisson system for the electron fluid is the Klein-Gordon equation, due to plasma oscillations created by the electric field $\phi .$ In this case, the dispersion relation reads $$\omega (\xi )\backsim \sqrt{1+|\xi |^{2}}$$Such a Klein-Gordon effect led to construction of smooth irrotational electron dynamics with small amplitude for all time. This is in stark contrast to the pure Euler equations for neutral fluids where the dispersion relation reads $$\omega (\xi )\backsim |\xi |,$$ in which shock waves can develop even for small smooth initial data (see Sideris [@Sid]). It is the dispersive effect of the electric field that enhances the linear decay rate and prevents shock formation. The natural open question remains: does such a dispersive effect exist generally ? If so, can it prevent shock formation for the general Euler-Poisson system ?
In the current paper, we make another contribution towards answering this question. We consider another (opposite) asymptotic limit of the original Euler-Poisson system for the ion dynamics. It is well-known that $\frac{m_{-}}{m_{+}}<<1$ in all physical situations. By letting the electron mass $m_{-}$ go to zero, we formally obtain $T_{-}\nabla
n_{-}=en_{-}\nabla \phi $ and the famous Boltzmann relation $$n_{-}=n_{0}\exp (\frac{e\phi }{T_{-}}) \label{boltzmann}$$for the electron density ($n_{0}$ is a constant). Such an important relation can also be verified through arguments from kinetic theory, see Cordier and Grenier [@CorGre]. We then obtain the well-known ion dynamic equations as $$\begin{split}
\partial _{t}n_{+}+\nabla \cdot \left( n_{+}v_{+}\right) & =0 \\
n_{+}m_{+}\left( \partial _{t}v_{+}+v_{+}\cdot \nabla v_{+}\right) &
=-T_{+}\nabla n_{+}-n_{+}e\nabla \phi \\
\Delta \phi & =4\pi e\left( n_{0}\exp \left( \frac{e\phi }{T_{-}}\right)
-n_{+}\right) .
\end{split}
\label{EP}$$We also assume that $$\hbox{curl}(v(0))=0. \label{CondCurlFree}$$It is standard that the condition is preserved by the flow. As a matter of fact, a non-irrotational flow leads to the creation of a non-vanishing magnetic field, which is omitted in the Euler-Poisson system but retained in the more general Euler-Maxwell system [@CheJerWan]. The linear dispersion relation for behaves like $$p(\xi )\equiv |\xi |\sqrt{\frac{2+|\xi |^{2}}{1+|\xi |^{2}}}\equiv |\xi
|q(|\xi |) \label{DefOfp}$$which is much closer to the wave dispersion $\omega (\xi )=|\xi |$ than to the Klein-Gordon one, $\omega (\xi )=\sqrt{1+|\xi |^{2}}$ (in particular note this dispersion relation behaves near $0$ as in the Schrödinger case, whereas in our dispersion relation $p$ remains very similar to that of the wave’s). Intuitively, one might expect formation of singularity for as in the pure Euler equations. Nevertheless, we demonstrate that small smooth irrotational flows exist globally in time, and there is no shock formation. Without loss of generality, we study the global behavior of irrotational perturbations of the uniform state $$\lbrack n_{+},v_{+}]=[n_{0}+\rho ,v].$$We use two important norms defined as follows: $$\begin{split}
\Vert u(x)\Vert _{Y}& =\Vert |\nabla |^{-1}u\Vert _{H^{2k+1}}+\Vert u\Vert
_{W^{k+\frac{12}{5},\frac{10}{9}}} \\
\Vert u(t,x)\Vert _{X}& =\sup_{t}\left( \Vert |\nabla |^{-1}(1-\Delta )^{k+%
\frac{1}{2}}u(t)\Vert _{L^{2}}+(1+t)^{\frac{16}{15}}\Vert (1-\Delta )^{\frac{%
k}{2}}u(t)\Vert _{L^{10}}\right)
\end{split}
\label{Norm}$$for $k\geq 5$.
Here, we keep $k$ as a parameter to emphasize the fact that smoother initial data lead to smoother solutions. Hidden in the $X$-norm is a statement about preservation of regularity of $(\rho ,v)$. Our main result is the following
\[MainThm\] There exists $\varepsilon >0$ such that any initial perturbation $(n_{0}+\rho _{0},v_{0})$ satisfying $%
\nabla \times v_{0}=0$ and $\Vert \rho_0 \Vert _{Y}+\Vert v_{0}\Vert
_{Y}\leq \varepsilon $ leads to a global solution $(n_0+\rho ,v)$ of with $$\Vert\rho \Vert _{X}+\Vert v\Vert _{X}\leq 2\varepsilon .$$ In particular, the perturbations $\rho $ and $v$ decay in $L^{\infty }$.
Together with earlier result in Guo [@Guo], global smooth potential flows with small velocity exist for two opposite scaling limits of . This is a strong and exciting indication that shock waves of small amplitude should be absent for the full Euler-Poisson system , at least in certain physical regimes. Our method developed in this paper should be useful in the future study of .
There have been a lot of mathematical studies of various aspects of the Euler-Poisson system for a plasma. Texier [@Tex; @Tex2] studied the Euler-Maxwell system and its approximation by the Zakharov equations. Wang and Wang [@WanWan] constructed large BV radially symmetric solutions outside the origin. In Liu and Tadmor [@LiuTad; @LiuTad2], the threshold for singularity formation has been studied for the Euler-Poisson system with $T_{\pm }=0$ in one and two dimensions. In Feldman, Ha and Slemrod [@FelSeuSle; @FelSeuSle2], the plasma sheath problem of the Euler-Poisson system was investigated. In Peng and Wang [@PenWan], the Euler-Poisson system is derived from the Euler-Maxwell system with a magnetic field. The quasi-neutral limit in the Euler-Poisson system was studied in Cordier and Grenier [@CorGre] and Peng and Wang [@PenWan2]. When $n_{+}$ is replaced by a doping profile and a momentum relaxation is present, the Euler-Poisson system describes electron dynamics in a semiconductor device. There has been much more mathematical study of such a model, for which we only refer to Chen, Jerome and Wang [@CheJerWan] and the references therein.
Presentation of the paper
-------------------------
For notational simplicity, we let $n_{0}=e=T_{+}=T_{-}=1$ in throughout the paper. Even though the ion dynamics system is the most natural system to further understand the dispersive effects in the full Euler-Poisson system , it has remained an open problem until now, ever since the work of [@Guo], to construct global smooth solutions, due to much more challenging mathematical difficulties than in the case of the electron Euler-Poisson equation studied by Guo [@Guo].
The first difficulty is to understand the time decay rate of the linearized ion dynamics equation: $$\partial _{tt}\rho -\Delta \rho -\Delta (-\Delta +1)^{-1}\rho =0,$$whose solutions are given by the operators $e^{\pm itp(|\nabla |)}$ with $p$ given by . Unlike the linearized electron equations studied in [@Guo], there is no direct study of the linear decay of such a system. Only recently [@GuoPenWan] has the time-decay rate for general dispersive equations been analyzed in detail, with asymptotic conditions near the low frequency $|\xi |=0$ and the high frequency $|\xi |=\infty .$ Interestingly, any phase $p(\xi )$ which is not exactly the phase function of the wave equation $p(\xi )=|\xi |$ commands a decay rate better than $\frac{1}{t}$. We are able to employ this result together with a stationary phase analysis near the inflection point of $p(\xi )$ to obtain a decay rate of $\frac{1}{t^{4/3}%
}$, which is between the wave and the Klein-Gordon equations. A consequence of the linear estimates of Section \[SecLinEst\] is that $$\Vert e^{itp(|\nabla |)}\alpha _{0}\Vert _{X}\lesssim \Vert \alpha _{0}\Vert
_{Y}.$$
The main mathematical difficulty in this paper stems from bootstrapping the linear decay into a construction of global solutions to the nonlinear problem. Based on very recent techniques of harmonic analysis in the study of dispersive PDE by Germain, Masmoudi and Shatah [@GerMasSha; @GerMasSha2; @GerMasSha3], Gustafson, Nakanishi and Tsai [@GNT2; @GNT], and Shatah [@Sha], we follow a new set-up for the normal form transformation in [@GerMasSha; @GNT; @Sha]. Using that $\nabla \times
v\equiv 0$, we can introduce a pair of complex valued new unknowns: $$\alpha _{1}=\rho -\frac{i}{q(|\nabla |)}\mathcal{R}^{-1}v,\text{ and }\alpha
_{2}=\rho +\frac{i}{q(|\nabla |)}\mathcal{R}^{-1}v, \label{DefOfAlpha}$$for $q$ defined in , where $\mathcal{R}=\nabla |\nabla |^{-1}$ stands for the Riesz transform, and $v=\nabla \psi $, $\mathcal{R}%
^{-1}v\equiv |\nabla |\psi $. After the normal form transformation , it suffices for us to control $$\begin{split}
& \hat{\alpha}(t)\backsim \int_{\mathbb{R}^{3}}\frac{m(\xi ,\eta )}{\Phi
_{1}(\xi ,\eta )}\hat{\alpha}(\xi -\eta )\hat{\alpha}(\eta )d\eta \\
& +\int_{0}^{t}\int_{\mathbb{R}^{6}}e^{i(t-s)p(|\xi |)}\frac{m(\xi ,\eta
)m(\eta ,\zeta )}{\Phi _{1}(\xi ,\eta )}\hat{\alpha}(\xi -\eta )\hat{\alpha}%
(\eta -\zeta )\hat{\alpha}(\zeta )d\eta d\zeta ds.
\end{split}
\label{normal}$$where $m$ denotes a generic multiplier given by . This is well defined in view of and $$\Phi _{1}=p(|\xi |)-p(|\xi -\eta |)-p(|\eta |).$$In the Klein-Gordon case, the phase is bounded away from zero so there is no singularity. However, for $\Phi _{1},$ there is a significant zero set when $%
|\xi -\eta ||\eta |=0,$ (see Lemma \[EstimPhiGen\]) and there is no null form structure to cancel with the multiplier $m$. We first observe that $$m(\xi ,\eta )m(\eta ,\zeta )\backsim |\xi ||\eta |.$$We then make use of such a structure to form a locally bounded multiplier $$\mathcal{M}_{1}=\frac{|\xi ||\xi -\eta ||\eta |}{\Phi _{1}(\xi ,\eta )}%
\lesssim 1.$$This process introduces a singular term $\frac{\hat{\alpha}(\xi -\eta )}{%
|\xi -\eta |},$ which will be controlled in a separate fashion by the $%
H^{-1} $ norm in our norm $\Vert \cdot \Vert _{X}.$ We believe that including this $H^{-1}$ control in the norm should work equally well for equations with nonlinearity which has perfect spatial derivatives.
Even though $\mathcal{M}_{1}$ is locally bounded, it is very difficult to employ classical bilinear estimates such as the Coifman-Meyer theorem [@CoiMey] to control . This is due to the anisotropic nature of $\mathcal{M}_{1}$ since $|\eta |$ can be very small with respect to $|\xi
-\eta |$. Instead, we make use of a very recent multiplier estimate by Gustafson, Nakanishi and Tsai [@GNT]. It is important to use $L^{10}$-norms as a proxy for the $L^{\infty }$-norm for which our degenerate multipliers are not well-suited (we need an $L^{p}$-norm with $p<12$). The optimal Sobolev regularity for $\mathcal{M}_{1}\in L_{\xi }^{\infty }(\dot{H}%
_{\eta }^{5/4-\varepsilon })\cap L_{\eta }^{\infty }(\dot{H}_{\xi
}^{5/4-\varepsilon })$ is crucial in applying such an estimate to obtain $%
L^{10}$ decay, and its proof is particularly delicate for small frequencies. We split the phase space and make a careful interplay between angles and the lengths of $\xi ,\eta ,\xi -\eta $. We also make use of Littlewood-Paley decomposition and interpolation to obtain a sharp Sobolev estimate for $%
\mathcal{M}_{1}$. On the other hand, to reduce the requirement of number of derivatives in our norm $X,$ we also need to show a stronger estimate $%
\mathcal{M}_{1}\in L_{\xi }^{\infty }(\dot{H}_{\eta }^{3/2-\varepsilon
})\cap L_{\eta }^{\infty }(\dot{H}_{\xi }^{3/2-\varepsilon })$ for large frequencies.
This paper is organized as follows: in Section \[SecLinEst\] we study the relevant linear dispersive equation. In Section \[SecNot\], we introduce our normal form transformation. In Section \[SecH-1\], we get an estimate on the $L^2$-part of the $X$ norm using the energy method. In Section \[SecMult\] we state and prove the relevant multiplier estimate we need in order to control our bilinear terms. Finally, in Section \[SecL10\], we control the high integrability part of the norm, and finish the analysis to obtain global solutions with small initial data in Theorem \[MainThm\].
Notations and preliminary results
---------------------------------
We work in dimension $n=3$, although we state some results in arbitrary dimension $n$. We introduce $$\langle a\rangle =\sqrt{1+a^{2}}.$$ We write $A\lesssim B$ to signify that there exists a constant $C$ such that $A\le CB$. We write $A\simeq B$ if $A\lesssim B\lesssim A$. Our phases and some multipliers are radial functions, and in some cases we might abuse notation and write, for a radial function $f$, $f(x)=f(\vert x\vert)$.
Our multipliers are estimated using the homogeneous Sobolev norm defined for $0\le s<n/2$ by $$\Vert f\Vert_{\dot{H}^s}=\Vert \vert\nabla\vert^sf\Vert_{L^2}$$ where $\vert\nabla\vert$ is defined by $\mathcal{F}(\vert\nabla\vert
f)(\xi)=\vert\xi\vert\hat{f}(\xi)$.
We will also use the Littlewood-Paley multipliers $P_{N}$ defined for dyadic numbers $N\in 2^{\mathbb{Z}}$ by $$P_{N}g=\mathcal{F}_{\xi }^{-1}\varphi (\frac{\xi }{N})\mathcal{F}_{\xi }g
\label{DefLitPalOp}$$ where $\varphi \in C_{c}^{\infty }(\mathbb{R}^{n})$ is such that $$\forall \xi \neq 0,\hskip.2cm\sum_{N\in 2^{\mathbb{Z}}}\varphi (\frac{\xi }{%
N })=1,$$ and for later use, we also introduce a function $\chi\in C^\infty_c(\mathbb{R%
}^n)$ such that $\chi\varphi=\varphi$. An important estimate on these Littlewood-Paley multipliers is the Bernstein inequality: $$\label{BernSobProp}
\Vert \vert\nabla\vert^{\pm s}P_Nf\Vert_{L^p}\lesssim_s N^{\pm s}\Vert P_Nf\Vert_{L^p}\lesssim_s N^{\pm s}\Vert f\Vert_{L^p}$$
p$, where $\vert\nabla\vert^s$ is the classical fractional differentiation operator.
We will also need the two following product estimates:
\[LemProdEnergy\] Let $\tau$ be a multi-index of length $\vert\tau\vert$ and $\gamma<\tau$, then for all $u\in C^\infty_c(\mathbb{R}^n)$ and all $%
\delta>0$, there holds that $$\Vert D^{\tau-\gamma}u D^\gamma\partial_ju\Vert_{L^2}\lesssim_\delta \Vert
u\Vert_{W^{1+\delta,\infty}}\Vert u\Vert_{H^{\vert\tau\vert}}\lesssim \Vert
u\Vert_{W^{2,10}}\Vert u\Vert_{H^{\vert\tau\vert}}$$
We first note that without loss of generality, we may assume that $|\gamma
|+1,|\tau |-|\gamma |\geq 2$, otherwise Hölder’s inequality gives the result. We use a simple paradifferential decomposition, in other words, we write $$\begin{split}
D^{\tau -\gamma }uD^{\gamma }\partial _{j}u& =\left( \sum_{M\sim
N}+\sum_{M/N\leq 1/16}+\sum_{N/M\leq 1/16}\right) \left( P_{M}D^{\tau
-\gamma }u\right) \left( P_{N}D^{\gamma }\partial _{j}u\right) \\
& =R+T_{1}+T_{2}
\end{split}%$$where $M$ and $N$ are dyadic numbers. We first estimate $R$ as follows using Bernstein properties and in particular the fact that $$\Vert P_{N}u\Vert _{L^{\infty }}\lesssim \min (1,N^{-1-\delta })\Vert u\Vert
_{W^{1+\delta ,\infty }}$$we get that with the Cauchy Schwartz inequality that $$\begin{split}
\Vert R\Vert _{L^{2}}& \lesssim \sum_{M\sim N}\Vert P_{M}D^{\tau -\gamma
}uP_{N}D^{\gamma }\partial _{j}u\Vert _{L^{2}} \\
& \lesssim \sum_{M\sim N}M^{|\tau |-|\gamma |}\Vert P_{M}u\Vert
_{L^{2}}M^{|\gamma |+1}\Vert P_{N}u\Vert _{L^{\infty }} \\
& \lesssim \left( \sum_{M}M^{2|\tau |}\Vert P_{M}u\Vert _{L^{2}}^{2}\right)
^{\frac{1}{2}}\left( \sum_{M}M^{2}\Vert P_{M}u\Vert _{L^{\infty
}}^{2}\right) ^{\frac{1}{2}} \\
& \lesssim \Vert u\Vert _{H^{|\tau |}}\Vert u\Vert _{W^{1+\delta ,\infty }}.
\end{split}%$$Independently, we estimate $T_{1}$ as follows using that if $16M_{i}\leq
N_{i}$, $i=1,2$ then $$\langle P_{M_{1}}fP_{N_{1}}g,P_{M_{2}}hP_{N_{2}}k\rangle _{L^{2}\times
L^{2}}=0$$unless $N_{1}\leq 4N_{2}\leq 16N_{1}$ (intersection of the Fourier support), and letting $f=D^{\tau -\gamma }u$, $g=D^{\gamma }\partial _{j}u$, we get $$\begin{split}
\Vert T_{1}\Vert _{L^{2}}& \lesssim \sum_{N_{1}\sim N_{2},16M_{i}\leq
N_{i}}\langle P_{N_{1}}fP_{M_{1}}g,P_{N_{2}}fP_{M_{2}}g\rangle _{L^{2}\times
L^{2}} \\
& \lesssim \Vert u\Vert _{W^{1,\infty }}^{2}\sum_{N_{1}\sim
N_{2},16M_{i}\leq N_{i}}\Vert P_{N_{1}}f\Vert _{L^{2}}\Vert P_{N_{2}}f\Vert
_{L^{2}}(M_{1}M_{2})^{|\gamma |} \\
& \lesssim \Vert u\Vert _{W^{1,\infty }}^{2}\sum_{N_{1}\sim
N_{2}}N_{1}^{|\gamma |}\Vert P_{N_{1}}f\Vert _{L^{2}}N_{2}^{|\gamma |}\Vert
P_{N_{2}}f\Vert _{L^{2}} \\
& \lesssim \Vert u\Vert _{W^{1,\infty }}^{2}\Vert u\Vert _{H^{|\tau |}}^{2}
\end{split}%$$and $T_{2}$ is treated exactly in the same way.
We also need the following “tame” product estimate (see e.g. Tao [@TaoBook]):
For $1<p<\infty$, $s\ge 0$, $$\label{tameEst}
\Vert uv\Vert_{W^{s,p}}\lesssim \Vert u\Vert_{L^\infty}\Vert
v\Vert_{W^{s,p}}+\Vert u\Vert_{W^{s,p}}\Vert v\Vert_{L^\infty}$$ for $u$ and $v$ in $L^\infty\cap W^{s,p}$.
Linear Decay {#SecLinEst}
============
In this section, we investigate the decay of linear solutions of the linearized equation $$\label{LinEqt1}
\partial_{tt}\rho-\Delta\rho-\Delta(-\Delta+1)^{-1}\rho=0.$$ These solutions can be expressed in terms of the initial data and of one “half-wave” operator $$T_t=e^{itp(\vert\nabla\vert)}$$ for $p$ defined in that we now study. Our main result in this section is the following
\[PropLinDecay\] For any $\delta>0$, for any $f\in W^{\frac{5}{2}%
+\delta,1}$, there holds that $$\label{LinDecay}
\Vert e^{itp(\vert\nabla\vert)}f\Vert_{L^\infty}\lesssim_\delta \left(\vert
t\vert^{-\frac{4}{3}}+\vert t\vert^{-\frac{3}{2}}\right)\Vert f\Vert_{W^{%
\frac{5}{2}+\delta,1}}$$ for all $t\ne 0$. Besides, we have the $L^{10}$-decay estimate $$\label{L10Decay}
\Vert e^{itp(\vert\nabla\vert)}f\Vert_{L^{10}}\lesssim (1+\vert t\vert)^{-%
\frac{16}{15}}\Vert f\Vert_{W^{\frac{12}{5},\frac{10}{9}}}$$ uniformly in $\varepsilon, t, f$.
More precise estimates are derived below. The rest of the section is devoted to a proof of and .
For most of this section, we study the dispersive features of our operator in general dimension $n$. Proposition \[PropLinDecay\] is a consequence of the particular case $n=3$. Direct computations give that $$\label{Estimp}
\begin{split}
p^{\prime }(r)& =\frac{1}{\sqrt{(1+r^{2})(2+r^{2})}}\left( 1+r^{2}+\frac{1}{
1+r^{2}}\right) , \\
p^{\prime \prime }(r)& =\frac{r\left( r^{4}-2r^{2}-6\right) }{(1+r^{2})\left[
(1+r^{2})(2+r^{2})\right] ^{\frac{3}{2}}}\hskip.1cm\hbox{and} \\
p^{\prime \prime \prime }(r)& =\frac{5r^{4}-6r^{2}-6}{(1+r^{2})^{\frac{5}{2}
}(2+r^{2})^{\frac{3}{2}}}-\frac{r\left( r^{4}-2r^{2}-6\right) (11r^{3}+16r)}{
(1+r^{2})^{\frac{7}{2}}(2+r^{2})^{\frac{5}{2}}}.
\end{split}%$$ We note that $p^{\prime \prime }(r)$ has one unique positive root at $$r=r_{0}=\sqrt{1+\sqrt{7}}. \label{r0}$$
In order to state our first result, we define a frequency localization function around the critical point $r_{0}$. Let $\psi _{r_{0}}\in C^{\infty
}(\mathbb{R})$ be a smooth function such that $0\leq \psi_{r_{0}} \leq 1$, $\psi
_{r_{0}}(r_{0}+r)=1$ when $|r|\leq \varepsilon $ and $\psi
_{r_{0}}(r_{0}+r)=0$ when $|r|\geq 2\varepsilon $.
\[LinearDecayMedFreqLemma\] For all time $t\neq 0$, and all $f\in L^{1}$, there holds that $$\Vert e^{itp(|\nabla |)}\psi _{r_{0}}(|\nabla |)f\Vert _{L^{\infty
}}\lesssim _{n,\varepsilon }(1+|t|)^{-\frac{n-1}{2}-\frac{1}{3}}\Vert f\Vert
_{L^{1}}. \label{LinearDecay1}$$
We note that $$\begin{aligned}
\Vert e^{itp(|\nabla |)}\psi _{r_{0}}(|\nabla |)f(x)\Vert_{\infty } &=&||%
\mathcal{F}^{-1}\{e^{itp(|\xi |)}\psi _{r_{0}}(|\xi |)\hat{f}(\xi
)\}||_{\infty } \\
&=&||\mathcal{F}^{-1}\{e^{itp(|\xi |)}\psi _{r_{0}}(|\xi |)\}\ast
f(x)||_{\infty } \\
&\leq &||\mathcal{F}^{-1}\{e^{itp(|\xi |)}\psi _{r_{0}}(|\xi |)\}||_{\infty
}||f||_{L^{1}}.\end{aligned}$$ Since $\psi _{r_{0}}$ is chosen to be spherically symmetric, it is well-known that $$\begin{aligned}
\mathcal{F}^{-1}\{e^{itp(|\xi |)}\psi _{r_{0}}(|\xi |)\}(x) &=&2\pi
\int_{0}^{\infty }e^{itp(r)}\psi_{r_0} (r)\tilde{J}_{\frac{n-2}{2}%
}(r|x|)r^{n-1}dr\end{aligned}$$ where for all $n\geq 2$, $$\tilde{J}_{\frac{n-2}{2}}(s)\equiv s^{-\frac{n-2}{2}}J_{\frac{n-2}{2}}(s)= %
\hbox{Re}\left( e^{is}Z(s)\right) =\frac{1}{2}\left( e^{is}Z(s)+e^{-is}\bar{Z}(s)\right).
\label{Bessel1}$$ Here $Z(s)$ is a smooth function satisfying (cf John [@Joh]) that for all $k\geq 0$ and all $s$ $$|\partial ^{k}Z(s)|\lesssim _{n,k}(1+s)^{-\frac{n-1}{2}-k}. \label{Bessel2}$$ We first estimate $e^{-ir|x|}\bar{Z}(r|x|)$. Changing variable $r\rightarrow
r+r_{0},$ and letting $\Psi (r)=(r_{0}+r)^{n-1}\psi _{r_{0}}(r_{0}+r)$, we get $$\begin{split}
\tilde{I}_{1}& =\int_{0}^{\infty }e^{i\left( tp(r)-r|x|\right) }\psi _{r_{0}}(r)%
\overline{Z}(r|x|)r^{n-1}dr \\
& =\int_{-2\varepsilon }^{2\varepsilon }e^{i\left( tp(r)-r|x|\right) }\Psi
(r)\overline{Z}((r_{0}+r)|x|)dr
\end{split}%$$ and a first crude estimate allows us to conclude that $$|\tilde{I}_{1}|\lesssim _{n,k,\varepsilon }1 \label{EstimI1SmallTimes}$$ which takes care of the small times $|t|\lesssim 1$. Thus, we now assume that $t>1$. We consider the phase $$\Omega (r,|x|,t)=\left(p(r)-r\frac{\vert x\vert}{t}\right) .$$ By , we directly compute that $p^{\prime }(r_{0})\neq 0$ and $$p^{\prime \prime \prime }(r_{0})=\frac{4r_{0}^{4}-4r_{0}^{2}}{(1+r_{0}^{2})^{%
\frac{5}{2}}(2+r_{0}^{2})^{\frac{3}{2}}}\neq 0.$$
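This value can be read off from the formula for $p^{\prime \prime \prime }$ in \eqref{Estimp}: at $r=r_{0}$ the second term vanishes because $r_{0}^{4}-2r_{0}^{2}-6=0$, while the numerator of the first term simplifies, using $r_{0}^{4}=2r_{0}^{2}+6$, to $5r_{0}^{4}-6r_{0}^{2}-6=4r_{0}^{2}+24=4r_{0}^{4}-4r_{0}^{2}>0$.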
**Case 1** Suppose that $|x|\geq \frac{1}{4}p^{\prime }(r_{0})t$. Then, since $\vert r\vert\le 2\varepsilon,$ $$|\partial _{r}^{3}\Omega (r,|x|,t)|=|p^{\prime \prime \prime }(r)|>\frac{1}{%
2 }|p^{\prime \prime \prime }(r_{0})|,$$ if $\varepsilon>0$ is chosen sufficiently small, and using , by the Van der Corput lemma (see e.g. Stein [@Ste]), we get that $$\begin{split}
|\tilde{I}_{1}|& \lesssim _{\varepsilon }|t|^{-\frac{1}{3}}\left( \sup_{k\in
\{0,1\},|r|\leq 2\varepsilon }|\overline{Z}((r_{0}+r)|x|)\partial ^{k}\Psi
(r)|+|x|\sup_{|r|\leq 2\varepsilon }|\Psi (r)\overline{Z}^{\prime
}((r_{0}+r)|x|)|\right) \\
& \lesssim _{\varepsilon }|t|^{-\frac{1}{3}}|x|^{-\frac{n-1}{2}} \\
& \lesssim _{\varepsilon }|t|^{-\frac{1}{3}-\frac{n-1}{2}}
\end{split}
\label{EstimI1LargeTimeCriticalPoint}$$
**Case 2** Suppose now that $|x|\leq \frac{1}{4}p^{\prime }(r_{0})t$. Then $$|\partial _{r}\Omega (r,|x|,t)|\geq |p^{\prime }(r)|-\frac{|x|}{t}\geq
|p^{\prime }(r_{0})|/2$$ and therefore, using the nonstationary phase and the fact that $Z$ has all derivatives bounded, we obtain that $$|\tilde{I}_{1}|\lesssim |t|^{-\frac{n}{2}}.
\label{EstimI1LargeTimeNonCriticalPoint}$$
The estimation of $e^{ir\vert x\vert}Z(r\vert x\vert)$ is easier. Proceeding as above, we introduce $$\tilde{I}_2 =\int_{-2\varepsilon }^{2\varepsilon }e^{i\left( tp(r)+r\vert
x\vert\right) }\Psi (r)Z((r_{0}+r)\vert x\vert)dr.$$ But the phase in $\tilde{I}_2$ satisfies $$\vert \partial_r\Omega_2(r,\vert x\vert,t)\vert =\vert\partial_r\left(p(r)+r%
\frac{\vert x\vert}{t}\right)\vert\ge \vert p^\prime(r)\vert\gtrsim 1$$ and we can conclude as in Case 2 above to get $$\vert\tilde{I}_2\vert\lesssim \vert t\vert^{-\frac{n}{2}}.$$ Now this, , and prove .
Now that we have dealt with the degeneracy at $r_{0}$, the other degeneracies at $0$ and $\infty $ are more easily dealt with, at the price of losing derivatives. To isolate these regions, we introduce two smooth cut-off functions. We let $\psi _{0}$ and $\psi _{\infty }$ be such that $0\leq \psi
_{0}+\psi _{\infty }\leq 1$, $\psi _{0}$ is supported on $%
(-r_{0}+\varepsilon ,r_{0}-\varepsilon )$, $\psi _{\infty }$ is supported on $\{| x |\geq r_{0}+\varepsilon \}$ and $$\psi _{0}+\psi _{r_{0}}+\psi _{\infty }=1. \label{PartitionOfUnity}$$ We first treat the case of small frequencies. We note that since $r_{0}$ is the only positive root of $p^{\prime \prime }$, $p^{\prime \prime }(r)\neq 0$ for either $r\in (r_{0}+\varepsilon ,\infty )$ or $r\in $ $%
(0,r_{0}-\varepsilon ).$ Therefore we can apply Theorem 1 from Guo, Peng and Wang [@GuoPenWan], cases $(a)$ and $(b)$ respectively, to obtain, together with , the following two lemmas:
\[LinearDecaySmallFreqLemma\] There holds that, for all $f\in L^1$ $$\label{LinearDecaySmallFreq1}
\begin{split}
\Vert
e^{itp(\vert\nabla\vert)}\psi_0(\vert\nabla\vert)f\Vert_{L^\infty}%
\lesssim_{n,\varepsilon} (1+\vert t\vert)^{-\frac{n}{2}}\Vert
\vert\nabla\vert^{\frac{n-2}{2}}f\Vert_{L^1}.
\end{split}%$$
\[LinearDecayHighFreqLemma\] For all $f\in L^1$, there holds that $$\label{LinearDecayHighFreq}
\Vert
e^{itp(\vert\nabla\vert)}\psi_\infty(\vert\nabla\vert)f\Vert_{L^\infty}
\lesssim_{n,\varepsilon} \vert t\vert^{-\frac{n}{2}}\Vert \vert\nabla\vert^
\frac{n+2}{2} f\Vert_{B^0_{1,1}}.$$
Finally, from Lemmas \[LinearDecayMedFreqLemma\], \[LinearDecaySmallFreqLemma\] and \[LinearDecayHighFreqLemma\], we can prove Proposition \[PropLinDecay\].
The decay estimate follows directly from , and . In order to get , we interpolate between the isometric property $$\Vert e^{itp(|\nabla |)}Pf\Vert _{L^{2}}=\Vert Pf\Vert _{L^{2}}$$for $P$ a Fourier projector and the various $L^{\infty }$ estimates. Interpolating with gives that $$\Vert e^{itp(|\nabla |)}\psi _{r_{0}}(|\nabla |)f\Vert _{L^{10}}\lesssim |t|^{-\frac{%
16}{15}}\Vert f\Vert _{L^{\frac{10}{9}}}.$$Interpolating with gives $$\Vert e^{itp(|\nabla |)}\psi _{0}(|\nabla |)f\Vert _{L^{10}}\lesssim |t|^{-%
\frac{6}{5}}\Vert f\Vert _{L^{\frac{10}{9}}}.$$Finally, interpolating with and using the inclusions of Besov spaces $$B_{10,2}^{0}\subset L^{10}\hskip.2cm\hbox{and}\hskip.2cmL^{\frac{10}{9}%
}\subset B_{\frac{10}{9},2}^{0},$$and Bernstein estimates , we get that $$\begin{split}
\Vert e^{itp(|\nabla |)}\psi _{\infty }(|\nabla |)f\Vert _{L^{10}}^{2}&
\lesssim \sum_{N\geq 1}\Vert e^{itp(|\nabla |)}\psi _{\infty }(|\nabla
|)P_{N}f\Vert _{L^{10}}^{2} \\
& \lesssim |t|^{-\frac{12}{5}}\sum_{N\geq 1}N^{4}\Vert P_{N}f\Vert _{L^{%
\frac{10}{9}}}^{2} \\
& \lesssim |t|^{-\frac{12}{5}}\Vert f\Vert _{W^{2,\frac{10}{9}}}^{2}.
\end{split}%$$ Since for small time $t\leq 1$, we also have that $$\Vert e^{itp(|\nabla |)}f\Vert _{L^{10}}\lesssim \Vert e^{itp(|\nabla
|)}f\Vert _{H^{\frac{6}{5}}}\lesssim \Vert f\Vert _{H^{\frac{6}{5}}}\lesssim
\Vert f\Vert _{W^{\frac{12}{5},\frac{10}{9}}}$$and since $f=\psi _{r_{0}}(|\nabla |)f+\psi _{0}(|\nabla |)f+\psi _{\infty
}(|\nabla |)f$, this ends the proof.
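For the reader's convenience, we record how the exponents in the interpolation above arise. Interpolating the $L^{2}\rightarrow L^{2}$ isometry with an $L^{1}\rightarrow L^{\infty }$ bound with parameter $\theta =\frac{4}{5}$ produces the pair $(L^{\frac{10}{9}},L^{10})$, since $\frac{1-\theta }{2}=\frac{1}{10}$ and $\frac{1-\theta }{2}+\theta =\frac{9}{10}$; the decay rates are then multiplied by $\theta $, giving $\frac{4}{5}\cdot \frac{4}{3}=\frac{16}{15}$ for the medium frequencies and $\frac{4}{5}\cdot \frac{3}{2}=\frac{6}{5}$ for the other two pieces (whence the factor $|t|^{-\frac{12}{5}}$ once the norms are squared).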
Normal form transformation {#SecNot}
==========================
In this section, we derive the normal form transformation for $\alpha _{j}.$ Isolating linear, quadratic and higher order terms, we can rewrite the Euler-Poisson system as follows: $$\begin{aligned}
& \partial _{t}\rho +\hbox{div}(v) & & +\hbox{div}(\rho v) & & =0
\label{EPrho} \\
& \partial _{t}v+\nabla \rho +\nabla \phi & & +(v\cdot \nabla )v-\nabla
\frac{\rho ^{2}}{2} & & =-\nabla \left[ \ln (1+\rho )-\rho +\frac{\rho ^{2}}{
2}\right] \label{EPv} \\
& \rho =(1-\Delta )\phi & & +\frac{\phi ^{2}}{2} & & +\left[ e^{\phi
}-1-\phi -\frac{\phi ^{2}}{2}\right] . \label{EPphi}\end{aligned}$$ The last line defines an operator $\rho \mapsto \phi (\rho )$ such that $$\label{DefOfElectricField}
\phi (\rho )=(1-\Delta )^{-1}\rho -\frac{1}{2}(1-\Delta )^{-1}\left[
(1-\Delta )^{-1}\rho \right] ^{2}+R(\rho )$$ where $R$ satisfies good properties. We note that since $\nabla \times v=0$ there exists a function $\psi$ such that $v=\nabla \psi $ and consequently, $%
(v\cdot \nabla )v=\nabla \frac{|v|^{2}}{2}$. In terms of the velocity potential $\psi $, we can rewrite the above system as
$$\label{EPtemp}
\begin{array}{c}
\partial _{t}
\begin{pmatrix}
\rho \\
\psi%
\end{pmatrix}
+
\begin{pmatrix}
0 & \Delta \\
(1-\Delta )^{-1}+1 & 0%
\end{pmatrix}
\begin{pmatrix}
\rho \\
\psi%
\end{pmatrix}
\\
=
\begin{pmatrix}
-\nabla \cdot (\rho \nabla\psi) \\
\frac{1}{2}(1-\Delta )^{-1}\left[ (1-\Delta )^{-1}\rho \right] ^{2}-R(\rho
)-\ln (1+\rho )+\rho -\frac{\rho ^{2}}{2}%
\end{pmatrix}
. \\
\end{array}%$$
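For completeness, the identity used above is immediate: if $v=\nabla \psi $, then componentwise $\left( (v\cdot \nabla )v\right) _{j}=\partial _{k}\psi \,\partial _{k}\partial _{j}\psi =\frac{1}{2}\partial _{j}\left( \partial _{k}\psi \,\partial _{k}\psi \right) $ (summation over $k$), i.e. $(v\cdot \nabla )v=\nabla \frac{|v|^{2}}{2}$, while $\hbox{div}(v)=\Delta \psi $, which is how the first row of the system above is obtained from the mass equation.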
We denote the pair of eigenfunctions of the linear part as$%
\begin{pmatrix}
1 \\
\pm \frac{p(|\nabla |)}{i|\nabla |^{2}}%
\end{pmatrix}%
$ $=%
\begin{pmatrix}
1 \\
\pm \frac{q(|\nabla |)}{i|\nabla |}%
\end{pmatrix}%
,$ and recall $\alpha _{j}=\rho +\frac{(-1)^{j}i}{q(|\nabla |)}\mathcal{R }%
^{-1}v\equiv \rho +\frac{(-1)^{j}i}{q(|\nabla |)}|\nabla |\psi ,$ where $%
\mathcal{R}=\frac{\nabla}{\vert\nabla\vert}$ stands for the Riesz transform. We can diagonalize the matrix as:
$$\begin{pmatrix}
0 & \Delta \\
(1-\Delta )^{-1}+1 & 0%
\end{pmatrix}
=
\begin{pmatrix}
1 & 1 \\
\frac{q(|\nabla |)}{i|\nabla |} & -\frac{q(|\nabla |)}{i|\nabla |}%
\end{pmatrix}
\begin{pmatrix}
ip(|\nabla |) & 0 \\
0 & -ip(|\nabla |)%
\end{pmatrix}
\begin{pmatrix}
\frac{1}{2} & \frac{i|\nabla |}{2q(|\nabla |)} \\
\frac{1}{2} & -\frac{i|\nabla |}{2q(|\nabla |)}%
\end{pmatrix}%$$
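As a quick consistency check, not needed in the sequel: using $p(|\nabla |)=|\nabla |\,q(|\nabla |)$ and $q(|\nabla |)^{2}=1+(1-\Delta )^{-1}$ (which follow from the formula for $p$), the product of the three matrices above has vanishing diagonal entries and off-diagonal entries $-\frac{p(|\nabla |)\,|\nabla |}{q(|\nabla |)}=-|\nabla |^{2}=\Delta $ and $\frac{p(|\nabla |)\,q(|\nabla |)}{|\nabla |}=q(|\nabla |)^{2}=(1-\Delta )^{-1}+1$, so the factorization indeed reproduces the linear operator.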
Now, with $\alpha $ given in , using that $\mathcal{R}%
^{-1} \nabla=\vert\nabla\vert\mathcal{R}^{-1}\frac{\nabla }{|\nabla |}%
=|\nabla |,$ and
$$\hbox{div}(v)=\hbox{div}(\frac{\nabla }{|\nabla |}\mathcal{R}
^{-1}v)=-|\nabla |\mathcal{R}^{-1}v,$$
we diagonalize the matrix and rewrite in terms of $\alpha $ as
$$\label{EPShort}
(\partial _{t}+(-1)^{j}ip(|\nabla |))\alpha _{j}=Q_{j}(\alpha )+\mathcal{N}
_{j}$$
where $Q_{2}=\bar{Q}_{1},$ and $\mathcal{N}_{2}=\mathcal{\bar{N}}_{1}$ such that the quadratic term $Q_{1}$ and the cubic term $\mathcal{N}_{1}$ take the form: $$\label{DefOfN}
\begin{split}
&Q_{j}=-\text{div}(\rho v)+(-1)^{j}\frac{i|\nabla |}{2q(|\nabla |)}\left\{
(1-\Delta )^{-1}[(1-\Delta )^{-1}\rho ]^{2}-\rho ^{2}-|v|^{2}\right\} \\
&\mathcal{N}_{j}=(-1)^{j}\frac{i|\nabla |}{q(|\nabla |)}\left[ \ln (1+\rho
)-\rho +\frac{\rho ^{2}}{2}-R(\rho )\right] .
\end{split}%$$ The most important step is to study the linear profiles $$\omega _{j}(t)=e^{(-1)^{j}itp(|\nabla |)}\alpha _{j}(t),$$ whose temporal derivatives are at least of quadratic order: $$\label{OmegaDerivatives}
\partial _{t}\omega _{j}=e^{(-1)^{j}ip(|\nabla |)t}\{Q_{j}(\alpha )+
\mathcal{N}_{j}\}.$$ Plugging $\rho =\frac{\alpha _{1}+\alpha _{2}}{2}$ and $v=\frac{\nabla
p(|\nabla |)}{-\Delta }\frac{\alpha_{1}-\alpha_{2}}{2i}$ into $Q_{j},$ we now compute the Fourier transform of $Q_{j}(\alpha )$ as $$\label{EquationForQ}
\begin{split}
& \hat{Q}_{j}(\alpha )(t,\xi ) \\
& =\int_{\mathbb{R}^{3}}\Big[-\frac{1}{4}\frac{\xi \cdot \eta }{|\eta |}
q(\eta )\hat{\alpha}_{1}(\xi -\eta )\hat{\alpha}_{1}(\eta )+\frac{1}{4}
\frac{\xi \cdot \eta }{|\eta |}q(\eta )\hat{\alpha}_{2}(\xi -\eta )\hat{
\alpha}_{2}(\eta ) \\
& +\frac{i|\xi |}{8q(\xi )}\left(\frac{(\xi -\eta )\cdot \eta }{|\xi -\eta
||\eta |}q(\xi -\eta )q(\eta )-1+\frac{1}{\langle \xi \rangle ^{2}\langle
\xi -\eta \rangle ^{2}\langle \eta \rangle ^{2}}\right) \hat{\alpha}_{1}(\xi
-\eta )\hat{\alpha}_{2}(\eta ) \\
& -\frac{(-1)^{j}i|\xi |}{8q(\xi )}\left(\frac{(\xi -\eta )\cdot \eta }{
|\xi -\eta ||\eta |}q(\xi -\eta )q(\eta )-1+\frac{1}{\langle \xi \rangle
^{2}\langle \xi -\eta \rangle ^{2}\langle \eta \rangle ^{2}}\right) \hat{
\alpha}_{1}(\xi -\eta )\hat{\alpha}_{1}(\eta ) \\
& +\frac{1}{4}\frac{\xi \cdot \eta }{|\eta |}q(\eta )\hat{\alpha}_{1}(\xi
-\eta )\hat{\alpha}_{2}(\eta ) -\frac{1}{4}\frac{\xi \cdot \eta }{|\eta |}%
q(\eta )\hat{\alpha}_{2}(\xi -\eta )\hat{\alpha}_{1}(\eta ) \\
& -\frac{(-1)^{j}i|\xi |}{8q(\xi )}\left(\frac{(\xi -\eta )\cdot \eta }{
|\xi -\eta ||\eta |}q(\xi -\eta )q(\eta )-1+\frac{1}{\langle \xi \rangle
^{2}\langle \xi -\eta \rangle ^{2}\langle \eta \rangle ^{2}}\right) \hat{
\alpha}_{2}(\xi -\eta )\hat{\alpha}_{2}(\eta )\Big]d\eta \\
& \equiv \sum_{r,l=1}^{2}\int_{\mathbb{R}^{3}}m_{rl}^{j}(\xi,\eta)\hat{\alpha}_{r}(\xi
-\eta ) \hat{\alpha}_{l}(\eta )\,d\eta .
\end{split}%$$
We now integrate to get $$\label{DuhamelForAlpha}
\begin{split}
\hat{\alpha}_{j}(t) &=e^{(-1)^{j+1}ip(|\xi |)t}\hat{\alpha}
_{j}(0)+\int_{0}^{t}e^{(-1)^{j+1}ip(|\xi |)(t-s)}\hat{Q}_{j}(\alpha )(s)ds \\
&+\int_{0}^{t}e^{(-1)^{j+1}ip(|\xi |)(t-s)}\mathcal{\hat{N}}_{j}(\alpha
)(s)ds \\
&=e^{(-1)^{j+1}ip(|\xi |)t}\hat{\alpha}_{j}(0)+\int_{0}^{t}e^{(-1)^{j+1}ip(|%
\xi |)(t-s)}\mathcal{\hat{N}}_{j}(\alpha )(s)ds \\
&+e^{(-1)^{j+1}ip(|\xi |)t}\sum_{r,l=1}^{2}\int_{0}^{t}\int_{\mathbb{R}^{3}}e^{(-1)^{j+1}ip(|\xi |)s}m_{rl}^{j}%
\hat{\alpha}_{r}(\xi -\eta )\hat{\alpha}_{l}(\eta )(s)\,d\eta \,ds.
\end{split}%$$ The crucial step is to replace $\hat{\alpha}_{j}(s)=e^{(-1)^{j+1}ip(|\xi
|)s} \hat{\omega}_{j}(s)$ in the third term, which then takes the form $$\label{EquaPsi}
\hat{\Psi}_j(\alpha)=e^{(-1)^{j+1}ip(|\xi
|)t}\sum_{r,l=1}^{2}\int_{0}^{t}\int_{\mathbb{R} ^{3}}m_{rl}^{j}e^{is\Phi
_{rl}}\hat{\omega}_{r}(\xi -\eta )\hat{\omega} _{l}(\eta )d\eta ds.$$ Here $\hat{\omega}_{1}(\xi)=e^{itp(\xi )}\hat{\alpha}_{1}(\xi )$ and $\hat{%
\omega}_{2}(\xi )=e^{-itp(\xi )}\hat{\alpha}_{2}(\xi)=\overline{\hat{\omega}}%
_{1}(\xi ),$ $$\label{DefOfPhim}
\begin{split}
\Phi _{rl}(\xi ,\eta )& =(-1)^{j+1}p(\xi )+(-1)^{r+1}p(\xi -\eta
)+(-1)^{l+1}p(\eta ),\hskip.1cm\hbox{and} \\
m_{rl}^{j}(\xi ,\eta )& =|\xi |n_{1rl}^{j}(\xi )n_{2rl}^{j}(\xi -\eta
)n_{3rl}^{j}(\eta ),
\end{split}%$$ is a factorable multiplier defined in , where the $%
n_{olk}^{j}$ are either smooth functions or product of a smooth function with the angle function $x\mapsto \frac{x}{|x|}$. More specifically, there are only four different phases of the following: $$\label{phase}
\begin{split}
\Phi _{1}(\xi ,\xi -\eta ,\eta )& =p(\xi )-p(\xi -\eta )-p(\eta ) \\
\Phi _{2}(\xi ,\xi -\eta ,\eta )& =p(\xi )+p(\xi -\eta )+p(\eta ) \\
\Phi _{3}(\xi ,\xi -\eta ,\eta )& =p(\xi )-p(\xi -\eta )+p(\eta ) \\
\Phi _{4}(\xi ,\xi -\eta ,\eta )& =p(\xi )+p(\xi -\eta )-p(\eta ). \\
&
\end{split}%$$ Integrating by parts in $s$ in the integral in $\Psi $, and making use of the fact that $\partial _{t}\hat{\omega}$ is at least quadratic by , we obtain from that[^1] $$\begin{split}
& e^{(-1)^{j}ip(|\xi |)t}\hat{\Psi}_j(\alpha)(t,\xi ) \\
& =\sum_{r,l=1}^{2}\left[ \int_{\mathbb{R}^{3}}\frac{m_{rl}^{j}}{i\Phi _{rl}}
e^{is\Phi _{rl}}\hat{\omega}_{r}(\xi -\eta )\hat{\omega}_{l}(\eta )d\eta %
\right] _{s=0}^{t} \\
& +2\sum_{r,l=1}^{2}\int_{0}^{t}\int_{\mathbb{R}^{3}}i\frac{%
m_{rl}^{j}(\xi,\eta)}{\Phi _{rl}}e^{is\Phi _{rl}}\hat{\omega}_{r}(\xi -\eta
)\partial _{t}\hat{\omega} _{l}(\eta )d\eta ds \\
& =i\sum_{r,l=1}^{2}\int_{\mathbb{R}^{3}}\frac{m_{rl}^{j}}{\Phi _{rl}}\hat{
\omega}_{r}(0,\xi -\eta )\hat{\omega}_{l}(0,\eta )d\eta \\
& +\sum_{r,l=1}^{2}e^{(-1)^jitp(\xi )}\int_{\mathbb{R}^{3}}\frac{m_{rl}^{j}}{
i\Phi _{rl}}\hat{\alpha}_{r}(t,\xi -\eta )\hat{\alpha}_{l}(t,\eta )d\eta \\
& +2\sum_{r,l=1}^{2}\int_{0}^{t}\int_{\mathbb{R}^{3}}\frac{
im_{rl}^{j}(\xi ,\eta )}{\Phi _{rl}}
e^{is\Phi _{rl}}\hat{\omega}_{r}(\xi -\eta )e^{is(-1)^{l}p(|\eta |)} \hat{Q}%
_{l}(\alpha)(\eta)d\eta ds \\
& +2\sum_{r,l=1}^{2}\int_{0}^{t}\int_{\mathbb{R}^{3}}\frac{im_{rl}^{j}}{\Phi
_{rl}}e^{is\Phi _{rl}}\hat{\omega}_{r}(\xi -\eta )e^{is(-1)^{l}p(\eta )}
\hat{\mathcal{N}_{l}}(\eta )d\eta ds
\end{split}%$$ We then change back to $\hat{\omega}_{j}(s)=e^{(-1)^{j}ip(|\xi
\alpha}_{j}(s),$ and using , we write
$$\begin{aligned}
&&e^{(-1)^{j}ip(|\xi |)t}\left(\hat{\alpha}_{j}(t)+\mathfrak{B}_{j}(\alpha )
\right) \notag \\
&=&\hat{\alpha}_{j}(0)+\mathfrak{B}_{j}(\alpha
(0))+\int_{0}^{t}e^{(-1)^{j}ip(|\xi |)s}\hat{Q}_{j}(\alpha
)(s)ds+\int_{0}^{t}e^{(-1)^{j}ip(|\xi |)s}\mathcal{\hat{N}}_{j}(\alpha )(s)ds
\notag \\
&=&\hat{\alpha}_{j}(0)+\mathfrak{B}_{j}(\alpha
(0))+\int_{0}^{t}e^{(-1)^{j}ip(|\xi |)s}\mathcal{\hat{N}}_{j}(\alpha )(s)ds
\notag \\
&&+\sum_{r,l=1}^{2}\int_{0}^{t}\int_{\mathbb{R}^{3}}e^{(-1)^{j}ip(|\xi |)s}\frac{im_{rl}^{j}(\xi ,\eta )}{\Phi
_{rl}}\hat{\alpha}_{r}(\xi -\eta )\hat{h}_{l}(\alpha )(\eta )(s)\,d\eta \,ds
\label{alpha}\end{aligned}$$
where the normal form transformation is $$\mathcal{F}\mathfrak{B}_{j}(\alpha )(\xi )=\sum_{r,l=1}^{2}\int_{\mathbb{R}
^{3}}\frac{m_{rl}^{j}}{i\Phi _{rl}}\hat{\alpha}_{r}(\xi -\eta )\hat{\alpha}
_{l}(\eta )d\eta \label{DefOfB}$$ and $$\label{DefOfH}
\hat{h}_{l}(\alpha )\equiv \sum_{r_{1},l_{1}=1}^{2}\int_{\mathbb{R}^{3}}m_{r_{1}l_{1}}^{l}(\eta ,\zeta )
\hat{\alpha}_{r_{1}}(\eta -\zeta )\hat{\alpha}_{l_{1}}(\zeta )d\zeta +
\mathcal{\hat{N}}_{l}$$ is the associated cubic nonlinearity.
We next show that $h(\alpha )$ behaves like a quadratic term in $\alpha .$
Assuming that $\alpha$ has small $X$-norm, then $$\label{PropertiesOfN}
\Vert \vert\nabla\vert^{-1}h(\alpha (t))\Vert_{H^{2k}}+\Vert
\vert\nabla\vert^{-1}\mathcal{N}\Vert_{H^{2k}}\lesssim (1+t)^{-\frac{16}{15}%
}\Vert \alpha \Vert_{X}^{2}.$$
When $h$ is a product of $\alpha $’s, this follows directly from the Sobolev embedding $W^{\frac{3}{10}+\delta ,10}\subset L^{\infty }$ (valid for any $\delta >0$).
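As an illustration only (we freely use here, as in the estimates below, that the $X$-norm controls $(1+t)^{\frac{16}{15}}\Vert \alpha (t)\Vert _{W^{2,10}}$ and $\Vert \alpha (t)\Vert _{H^{2k}}$, and that $\rho =\hbox{Re}(\alpha )$, $v=q(|\nabla |)\mathcal{R}\,\hbox{Im}(\alpha )$ with $q$ and $\mathcal{R}$ bounded on these spaces), consider a contribution of the type $|\nabla |^{-1}\hbox{div}(\rho v)$: since $|\nabla |^{-1}\hbox{div}$ is bounded on $H^{2k}$, the tame estimate \eqref{tameEst} and $W^{2,10}\subset L^{\infty }$ give $$\Vert |\nabla |^{-1}\hbox{div}(\rho v)\Vert _{H^{2k}}\lesssim \Vert \rho v\Vert _{H^{2k}}\lesssim \Vert \rho \Vert _{W^{2,10}}\Vert v\Vert _{H^{2k}}+\Vert \rho \Vert _{H^{2k}}\Vert v\Vert _{W^{2,10}}\lesssim (1+t)^{-\frac{16}{15}}\Vert \alpha \Vert _{X}^{2}.$$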
When $h=\mathcal{N}$, we see from that, except for the term involving $R$, a similar proof works. For the terms involving $R$, we proceed as follows: Letting $E(x)=e^{x}-1-x-\frac{x^{2}}{2}$, we see from that $$\label{EquForR}
\begin{split}
& (1-\Delta )R+\frac{1}{2}\left[ (1-\Delta )^{-1}\rho -(1-\Delta
)^{-1}\left( (1-\Delta )^{-1}\rho \right) ^{2}\right] R+\frac{R^{2}}{2} \\
& +E((1-\Delta )^{-1}\rho -\frac{1}{2}(1-\Delta )^{-1}\left[ (1-\Delta
)^{-1}\rho \right] ^{2}+R) \\
& =\frac{1}{2}(1-\Delta )^{-1}\rho \left[ (1-\Delta )^{-1}\left[ (1-\Delta
)^{-1}\rho \right] ^{2}\right] -\frac{1}{8}\left[ (1-\Delta )^{-1}\left[
(1-\Delta )^{-1}\rho \right] ^{2}\right] ^{2}.
\end{split}%$$
In order to solve , we define the following iterative scheme. For $\rho $ sufficiently small in $X$-norm, we let $$\begin{split}
& R_{0}=0 \\
& (1-\Delta )R_{k+1}=-\frac{1}{2}\left[ (1-\Delta )^{-1}\rho -(1-\Delta
)^{-1} \left( (1-\Delta )^{-1}\rho \right) ^{2}\right] R_{k}-\frac{R_{k}^{2}%
}{2} \\
& -E((1-\Delta )^{-1}\rho -\frac{1}{2}(1-\Delta )^{-1}\left[ (1-\Delta
)^{-1}\rho \right] ^{2}+R_{k}) \\
& +\frac{1}{2}(1-\Delta )^{-1}\rho \left[ (1-\Delta )^{-1}\left[ (1-\Delta
)^{-1}\rho \right] ^{2}\right] -\frac{1}{8}\left[ (1-\Delta )^{-1}\left[
(1-\Delta )^{-1}\rho \right] ^{2}\right] ^{2}
\end{split}%$$ We see that, if $s>3/2$ and $p\ge 2$, using the tame estimate $$\label{EstimForNormRk}
\begin{split}
\Vert R_{k+1}\Vert _{W^{s+2,p}}& \lesssim \Vert \rho \Vert _{L^\infty}\Vert
R_{k}\Vert _{W^{s,p}}+\Vert \rho\Vert_{W^{s-2,p}}\Vert
R_k\Vert_{L^\infty}+\Vert R_{k}\Vert _{L^\infty}\Vert R_k\Vert_{W^{s,p}} \\
&+\Vert \rho \Vert_{L^\infty}\left(\Vert\rho\Vert _{W^{s-2,p}}^{2}+\Vert
\rho \Vert _{W^{s-2,p}}^{3}\right) \\
&+C\left( \Vert \rho \Vert _{L^{\infty }}+\Vert R_{k}\Vert _{L^{\infty
}}\right) \left( \Vert R_{k}\Vert _{W^{s,p}}+\Vert \rho \Vert
_{W^{s-2,p}}\right) ^{3}
\end{split}%$$ and, assuming that $$\sup_{k}\Vert R_{k}\Vert _{L^{\infty }}+\Vert \rho \Vert _{L^{\infty }}\leq 2$$ we also see that $$\Vert R_{k+1}-R_{k}\Vert _{H^{2}}\lesssim \left( \Vert \rho \Vert
_{L^{\infty }}+\sup_{k}\Vert R_{k}\Vert _{L^{\infty }}\right) \Vert
R_{k}-R_{k-1}\Vert _{L^{2}}.$$Hence, if $\Vert \rho \Vert _{X}<1$ is sufficiently small, there holds that $$(1+t)^{\frac{16}{15}}\Vert R_{k}\Vert _{W^{s+2,10}}+\Vert R_{k}\Vert
_{H^{2(s+1)}}\lesssim \Vert \rho \Vert _{X}^{3}\lesssim 1$$ for all $0\le s\le k$ and that $(R_{k})_{k}$ is a Cauchy sequence in $H^{2}$, hence converges to a unique limit $R=R(\rho )$, the given function which solves and satisfies $$\label{ControlOfR}
(1+t)^{\frac{16}{15}}\Vert R(\rho )\Vert _{W^{k+2,10}}+\Vert R(\rho )\Vert
_{H^{2(k+1)}}\lesssim \Vert \alpha \Vert _{X}^{3}.$$ Using now that $W^{2,10}\subset L^\infty$, one recovers from that for all $k$, $$(1+t)^\frac{16}{15}\Vert R_k(t)\Vert_{H^{2k+2}}\lesssim \Vert
\alpha\Vert_X^3.$$ Passing to the limit in $k$, we finish the proof of .
The $L^2$-type norm {#SecH-1}
===================
In this section, we get control on the first part of the $X$-norm, namely, we control the $L^2$-based norms as follows
\[ControlL2NormProp\] Let $\alpha$ correspond to a solution of by , then if $\alpha$ has small $X$-norm there holds that $$\Vert \alpha\Vert_{H^{-1}\cap H^{2k}}\lesssim \Vert
\alpha(0)\Vert_{Y}+\Vert\alpha\Vert_X^\frac{3}{2}.$$
The remainder of this section is devoted to the proof of Proposition \[ControlL2NormProp\]. We first control the high derivatives and then the $%
H^{-1}$-norm.
The Energy estimate {#SecEnergyEstimate}
-------------------
In this subsection, we use energy methods to control high derivatives of the solution in $L^2$, assuming a control on the $X$-norm, and most notably integrability of the solution in $L^{10}$-norms.
In order to prove this, we rewrite into the symmetrized form $$\partial _{t}u+A_{j}(u)\partial _{j}u=(0,-\nabla \phi ) \label{SymHypSys}$$ where $u=(\ln (1+\rho ),v_{1},v_{2},v_{3})$, $$A_{j}=
\begin{pmatrix}
v_{j} & e_{j}^{T} \\
e_{j} & v_{j}I_{3}%
\end{pmatrix}
.$$ Now, for a multi-index $\tau $, we derive $\tau $ times and take the scalar product with $D^{\tau }u$ to get $$\begin{split}
\frac{1}{2}\frac{d}{dt}\Vert D^{\tau }u\Vert _{L^{2}}^{2}&
=-(A_{j}(u)D^{\tau }\partial _{j}u,D^{\tau }u)_{L^{2}\times L^{2}} \\
& -\sum_{\gamma <\tau }c_{\gamma }(D^{\tau -\gamma }[A_{j}(u)]D^{\gamma
}\partial _{j}(u),D^{\tau }u)-(\nabla D^{\tau }\phi ,D^{\tau
}v)_{L^{2}\times L^{2}} \\
& \lesssim \Vert \hbox{div}(v)\Vert_{L^\infty}\Vert D^{\tau }u\Vert
_{L^{2}}^{2}+\sum_{\gamma <\tau }c_{\gamma }\vert (D^{\tau -\gamma
}[A_{j}(u)]D^{\gamma }\partial _{j}(u),D^{\tau }u)_{L^{2}\times L^{2}}\vert
\\
& +\vert (D^{\tau }\phi ,D^{\tau }\hbox{div}(v))_{L^{2}\times L^{2}}\vert. \\
&
\end{split}%$$ Besides, using and , one sees that $$\begin{split}
(D^{\tau }\phi ,D^{\tau }\hbox{div}(v))& =(D^{\tau }(1-\Delta )^{-1}\rho
,D^{\tau }\hbox{div}(v))-(\nabla D^{\tau }\tilde{R}(\rho ),D^{\tau }v) \\
& =-(D^{\tau }(1-\Delta )^{-1}\rho ,D^{\tau }\partial _{t}\rho )-(D^{\tau
}(1-\Delta )^{-1}\rho ,D^{\tau }\hbox{div}(\rho v)) \\
& -(\nabla D^{\tau }\tilde{R}(\rho ),D^{\tau }v)
\end{split}%$$ with $$\tilde{R}(\rho )=\frac{1}{2}(1-\Delta )^{-1}\left[ (1-\Delta )^{-1}\rho %
\right] ^{2}-R(\rho )$$ and $R$ given in . Now, using Lemma \[LemProdEnergy\], we remark that for all $\gamma<\tau$, there holds that $$\Vert D^{\tau-\gamma}uD^\gamma\partial_ju\Vert_{L^2}\lesssim \Vert
u\Vert_{W^{2,10}}\Vert u\Vert_{H^{\vert\tau\vert}}$$ and combining this with , we obtain $$\begin{split}
\frac{1}{2}\frac{d}{dt}\left( \Vert D^{\tau }u\Vert _{L^{2}}^{2}+\Vert
(1-\Delta )^{-\frac{1}{2}}D^{\tau }\rho \Vert _{L^{2}}^{2}\right) & \lesssim
\Vert u\Vert _{W^{2,10}}^2\Vert u\Vert _{H^{\vert\tau\vert}}+\Vert
R\Vert_{H^{\vert\tau\vert+1}}\Vert u\Vert_{H^{\vert\tau\vert}} \\
&\lesssim (1+t)^{-\frac{16}{15}}\Vert u\Vert_X^3
\end{split}%$$ as long as $\vert\tau\vert \le 2k$ and $\Vert \alpha\Vert _{X}$ is sufficiently small. Finally, integrating this in time and remarking that $$\rho =\hbox{Re}(\alpha )\hskip.2cm\hbox{and}\hskip.2cmv=q(|\nabla |)\mathcal{%
R}\hbox{Im}(\alpha ),$$ we obtain that $$\Vert u\Vert _{H^{\tau }}^{2}\lesssim \Vert u(0)\Vert _{H^{\tau }}^{2}+\Vert
u\Vert _{X}^{3} \label{EnergyEstimate}$$ provided that $\tau \leq 2k$. Since control of $\ln (1+\rho )$ in $%
L_{t}^{\infty }H_{x}^{\tau }$-norm gives control of $\rho $ in $%
L_{t}^{\infty }H_{x}^{\tau }$ -norm, this gives us the global bound on the derivatives we needed.
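For completeness, the last assertion follows from a standard Moser-type composition estimate: writing $\rho =F(u_{1})$ with $u_{1}=\ln (1+\rho )$ and $F(x)=e^{x}-1$, a smooth function with $F(0)=0$, one has $\Vert \rho \Vert _{H_{x}^{\tau }}=\Vert F(u_{1})\Vert _{H_{x}^{\tau }}\leq C(\Vert u_{1}\Vert _{L^{\infty }})\Vert u_{1}\Vert _{H_{x}^{\tau }}$, and $\Vert u_{1}\Vert _{L^{\infty }}$ remains small in the small-data regime considered here.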
The $H^{-1}$-norm
-----------------
In this section, we control the $H^{-1}$-norm of the solution, which is the other $L^2$-based component of the $X$-norm. We use and we first deal with the quadratic terms $%
Q_j(\alpha )$, whose contribution can be written as a finite sum of terms like (recall that $\alpha_1=\overline{\alpha}_2$) $$I=\mathcal{F}^{-1}\int_{0}^{t}\int_{\mathbb{R}^{3}}e^{i(t-s)p(\xi )}|\xi |\frac{m}{|\xi |}\hat{
\alpha}(\xi -\eta )\hat{\alpha}(\eta )\,d\eta \,ds$$ where, from we see that one can write $$m=|\xi |n_{1}(\xi )n_{2}(\xi -\eta )n_{3}(\eta )$$ with $n_{i}(\zeta )=\frac{\zeta }{|\zeta |}\tilde{n}(\zeta )$ or $%
n_{i}(\zeta )=\tilde{n}(\zeta )$ for $\tilde{n}$ an $S^{0}$-symbol. In particular, $$\label{SymbolEst}
\Vert n_{i}(|\nabla |)f\Vert _{L^{r}}\lesssim \Vert f\Vert _{L^{r}}$$ for $1<r<\infty $. We use a standard energy estimate and the inclusion $%
W^{1,10}\subset L^\infty$ to get $$\begin{split}
\Vert \frac{I}{|\xi |}\Vert _{L^{2}}& \lesssim \int_{0}^{t}\Vert \int_{%
\mathbb{R}^{3}}\frac{m}{|\xi |}\hat{\alpha}(\xi -\eta )\hat{\alpha}(\eta
)d\eta \Vert _{L^{2}}ds \\
& \lesssim \int_{0}^{t}\Vert \int_{\mathbb{R}^{3}}\left( n_{2}(\xi -\eta )%
\hat{\alpha}(\xi -\eta )\right) \left( n_{3}(\eta )\hat{\alpha}(\eta
)\right) d\eta \Vert _{L^{2}}ds \\
& \lesssim \int_{0}^{t}\Vert \left( n_{2}(|\nabla |)\alpha \right) \left(
n_{3}(|\nabla |)\alpha \right) \Vert _{L^{2}}ds \\
& \lesssim \int_{0}^{t}\Vert n_{2}(|\nabla |)\alpha \Vert _{L_{t}^{\infty
}L_{x}^{2}}\Vert n_{3}(|\nabla |)\alpha (s)\Vert _{L_{x}^{\infty }}ds \\
& \lesssim \Vert \alpha \Vert _{X}\int_{0}^{t}\Vert (1-\Delta )^{\frac{1}{2}%
}n_{3}(|\nabla |)\alpha (s)\Vert _{L_{x}^{10}}ds \\
& \lesssim \Vert \alpha \Vert _{X}\int_{0}^{t}\Vert (1-\Delta )^{\frac{1}{2}%
}\alpha (s)\Vert _{L_{x}^{10}}ds \lesssim \Vert \alpha \Vert
_{X}^{2}\int_{0}^{t}\frac{ds}{(1+s)^{\frac{16}{ 15}}} \\
&\lesssim \Vert \alpha \Vert _{X}^{2}.
\end{split}%$$ Next we control the contribution of the cubic term $\mathcal{N}$ as follows using the fact that $e^{itp(\vert\nabla\vert)}$ is a unitary operator and , $$\begin{split}
\Vert |\nabla |^{-1}\int_{0}^{t}e^{-i(t-s)p(|\nabla |)}\mathcal{N}(s)ds\Vert
_{L^{2}}& \lesssim \int_{0}^{t}\Vert |\nabla |^{-1}\mathcal{N}\Vert
_{L^{2}}ds \\
& \lesssim \Vert \alpha \Vert _{X}^{2}\int_{0}^{t} \frac{ds}{(1+s)^{\frac{16%
}{15}}}\lesssim \Vert \alpha \Vert _{X}^{2}.
\end{split}%$$ Combining the two above estimates give that $$\begin{split}
\Vert |\nabla |^{-1}\alpha\Vert _{L_{t}^{\infty }L_{x}^{2}}& \lesssim \Vert
|\nabla |^{-1}\alpha _{0}\Vert _{L^{2}}+\Vert \frac{I(\xi )}{ |\xi |}\Vert
_{L^{2}}+\Vert \int_{0}^{t}\frac{e^{-i(t-s)p(|\xi |)}}{|\xi |} \hat{\mathcal{%
N}}(s)ds\Vert _{L^{2}} \\
& \lesssim \Vert \alpha _{0}\Vert _{Y}+\Vert \alpha \Vert _{X}^{2}
\end{split}
\label{H1Norm}$$ so that we control the first part in the $X$-norm.
Bilinear Multiplier Theorem {#SecMult}
===========================
A general multiplier theorem
----------------------------
In order to control the last part of the norm, we need to deal with the bilinear terms in , which involve convolution with a singular symbol. Note that since $p(0)=0$, the symbol is quite singular on the whole parameter space and especially near $(\xi ,\eta )=(0,0)$. In particular, we cannot use the traditional Coifman-Meyer multiplier theorem [@CoiMey], or a more refined version as in Muscalu, Pipher, Tao and Thiele [@Mus; @MusPipTaoThi], since in all these cases the multiplier needs to satisfy some homogeneity conditions. In order to overcome this, we use estimates inspired by Gustafson, Nakanishi and Tsai [@GNT], which we present now. Although most of the results in this subsection are essentially contained in Gustafson, Nakanishi and Tsai [@GNT], we give a direct proof to keep the exposition self-contained.
We introduce the following multiplier norm: $$\Vert \mathfrak{m}\Vert _{M_{\xi ,\eta }^{s,b}}=\sum_{N\in 2^{\mathbb{Z}%
}}\Vert P_{N}^{\eta }\mathfrak{m}(\xi ,\eta )\Vert _{L_{\xi }^{b}\dot{H}%
_{\eta }^{s}} \label{MsNorm}$$and we let $\mathcal{M}_{\xi ,\eta }^{s}=\mathcal{M}_{\xi ,\eta }^{s,\infty
} $, which will be the norm that we mostly use. To a multiplier $\mathfrak{m}
$, we associate the bilinear pseudo-product operator $$B[f,g]=\mathcal{F}_{\xi }^{-1}\int_{\mathbb{R}^{3}}\mathfrak{m}(\xi ,\eta )%
\hat{f}(\xi -\eta )\hat{g}(\eta )d\eta . \label{b}$$ Our goal in this section is to obtain robust estimates on $B$.
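As a simple orienting example, not used in the sequel: if $\mathfrak{m}\equiv 1$ in \eqref{b}, the integral above is just the convolution $(\hat{f}\ast \hat{g})(\xi )$, so that $B[f,g]$ reduces, up to a normalization constant depending on the convention chosen for the Fourier transform, to the pointwise product $fg$; general multipliers $\mathfrak{m}$ thus produce genuine bilinear pseudo-products.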
\[infinity\]If $\Vert \mathfrak{m}\Vert _{L_{\xi }^{\infty }\dot{H}
_{\eta }^{s-\varepsilon }}+\Vert \mathfrak{m}\Vert _{L_{\xi }^{\infty }\dot{%
H
}_{\eta }^{s+\varepsilon }}<\infty $, then the $M_{\xi ,\eta }^{s}$-norm of $\mathfrak{m}$ is finite.
Indeed, we have that $$\begin{split}
\Vert P_{N}^{\eta }\mathfrak{m}(\xi ,\eta )\Vert _{L_{\xi }^{\infty }\dot{H}
_{\eta }^{s}}& \leq \min (N^{-\varepsilon }\Vert \mathfrak{m}\Vert _{L_{\xi
}^{\infty }\dot{H}_{\eta }^{s+\varepsilon }},N^{\varepsilon }\Vert \mathfrak{%
m}\Vert _{L_{\xi }^{\infty }\dot{H}_{\eta }^{s-\varepsilon }}),\hskip.2cm %
\hbox{so that} \\
\sum_{N}\Vert P_{N}^{\eta }\mathfrak{m}(\xi ,\eta )\Vert _{L_{\xi }^{\infty
} \dot{H}_{\eta }^{s}}& \lesssim \left(\sum_{N\leq 1}+\sum_{N\geq
1}\right)\Vert P_{N}^{\eta }\mathfrak{m}(\xi ,\eta )\Vert _{L_{\xi }^{\infty
} \dot{H}_{\eta }^{s}} \\
& \lesssim \sum_{N\leq 1}N^{\varepsilon }\Vert \mathfrak{m}\Vert _{L_{\xi
}^{\infty }\dot{H}_{\eta }^{s-\varepsilon }}+\sum_{N\geq 1}N^{-\varepsilon
}\Vert \mathfrak{m}\Vert _{L_{\xi }^{\infty }\dot{H}_{\eta }^{s+\varepsilon
}}<+\infty .
\end{split}%$$
\[CorGNT\] Suppose that $0\le s\le n/2$ and $\Vert \mathfrak{m}\Vert
_{M_{\eta ,\xi }^{s,\infty }}=\Vert \mathfrak{m}\Vert _{M_{\eta ,\xi
}^{s}}<\infty ,$ then $$\Vert B[f,g]\Vert _{L^{l_{1}^{\prime }}}\lesssim \Vert \mathfrak{m}\Vert
_{M_{\eta ,\xi }^{s}}\Vert f\Vert _{L^{l_{2}}}\Vert g\Vert _{L^{2}},$$for $l_{1},l_{2}$ satisfying $$2\leq l_{1},l_{2}\leq \frac{2n}{n-2s}\text{ and }\frac{1}{l_{1}}+ \frac{1}{%
l_{2}}=1-\frac{s}{n}. \label{CondpqBilEstGNT}$$
Actually, by changing coordinates $(\xi ,\eta )$ to $(\xi ,\zeta =\xi -\eta
) $, we could replace the norm $M_{\xi ,\eta }^{s}$ by $$\min \left( \Vert \mathfrak{m}\Vert _{M_{\xi ,\eta }^{s}},\Vert \mathfrak{m}%
\Vert _{M_{\xi ,\zeta }^{s}}\right) .$$
Theorem \[CorGNT\] follows by duality from the following estimate which is an adaptation of an estimate from Gustafson, Nakanishi and Tsai [@GNT].
\[LemGNT\] Let $0\le s\le n/2$, $2\leq l_{1},l_{2},l_{3}\leq \frac{2n}{%
n-2s}$, then $$\Vert B[f,g]\Vert _{L^{l_{1}}}\lesssim \Vert \mathfrak{m}\Vert _{\mathcal{M}
_{\xi ,\eta }^{s,b}}\Vert f\Vert _{L^{l_{2}}}\Vert g\Vert _{L^{l_{3}}}
\label{BilEstGNT}$$ for all $f\in L^{l_2}$, $g\in L^{l_3}$, where $\frac{1}{b}+\frac{1}{l_{1}}=%
\frac{1}{2}$, $\frac{1}{l_{2}}+\frac{1}{ l_{3}}=1-\frac{s}{n}.$
We consider $\mathfrak{m}$ with finite $\mathcal{M}_{\xi ,\eta }^{s,b}$ norm. Let $\mathcal{F}_{x}^{\eta }$ denote the Fourier transform from $%
x\rightarrow \eta .$ By definition, we have $$\hat{f}(\eta )\hat{g}(\xi -\eta )=\mathcal{F}_{x}^{\eta }\mathcal{F}%
_{y}^{\xi }f(x+y)g(y) \label{DoubleFT}$$and we let $\mathfrak{m}_{N}(\xi ,\eta )=P_{N}^{\eta }\mathfrak{m}(\xi ,\eta
)$ so that $\mathcal{F}_{z}^{\eta }\mathfrak{m}_{N}(\xi ,\eta )=\chi(\frac{z
}{N})\mathcal{F}_{\eta }^{z}\mathfrak{m}_N(\xi ,\eta )$. Using first Parseval’s equality in $x$, then in $\eta $ and then in $\xi$, we see that $$\begin{split}
\int_{\mathbb{R}^{n}}B[f,g](x)h(x)dx& =\int_{\mathbb{R}^{2n}}\mathfrak{m}
_{N}(\xi ,\eta )\hat{h}(\xi )\hat{f}(\eta )\hat{g}(\xi -\eta )d\eta d\xi \\
& =\int_{\mathbb{R}^{n}}\hat{h}(\xi )\int_{\mathbb{R}^{n}}\mathfrak{m}
_{N}(\xi ,\eta )\left( \mathcal{F}_{x}^{\eta }\mathcal{F}_{y}^{\xi
}f(x+y)g(y)\right) d\eta d\xi \\
& =\int_{\mathbb{R}^{n}}\hat{h}(\xi )\int_{\mathbb{R}^{n}}\mathcal{F}
_{x}^{\eta }\mathfrak{m}_{N}(\xi ,\eta )\left( \mathcal{F}_{y}^{\xi
}f(x+y)g(y)\right) d\eta d\xi \\
& =\int_{\mathbb{R}^{n}}\hat{h}(\xi )\int_{\mathbb{R}^{n}}\left( \chi(\frac{x%
}{N})\mathcal{F}_{\eta }^{x}\mathfrak{m}_N(\xi ,\eta )\right) \left(\mathcal{%
F}_{y}^{\xi }f(x+y)g(y)\right) dxd\xi \\
& =\int_{\mathbb{R}^{n}}\hat{h}(\xi )\int_{\mathbb{R}^{n}}\left(\mathcal{F}%
_{\eta }^{x}\mathfrak{m}_N(\xi ,\eta )\right) \mathcal{F} _{y}^{\xi} \left(
\chi (\frac{ x}{N})f(x+y)g(y)\right) dxd\xi
\end{split}%$$
We then use the Cauchy-Schwarz inequality in the inner integral in $x,$ and then Hölder’s inequality with $\frac{1}{a}+\frac{1}{b}=\frac{1}{2}$ to get $$\begin{aligned}
&&\int_{\mathbb{R}^{n}}|\hat{h}(\xi )\vert \Vert \mathcal{F} _{\eta }^{x}%
\mathfrak{m}_N(\xi ,\eta )\Vert_{L_{x}^{2}}(\xi )\Vert \mathcal{F} _{y}^{\xi
}\{\chi (\frac{ x}{N}) \left( f(x+y)g(y)\right) \}\Vert_{L_{x}^{2}}(\xi )d\xi
\\
&\leq &\Vert\hat{h}\Vert_{L^{a}_\xi}\Vert \mathfrak{m}_N(\xi ,\eta
)\Vert_{L_{\xi }^{b}(L_{\eta }^{2})}\Vert\mathcal{F}_{y}^{\xi }\{\chi (\frac{
x}{N})\left( f(x+y)g(y)\right) \}\Vert_{L_{x,\xi }^{2}} \\
&\leq &\Vert h\Vert_{L^{a^{\prime }}_x}\Vert\mathfrak{m}_N(\xi ,\eta
)\Vert_{L_{\xi }^{b}(L_{\eta }^{2})}\Vert \chi (\frac{ x}{N}%
)f(x+y)g(y)\Vert_{L_{x,y}^{2}},\end{aligned}$$ where we have used the Hausdorff-Young inequality for $a>2,$ and Parseval’s equality in $\eta $ for the second factor, as well as Parseval’s equality in $\xi $ for the third factor. Finally, since $%
\Vert\chi (\frac{ x}{N})\Vert_{L^{n/s}}\lesssim N^{s},$ we employ the Hardy-Littlewood-Young inequality with $\frac{s}{n}+\frac{1}{l_{2}}+\frac{1%
}{l_{3}}=1$ to get that $$\Vert \chi (\frac{ x}{N})f(x+y)g(y)\Vert _{L_{x,y}^{2}}\lesssim N^{s}\Vert
f\Vert _{L^{l_{2}}}\Vert g\Vert _{L^{l_{3}}}.$$ Combining the factor $N^{s}$ with $||\mathfrak{m}_{N}(\xi ,\eta )||_{L_{\xi }^{b}(L_{\eta
}^{2})}$ and summing over $N$ as in , we complete the proof.
In order to prove theorem \[CorGNT\], it suffices to remark that $$\begin{split}
\int_{\mathbb{R}^{n}}B[f,g](x)h(x)dx& =\int_{\mathbb{R}^{n}}\mathcal{F}%
_{x}^{\xi }B[f,g](\xi )\hat{h}(\xi )d\xi \\
& =\int_{\mathbb{R}^{2n}}\hat{f}(\eta )\mathfrak{m}(\xi ,\eta )\hat{h}(\xi )%
\hat{g}(\xi -\eta )d\xi d\eta \\
& =\int_{\mathbb{R}^{n}}f(x)B^{\ast }[h,\bar{g}](x)dx.
\end{split}%$$Applying to $B^{\ast }$ with $l_{1}=2$ to the bilinear operator corresponding to the multiplier $\mathfrak{m}^{\ast }(\xi ,\eta )=%
\mathfrak{m}(\eta ,\xi )$, we get the Theorem.
Multiplier Analysis
-------------------
The control of the $L^{10}$-norm is the main mathematical difficulty in this paper. In this subsection, we prove the estimates needed to apply Theorem \[CorGNT\] to the multipliers that appear in our analysis.
\[EstimPhiGen\] Let $a=b+c\in \mathbb{R}^{3},$ and let $|c|\leq \min
\{|a|,|b|\},$ then$$|p(a)-p(b)-p(c)|\gtrsim |c|\{1-\cos [c,a]+1-\cos [b,a]\}+\frac{|a||b||c|}{%
(1+|a||b|)(1+|c|^{2})}. \label{EstimOnPhi}$$where $[\cdot ,\cdot ]$ denotes the angle between two vectors.
We first note that if $|b|\geq |a|,$ then $p(b)\geq p(a)$ and $$|p(a)-p(b)-p(c)|\geq p(c)\gtrsim |c|$$and the lemma follows. We assume $|b|\leq |a|.$ We remark that, as written in , $p(r)=rq(r)$, where $1\leq q(r)\leq q(0)=\sqrt{2}$ and $$\begin{split}
q^{\prime }(r)& =-\frac{r}{(1+r^{2})^{2}\sqrt{\frac{2+r^{2}}{1+r^{2}}}}\sim
_{r\rightarrow \infty }-\frac{1}{r^{3}} \\
q^{\prime }(0)& =0,\hskip.3cmq^{\prime \prime }(0)=-\frac{1}{\sqrt{2}}.
\end{split}
\label{EstimOnQ}$$From this, we get that $$p(a)-p(b)-p(c)\leq \left[ |a|-|b|-|c|\right] q(a)-|b|\left( q(b)-q(a)\right)
-|c|\left( q(c)-q(a)\right) . \label{EstonPhiABCGenPos}$$From , we see that $q$ is decreasing and hence each term is non positive. Remarking that $|a|=|b|\cos [b,a]+|c|\cos [c,a]$, the first term above gives the first term on the right hand side in .
We now consider the last term in the right hand side. Notice first that if $%
\cos [c,a]\leq 9/10$, then the last term is bounded by $\vert c\vert(1-\cos[%
c,a])$ and the lemma is clearly valid. So we can assume that $c$ and $a$ are almost collinear with $\cos [c,a]\geq 9/10 $. In which case, we get that $%
|a|\geq 4/3|c|$ and $$|a|-|c|\sim |b|\sim |a|.$$Using , we see that there exists $\delta >0$ such that $%
-s\leq q^{\prime }(s)\leq -\frac{s}{2}$ for $0\leq s\leq \delta $. Consequently, if $|a|\leq \delta $, we get that $$\begin{split}
|c|(q(a)-q(c))& =|c|\int_{|c|}^{|a|}q^{\prime }(s)ds\leq -|c|\frac{%
|a|^{2}-|c|^{2}}{4}\lesssim -|a||c|(|a|-|c|)\lesssim-|a||b||c|.
\end{split}%$$ On the other hand, since $q^{\prime }(r)\sim -r^{-3}$ at $\infty $, we see that $$q(a)-q(c)=\int_{|c|}^{|a|}q^{\prime }(s)ds\sim _{\infty }-\int_{|c|}^{|a|}
\frac{ds}{s^{3}}=\frac{|c|^{2}-|a|^{2}}{2|a|^{2}|c|^{2}}\lesssim -\frac{
|a|-|c|}{|a||c|^{2}}$$ so that if $|c|\geq \delta ^{-1}$ is sufficiently large, we get that $$|c|(q(a)-q(c))\lesssim-\frac{1}{|c|}.$$ Finally, in the last case $\delta\le\vert a\vert\le\delta^{-1}$ and $%
|a|=|c|+(|a|-|c|)\geq |c|+\delta /2$. Therefore, $$\int_{|c|}^{|a|}q^{\prime }(s)ds\lesssim \int_{|c|}^{|c|+\delta /2}q^{\prime
}(s)ds\lesssim -\frac{\delta }{2}q^{\prime }(2\delta ^{-1})$$ and we recover the last term once again.
In the remaining part of this section, we consider the triangle with vertices $\xi ,\eta ,\xi -\eta $ and let $\theta $ be the angle between $\xi
$ and $\eta $ ($0\leq \theta \leq \pi $), $\gamma $ the angle between $\xi $ and $\xi -\eta $ ($0\leq \gamma \leq \pi $) and we denote the angle between $%
\eta $ and $\eta -\xi $ by $\pi -\beta $ such that $\beta =\gamma +\theta .$ We note that $\sin \frac{\beta }{2}\leq \frac{\beta }{2}$ and $\sin \frac{%
\beta }{2}\backsim \beta $ for $0\leq \beta \leq \pi $ so that $$1-\cos \beta =2\sin ^{2}\frac{\beta }{2}\backsim \beta ^{2}.$$
We now obtain general bounds on the multipliers that arise in our analysis. We first focus on the multiplier associated with the phase $\Phi_1$. In the end, in Section \[SecL10\], we recover the bounds on the other multipliers using symmetry.
\[EstimDerOfPhi\] The following estimates on $\Phi_1$ are globally true: $$\begin{aligned}
|\partial _{\xi }\Phi _{1}(\xi ,\eta )|&\lesssim \frac{|\eta |}{\langle \max
\{|\xi -\eta |,|\xi |\}\rangle \langle \min \{|\xi -\eta |,|\xi |\}\rangle
^{2}}+|\sin \gamma |,\text{ \ \ \ \ \ } \label{phixi} \\
|\partial _{\eta }\Phi _{1}(\xi ,\eta )|& \lesssim \frac{|\xi |}{\langle
\max \{|\xi -\eta |,|\eta |\}\rangle \langle \min \{|\xi -\eta |,|\eta
|\}\rangle ^{2}}+|\sin \beta |, \label{phieta} \\
|\Delta _{\xi }\Phi _{1}(\xi ,\eta )|& \lesssim \frac{|\eta |}{\langle \max
\{|\xi -\eta |,|\xi |\}\rangle \langle \min \{|\xi -\eta |,|\xi |\}\rangle
^{3}}+\frac{|\eta |}{|\xi -\eta ||\xi |} ,\text{ \ \ \ \ \ } \label{phixixi}
\\
|\Delta _{\eta }\Phi _{1}(\xi ,\eta )|& \lesssim \frac{1}{%
\min(\vert\xi-\eta\vert,\vert\eta\vert)} \label{phietaeta}\end{aligned}$$ for all $\xi,\eta\in\mathbb{R}^3$.
Recall $\Phi_1=p(\xi)-p(\xi-\eta)-p(\eta)$. We compute $$\begin{split}
\left\vert \nabla_{\xi }\Phi _{1}\right\vert & =\left\vert p^{\prime }(\xi )%
\frac{\xi }{|\xi |}-p^{\prime }(\xi -\eta )\frac{\xi -\eta }{|\xi -\eta |}%
\right\vert \\
& \leq \left\vert p^{\prime }(\xi )-p^{\prime }(\xi -\eta )\right\vert
+|p^{\prime }(\xi )|\left\vert \frac{\xi }{|\xi |}-\frac{\xi -\eta }{|\xi
-\eta |}\right\vert \\
& \lesssim
\left\vert\int_{\vert\xi-\eta\vert}^{\vert\xi\vert}p^{\prime\prime}(s)ds%
\right\vert +\left\vert \frac{\xi }{|\xi |}-\frac{\xi -\eta }{|\xi -\eta |}%
\right\vert \\
& \lesssim
\left\vert\int_{\vert\xi-\eta\vert}^{\vert\xi\vert}p^{\prime\prime}(s)ds%
\right\vert+2\sin \frac{\gamma }{2}.
\end{split}%$$ We claim that $$\label{DifferenceinPPrime}
\left\vert p^\prime(\xi)-p^\prime(\xi-\eta)\right\vert\lesssim\frac{|\eta |}{%
\langle \max \{|\xi -\eta |,|\xi |\}\rangle \langle \min \{|\xi -\eta |,|\xi
|\}\rangle ^{2}}$$
In fact, if $\max(\vert\xi\vert,\vert\xi-\eta\vert)\le 20$, from , using the crude bound $\vert
p^{\prime\prime}(s)\vert\lesssim 1$, we obtain that $$\left\vert\int_{\vert\xi-\eta\vert}^{\vert\xi\vert}p^{\prime\prime}(s)ds%
\right\vert\lesssim \left\vert \vert\xi\vert-
\vert\xi-\eta\vert\right\vert\lesssim \vert\eta\vert\lesssim \frac{|\eta |}{%
\langle \max \{|\xi -\eta |,|\xi |\}\rangle \langle \min \{|\xi -\eta |,|\xi
|\}\rangle ^{2}}.$$ Therefore, we only need to consider the case $\max\{\vert\xi\vert,\vert\xi-%
\eta\vert\}\ge 20$. Then, if $\min\{\vert\xi\vert,\vert\xi-\eta\vert\}\le 10$, we get that $\vert\eta\vert\simeq\max\{\vert\xi\vert,\vert\xi-\eta\vert\}$ and the right-hand side of is of order $1$ and the claim is valid. Finally, if $\min\{\vert\xi\vert,\vert\xi-\eta\vert\}\ge 10$, from , $p^{\prime \prime }(r)\sim \frac{1}{r^{3}}$ as $r\rightarrow
\infty ,$ and we conclude that claim since $$\label{min}
\begin{split}
\left\vert\int_{\vert\xi-\eta\vert}^{\vert\xi\vert}p^{\prime\prime}(s)ds%
\right\vert
&\lesssim\left\vert\int_{\min\{\vert\xi\vert,\vert\xi-\eta\vert\}}^{\max\{%
\vert \xi\vert,\vert\xi-\eta\vert\}} \frac{1}{ r^{3}}dr\right\vert \lesssim
\frac{1}{\min\{\vert\xi\vert,\vert\xi-\eta\vert\}^{2}}-\frac{1}{%
\max\{\vert\xi\vert ,\vert \xi-\eta\vert\}^2} \\
&=\frac{\left\vert \vert\xi\vert- |\xi -\eta
|\right\vert(\vert\xi\vert+\vert \xi -\eta\vert)}{\vert\xi\vert^{2}|\xi
-\eta |^{2}}\lesssim \frac{\vert\eta\vert}{\min\{\vert\xi\vert,\vert\xi-\eta%
\vert\}^{2}\max\{\vert \xi\vert,\vert \xi -\eta\vert\}} \\
&\lesssim \frac{|\eta |}{\langle \max \{|\xi -\eta |,|\xi |\}\rangle \langle
\min \{|\xi -\eta |,|\xi |\}\rangle ^{2}}.
\end{split}%$$
Similarly, as in , $$\begin{split}
\left\vert \nabla _{\eta }\Phi _{1}\right\vert & =\left\vert -p^{\prime
}(\xi -\eta )\frac{\xi -\eta }{|\xi -\eta |}-p^{\prime }(\eta )\frac{\eta }{%
|\eta |}\right\vert \\
& \leq \left\vert p^{\prime }(\eta )-p^{\prime }(\xi -\eta )\right\vert
+|p^{\prime }(\xi )|\left\vert \frac{\eta }{|\eta |}-\frac{\xi -\eta }{|\xi
-\eta |}\right\vert \\
& \leq \left\vert p^{\prime }(\eta )-p^{\prime }(\xi -\eta )\right\vert +2%
\sqrt{2}\sin \frac{\beta }{2} \\
&\lesssim\frac{|\xi |}{\langle \max \{|\xi -\eta |,|\eta |\}\rangle \langle
\min \{|\xi -\eta |,|\eta |\}\rangle ^{2}}+|\sin \beta |.
\end{split}%$$ Using the fact that $p^{(3)}(r)\sim r^{-4}$ as $r\to+\infty$, we now compute, by , that $$\begin{split}
\vert\Delta _{\xi}\Phi _{1}\vert & =\vert \Delta p(\xi )-\Delta p(\xi -\eta
)\vert \\
& =\left\vert p^{\prime\prime}(\xi)-p^{\prime \prime}(\xi -\eta ) +2\left(
\frac{p^{\prime}(\xi)}{\vert\xi\vert}-\frac{p^{\prime}(\xi -\eta)}{\vert \xi
-\eta \vert}\right) \right\vert \\
& \lesssim \frac{|\eta |}{\langle \max \{|\xi -\eta |,|\xi |\}\rangle
\langle \min \{|\xi -\eta |,|\xi |\}\rangle ^{3}} +\frac{\vert p^{\prime
}(\xi )-p^{\prime }(\xi -\eta )\vert}{\vert\xi\vert} \\
&+\left\vert\frac{1}{|\xi |}-\frac{1}{|\xi -\eta |}\right\vert \vert
p^{\prime }(\xi -\eta )\vert \\
& \lesssim \frac{|\eta |}{\langle \max \{|\xi -\eta |,|\xi |\}\rangle
\langle \min \{|\xi -\eta |,|\xi |\}\rangle ^{3}} \\
&+\frac{|\eta |}{\langle \max \{|\xi -\eta |,|\xi |\}\rangle \langle \min
\{|\xi -\eta |,|\xi |\}\rangle ^{2}|\xi |}+\frac{|\eta |}{\vert \xi -\eta
\vert\vert \xi \vert }.
\end{split}%$$ Finally, we also get that $$\begin{split}
\Delta _{\eta }\Phi _{1}& =-\Delta _{\eta }p(\xi -\eta )-\Delta _{\eta
}p(\eta ) \\
& =-\left( p^{\prime \prime }(\xi -\eta )+p^{\prime \prime }(\eta )\right)
-\left( \frac{2}{|\xi -\eta |}p^{\prime }(\xi -\eta )+\frac{2}{|\eta |}%
p^{\prime }(\eta )\right) \\
& \lesssim \frac{1}{1+|\xi -\eta |^{3}}+\frac{1}{1+|\eta |^{3}}+\frac{1}{%
|\eta |}+\frac{1}{|\xi -\eta |}.
\end{split}%$$ This ends the proof.
\[EstimGenPhase\] Define $$\mathfrak{M}_{1}=\frac{|\xi ||\xi -\eta ||\eta |}{\Phi _{1}\langle \xi -\eta
\rangle ^{2\lambda }\langle \eta \rangle ^{2\lambda }}$$ then if $f$ is either $\frac{\chi }{\langle \xi -\eta \rangle ^{\frac{1}{2}}}
$ or $\frac{\chi }{\langle \eta \rangle ^{\frac{1}{2}}}$ for any cutoff function $\chi $ with support in $$\label{RegionOmega}
\Omega =\{\max \{|\xi |,|\xi -\eta |,|\eta |\}\gtrsim 1\},$$ we have that, for any $\varepsilon >0,$ $$\begin{aligned}
||\mathfrak{M}_{1}||_{L_{\eta }^{\infty }(H_{\xi }^{\frac{5}{4}-\varepsilon
})}+||\mathfrak{M}_{1}||_{L_{\xi }^{\infty }(H_{\eta }^{\frac{5}{4}%
-\varepsilon })} &\lesssim_{\varepsilon}1\text{\ \ }for\text{ \ \ }\lambda >%
\frac{9}{8}. \label{mglobal} \\
||f\mathfrak{M}_{1}||_{L_{\eta }^{\infty }(H_{\xi }^{\frac{3}{2}-\varepsilon
})}+||f\mathfrak{M}_{1}||_{L_{\xi }^{\infty }(H_{\eta }^{\frac{3}{2}%
-\varepsilon })} &\lesssim_{\varepsilon}1\text{ \ \ }for\text{ }\lambda >1.
\label{mlocal}\end{aligned}$$
In order to prove this proposition, we split $\mathbb{R}^3$ into a union of three regions: $\{|\xi |<\frac{1}{2} |\eta |\},$ $\{|\eta |<\frac{1}{2}|\xi
|\}$ and $\{\frac{1}{3}<\frac{|\xi |}{ |\eta |}<3\}.$ Before we start, we remark that, in the triangle defined by $\xi ,\eta $ and $\xi -\eta ,$ we have that $$\frac{|\eta -\xi |}{\sin \theta }=\frac{|\xi |}{\sin \beta }=\frac{|\eta |}{
\sin \gamma }.$$ **Case 1. The region** $\Omega _{1}=\{|\xi |<\frac{1}{2}|\eta |\}.$ In this case, $|\xi -\eta |\geq |\eta |-|\xi |>|\xi |,$ so $|\xi |$ has the smallest size. We also deduce that $|\xi -\eta |\simeq |\eta |$ and consequently, since $p(\xi)\le p(\eta)$ $$|\Phi _{1}(\xi ,\eta )|=|p(\xi )-p(\xi -\eta )-p(\eta )|\gtrsim \max \{|\eta
|,|\xi -\eta |\}.$$ We note that since $p^{\prime }$ is bounded, $|\nabla _{\xi ,\eta }\Phi
_{1}|\lesssim 1$ and from Lemma \[EstimDerOfPhi\], we obtain that $$\begin{aligned}
|\nabla _{\xi ,\eta }\{\frac{1}{\Phi _{1}}\}| &=&\left\vert -\frac{\nabla
_{\xi ,\eta }\Phi _{1}}{\Phi _{1}^{2}}\right\vert \lesssim \frac{1}{\{|\eta
|+|\xi -\eta |\}^{2}}, \\
|\Delta _{\xi }\{\frac{1}{\Phi _{1}}\}| &=&\left\vert -\frac{\Delta _{\xi
}\Phi _{1}}{\Phi _{1}^{2}}+2\frac{|\nabla _{\xi }\Phi _{1}|^{2}}{\Phi
_{1}^{3}}\right\vert \\
&\lesssim &\frac{1}{\{|\eta |+|\xi -\eta |\}^{2}}\left\{ \frac{1}{ \{1+|\xi
|^{3}\}}+\frac{1}{|\xi|}\right\} +\frac{1}{\{|\eta |+|\xi -\eta |\}^{3}} \\
&\lesssim & \frac{1}{|\xi |^{3}}, \\
|\Delta _{\eta }\{\frac{1}{\Phi _{1}}\}| &=&\left\vert -\frac{\Delta _{\eta
}\Phi _{1}}{\Phi _{1}^{2}}+2\frac{|\nabla _{\eta }\Phi _{1}|^{2}}{\Phi
_{1}^{3}}\right\vert \\
&\lesssim &\frac{1}{\{|\eta |+|\xi -\eta |\}^{2}}\frac{1}{|\eta |} +\frac{1}{%
\{|\eta |+|\xi -\eta |\}^{3}} \lesssim \frac{1}{|\eta |^{3}}.\end{aligned}$$ Recall the definition of $\chi,\varphi$ from and denote $%
g=\frac{|\xi ||\xi -\eta ||\eta |}{\langle \xi -\eta \rangle ^{2\lambda
}\langle \eta \rangle ^{2\lambda }}\varphi (\frac{\xi }{N} )\chi (\frac{%
2|\xi |}{|\eta |}),$ so that $$\begin{aligned}
\left\vert\mathfrak{M}_{1}\varphi (\frac{\xi}{N})\chi (\frac{2|\xi |}{|\eta |%
} )\right\vert&\lesssim & \frac{1}{\langle N\rangle ^{4\lambda-2}}\varphi (%
\frac{\xi }{N}),\hskip.1cm\hbox{and} \\
\left\vert \Delta _{\xi }\{\mathfrak{M}_{1}\varphi (\frac{\xi}{N})\chi (%
\frac{2|\xi |}{ |\eta |})\}\right\vert &=&\left\vert \Delta _{\xi }\{\frac{1%
}{\Phi _{1}}\}g+2\nabla _{\xi }\{ \frac{1}{\Phi _{1}}\}\cdot \nabla _{\xi }g+%
\frac{1}{\Phi _{1}}\Delta _{\xi }g\right\vert \\
&\lesssim &\frac{1}{N^{2}\langle N\rangle ^{4\lambda-2}}\varphi(\frac{\xi}{N}%
).\end{aligned}$$ We thus have that $$\begin{aligned}
||\mathfrak{M}_{1}\varphi (\frac{\xi}{N})\chi (\frac{2|\xi |}{|\eta |}
)||_{L_{\xi }^{2}} &\lesssim &\frac{N^{3/2}}{\langle N\rangle ^{4\lambda-2}}
\\
\Vert\Delta _{\xi }\{\mathfrak{M}_{1}\varphi (\frac{\xi}{N})\chi (\frac{%
2|\xi | }{|\eta |})\}\Vert_{L^{2}}+\Vert\Delta _{\eta }\{\mathfrak{M}%
_{1}\varphi (\frac{\eta }{N})\chi (\frac{2|\xi |}{|\eta |})\}\Vert_{L^{2}}
&\lesssim &\frac{1}{ N^{1/2}\langle N\rangle ^{4\lambda-2}}.\end{aligned}$$ Interpolating between the above estimates, we get that for any $\varepsilon
>0$ and any fixed $\eta$, $$\begin{aligned}
\Vert \mathfrak{M}_{1}\chi (\frac{2|\xi |}{ |\eta |})\Vert_{\dot{H}_{\xi
}^{\sigma}}&\lesssim&\sum_{N}\Vert\mathfrak{M}_{1}\varphi (\frac{\xi}{N}%
)\chi (\frac{2|\xi |}{|\eta |})\Vert_{\dot{H}_{\xi }^{\sigma }} \\
&\lesssim &\sum_{N}||\mathfrak{M}_{1}\varphi (\frac{\xi}{N})\chi (\frac{
2|\xi |}{|\eta |})\Vert_{L^{2}}^{1-\frac{\sigma }{2}} \Vert\Delta _{\xi }\{%
\mathfrak{M}_{1}\varphi (\frac{\xi}{N})\chi (\frac{2|\xi |}{|\eta |}
)\}\Vert_{L^{2}}^{\frac{\sigma }{2}} \\
&\lesssim &\sum_{N}\frac{N^{\frac{3}{2}-\sigma }}{\langle N\rangle
^{4\lambda-2}},\end{aligned}$$ which is summable in $N$ for $\lambda >1$ and $0\le\sigma <\frac{3}{2}.$ The same proof (switching $\xi $ to $\eta $) works for $\sum_{N}||\mathfrak{M}%
_{1}\varphi (\frac{\eta}{N})\chi (\frac{2|\xi |}{|\eta |})||_{\dot{H}_{\eta
}^{\sigma }}.$ Both (\[mglobal\]) and (\[mlocal\]) are valid in this case.
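For the record, the summability used above is elementary: the low frequencies contribute $\sum_{N\leq 1}N^{\frac{3}{2}-\sigma }$, which is finite precisely when $\sigma <\frac{3}{2}$, while the high frequencies contribute $\sum_{N\geq 1}N^{\frac{3}{2}-\sigma -(4\lambda -2)}$, which converges as soon as $4\lambda -2>\frac{3}{2}-\sigma $; for $\lambda >1$ this holds for every $\sigma \geq 0$.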
**Case 2. In the region** $\Omega _{2}=\{|\eta |\leq \frac{1}{2}|\xi
|\}.$ We note that $|\eta |$ is the smallest, and $|\xi -\eta
|\simeq |\xi |.$ We first claim that $$|\Phi _{1}|\gtrsim |\eta |\{\theta ^{2}+\frac{|\xi |^{2}}{\langle \eta
\rangle ^{2}\langle \xi \rangle ^{2}}\}\equiv |\eta |(\theta ^{2}+d^{2}).
\label{d}$$In fact, if $|\xi |$ is not the largest, then we know that $|\Phi _{1}|\geq
|\eta |$ and the claim is clearly valid. If $|\xi |$ is the largest, then $%
\theta $ is the angle between $|\xi |$ and $|\eta |,$ and $1-\cos \theta
\gtrsim \theta ^{2}$. Therefore we deduce from Lemma \[EstimPhiGen\].
We note from Lemma \[EstimDerOfPhi\] that in this case, $$\begin{aligned}
& |\nabla _{\xi }\Phi _{1}(\xi ,\eta )|\lesssim \frac{|\eta |}{1+|\xi |^{3}}%
+|\sin \gamma |, \\
& |\Delta _{\xi }\Phi _{1}(\xi ,\eta )|\lesssim \frac{|\eta |}{1+|\xi |^{4}}+%
\frac{|\eta |}{|\xi |^{2}}\lesssim \text{ }\frac{|\eta |}{|\xi |^{2}}.\end{aligned}$$Besides, using that $\frac{\sin \gamma }{|\eta |}=\frac{\sin \beta }{|\xi |}$, the inequality above and , we can obtain that $$\begin{split}
|\nabla _{\xi }\{\frac{1}{\Phi _{1}}\}|& =\left\vert -\frac{\nabla _{\xi
}\Phi _{1}}{\Phi _{1}^{2}}\right\vert \lesssim \frac{1}{|\eta |\{\theta
^{2}+d^{2}\}^{2}(1+|\xi |^{3})}+\frac{\sin \beta }{|\eta ||\xi |\{\theta
^{2}+d^{2}\}^{2}}, \\
|\Delta _{\xi }\{\frac{1}{\Phi _{1}}\}|& =\left\vert -\frac{\Delta _{\xi
}\Phi _{1}}{\Phi _{1}^{2}}+2\frac{|\nabla _{\xi }\Phi _{1}|^{2}}{\Phi
_{1}^{3}}\right\vert \\
& \lesssim \frac{1}{|\eta |\{\theta ^{2}+d^{2}\}^{2}}\frac{1}{|\xi |^{2}}+%
\frac{\frac{|\eta |^{2}}{(1+|\xi |^{3})^{2}}+\sin ^{2}\gamma }{|\eta
|^{3}\{\theta ^{2}+d^{2}\}^{3}} \\
& \lesssim \frac{1}{|\eta |\{\theta ^{2}+d^{2}\}^{2}|\xi |^{2}}+\frac{1}{%
(1+|\xi |^{3})^{2}|\eta |\{\theta ^{2}+d^{2}\}^{3}}+\frac{\sin ^{2}\beta }{%
|\eta ||\xi |^{2}\{\theta ^{2}+d^{2}\}^{3}}.
\end{split}
\label{ComputationOfNablaxi}$$Now, for fixed $\eta ,$ and for any cutoff function for $\chi (\frac{2|\eta |%
}{|\xi |}),$ denote $$g=\frac{|\xi ||\xi -\eta ||\eta |}{\langle \xi -\eta \rangle ^{2\lambda
}\langle \eta \rangle ^{2\lambda }}\varphi (\frac{\xi }{N})\chi (\frac{%
2|\eta |}{|\xi |})$$since $|\xi |\backsim N$ and $|\eta |\leq |\xi |,$ direct computation yields $$\begin{aligned}
|\nabla _{\xi }g| &\lesssim &\frac{1}{|\xi |}\frac{|\xi ||\xi -\eta ||\eta |%
}{\langle \xi -\eta \rangle ^{2\lambda }\langle \eta \rangle ^{2\lambda }}%
\mathbf{1}_{|\xi |\backsim N,|\eta |\leq |\xi |}, \\
|\partial _{\xi }^{2}g| &\lesssim &\frac{1}{|\xi |^{2}}\frac{|\xi ||\xi
-\eta ||\eta |}{\langle \xi -\eta \rangle ^{2\lambda }\langle \eta \rangle
^{2\lambda }}\mathbf{1}_{|\xi |\backsim N,|\eta |\leq |\xi |}.\end{aligned}$$Therefore, we have that $$|\mathfrak{M}_{1}\varphi (\frac{\xi }{N})\chi (\frac{2|\xi |}{|\eta |}%
)|\lesssim \frac{|\xi |^{2}}{\langle N\rangle ^{2\lambda }\langle \eta
\rangle ^{2\lambda }(\theta ^{2}+d^{2})}\mathbf{1}_{|\xi |\backsim N,|\eta
|\leq |\xi |},$$and by we also have that $$\begin{aligned}
&&|\Delta _{\xi }\{\mathfrak{M}_{1}\varphi (\frac{\xi }{N})\chi (\frac{2|\xi
|}{|\eta |})\}| \\
&=&|\Delta _{\xi }\{\frac{1}{\Phi _{1}}\}g+2\nabla _{\xi }\{\frac{1}{\Phi
_{1}}\}\cdot \nabla _{\xi }g+\frac{1}{\Phi _{1}}\Delta _{\xi }g| \\
&\lesssim &\frac{|\xi |^{2}\mathbf{1}_{|\xi |\backsim N,|\eta |\leq |\xi |}}{%
\langle N\rangle ^{2\lambda }\langle \eta \rangle ^{2\lambda }}\left\{ \frac{%
1}{\{\theta ^{2}+d^{2}\}^{2}|\xi |^{2}}+\frac{1}{(1+|\xi |^{3})^{2}\{\theta
^{2}+d^{2}\}^{3}}+\frac{\sin ^{2}\beta }{|\xi |^{2}\{\theta ^{2}+d^{2}\}^{3}}%
\right\} .\end{aligned}$$By using $\frac{\eta }{|\eta |}$ as the north pole, we thus compute: $$\begin{split}
\int |\mathfrak{M}_{1}\varphi (\frac{\xi }{N})\chi (\frac{2|\xi |}{|\eta |}%
)|^{2}d\xi & \lesssim \frac{N^{4}}{\langle N\rangle ^{4\lambda }\langle \eta
\rangle ^{4\lambda }}\int_{|\xi |\backsim N}\frac{|\xi |^{2}\sin \theta }{%
(\theta ^{2}+d^{2})^{2}}d\xi d\theta \\
& =\frac{N^{6}}{\langle N\rangle ^{4\lambda }\langle \eta \rangle ^{4\lambda
}}\int_{|\xi |\sim N}\left\{ \int_{\theta \leq d}\frac{\theta d\theta }{d^{4}%
}+\int_{\theta \geq d}\frac{d\theta }{\theta ^{3}}\right\} d|\xi | \\
& \lesssim \frac{N^{7}}{\langle N\rangle ^{4\lambda }\langle \eta \rangle
^{4\lambda }}\frac{1}{d^{2}}\lesssim \frac{N^{5}}{\langle N\rangle
^{4\lambda -2}\langle \eta \rangle ^{4\lambda -2}}.
\end{split}
\label{2M2}$$Next, since $\beta =\theta +\gamma $ and $\gamma ,\beta \lesssim \theta ,$ and $d\backsim \frac{N}{\langle \eta \rangle \langle N\rangle },$ we have $$\begin{split}
& \int |\Delta _{\xi }\{\mathfrak{M}_{1}\varphi (\frac{\xi }{N})\chi (\frac{%
2|\xi |}{|\eta |})\}|^{2}d\xi \\
& \lesssim \frac{N^{6}}{\langle N\rangle ^{4\lambda }\langle \eta \rangle
^{4\lambda }}\int_{|\xi |\backsim N}\left\{ \frac{1}{\{\theta
^{2}+d^{2}\}^{4}N^{4}}+\frac{1}{\langle N\rangle ^{12}\{\theta
^{2}+d^{2}\}^{6}}+\frac{\sin ^{4}\beta }{N^{4}\{\theta ^{2}+d^{2}\}^{6}}%
\right\} \theta d\theta d|\xi | \\
& \lesssim \frac{N^{6}}{\langle N\rangle ^{4\lambda }\langle \eta \rangle
^{4\lambda }}\times \int_{|\xi |\backsim N}\Big(\left\{ \int_{\theta \leq d}%
\frac{\theta d\theta }{d^{8}N^{4}}+\int_{\theta \geq d}\frac{d\theta }{%
\theta ^{7}N^{4}}\right\} \\
& +\frac{1}{\langle N\rangle ^{12}}\left\{ \int_{\theta \leq d}\frac{\theta
d\theta }{d^{12}}+\int_{\theta \geq d}\frac{d\theta }{\theta ^{11}}\right\} +%
\frac{1}{N^{4}}\left\{ \int_{\theta \leq d}\frac{\theta ^{5}d\theta }{d^{12}}%
+\int_{\theta \geq d}\frac{d\theta }{\theta ^{7}}\right\} \Big)d|\xi | \\
& \lesssim \frac{N^{3}}{\langle N\rangle ^{4\lambda }\langle \eta \rangle
^{4\lambda }d^{6}}+\frac{N^{7}}{\langle N\rangle ^{4\lambda }\langle \eta
\rangle ^{4\lambda }\langle N\rangle ^{12}d^{10}}+\frac{N^{3}}{\langle
N\rangle ^{4\lambda }\langle \eta \rangle ^{4\lambda }d^{6}} \\
& \lesssim \frac{1}{\langle N\rangle ^{4\lambda -6}\langle \eta \rangle
^{4\lambda -6}N^{3}},
\end{split}
\label{2MH2}$$where we have used the fact $\frac{1+|\eta |^{2}}{1+N^{2}}\lesssim 1.$ By interpolation between and , $$\begin{aligned}
\Vert \mathfrak{M}_{1}\varphi (\frac{\xi }{N})\chi (\frac{2|\xi |}{|\eta |}%
)\Vert _{\dot{H}_{\xi }^{\sigma }} &\lesssim &\Vert \mathfrak{M}_{1}\varphi (%
\frac{\xi }{N})\chi (\frac{2|\xi |}{|\eta |})\Vert _{L^{2}}^{1-\frac{\sigma
}{2}}\Vert \Delta _{\xi }\{\mathfrak{M}_{1}\varphi (\frac{\xi }{N})\chi (%
\frac{2|\xi |}{|\eta |})\}\Vert _{L^{2}}^{\frac{\sigma }{2}} \\
&\lesssim &\frac{N^{\frac{5}{2}(1-\frac{\sigma }{2})}}{\langle N\rangle
^{2\lambda -1}\langle \eta \rangle ^{2\lambda -1}}\frac{\langle \eta \rangle
^{\sigma }\langle N\rangle ^{\sigma }}{N^{\frac{3}{4}\sigma }} \\
&\lesssim &\frac{\langle \eta \rangle ^{\sigma }\langle N\rangle ^{\sigma
}N^{\frac{5}{2}-2\sigma }}{\langle N\rangle ^{2\lambda -1}\langle \eta
\rangle ^{2\lambda -1}}\end{aligned}$$since $|\eta |\lesssim N.$ By taking $\sigma =\frac{5}{4}-\varepsilon ,$ this is summable in $N$ for $4\lambda -2>\frac{5}{2}.$ This concludes . On the other hand, in $\Omega $ (see ) we have $|\xi |\simeq |\xi -\eta |\geq 1$ so that $N\gtrsim 1$ and we deduce that this is summable for $N\geq 1$, $\sigma =3/2-\varepsilon $ when $%
\lambda >1.$ This concludes .
We now turn to the $\eta$ derivatives. Using again Lemma \[EstimDerOfPhi\] , we have that $$|\nabla _{\eta }\Phi _{1}(\xi ,\eta )|\lesssim \frac{|\xi |}{\langle \eta
\rangle ^{2}\langle \xi \rangle }+|\sin \beta |,\hskip.1cm\hbox{and}\hskip%
.1cm|\Delta _{\eta }\Phi _{1}(\xi ,\eta )|\lesssim \frac{1}{|\eta |}$$ and therefore $$\begin{aligned}
|\nabla _{\eta }\{\frac{1}{\Phi _{1}}\}| &=&\left\vert -\frac{\nabla _{\eta
}\Phi _{1}}{\Phi _{1}^{2}}\right\vert \lesssim \frac{|\xi |}{|\eta
|^{2}\{\theta ^{2}+d^{2}\}^{2}\langle \xi \rangle \langle \eta \rangle ^{2}}+%
\frac{\sin \beta }{|\eta |^{2}\{\theta ^{2}+d^{2}\}^{2}}, \\
|\Delta _{\eta }\{\frac{1}{\Phi _{1}}\}| &=&\left\vert -\frac{\Delta _{\eta
}\Phi _{1}}{\Phi _{1}^{2}}+2\frac{|\nabla _{\eta }\Phi _{1}|^{2}}{\Phi
_{1}^{3}}\right\vert \\
&\lesssim &\frac{1}{|\eta |^{3}\{\theta ^{2}+d^{2}\}^{2}}+\frac{1}{|\eta
|^{3}\{\theta ^{2}+d^{2}\}^{3}}\left\{ \frac{|\xi |^{2}}{\langle \xi \rangle
^{2}\langle \eta \rangle ^{4}}+\sin ^{2}\beta \right\} .\end{aligned}$$Define $g$ by $$g=\frac{|\xi ||\xi -\eta ||\eta |}{\langle \xi -\eta \rangle ^{2\lambda
}\langle \eta \rangle ^{2\lambda }}\varphi (\frac{\eta }{M})\chi (\frac{%
2|\eta |}{|\xi |}).$$Since $|\eta |\backsim M$ and $|\eta |\lesssim |\xi |,$ direct computation yields $$\begin{aligned}
|\nabla _{\eta }g| &\lesssim &\frac{1}{|\eta |}\frac{|\xi ||\xi -\eta ||\eta
|}{\langle \xi -\eta \rangle ^{2\lambda }\langle \eta \rangle ^{2\lambda }}%
\mathbf{1}_{|\eta |\backsim M,|\eta |\leq |\xi |} \\
|\partial _{\eta }^{2}g| &\lesssim &\frac{1}{|\eta |^{2}}\frac{|\xi ||\xi
-\eta ||\eta |}{\langle \xi -\eta \rangle ^{2\lambda }\langle \eta \rangle
^{2\lambda }}\mathbf{1}_{|\eta |\backsim M,|\eta |\leq |\xi |}.\end{aligned}$$Hence, since $\sin \beta \lesssim \sin \theta $, $$\begin{aligned}
|\Delta _{\eta }\{\mathfrak{M}_{1}\varphi (\frac{\eta }{M})\chi (\frac{2|\xi
|}{|\eta |})\}| &=&|\Delta _{\eta }\{\frac{1}{\Phi _{1}}\}g+2\nabla _{\eta
}\{\frac{1}{\Phi _{1}}\}\cdot \nabla _{\eta }g+\frac{1}{\Phi _{1}}\Delta
_{\eta }g| \\
&\lesssim &\frac{|\xi |^{2}}{M^{2}\{\theta ^{2}+d^{2}\}^{2}\langle M\rangle
^{2\lambda }\langle \xi \rangle ^{2\lambda }} \\
&&+\frac{|\xi |^{2}}{M^{2}\{\theta ^{2}+d^{2}\}^{3}\langle M\rangle
^{2\lambda }\langle \xi \rangle ^{2\lambda }}\left\{ \frac{|\xi |^{2}}{%
\langle \xi \rangle ^{2}\langle \eta \rangle ^{4}}+\sin ^{2}\theta \right\} .\end{aligned}$$By using $\frac{\xi}{|\xi|}$ as the north pole, and $d\backsim \frac{|\xi |}{%
\langle M\rangle \langle \xi \rangle },$ we thus compute that $$\label{2M2eta}
\begin{split}
\int_{|\eta |\sim M}|\mathfrak{M}_{1}\varphi (\frac{\eta }{M})\chi (\frac{%
2|\xi |}{|\eta |})|^{2}d\eta &\lesssim \frac{|\xi |^{4}}{\langle M\rangle
^{4\lambda }\langle \xi \rangle ^{4\lambda }}\int_{|\eta |\backsim M}\frac{%
\sin \theta }{(\theta ^{2}+d^{2})^{2}}d|\eta |d\theta \\
&\lesssim \frac{|\xi |^{4}M^{2}}{\langle M\rangle ^{4\lambda }\langle \xi
\rangle ^{4\lambda }}\int_{|\eta |\backsim M}\left\{ \int_{\theta \leq d}%
\frac{\theta d\theta }{d^{4}}+\int_{\theta \geq d}\frac{d\theta }{\theta ^{3}%
}\right\} d|\eta | \\
&\lesssim \frac{|\xi |^{4}M^{3}}{\langle M\rangle ^{4\lambda }\langle \xi
\rangle ^{4\lambda }}\frac{1}{d^{2}}\lesssim \frac{|\xi |^{2}M^{3}}{\langle
M\rangle ^{4\lambda -2}\langle \xi \rangle ^{4\lambda -2}}.
\end{split}%$$ Next, since $\beta =\theta +\gamma $ and $\gamma ,\beta \lesssim \theta $, we have that $$\label{2MH2eta}
\begin{split}
&\int_{|\eta |\sim M}|\Delta _{\eta }\{\mathfrak{M}_{1}\varphi (\frac{\eta }{%
M})\chi (\frac{2|\xi |}{|\eta |})\}|^{2}d\eta \\
&\lesssim \frac{1}{M^{2}\langle M\rangle ^{4\lambda }\langle \xi \rangle
^{4\lambda }} \\
&\times \int_{|\eta |\backsim M}\left\{ \frac{|\xi |^{4}}{%
\{\theta ^{2}+d^{2}\}^{4}}+\frac{|\xi |^{8}}{\{\theta
^{2}+d^{2}\}^{6}\langle \xi \rangle ^{4}\langle \eta \rangle ^{8}}+\frac{%
|\xi |^{4}\sin ^{4}\theta }{\{\theta ^{2}+d^{2}\}^{6}}\right\} \theta
d\theta d|\eta | \\
&\lesssim \frac{|\xi |^{4}}{M^{2}\langle M\rangle ^{4\lambda }\langle \xi
\rangle ^{4\lambda }}\int_{|\eta |\backsim M}\Big(\left\{ \int_{\theta
\lesssim d}\frac{\theta d\theta }{d^{8}}+\int_{\theta \geq d}\frac{d\theta }{%
\theta ^{7}}\right\} \\
&+\frac{|\xi |^{8}}{\langle \xi \rangle ^{4}\langle M\rangle ^{8}}\left\{
\int_{\theta \leq d}\frac{\theta d\theta }{d^{12}}+\int_{\theta \geq d}\frac{%
d\theta }{\theta ^{11}}\right\} +\left\{ \int_{\theta \lesssim d}\frac{%
\theta ^{5}d\theta }{d^{12}}+\int_{\theta \geq d}\frac{d\theta }{\theta ^{7}}%
\right\} \Big)d|\eta | \\
&\lesssim \frac{|\xi |^{4}}{M\langle M\rangle ^{4\lambda }\langle \xi
\rangle ^{4\lambda }d^{6}}+\frac{|\xi |^{8}}{M\langle M\rangle ^{4\lambda
+8}\langle \xi \rangle ^{4\lambda +4}d^{10}}+\frac{|\xi |^{4}}{M\langle
M\rangle ^{4\lambda }\langle \xi \rangle ^{4\lambda }d^{6}} \\
&\lesssim \frac{1}{M\langle M\rangle ^{4\lambda -6}\langle \xi \rangle
^{4\lambda -6}|\xi |^{2}}.
\end{split}%$$ Interpolating between (\[2M2eta\]) and (\[2MH2eta\]), we obtain $$\begin{aligned}
\Vert \mathfrak{M}_{1}\varphi (\frac{\eta }{M})\chi (\frac{2|\xi |}{|\eta |}%
)\Vert _{\dot{H}_{\eta }^{\sigma }} &\lesssim &\Vert \mathfrak{M}_{1}\varphi
(\frac{\eta }{M})\chi (\frac{2|\xi |}{|\eta |})\Vert _{L^{2}}^{1-\frac{%
\sigma }{2}}\Vert \Delta _{\eta }\{\mathfrak{M}_{1}\varphi (\frac{\eta }{M}%
)\chi (\frac{2|\xi |}{|\eta |})\}\Vert _{L^{2}}^{\frac{\sigma }{2}} \\
&\lesssim &\left\{ \frac{|\xi |^{2}M^{3}}{\langle M\rangle ^{4\lambda
-2}\langle \xi \rangle ^{4\lambda -2}}\right\} ^{\frac{1}{2}-\frac{\sigma }{4%
}}\left\{ \frac{1}{M\langle M\rangle ^{4\lambda -6}\langle \xi \rangle
^{4\lambda -6}|\xi |^{2}}\right\} ^{\frac{\sigma }{4}} \\
&\lesssim &\frac{M^{\frac{3}{2}-\sigma }}{\langle M\rangle ^{2\lambda
-1-\sigma }\langle \xi \rangle ^{2\lambda -1-\sigma }|\xi |^{\sigma -1}}\end{aligned}$$By taking $\sigma =\frac{5}{4}-\varepsilon ,$ this is summable in $M$ if $%
2\lambda -1-\sigma >0$ and we conclude the global bound as before. On the other hand, in $\Omega ,$ we have $|\xi |\geq 1$ so that by taking $\sigma =\frac{3}{2}%
-\varepsilon ,$ this is summable for $M\geq 1$ if $\lambda >1$ and if $f=%
\frac{1}{\langle \xi -\eta \rangle ^{\frac{1}{2}}}$ or $\frac{1}{\langle
\eta \rangle ^{\frac{1}{2}}}$.
**Case 3 Region** $\Omega _{3}=\{\frac{1}{3}<\frac{|\xi |}{|\eta |}
<3\}. $
In this region, we have $|\xi -\eta |\leq 4\min (|\xi |,|\eta |)$, $|\xi
|\simeq |\eta |$ are of the order of the longest side and $\sin \gamma
\simeq \sin \beta $. Therefore $$|\Phi _{1}|\geq |\xi -\eta |(\gamma ^{2}+\beta ^{2}+\frac{|\xi |^{2}+|\eta
|^{2}}{(1+|\xi |^{2}+|\eta |^{2})\langle \xi -\eta \rangle ^{2}})\equiv |\xi
-\eta |(\gamma ^{2}+\beta ^{2}+d_{1}^{2}). \label{d1}$$The above lower bound is trivial if $|\xi |$ is not the largest. If $|\xi |$ is the largest and $|\xi -\eta |$ is not the smallest, then $\xi ,\eta ,\xi
-\eta $ are all comparable so that $\gamma \simeq \theta \simeq \pi -\beta $ and, from the lower bounds on $\Phi _{1}$ obtained earlier, $$|\Phi _{1}|\gtrsim \theta ^{2}|\eta |+\frac{|\eta |}{1+|\eta |^{2}}\gtrsim
|\xi -\eta |(\gamma ^{2}+\beta ^{2}+d_{1}^{2}).$$Finally, when $|\xi -\eta |$ is the smallest, this follows from the estimates of the previous cases. Moreover, arguing as in the previous cases, $$|\partial _{\xi ,\eta }\Phi _{1}(\xi ,\eta )|\lesssim \frac{|\eta |}{\langle
\eta \rangle \langle \xi -\eta \rangle ^{2}}+|\sin \gamma |,\text{ \ \ \ \ \
}|\Delta _{\xi ,\eta }\Phi _{1}(\xi ,\eta )|\lesssim \frac{1}{|\xi -\eta |}$$and therefore, $$\begin{aligned}
|\nabla _{\xi ,\eta }\{\frac{1}{\Phi _{1}}\}| &=&\left\vert -\frac{\nabla
_{\xi }\Phi _{1}}{\Phi _{1}^{2}}\right\vert \\
&\lesssim &\frac{|\eta |}{|\xi -\eta |^{2}(\beta ^{2}+\gamma
^{2}+d_{1}^{2})^{2}\langle \eta \rangle \langle \xi -\eta \rangle ^{2}}+%
\frac{\sin \gamma }{|\xi -\eta |^{2}(\beta ^{2}+\gamma ^{2}+d_{1}^{2})^{2}}
\\
|\Delta _{\xi ,\eta }\{\frac{1}{\Phi _{1}}\}| &=&\left\vert -\frac{\Delta
_{\xi }\Phi _{1}}{\Phi _{1}^{2}}+2\frac{|\nabla _{\xi }\Phi _{1}|^{2}}{\Phi
_{1}^{3}}\right\vert \\
&\lesssim &\frac{1}{|\xi -\eta |^{3}(\beta ^{2}+\gamma ^{2}+d_{1}^{2})^{2}}+%
\frac{|\eta |^{2}+|\xi |^{2}}{|\xi -\eta |^{3}(\beta ^{2}+\gamma
^{2}+d_{1}^{2})^{3}\langle \eta \rangle ^{2}\langle \xi -\eta \rangle ^{4}}
\\
&&+\frac{\sin ^{2}\gamma +\sin ^{2}\beta }{|\xi -\eta |^{3}(\beta
^{2}+\gamma ^{2}+d_{1}^{2})^{3}}.\end{aligned}$$For fixed $\eta $ and a dyadic number $N$, denote $$g=\frac{|\xi ||\xi -\eta ||\eta |}{\langle \xi -\eta \rangle ^{2\lambda
}\langle \eta \rangle ^{2\lambda }}\varphi (\frac{\xi -\eta }{N})\varphi (%
\sqrt{\frac{|\eta |}{|\xi |}})$$As before, direct computation yields $$\begin{aligned}
|\nabla _{\xi ,\eta }g| &\lesssim &\frac{1}{|\xi -\eta |}\frac{|\xi ||\xi
-\eta ||\eta |}{\langle \xi -\eta \rangle ^{2\lambda }\langle \eta \rangle
^{2\lambda }}\mathbf{1}_{|\xi -\eta |\backsim N,|\xi |\backsim |\eta |} \\
|\partial _{\xi ,\eta }^{2}g| &\lesssim &\frac{1}{|\xi -\eta |^{2}}\frac{%
|\xi ||\xi -\eta ||\eta |}{\langle \xi -\eta \rangle ^{2\lambda }\langle
\eta \rangle ^{2\lambda }}\mathbf{1}_{|\xi -\eta |\backsim N,|\xi |\backsim
|\eta |}.\end{aligned}$$Therefore, $$|\mathfrak{M}_{1}\varphi (\frac{\xi -\eta }{N})\varphi (\sqrt{\frac{|\eta |}{%
|\xi |}})|\lesssim \frac{|\eta |^{2}\mathbf{1}_{|\xi -\eta |\backsim N,|\xi
|\backsim |\eta |}}{\langle N\rangle ^{2\lambda }\langle \eta \rangle
^{2\lambda }(\beta ^{2}+d_{1}^{2})},$$and $$\begin{aligned}
&&|\Delta _{\xi }\{\mathfrak{M}_{1}\varphi (\frac{\xi -\eta }{N})\varphi (%
\sqrt{\frac{|\eta |}{|\xi |}})\}| \\
&\lesssim &\left\{ \frac{|\eta |^{2}}{(\beta ^{2}+d_{1}^{2})^{2}}+\frac{%
|\eta |^{4}}{(\beta ^{2}+d_{1}^{2})^{3}\langle \eta \rangle ^{2}\langle \xi
-\eta \rangle ^{4}}+\frac{|\eta |^{2}\sin ^{2}\beta }{(\beta
^{2}+d_{1}^{2})^{3}}\right\} \frac{\mathbf{1}_{|\xi -\eta |\backsim N,|\xi
|\backsim |\eta |}}{N^{2}\langle N\rangle ^{2\lambda }\langle \eta \rangle
^{2\lambda }}.\end{aligned}$$By using $-\frac{\eta }{|\eta |}$ as the north pole, and $d_{1}\backsim
\frac{|\eta |}{\langle \eta \rangle \langle N\rangle },$ we thus compute: $$\label{3M2}
\begin{split}
&\int |\mathfrak{M}_{1}\varphi (\frac{\xi -\eta }{N})\varphi (\sqrt{\frac{%
|\eta |}{|\xi |}})|^{2}d\xi \\
&\lesssim \frac{|\eta |^{4}N^{2}}{\langle N\rangle ^{4\lambda }\langle \eta
\rangle ^{4\lambda }}\int_{|\xi -\eta |\backsim N}\frac{\sin \beta }{(\beta
^{2}+d_{1}^{2})^{2}}d|\xi |d\beta \\
&=\frac{|\eta |^{4}N^{3}}{\langle N\rangle ^{4\lambda }\langle \eta \rangle
^{4\lambda }}\int_{|\xi -\eta |\sim N}\left\{ \int_{\beta \leq d_{1}}\frac{%
\beta d\beta }{d_{1}^{4}}+\int_{\beta \geq d}\frac{d\beta }{\beta ^{3}}%
\right\} d|\xi | \\
&\lesssim \frac{|\eta |^{4}N^{3}}{\langle N\rangle ^{4\lambda }\langle \eta
\rangle ^{4\lambda }}\frac{1}{d_{1}^{2}}\lesssim \frac{|\eta |^{2}N^{3}}{%
\langle N\rangle ^{4\lambda -2}\langle \eta \rangle ^{4\lambda -2}}.
\end{split}%$$ Next, we have that $$\label{3MH2}
\begin{split}
&\int_{|\xi -\eta |\sim N}|\Delta _{\xi }\{\mathfrak{M}_{1}\varphi (\frac{%
\xi -\eta }{N})\varphi (\sqrt{\frac{|\eta |}{|\xi |}})\}|^{2}d\xi \\
&\lesssim \frac{|\eta |^{4}}{\langle N\rangle ^{4\lambda }\langle \eta
\rangle ^{4\lambda }N^{4}}\int_{|\xi -\eta |\backsim N}\left\{ \frac{1}{%
(\beta ^{2}+d_{1}^{2})^{4}}+\frac{|\eta |^{4}}{(\beta
^{2}+d_{1}^{2})^{6}\langle \eta \rangle ^{4}\langle N\rangle ^{8}}+\frac{%
\beta ^{4}}{(\beta ^{2}+d_{1}^{2})^{6}}\right\} d(\xi -\eta ) \\
&\lesssim \frac{|\eta |^{4}N^{2}}{\langle N\rangle ^{4\lambda }\langle \eta
\rangle ^{4\lambda }N^{4}}\int_{|\xi -\eta |\backsim N}d|\xi -\eta |\Big(%
\left\{ \int_{\beta \leq d_{1}}\frac{\beta d\beta }{d_{1}^{8}}+\int_{\beta
\geq d_{1}}\frac{d\beta }{\beta ^{7}}\right\} \\
&+\frac{|\eta |^{4}}{\langle \eta \rangle ^{4}\langle N\rangle ^{8}}\left\{
\int_{\beta \leq d_{1}}\frac{\beta d\beta }{d_{1}^{12}}+\int_{\beta \geq
d_{1}}\frac{d\beta }{\beta ^{11}}\right\} +\left\{ \int_{\beta \leq d_{1}}%
\frac{\beta ^{5}d\beta }{d_{1}^{12}}+\int_{\beta \geq d_{1}}\frac{d\beta }{%
\beta ^{7}}\right\} \Big) \\
&\lesssim \frac{|\eta |^{4}}{\langle N\rangle ^{4\lambda }\langle \eta
\rangle ^{4\lambda }Nd_{1}^{6}}+\frac{|\eta |^{8}}{\langle N\rangle ^{4\lambda
+8}\langle \eta \rangle ^{4\lambda +4}Nd_{1}^{10}}+\frac{|\eta |^{4}}{%
\langle N\rangle ^{4\lambda }N\langle \eta \rangle ^{4\lambda }d_{1}^{6}} \\
&\lesssim \frac{1}{\langle N\rangle ^{4\lambda -6}\langle \eta \rangle
^{4\lambda -6}|\eta |^{2}N}.
\end{split}%$$ Interpolating between (\[3M2\]) and (\[3MH2\]), we have $$\begin{aligned}
&&||\mathfrak{M}_{1}\varphi (\frac{\xi -\eta }{N})\varphi (\sqrt{\frac{|\eta
|}{|\xi |}})||_{\dot{H}_{\xi }^{\sigma }} \\
&\lesssim &||\mathfrak{M}_{1}\varphi (\frac{\xi -\eta }{N})\varphi (\sqrt{%
\frac{|\eta |}{|\xi |}})||_{L^{2}}^{1-\frac{\sigma }{2}}||\Delta _{\xi }\{%
\mathfrak{M}_{1}\varphi (\frac{\xi -\eta }{N})\varphi (\sqrt{\frac{|\eta |}{%
|\xi |}})\}||_{L^{2}}^{\frac{\sigma }{2}} \\
&\lesssim &\left\{ \frac{|\eta |^{2}N^{3}}{\langle N\rangle ^{4\lambda
-2}\langle \eta \rangle ^{4\lambda -2}}\right\} ^{\frac{1}{2}-\frac{\sigma }{%
4}}\left\{ \frac{1}{\langle N\rangle ^{4\lambda -6}\langle \eta \rangle
^{4\lambda -6}|\eta |^{2}N}\right\} ^{\frac{\sigma }{4}} \\
&\lesssim &\frac{N^{\frac{3}{2}-\sigma }}{\langle N\rangle ^{2\lambda
-1-\sigma }\langle \eta \rangle ^{2\lambda -1-\sigma }|\eta |^{\sigma -1}}\end{aligned}$$as $N\lesssim |\eta |.$ By taking $\sigma =\frac{5}{4}-\varepsilon ,$ this is summable in $N$ when $4\lambda -2>\frac{5}{2}$, hence we deduce (\[mglobal\]). On the other hand, in $\Omega ,$ we know that $|\eta |\simeq
|\xi |\geq 1.$ Hence for $f=\frac{1}{\langle \xi -\eta \rangle ^{\frac{1}{2}}%
}$ or $\frac{1}{\langle \eta \rangle ^{\frac{1}{2}}}$, we can take $\sigma =%
\frac{3}{2}-\varepsilon $ and still get a convergent series; the corresponding bound in $\Omega $ therefore follows. The $\eta $ derivatives can be controlled similarly, since we have the same bounds for them.
The $L^{10}$ Bound and end of the proof {#SecL10}
=======================================
Estimating the $L^{10}$ bound
-----------------------------
Using the results of Section \[SecMult\], we can now estimate the last part of the $X$ norm.
\[EstimL10NormForAlphaProp\] Let $\alpha$ be a solution as above; then $$\label{EstimL10NormForAlpha}
\sup_{t\geq 0}(1+t)^{\frac{16}{15}}\Vert \alpha(t)\Vert _{W^{k,10}}\lesssim
\Vert \alpha _{0}\Vert _{Y}+\Vert \alpha \Vert _{X}^{2}.$$
We use Theorem \[CorGNT\] and Proposition \[EstimGenPhase\] to control the nonlinear terms. Our strategy is first to establish the Proposition for $\Phi _{1},$ and then to use symmetry to conclude all the other cases. We first deal with the cubic terms as follows. We let $$A(\xi-\eta)=\frac{n_2(\xi-\eta)}{\vert\xi-\eta\vert}\langle\xi-\eta%
\rangle^{2\lambda}\hskip.1cm\hbox{and}\hskip.1cm B(\eta)=\frac{n_3(\eta)}{%
\vert\eta\vert}\langle\eta\rangle^{2\lambda}$$ We first apply Proposition \[PropLinDecay\] to get that, for a typical term, $$\begin{split}
& \Vert (1-\Delta)^{\frac{k}{2}}\mathcal{F}^{-1}\left\{%
\int_{0}^{t}e^{i(t-s)p(|\xi |)}\frac{im^1(\xi ,\eta )}{\Phi_{1}}\hat{\alpha}%
(s,\xi -\eta ) \hat{h}(\alpha)(s,\eta)ds\right\}\Vert _{L^{10}} \\
\lesssim &\int_{0}^{t}\frac{1}{(1+t-s)^{\frac{16}{15}}}\Vert \mathcal{F}
_{\xi }^{-1}\left\{n_{1}(\xi )\langle \xi \rangle ^{k}\int_{\mathbb{R}^{3}}%
\frac{|\xi |}{\Phi _{1}}n_{2}(\xi -\eta)\hat{\alpha}(\xi -\eta)n_{3}(\eta )%
\hat{h}(\eta )d\eta\right\} \Vert _{W^{\frac{12}{5},\frac{10}{9}}}ds \\
\lesssim &\int_{0}^{t}\frac{1}{(1+t-s)^{\frac{16}{15}}}\Vert \mathcal{F}
_{\xi }^{-1}\left\{\langle \xi \rangle ^{k+\frac{12}{5}}\int_{\mathbb{R}%
^{3}} \mathfrak{M}_{1}A(\xi-\eta)\hat{\alpha}(\xi -\eta )B(\eta)\hat{h}(\eta
)\right\}\Vert _{L^{\frac{10}{9}}}ds \\
\lesssim &\int_{0}^{t}\frac{1}{(1+t-s)^{\frac{16}{15}}}\Big(\Vert
A(\vert\nabla\vert)\alpha \Vert_{H^{k+\frac{12}{5}}}\Vert
B(\vert\nabla\vert)\beta \Vert_{L^{l_2}}+\Vert
A(\vert\nabla\vert)\alpha\Vert_{L^{l_2}}\Vert B(\vert\nabla\vert)\beta
\Vert_{H^{k+\frac{12}{5}}}\Big) ds \\
\lesssim &\int_{0}^{t}\frac{1}{(1+t-s)^{\frac{16}{15}}}\left( \Vert \alpha
\Vert _{H^{-1}\cap H^{k+2\lambda+\frac{7}{5}}}\Vert |\nabla|^{-1}\beta \Vert
_{H^{3}}+\Vert \alpha \Vert _{H^{-1}\cap H^{2}}\Vert
\vert\nabla\vert^{-1}\beta \Vert _{H^{k+2\lambda+\frac{12}{5}}}\right) ds \\
\lesssim &(1+t)^{-\frac{16}{15}}\Vert \alpha \Vert _{X}^{3}
\end{split}%$$ since $k\ge 2\lambda+\frac{7}{5}$. Here we have applied Lemma \[infinity\] around $s=\frac{5}{4}-\varepsilon,$ Proposition \[EstimGenPhase\] and Theorem \[CorGNT\] with $l_{1}=10$, $b=\infty$, and $l_{2}= \frac{60}{29+20\varepsilon} >2$. To finish the analysis of the cubic terms, we also need to control the pre-normal-form cubic term. We use the fact that $e^{itp(\vert\nabla\vert)}$ is a unitary operator on $H^{k}$, together with the bounds already established, to get that $$\begin{split}
\Vert \mathcal{F}^{-1}\int_0^te^{i(t-s)p(\xi)}\hat{\mathcal{N}}%
_1(\alpha)(s)ds\Vert_{W^{k,10}}&\lesssim \int_0^t\Vert
e^{i(t-s)p(\vert\nabla\vert)}\mathcal{N}_1(\alpha)(s)\Vert_{H^{k+2}}ds \\
&\lesssim \int_0^t\Vert\vert\nabla\vert^{-1}\mathcal{N}_1(\alpha)(s)%
\Vert_{H^{k+3}}ds \\
&\lesssim \Vert\alpha\Vert_X^2
\end{split}%$$
To estimate the integrated term $\mathfrak{B}$, we need to separate the analysis into regions. First we control the integrated part when all the frequencies are small, $$M=\max (|\xi |,|\xi -\eta |,|\eta |)<3.$$ To do this, we first use Sobolev’s embedding $\dot{H}^{\frac{6}{5}}\subset L^{10}$ and the fact that, for bounded $\xi $, $|\xi |^{\frac{6}{5}}\lesssim |\xi |\lesssim |\xi -\eta |+|\eta |$ to get[^2] $$\begin{split}
& \Vert \mathfrak{B}(\alpha (t),\alpha (t))\Vert _{L^{10}} \\
& \lesssim \Vert \mathcal{F}_{\xi }^{-1}\left\{|\xi |^{\frac{6}{5}}\int_{%
\mathbb{R}^{3}}\frac{|\xi ||\xi -\eta |||\eta |}{i\Phi _{1}}\frac{m}{|\xi |}%
\frac{\hat{\alpha}(t,\xi -\eta )}{|\xi -\eta |}\frac{\hat{\alpha}(t,\eta )}{%
|\eta |}d\eta\right\} \Vert _{L^{2}} \\
& \lesssim \Vert \int_{\mathbb{R}^{3}}\mathfrak{M}_{1}n_{1}(\xi )n_{2}(\xi
-\eta )n_{3}(\eta )\langle \xi -\eta \rangle ^{2\lambda }\hat{\alpha}(t,\xi
-\eta )\frac{\langle \eta \rangle ^{2\lambda }\hat{\alpha}(t,\eta )}{|\eta |}%
d\eta \Vert _{L^{2}} \\
& \lesssim \Vert n_{2}(|\nabla |)\alpha \Vert _{L^{10}}\Vert |\nabla
|^{-1}n_{3}(|\nabla |)\alpha \Vert _{L^{2}}\lesssim \Vert \alpha \Vert
_{X}\Vert \alpha \Vert _{L^{10}}
\end{split}%$$ where we have applied Proposition \[EstimGenPhase\] for $\mathfrak{M}_{1}$ for $s=\frac{5}{4} -\varepsilon$, Lemma \[infinity\], and Lemma [LemGNT]{} with $l_{1}=2$, $b=\infty $, $l_{2}=10$ and $l_{3}=\frac{60}{%
29+20\varepsilon}>2$.
Next we deal with the case when one of the frequencies is large $M>1.$ This can happen in two cases. First, if $|\xi |\leq 1,M>1$. In this case, we have $|\eta |\simeq |\xi -\eta |\geq 1.$ We bound the $L^{10}$ norm by the $L^{2}$ norm via Sobolev’s inequality (with bounded $\xi $) to get $$\begin{split}
\Vert \mathfrak{B}\Vert _{L^{10}}& \lesssim \Vert \mathcal{F}_{\xi
}^{-1}\int_{\mathbb{R}^{3}}\frac{m(\xi ,\eta )}{i\Phi _{1}}\chi \hat{\alpha}%
(t,\xi -\eta )\hat{\alpha}(t,\eta )d\eta \Vert _{L^{10}} \\
& \lesssim \Vert \mathcal{F}_{\xi }^{-1}\int_{\mathbb{R}^{3}}n_{1}(\xi )f
\mathfrak{M}_{j}n_{2}(\xi -\eta )\frac{\langle \xi -\eta \rangle ^{2\lambda +%
\frac{1}{4}}\hat{\alpha}(t,\xi -\eta )}{|\xi -\eta |}n_{3}(\eta )\frac{%
\langle \eta \rangle ^{2\lambda +\frac{1}{4}}\hat{\alpha}(t,\eta )}{|\eta |}%
d\eta \Vert _{L^{10}} \\
& \lesssim \Vert \mathcal{F}_{\xi }^{-1}\int_{\mathbb{R}^{3}}f\mathfrak{M}%
_{j}n_{2}(\xi -\eta )\frac{\langle \xi -\eta \rangle ^{2\lambda +\frac{1}{4}}%
\hat{\alpha}(t,\xi -\eta )}{|\xi -\eta |}n_{3}(\eta )\frac{\langle \eta
\rangle ^{2\lambda +\frac{1}{4}}\hat{\alpha}(t,\eta )}{|\eta |}d\eta \Vert
_{L^{2}} \\
& \lesssim \Vert \alpha \Vert _{H^{2}}\Vert n_{3}(|\nabla |)\alpha \Vert
_{W^{2,\frac{3}{\varepsilon }}}\lesssim \Vert \alpha \Vert _{X}\Vert \alpha
\Vert _{W^{3,10}}.
\end{split}%$$ We have applied Proposition \[EstimGenPhase\] with $s=\frac{3}{2}%
-\varepsilon ,$ Lemma \[infinity\] around $s=\frac{3}{2}-\varepsilon$ and Lemma \[LemGNT\] with $s=\frac{3}{2} -\varepsilon$, $b=\infty$, $l_{1}=2$, $l_{2}=10$ and $l_{3}=\frac{15}{6+5\varepsilon}>2.$ This concludes the estimates in the region $\{|\xi |\leq 1\}\cap \{M\geq 1\}$.
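The passage from the $L^{10}$ norm to the $L^{2}$ norm above uses only that $|\xi |\lesssim 1$ on the support of the integrand; one way to see this standard Bernstein-type fact, recorded here only for the reader's convenience, is $$\Vert \mathcal{F}^{-1}g\Vert _{L^{10}}\lesssim \Vert \mathcal{F}^{-1}g\Vert _{L^{2}}^{\frac{1}{5}}\Vert \mathcal{F}^{-1}g\Vert _{L^{\infty }}^{\frac{4}{5}}\lesssim \Vert g\Vert _{L^{2}}^{\frac{1}{5}}\Vert g\Vert _{L^{1}}^{\frac{4}{5}}\lesssim \Vert g\Vert _{L^{2}},$$valid whenever $g$ is supported in a ball of bounded radius.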
The other case is included in the region $$\Omega =\{|\eta |\leq 2|\xi -\eta |,|\xi |\geq 1/2\}\cup \{|\eta |>|\xi
-\eta |,|\xi |\geq 1/2\}$$ and leads to the worst loss in derivatives (whereas the region where all frequencies are small leads to the loss of smoothness of the multiplier and hence to the loss of decay in time). In the case $|\eta |\leq 2|\xi
-\eta |,$ we choose $f=\frac{\chi }{\langle \eta \rangle ^{\frac{1}{2}}}$ in Proposition \[EstimGenPhase\]. We apply Lemma \[infinity\] to deduce that $$\frac{\vert\xi \vert^{k+\frac{6}{5}}}{\langle \xi -\eta \rangle ^{k+ \frac{6%
}{5}}}\mathfrak{M}_{1}f\in M_{\xi ,\eta }^{\frac{3}{2}-\varepsilon }.$$ Hence $$\begin{split}
& \Vert \vert\nabla\vert^k\mathfrak{B}(\alpha ,\alpha )\Vert _{L^{10}} \\
& \lesssim \Vert \mathcal{F}_{\xi }^{-1}\left\{ |\xi |^{k}n_{1}(\xi )\int_{
\mathbb{R}^{3}}f\mathfrak{M}_{j}n_{2}(\xi -\eta)\frac{\langle \xi -\eta
\rangle ^{2\lambda}\hat{\alpha}(\xi -\eta )}{|\xi -\eta|}n_{3}(\eta )\frac{
\langle \eta \rangle ^{2\lambda +\frac{1}{2}}\hat{\alpha}(\eta )}{|\eta |}
d\eta \right\} \Vert _{L^{10}} \\
& \lesssim \Vert \mathcal{F}^{-1}\int_{\mathbb{R}^{3}}[\frac{|\xi |^{k+\frac{%
6}{5}}}{\langle \xi -\eta \rangle ^{k+\frac{6}{5}}}\mathfrak{M}%
_{j}f]n_{2}(\xi -\eta )\frac{\langle \xi -\eta \rangle ^{k+2\lambda +\frac{6%
}{5}}\hat{\alpha}(\xi -\eta )}{|\xi -\eta |}n_{3}(\eta )\frac{\langle \eta
\rangle ^{2\lambda +\frac{1}{2}}}{|\eta |}\hat{\alpha}(\eta )d\eta \Vert
_{L^{2}} \\
& \lesssim \Vert \frac{|\xi |^{k+\frac{6}{5}}}{\langle \xi -\eta \rangle ^{k+%
\frac{6}{5}}}\mathfrak{M}_{j}f\Vert _{\mathcal{M}_{\xi ,\eta }^{\frac{3}{2}%
-\varepsilon }}\Vert |\nabla |^{k+\frac{11}{5}+2\delta }n_{2}(|\nabla
|)\alpha \Vert _{L^{l_{3}}}\Vert \frac{(1-\Delta )^{2}}{|\nabla |}%
n_{3}(|\nabla |)\alpha \Vert _{L^{l_{2}}} \\
& \lesssim \Vert |\nabla |^{k+\frac{11}{5}+2\delta }\alpha \Vert
_{L^{l_{2}}}\Vert \frac{(1-\Delta )^{2}}{|\nabla |}\alpha \Vert _{L^{l_{3}}}.
\end{split}%$$ We have applied Lemma \[LemGNT\] with $s=\frac{3}{2}-\varepsilon ,$ $\frac{1}{l_{2}}=%
\frac{14}{50}+\frac{17}{15}\varepsilon $ and $\frac{1}{l_{3}}=\frac{11}{50}-%
\frac{4}{5}\varepsilon $. Now, using Bernstein estimates, we compute that $$\begin{split}
\Vert P_{\leq 1}\frac{(1-\Delta )^{2}}{|\nabla |}\alpha \Vert _{L^{l_{2}}}&
\lesssim \sum_{N\leq 1}N^{-1}\Vert P_{N}\alpha \Vert _{L^{l_{2}}} \\
& \lesssim \sum_{N\leq 1}N^{-1}N^{3\left( \frac{1}{l_{4}}-\frac{1}{l_{2}}%
\right) }\Vert P_{N}\alpha \Vert _{L^{l_{4}}} \\
& \lesssim \sum_{N\leq 1}N^{\varepsilon }\left( N^{-1}\Vert P_{N}\alpha
\Vert _{L^{2}}\right) ^{1-\sigma }\Vert P_{N}\alpha \Vert _{L^{10}}^{\sigma
}\lesssim \Vert \alpha \Vert _{X}^{1-\sigma }\Vert \alpha \Vert
_{L^{10}}^{\sigma },
\end{split}%$$ for $$\sigma =\frac{5}{11}\left( \frac{3}{b}-2\varepsilon \right) =\frac{3}{10}+
\frac{2\varepsilon }{11},\hskip.1cm\hbox{and}\hskip.1cm \frac{1}{l_{4}}=%
\frac{1}{2 }-\frac{2\sigma }{5}=\frac{19}{50}-\frac{4}{55}\varepsilon$$ while for the high frequencies, we have that $$\begin{split}
\Vert P_{\geq 1}\frac{(1-\Delta )^{2}}{|\nabla |}\alpha \Vert _{L^{l_{2}}}&
\lesssim \Vert (1-\Delta )^{\frac{3}{2}}\alpha \Vert _{L^{10}}^{\sigma
}\Vert (1-\Delta )^{\frac{3}{2}}\alpha \Vert _{L^{l_{5}}}^{1-\sigma } \\
& \lesssim \Vert \alpha \Vert _{W^{3,10}}^{\sigma }\Vert \alpha \Vert
_{X}^{1-\sigma }
\end{split}%$$ for $\frac{1}{l_{5}}=(1/4+184\varepsilon )/(7/10-2/11\varepsilon )$. Independently, we have that $$\begin{split}
\Vert |\nabla |^{k+\frac{11}{5}+2\delta }\alpha \Vert _{L^{l_{3}}}& \lesssim
\Vert (1-\Delta )^{\frac{k}{2}}\alpha \Vert _{L^{10}}^{1-\sigma }\Vert
(1-\Delta )^{\frac{k}{2}+(\frac{11}{10}+2\delta )\frac{1}{\sigma }}\alpha
\Vert _{L^{2}}^{\sigma } \\
& \lesssim \Vert \alpha \Vert _{W^{k,10}}^{1-\sigma }\Vert \alpha \Vert
_{X}^{\sigma }
\end{split}%$$ provided that $k>11/(5\sigma )=\frac{22}{3}+\varepsilon .$
In the case $|\eta |>|\xi -\eta |$ we proceed similarly with $f=\frac{\chi }{%
\langle \xi -\eta \rangle ^{1/2}}$. We therefore conclude the Proposition for $\Phi _{1}.$
We have now completed the proof for $j=1$. To establish the result for $j\neq 1,$ we note that the proposition is clearly valid for $\Phi _{2}$ because the proof in Case 1 shows that Proposition \[EstimGenPhase\] is also valid in this easier case (indeed, $%
\vert\Phi_2\vert\gtrsim \max(\vert\xi\vert,\vert\xi-\eta\vert,\vert\eta\vert)
$). For $\Phi _{4},$ we note $\Phi _{4}(\xi ,\eta )=-\Phi _{1}(\eta ,\xi ),$ and repeat the same proof in light of Proposition \[EstimGenPhase\]. Finally, for $\Phi _{3}(\xi ,\eta )=-\Phi _{1}(\xi -\eta ,\xi ),$ we make a change of integration variable $\eta \rightarrow \xi -\eta $ in the integrations in both the cubic terms and $\mathfrak{B}$ and get back to the previous case. We thus conclude the proof.
End of the proof
----------------
Now, we are ready to finish the proof of Theorem \[MainThm\].
The existence of a local regular solution $\beta\in C([0,T^\ast),X)$ follows from the standard method of Kato [@Kat]. Combining Proposition \[ControlL2NormProp\] and Proposition \[EstimL10NormForAlphaProp\], we obtain that $$\Vert \beta\Vert_X\lesssim \Vert \alpha(0)\Vert_{Y}+\Vert \beta\Vert_X^2$$ so that if $\Vert\alpha(0)\Vert_Y$ is sufficiently small, we get a global bound on the $X$-norm of the solution, which implies that $T^\ast=\infty$ and gives a global bound on the $X$-norms of $\rho$ and $v$. This ends the proof.
[99]{} Chen, Gui-Qiang; Jerome, J. W. and Wang, D., Compressible Euler-Maxwell equations. Proceedings of the Fifth International Workshop on Mathematical Aspects of Fluid and Plasma Dynamics (Maui, HI, 1998). Transport Theory Statist. Phys. 29 (2000), no. 3-5, 311–331.
Coifman, R. and Meyer, Y., Commutateurs d’intégrales singulières et opérateurs multilinéaires. *Ann. Inst. Fourier* (Grenoble) 28 (1978), no. 3, xi, 177–202.
Cordier, S. Grenier, E., Quasineutral limit of an Euler-Poisson system arising from plasma physics. Comm. Partial Differential Equations 25 (2000), no. 5-6, 1099–1113.
Feldman, M. Ha, S-Y. and Slemrod, M., Self-similar isothermal irrotational motion for the Euler, Euler-Poisson systems and the formation of the plasma sheath. *J. Hyperbolic Differ. Equ.* 3 (2006), no. 2, 233–246.
, A geometric level-set formulation of a plasma-sheath interface. *Arch. Ration. Mech. Anal.* 178 (2005), no. 1, 81–123.
Germain, P., Masmoudi, N. and Shatah, J., Global solutions for 3D quadratic Schrödinger equations, *Int. Math. Res. Not.*, 2009, no. 3, 414–432.
, Global solutions for the gravity water waves equation in dimension 3, preprint.
, Global solutions for 2D quadratic Schrödinger equations., preprint.
Guo, Y., Smooth irrotational Flows in the large to the Euler-Poisson system in $R^{3+1}$ *Commun. Math. Phys.* 195, (1998), 249–265.
Guo, Y. Tahvildar-Zadeh, A. S, Formation of singularities in relativistic fluid dynamics and in spherically symmetric plasma dynamics. Nonlinear partial differential equations (Evanston, IL, 1998), 151–161, *Contemp. Math.*, 238, Amer. Math. Soc., Providence, RI, 1999.
Guo, Z., Peng, L., and Wang, B., Decay estimates for a class of wave equations, J. Funct. Anal. 254 (2008), no. 6, 1642–1660.
Gustafson, S., Nakanishi, K. and Tsai, T.P. Global dispersive solutions for the Gross-Pitaevskii equation in two and three dimensions. *Ann. IHP* 8 (2007), no. 7, 1303–1331.
, Scattering theory for the Gross-Pitaevskii equation in three dimensions. *Commun. Contemp. Math.* 11 (2009), no. 4, 657–707.
John, F., Plane Waves and Spherical Means Applied to Partial Differential Equations, reprint.
Kato, T., The Cauchy problem for quasilinear symmetric systems, *Arch. Ration. Mech. Anal.* 58, (1975), 181–205.
Liu, H., Tadmor, E. Critical thresholds in 2D restricted Euler-Poisson equations. *SIAM J. Appl. Math.* 63 (2003), no. 6, 1889–1910 (electronic). 35Q35 (76X05)
Liu, H. Tadmor, E., Spectral dynamics of the velocity gradient field in restricted flows. *Comm. Math. Phys.* 228 (2002), no. 3, 435–466.
Muscalu, C., Paraproducts with flag singularities. I. A case study. *Rev. Mat. Iberoam.* 23 (2007), no. 2, 705–742.
Muscalu, C., Pipher, J., Tao, T., Thiele, C., Multi-parameter paraproducts. *Rev. Mat. Iberoam.* 22 (2006), no. 3, 963–976.
Peng, Y. Wang, S. Convergence of compressible Euler-Maxwell equations to compressible Euler-Poisson equations. *Chin. Ann. Math.* Ser. B 28 (2007), no. 5, 583–602.
Peng, Y. Wang, Ya-Guang, Boundary layers and quasi-neutral limit in steady state Euler-Poisson equations for potential flows. *Nonlinearity* 17 (2004), no. 3, 835–849.
Shatah, J., Normal forms and quadratic nonlinear Klein-Gordon equations. *Comm. Pure Appl. Math.* **38** (1985), No 5, 685–696.
Sideris, T. Formation of singularities in three-dimensional compressible fluids. *Commun. Math. Phys.* 101, (1985), 475–485.
Stein, E., *Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals*, volume 43 of *Princeton Mathematical Series*. Princeton University Press, Princeton, NJ, 1993. With the assistance of Timothy S. Murphy, Monographs in Harmonic Analysis, III.
Tao, T., *Nonlinear dispersive equations, local and global analysis.* CBMS. Regional Conference Series in Mathematics, 106. Published for the Conference Board of the Mathematical Science, Washington, DC; by the American Mathematical Society, Providence, RI, 2006. ISBN: 0-8218-4143-2.
Texier, B., WKB asymptotics for the Euler-Maxwell equations. [*Asymptot. Anal.*]{} 42 (2005), no. 3-4, 211–250.
Texier, B., Derivation of the Zakharov equations. [*Arch. Ration. Mech. Anal.*]{} 184 (2007), no. 1, 121–183.
Wang, D. Global solution to the equations of viscous gas flows. Proc. Roy. Soc. Edinburgh Sect. A 131 (2001), no. 2, 437–449.
Wang, D. Wang, Z. Large BV solutions to the compressible isothermal Euler-Poisson equations with spherical symmetry. *Nonlinearity* 19 (2006), no. 8, 1985–2004.
[^1]: for notational simplicity, we do not distinguish $m^j_{lr}(\xi,\eta)$ and $%
m^j_{rl}(\xi,\xi-\eta)$.
[^2]: Here we forget the difference between $n_2$ and $n_3$ and treat the terms as symmetric.
---
abstract: 'We suggest that the “Big Bang” may be a result of the first-order phase transition driven by changing scalar curvature of the 4D space-time in expanding cold Universe, filled with nonlinear scalar field $\phi $ and neutral matter with equation of state $p=\nu \varepsilon $ (where $p$ and $\varepsilon $ are pressure and energy density of matter). We consider a Lagrangian for scalar field in curved space-time with nonlinearity $\phi ^{4} $, which along with the quadratic term $-\xi R\left|\phi \right|^{2} $ (where $\xi $ is the interaction constant and $R$ is scalar curvature) contains a term $\sim \xi R\left(\phi +\phi ^{+} \right)$ linear in $\phi $. Due to this term the condition for the extrema of the potential energy of scalar field is given by a cubic equation. Provided $\nu >1/3$ the scalar curvature $R=[\kappa (3\nu -1)\varepsilon -4\Lambda ]$ (where $\kappa $ and $\Lambda $ are Einstein’s gravitational and cosmological constants) decreases along with decreasing $\varepsilon $ in the process of the Universe’s expansion, and at some critical value $R_{c} <0$ a first-order phase transition occurs, induced by an “external field” parameter proportional to $R$. Given certain conditions the critical radius of the early Universe at the point of the first-order phase transition may reach arbitrary large values, so this scenario of unrestricted “inflation” of the Universe may be called “hyperinflation”. Beyond the point of phase transition the system is rolling down into the potential minimum with the release of the potential energy of scalar field, accompanied by oscillations of its amplitude $\phi $. The powerful heating of the Universe in the course of attenuation of these oscillations plays the role of “Big Bang”.'
author:
- 'E.A. Pashitskii'
- 'V.I. Pentegov'
bibliography:
- 'Bang\_new\_translation.bib'
date: 'December 29, 2014'
title: '“Big Bang” as a result of first-order phase transition driven by changing scalar curvature in expanding early Universe: “hyperinflation” scenario'
---
Introduction
============
The question about the cause of the “Big Bang” starting the hot phase in the development of our Universe is still a central question of cosmology. The earliest stage of evolution of the cold pre-“Big Bang” Universe, which now may be traced only through the manifestations of the relict gravitational radiation, was considered first in the works of Gliner [@Gliner1969; @Gliner1970] and Starobinsky [@Starobinskii1979; @Starobinsky1980].
In the same time, starting with the works of Kirzhnits and Linde [@Kirzhnits1972; @Kirzhnits1972a; @Kirzhnits1975; @Kirzhnits1976] and Guth [@Guth1981] (see also [@Kazanas1980; @Sato1981; @Albrecht1982]), various inflation scenarios of the initially hot Universe were investigated. In these scenarios the first- or second-order phase transitions induced by temperature occurred in the process of expansion and cooling of the Universe with subsequent spontaneous braking of symmetries of various interactions and production of fields and particles from vacuum. A number of authors also considered first-order phase transitions, driven by the density changes [@Harrington1974; @Krive1976; @Lee1974; @Linde1976a; @Linde1979; @Krive1982] or by external fields and currents [@Salam1974; @Salam1975; @Linde1976; @Krive1976a]. The common shortcoming of the hot Universe scenarios is the existence of critical fluctuations during the second-order phase transitions or the formation of domains (“bubbles”) of new phase in the case of the first-order phase transitions, leading to a strong large-scale spatial inhomogeneity of matter and anisotropy of the relic radiation in the modern Universe, which contradicts astronomical observations.
Thereby Linde [@Linde1982; @Linde1983; @Linde1983a; @Linde1990] proposed a scenario of “chaotic inflation” for the early cold Universe, when the energy density of vacuum was determined by the potential energy density of a nonlinear scalar field $U\left(\phi \right)\le M_{P}^{4} $ (where $M_{P} \sim {1\mathord{\left/ {\vphantom {1 \sqrt{G} }} \right. \kern-\nulldelimiterspace} \sqrt{G} } $ is the Planck mass, presuming $\hbar =c=1$, and $G$ is the Newton gravitational constant). For the potentials $U\left(\phi \right)$ described by powers of $\phi $ with sufficiently small interaction constants and large initial values of the field amplitude $\phi \gg M_{P} $, the initial quantum fluctuations on the Planck scale $l_{P} \sim 1/M_{P} $ expand (“inflate”) gigantically, by many orders of magnitude exceeding the observable size of the present-day Universe, which provides explanation for its flat geometry, isotropy and high degree of large-scale homogeneity, as well as for the absence of domain walls and t’Hooft [@tHooft1974] – Polyakov [@Polyakov1974] monopoles in our world.
In the inflation scenarios the heating of the cold Universe to high temperatures (so called “reheating”, actually playing the role of “Big Bang”) occurs as the result of dissipation of the energy of oscillations of the scalar field’s amplitude, with energies of about $10^{12} -10^{14}$ GeV, in the vicinity of the potential minimum. The attenuation of these oscillations is due to both the expansion of the Universe and the production of various particles and antiparticles [@Kofman1997].
Later on a scenario of “hybrid inflation” [@Linde1994] was put forward, according to which there existed two different types of scalar fields in the early Universe, with significantly different equilibrium amplitudes and velocities of rolling down into the potential minimum. This approach allowed to reconcile the inflation theory with the theory of supergravitation [@Linde2004].
It is well to bear in mind though, that given the small scale and high scalar curvature of the early Universe the interaction of the fundamental scalar field with gravitation should have played a significant role in the processes of its evolution, which was not considered in [@Linde1982; @Linde1983; @Linde1983a; @Linde1990; @Kofman1997]. In accordance with the general principles of the quantum field theory in the 4-space with finite scalar curvature $R\ne 0$ (see [@Birrell1982]), the initial Lagrangian of the nonlinear scalar field $\phi $ should contain a term ${R\left|\phi \right|^{2} \mathord{\left/ {\vphantom {R\left|\phi \right|^{2} 6}} \right. \kern-\nulldelimiterspace} 6} $ [@Krive1976] (see also [@Spokoinyi1984]) quadratic in $\phi $, where the coefficient ${1\mathord{\left/ {\vphantom {1 6}} \right. \kern-\nulldelimiterspace} 6} $ ensures the conformal invariance of the theory in the limit of zero bosonic mass $\mu \to 0$. This leads to the renormalization of the parameter of self-action for the scalar field and of the Einstein gravitational constant $\kappa =8\pi G$ in the general relativity equations by the factor of about $\kappa \left|\phi \right|^{2} /3$. In particular, for the Higgs field [@Higgs1964; @Higgs1964a] with vacuum average $\phi _{H} \equiv \upsilon \approx 247$ GeV (see [@Weinberg1996]) this renormalization is anomalously small and has the order of magnitude of $G/G_{F} \sim 10^{-32} $ (where $G_{F} $ is the Fermi constant for the weak interaction).
As was shown in [@Zee1979; @Smolin1979; @Cervantes-Cota1995], the term of a more general form $-\xi R\left|\phi \right|^{2} $ in the Lagrangian of the Higgs field with $\mu \ne 0$, where the dimensionless constant $\xi $ may be treated as the constant of interaction of scalar and gravitational fields, leads in the framework of standard model to generation of mass of the order of the Planck one $\left(M_{P} \right) $ only for anomalously large values of the constant $\xi \ge 10^{34} $.
Another expression for this constant, $\xi \approx 4\cdot 10^{4} \cdot m_{H} /(\upsilon \sqrt{2})$, where $m_{H} $ is the mass of the Higgs boson, was obtained by the authors of [@Bezrukov2008] for some modified exponentially flat potential of the nonlinear scalar field in the regime of “slow roll” of the system into the ground state. Bearing in mind the recently established value $m_{H} \approx 125.5$ GeV [@ATLASCollaboration2012; @CMSCollaboration2012], we obtain for the constant of interaction of the Higgs field with gravitation the value $\xi \approx 1.44\cdot 10^{4} $, which is still quite large.
In the present paper, contrary to the Guth [@Guth1981] scenario with the first-order temperature-driven phase transition in the expanding Universe, we propose an alternative scenario of the evolution of the early cold Universe with the first-order phase transition, induced by the parameter of an “external field” proportional to scalar curvature. It is shown that this transition is possible given the following conditions:
\(i) the Universe born in a rather large quantum fluctuation is filled with some fundamental scalar field $\phi $, described by the Lagrangian which in the curved 4-space with $R\ne 0$ includes a term $\sim \xi R\left(\phi +\phi ^{+} \right)$ linear in $\phi $, playing the role of an “external field”, along with the quadratic in $\phi $ term $-\xi R\left|\phi \right|^{2} $.
\(ii) the early Universe with finite cosmological constant $\Lambda $ contains neutral cold matter, described by the equation of state $p=\nu \varepsilon $ with $\nu >1/3$, which ensures the decreasing of the scalar curvature $R=[\kappa (3\nu -1)\varepsilon -4\Lambda ]$ along with the decreasing of the matter’s energy density in the process of the Universe expansion.
In a sense, the model of evolution of the early cold Universe proposed hereafter may be viewed as a modification of the model of “hybrid inflation” [@Linde1994], where the role of the second (auxiliary) field is played by the “external field” parameter.
In section \[Sec\_Lagrangian\] of this paper we consider a modified Lagrangian of some fundamental complex scalar field $\phi $ with nonlinearity of $\phi ^{4} $ type, interacting with gravitational field, which is described by the term $-\xi R\left|(\phi -\phi _{0} )\right|^{2} $, where $\phi _{0} $ is the vacuum average of the scalar field amplitude. Hence, the Lagrangian contains both the standard term $-\xi R\left|\phi \right|^{2} $ quadratic in $\phi $ and the term $\xi R\phi _{0} \left(\phi +\phi ^{+} \right)$ linear in $\phi $ and $R$. The equation determining the extrema of the scalar field’s potential $U\left(\phi ,R\right)$ is a cubic one with respect to the real part of $\phi $, having three real roots for a certain range of $R$ values, describing two minima and one maximum of the potential $U(\phi ,R)$.
In section \[Sec\_Transition\] we introduce the dimensionless variables and obtain the potential of the nonlinear scalar field as the function of its amplitude for various values of the dimensionless “external field” $h={2\xi R\mathord{\left/ {\vphantom {2\xi R \mu ^{2} }} \right. \kern-\nulldelimiterspace} \mu ^{2} } $. We plot the $h$ dependencies of the real roots of the cubic equation and extremal values of the potential, and use this dependencies to analyze the conditions for the first-order phase transition in metastable state.
In section \[Sec\_Evolution\] we describe the parameter domains where self-consistent solutions exist for the general relativity equations describing the expanding early cold Universe, filled with scalar nonlinear field and neutral matter with equation of state $p={2\varepsilon \mathord{\left/ {\vphantom {2\varepsilon 3}} \right. \kern-\nulldelimiterspace} 3} $, which corresponds to a neutral non-relativistic ideal gas of massive fermions (see [@Landau1980]). It is assumed that this Fermi-gas consists of the equal numbers of particles and antiparticles with half-integer spin, which are born from the vacuum as the result of a large quantum fluctuation and obtain finite mass due to the interaction with scalar field, in accordance with the Higgs mechanism of mass generation [@Higgs1964; @Higgs1964a]. The interaction between fermions is taken to be weak, so that the time of particle-antiparticle annihilation significantly exceeds the time of the early Universe’s evolution to the point of phase transition. For nonzero energy density of the vacuum $\lambda $, which is determined by the energy density of the scalar field in the potential minimum where the early Universe resides, we obtain numerical solutions of the nonlinear general relativity equations, which give for various parameters the time dependencies of the Universe’s radius up to the moment of phase transition $t_{c} $, when the radius reaches maximal value $a_{c} =a(t_{c} )$. We show that these solutions exist only for large enough initial values of the radius of quantum fluctuation $a_{0} \ge 4.5l_{P} $, while the radius $a_{c} $ of the early Universe in the transition point and the scalar field’s energy $E_{c} \sim a_{c}^{3} $, released in the phase transition, diverge with $\xi \to \xi _{\min } $, where $\xi _{\min } $ is some limiting value of $\xi $, dependent on the parameters of the scalar field. Such a regime of unbounded inflation of the early Universe may be called “hyperinflation”. The time of the Universe’s evolution $t_{c} $ diverges as well when $\xi \to \xi _{\min } $ and the initial radius $a_{0} $ of quantum fluctuation approaches some limiting value $a_{0 \min } (\xi )$. One should remember though that $t_{c} $ may not exceed the annihilation time of the fermions and antifermions in the early Universe.
In section \[Sec\_Model\] we discuss possible values of the parameters $\mu $ and $g$ of the fundamental nonlinear scalar field, which define the vacuum average $\phi _{0} =\mu /g$ and the potential energy density $U\sim \mu ^{2} \phi _{0}^{2} $. We show that in the case of $\phi _{0} \gg \phi _{H} $ the constant $\xi \ll 1$, contrary to those models where the fundamental field is assumed to be the Higgs field with dimensionless parameter $\kappa \phi _{H}^{2} \approx 10^{-32} $, thus giving $\xi \ge 10^{30} $ (see [@Zee1979; @Smolin1979; @Cervantes-Cota1995]). In particular, for the ratio $\phi _{0} /\phi _{H} \approx 10^{16} $, when $\kappa \phi _{0}^{2} \approx 1$, the constant $\xi $ is limited from below by the minimal critical value $\xi _{\min } \approx 0.04$ and the value of $\xi $ should be close to $\xi _{\min } $ for the Universe to expand to a significant size of $a_{c} \gg a_{0} \gg l_{P} $. We also estimate the frequency of oscillations of the scalar field amplitude, which are attenuated due to both the Universe’s expansion and the birth of a large number of various particle-antiparticle pairs from vacuum (see e.g. [@Kofman1997]). A rapid heating of the Universe, playing the role of “Big Bang”, should occur due to the large energy of scalar field $E_{c} =\Delta U\cdot \upsilon _{c} $ released in the first-order phase transition in the volume of the closed Universe $\upsilon _{c} =2\pi ^{2} a_{c}^{3} $.
\[Sec\_Lagrangian\] Modified Lagrangian of a nonlinear scalar field in curved space-time
========================================================================================
The Lagrangian of a complex scalar field with nonlinearity $\phi ^{4} $ and imaginary “mass” $i\mu $ in the curved 4-space with metric tensor $g^{\mu \nu } $ and finite scalar curvature $R\ne 0$, satisfying the condition of conformal invariance in the limit $\mu \to 0$, is written as [@Krive1976] $$\begin{gathered}
\label{EQ_1}
L=g^{\mu \nu } \left(\partial _{\mu } \phi \right)\left(\partial _{\nu } \phi ^{+} \right)+\mu ^{2} \phi \phi ^{+} -g^{2} \left(\phi \phi ^{+} \right)^{2} \\ +{R\phi \phi ^{+} \mathord{\left/ {\vphantom {R\phi \phi ^{+} 6}} \right. \kern-\nulldelimiterspace} 6},\end{gathered}$$ where $g^{2} $ is the parameter of nonlinearity (self-action) of the scalar field. The last term in Lagrangian \[EQ\_1\], as shown in [@Krive1976], leads to the renormalization of the constants $\mu ^{2} $ and $g^{2} $, as well as of the Einstein gravitational constant $\kappa =8\pi G$ in the equations of general relativity, by the dimensionless quantity $\kappa \phi _{0}^{2} $. As was mentioned earlier, for the Higgs field this renormalization is quite small (of the order of $G/G_{F} \sim 10^{-32} $).
In the more general form with $\mu \ne 0$ the Lagrangian may be written as (see [@Bezrukov2008]): $$\label{EQ_2}
L=g^{\mu \nu } \left(\partial _{\mu } \phi \right)\left(\partial _{\nu } \phi ^{+} \right)+\frac{\mu ^{2} }{2} \left|\phi \right|^{2} -\frac{g^{2} }{4} \left|\phi \right|^{4} -\xi R\left|\phi \right|^{2},$$ where $\xi $ is the effective dimensionless constant of interaction between scalar and gravitational fields. As we can see, for nonzero curvature of the 4-space the parameter $\mu ^{2} $ is renormalized as $\mu ^{2} \to \left(\mu ^{2} -2\xi R\right)$, and there is still a possibility of a second-order phase transition with the curvature-dependent order parameter $\phi _{0} (R)=\sqrt{(\mu ^{2} -2\xi R)} /g$. The mass of the scalar boson, analogous to the Higgs boson, is written in this case as $m_{B} (R)=\sqrt{2(\mu ^{2} -2\xi R)} $.
In the present paper we consider a certain fundamental scalar field $\phi $ with nonlinearity $\phi ^{4} $ and with a modified Lagrangian in the curved space-time, which, along with the term $-\xi R\left|\phi \right|^{2} $ quadratic in $\phi $, contains also a term linear in $\phi $ and $R$. Earlier, in [@Pashitskii2014], this additional term was chosen in the form of $\zeta R\phi /\sqrt{\kappa } $, where $\zeta $ was some dimensionless constant, not equal to $\xi $ in general case, while the factor $1/\sqrt{\kappa } $ was introduced on the basis of dimensionality consideration, as the dimensionalities of $\phi ^{2} $ and $\kappa ^{-1} $ coincide. However, the introduction of an additional parameter $\zeta \ne \xi $ seems unnecessary.
In the model considered henceforth the interaction of the complex scalar nonlinear field with gravitation is given as $-\xi R\left|(\phi -\phi _{0} )\right|^{2} $, where $\phi _{0} $ is the vacuum average of the scalar field.
As the result, the $\phi ^{4} $ Lagrangian of the scalar field contains linear and quadratic in $\phi $ terms, which are proportional to scalar curvature $R$: $$\begin{aligned}
\label{EQ_3}
\tilde{L}=\frac{1}{2} g^{\mu \nu } \left(\partial _{\mu } \phi \right)\left(\partial _{\nu } \phi ^{+} \right) &+\frac{1}{2} \left(\mu ^{2} -2\xi R\right)\left|\phi \right|^{2} -\frac{1}{4} g^{2} \left|\phi \right|^{4} \nonumber \\ &+\xi R\phi _{0} (\phi +\phi ^{+} )-\xi R\phi _{0}^{2}.\end{aligned}$$
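For clarity, the $R$-dependent terms in \[EQ\_3\] arise simply by expanding the interaction term $-\xi R\left|\phi -\phi _{0} \right|^{2} $ introduced above (with $\phi _{0} $ real); the expansion is written out here only to make the bookkeeping explicit: $$-\xi R\left|\phi -\phi _{0} \right|^{2} =-\xi R\left|\phi \right|^{2} +\xi R\phi _{0} \left(\phi +\phi ^{+} \right)-\xi R\phi _{0}^{2} ,$$and the first term combines with $\frac{1}{2} \mu ^{2} \left|\phi \right|^{2} $ into the coefficient $\frac{1}{2} \left(\mu ^{2} -2\xi R\right)$ appearing in \[EQ\_3\].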
Assuming $\phi =\left(\Phi +\phi '\right)$, where $\Phi (R)$ is the real (classical) part of the field amplitude $\phi $ for $R\ne 0$ and $\phi '$ is its complex (quantum) part, and varying with respect to $\phi '$ we obtain in the linear approximation $\left|\phi '\right|\ll \Phi $ the equation for the bosonic field $\phi '$ with the curvature-dependent mass of the scalar boson: $$\label{EQ_4}
m_{B} (R)=\sqrt{3g^{2} \Phi ^{2} \left(R\right)-\left(\mu ^{2} -2\xi R\right)}.$$
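In particular, evaluated at the unperturbed vacuum value $\Phi =\phi _{0} =\mu /g$, this expression reduces to a simple form which will be used below (a routine substitution, recorded here for convenience): $$m_{B} =\sqrt{3g^{2} \phi _{0}^{2} -\mu ^{2} +2\xi R} =\sqrt{2\left(\mu ^{2} +\xi R\right)} =\mu \sqrt{2+h} ,\qquad h\equiv \frac{2\xi R}{\mu ^{2} } ,$$in agreement with the formula quoted in Section \[Sec\_Transition\].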
In the zero-order approximation in $\phi '$, the Lagrangian \[EQ\_3\] gives the expression for the potential energy density of the real part of the scalar field $\Phi $: $$\begin{gathered}
\label{EQ_5}
U(\Phi ,R)= \frac{1}{4} g^{2} \Phi ^{4} -\frac{1}{2} (\mu ^{2} -2\xi R)\Phi ^{2} \\ -2\xi R\phi _{0} (\Phi -\phi _{0} /2)+U_{0} ,\end{gathered}$$ where $U_{0} $ is some arbitrary constant, which should ensure the zero minimal value of the potential \[EQ\_5\].
The condition for the existence of the extrema of the potential \[EQ\_5\] is given by the cubic equation with respect to the amplitude $\Phi $: $$\label{EQ_6}
\frac{\partial U}{\partial \Phi } =g^{2} \Phi ^{3} -\left(\mu ^{2} -2\xi R\right)\Phi -2\xi R\phi _{0} =0.$$
As will be shown below, in a certain parametric domain equation \[EQ\_6\] has three real roots, so that a change in the scalar curvature may lead to a first-order phase transition.
\[Sec\_Transition\] First-order phase transition in the early Universe with changing scalar curvature
=====================================================================================================
For the further analysis of equations \[EQ\_5\] and \[EQ\_6\] it is convenient to introduce the dimensionless variables $x=\Phi /\phi _{0} $ and $V=U/\mu ^{2} \phi _{0}^{2} $: $$\label{EQ_7}
V\left(x,h\right)=\frac{x^{4} }{4} -\left(1-h\right)\frac{x^{2} }{2} -hx+\frac{h}{2} +V_{0};$$ $$\label{EQ_8}
\frac{\partial V}{\partial x} =x^{3} -\left(1-h\right)x-h=0,$$ where $h=2\xi R/\mu ^{2} $ is the dimensionless “external field” and $V_{0} ={U_{0} \mathord{\left/ {\vphantom {U_{0} \mu ^{2} \phi _{0}^{2} }} \right. \kern-\nulldelimiterspace} \mu ^{2} \phi _{0}^{2} } $.
![\[Fig1\] The dimensionless potential of the nonlinear scalar field $V\left(x,h\right)={U\left(\Phi ,R\right) / \mu ^{2} \phi _{0}^{2} } $ in function of the dimensionless amplitude $x={\Phi / \phi _{0} } $ for various values of the “external field” $h={2\xi R / \mu ^{2} } $. Solid lines represent $V\left(x,h\right)$ for the threshold value $h=0.25$, when the second minimum of the potential appears, and also for the critical value $h=h_{c} \equiv -2$, when one of the minimums of the potential $V\left(x,-2\right)$ merges with the maximum, giving an inflection point at $x= 1$, while the other minimum at $x=-2$ becomes zero for $V_{0} =7$.](Fig1.pdf){width="\columnwidth"}
In Fig. \[Fig1\] the $x$ dependences of the potential are shown for various values of the parameter $h$ at $V_{0} =7$. In the region $h\ge 0.25$ the potential has only one minimum $V_{\min } =6.75$ at the point $x=1$, where the system (the early Universe) resides initially. As we can see, the depth and position of this minimum remain the same for all values of $h$ in the range $-2<h<\infty $. For the “external field” $h<0.25$ a second minimum appears in the potential, separated from the first one by a potential barrier. At $h=0$ both minima have equal depth $V_{\min }^{\left(1\right)} =V_{\min }^{\left(2\right)} =6.75$ and are positioned symmetrically at the points $x=\pm 1$, while the maximum is at the point $x=0$ with $V_{\max } =7$. In the region $h<0$ the left minimum grows deeper with decreasing $h$. Finally, for $h=-2$ the maximum of the potential and its right minimum disappear, giving an inflection point at $x=1$, while the left minimum reaches zero value at the point $x=-2$.
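The numbers quoted above can be verified directly from the dimensionless potential \[EQ\_7\]; this short check is added only for the reader's convenience and uses nothing beyond \[EQ\_7\] itself: $$V\left(1,h\right)=\frac{1}{4}-\frac{1-h}{2}-h+\frac{h}{2}+V_{0}=V_{0}-\frac{1}{4}=6.75,\quad V\left(0,0\right)=V_{0}=7,\quad V\left(-2,-2\right)=4-6-4-1+V_{0}=0,$$so the right minimum indeed has the $h$-independent depth $6.75$, the maximum at $h=0$ equals $7$, and at $h=-2$ the left minimum touches zero for $V_{0}=7$.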
The three real roots $x_{i} \left(h\right)$ of the cubic equation \[EQ\_8\] are shown in Fig. \[Fig2\] for $-2\le h\le 0.25$. For all $h>-2$ there is a positive $h$-independent root $x=1$, which corresponds to the value $\Phi =\phi _{0} \equiv \mu /g$ of the amplitude of the scalar field (the $ABC$ branch). This positive root determines the constant position of the right minimum of the potential, the negative root (the $DEF$ branch) gives the position of the left minimum, while the sign-changing root (the $AOF$ branch) describes the position of the potential maximum, i.e. it corresponds to the absolutely unstable state. The segments $AB$ and $EF$ on the positive and negative branches correspond to metastable states. Notice that according to \[EQ\_4\] the boson mass for $\Phi =\phi _{0} $ equals $$\label{EQ_9}
m_{B} \left(R\right)=\sqrt{2\left(\mu ^{2} +\xi R\right)} \equiv \mu \sqrt{2+h},$$ and the value $m_{B} (R)$ should be bounded by the Planck mass $M_{P} $ from above.
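The structure of the three branches becomes transparent if one notes that the cubic \[EQ\_8\] factorizes explicitly; we record this elementary observation here since it is used implicitly throughout: $$x^{3}-\left(1-h\right)x-h=\left(x-1\right)\left(x^{2}+x+h\right),\qquad x_{\pm }\left(h\right)=\frac{-1\pm \sqrt{1-4h}}{2}.$$Thus $x=1$ is a root for every $h$ (the $ABC$ branch), while the two remaining roots $x_{\pm }$ are real only for $h\le 0.25$, appear as the double root $x=-1/2$ at the threshold $h=0.25$, and at $h=-2$ take the values $x_{+}=1$ (merging with the fixed root and producing the inflection point) and $x_{-}=-2$.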
The three extremal values of the potential shown in Fig. \[Fig2b\], namely the two minima $V_{\min }^{\left(1\right)} (h)$ and $V_{\min }^{\left(2\right)} (h)$ and the maximum $V_{\max } (h)$, correspond to the three roots of the cubic equation shown in Fig. \[Fig2a\]. In the absence of nucleation centers of the new phase, when no transitions accompanied by the formation of domains with alternating sign of the order parameter occur between the branches $ABC$ and $DEF$ for $-2<h<0.25$, the diminishing of the scalar curvature $R$ implies that the system travels along the straight phase trajectory $ABC$ with the constant minimal value of the potential $V_{\min }^{\left(1\right)} =6.75$ (for $V_{0} =7$), from the stable state at some initial point with $h>0.25$ to the critical point $A$ with $h=-2$. After that the system drops from point $A$ to point $D$, which corresponds to a phase transition with a decrease in the potential energy density of the scalar field by $\Delta V=6.75$.
\[Sec\_Evolution\] Evolution of the early cold Universe towards the first-order phase transition
================================================================================================
Suppose that in a homogeneous space filled with the nonlinear scalar field in its ground state at zero temperature, with equilibrium amplitude $\phi _{0} $ and minimal potential energy density $U_{\min } (\phi _{0} )=6.75\mu ^{2} \phi _{0}^{2} $, a rather large quantum fluctuation appears at some (initial) moment, spontaneously producing some matter which is neutral with respect to all charges and characterized by the equation of state $p=\nu \varepsilon $, where $p$ and $\varepsilon $ are the pressure and energy density of the matter, with dimensionless coefficient $\nu $ satisfying the condition $\nu >1/3$. The initial scalar curvature of the 4-space equals $R_{i} =-4\tilde{\kappa }\lambda $ (where $\tilde{\kappa }$ is the renormalized Einstein constant and $\lambda $ is the energy density of the vacuum), while after the emergence of matter it increases to positive values (see below).
Suppose also that due to space isotropy the form of the quantum fluctuation is close to spherical, while its initial radius considerably exceeds the Planck length $l_{P} $, so that the description of the evolution of the spherically symmetric incipient Universe may neglect quantum effects, such as tunneling through the potential barrier between the two potential minima. Consequently, the further evolution of this big fluctuation may be studied using the classical general relativity equations for the homogeneous isotropic closed Universe: $$\label{EQ_10}
\dot{a}^{2} +1=\frac{\tilde{\kappa }}{3} \left(\varepsilon +\lambda \right)a^{2}; \ddot{a}=-\frac{\tilde{\kappa }}{6} \left(\varepsilon +3p-2\lambda \right)a,$$ where $a$ is the scale (radius) of the Universe, $\dot{a}$ and $\ddot{a}$ – its first and second proper time derivatives, $\tilde{\kappa }={\kappa \mathord{\left/ {\vphantom {\kappa \left(1+2\xi \kappa \phi _{0}^{2} \right)}} \right. \kern-\nulldelimiterspace} \left(1+2\xi \kappa \phi _{0}^{2} \right)} $ is the Einstein gravitational constant, renormalized due to interaction of scalar and gravitation fields (see [@Krive1976]), and parameter $\lambda $ is the vacuum energy density, which is related to the Einstein cosmological constant $\lambda ={\Lambda \mathord{\left/ {\vphantom {\Lambda \tilde{\kappa }}} \right. \kern-\nulldelimiterspace} \tilde{\kappa }} $.
Notice that the assumption of a rather big initial quantum fluctuation implies the uniqueness of our Universe, as the probability of simultaneous appearance of several such fluctuations is vanishingly small.
Taking into account a possible time dependence of $\lambda $, equations \[EQ\_10\] give the energy conservation law: $$\label{EQ_11}
3\frac{\dot{a}}{a} +\frac{\dot{\varepsilon }+\dot{\lambda }}{\varepsilon +p} =0,$$ as well as the expression for the scalar curvature: $$\label{EQ_12}
R=-\frac{6}{a^{2} } \left(a\ddot{a}+\dot{a}^{2} +1\right)=\tilde{\kappa }\left[(3\nu -1)\varepsilon -4\lambda \right].$$
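For completeness, \[EQ\_12\] follows from \[EQ\_10\] by a one-line computation (no additional input is needed): $$a\ddot{a}+\dot{a}^{2} +1=-\frac{\tilde{\kappa }}{6} \left(\varepsilon +3p-2\lambda \right)a^{2} +\frac{\tilde{\kappa }}{3} \left(\varepsilon +\lambda \right)a^{2} =\frac{\tilde{\kappa }a^{2} }{6} \left(\varepsilon -3p+4\lambda \right),$$so that $R=-\frac{6}{a^{2} } \left(a\ddot{a}+\dot{a}^{2} +1\right)=\tilde{\kappa }\left(3p-\varepsilon -4\lambda \right)$, which for $p=\nu \varepsilon $ is exactly the right-hand side of \[EQ\_12\].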
In the absence of matter ($\varepsilon =0$) or for ultrarelativistic matter or equilibrium electromagnetic radiation with equation of state $p=\varepsilon /3$ the scalar curvature of the 4-space equals $R=-4\Lambda \equiv -4\tilde{\kappa }\lambda $. However, for $\nu >1/3$ and $(3\nu -1)\varepsilon >4\lambda $ the scalar curvature is positive.
Contrary to the scenario of “chaotic inflation” [@Linde1990] where the vacuum energy density is defined as $\lambda =U(\phi )+\dot{\phi }^{2} /2$, due to the constancy of the scalar field amplitude $\phi =\phi _{0} $ and minimal density of the potential energy $U(\phi )=U_{\min }^{\left(1\right)} (\phi _{0} )$ on the phase trajectory $ABC$ we shall assume $$\label{EQ_13}
\lambda =U_{\min }^{\left(1\right)} (\phi _{0} )=6.75\mu ^{2} \phi _{0}^{2} =const.$$
As the matter filling the early cold Universe we shall consider a non-relativistic degenerate Fermi gas with the equation of state $p=2\varepsilon /3$, consisting of pairs of fermions and antifermions with masses $m_{F} =m_{AF} $ born in the quantum fluctuation. Notice that the finite fermionic mass may be generated due to the interaction between the scalar and fermionic fields in accordance with the Higgs mechanism [@Higgs1964; @Higgs1964a]. It is assumed that the interaction between fermions is weak enough that the characteristic annihilation time $t_{A} $ of the particles and antiparticles is much greater than the maximal evolution time of the early cold Universe to the point of phase transition (see below).
Thus, assuming $p=2\varepsilon /3$ and $\lambda =const$, in accordance with we have: $$\label{EQ_14}
\varepsilon \left(t\right)=\varepsilon _{0} \cdot \left[{a_{0} \mathord{\left/ {\vphantom {a_{0} a\left(t\right)}} \right. \kern-\nulldelimiterspace} a\left(t\right)} \right]^{5}.$$ Here $\varepsilon _{0} $ and $a_{0} $ are the initial values of the energy density of the matter and radius of the nucleus of the Universe, which satisfy conditions $a_{0} >l_{P} $ and $\varepsilon _{0} \le \varepsilon _{P} $, where $\varepsilon _{P} =M_{P}^{4} $ is the Planck energy density (in the system of units $\hbar =c=1$, when $l_{P} =t_{P} =1/M_{P} $). From it follows then: $$\label{EQ_15}
R(t)=\tilde{\kappa }[\varepsilon (t)-4\lambda ].$$
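For completeness, the scaling law \eqref{EQ_14} follows directly from the conservation law \eqref{EQ_11}: with $\dot{\lambda }=0$ and $p=2\varepsilon /3$ one has $\varepsilon +p=5\varepsilon /3$, so that $$\frac{\dot{\varepsilon }}{\varepsilon }=-5\,\frac{\dot{a}}{a},\qquad \text{i.e.}\qquad \varepsilon \, a^{5} =\mathrm{const}.$$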
Let us assume that at the initial moment $R\left(0\right)=\tilde{\kappa }(\varepsilon _{0} -4\lambda )>0$ and the curvature value is such that it satisfies the condition $h\left(0\right)\equiv 2\xi R\left(0\right)/\mu ^{2} >0.25$, so that the scalar field potential has a single minimum at the point $\Phi =\phi _{0} $ (see Fig. \[Fig1\]).
In the course of the Universe’s expansion the scalar curvature decreases in time, following the reduction of $\varepsilon \left(t\right)$ according to the power-law dependence \eqref{EQ_14}. In the framework of the proposed model of the nonlinear scalar field this corresponds to a decrease of the dimensionless “external field” parameter $h\left(t\right)=2\xi R\left(t\right)/\mu ^{2} $. The system then moves along the phase trajectory $ABC$ (see Fig. \[Fig2\]), on which the value of the scalar field amplitude $\Phi =\phi _{0} \equiv \mu /g$ and the minimal value of the potential $U_{\min }^{\left(1\right)} \left(\phi _{0} \right)=6.75\mu ^{2} \phi _{0}^{2} $ remain constant up to the point of the phase transition at $h=-2$.
Given the quantum origin of the Universe, it is convenient to introduce the dimensionless variables $\tilde{a}\left(\tau \right)=a\left(t\right)/l_{P} $ and $\tau =t/t_{P} $ (where $t_{P} $ is the Planck time). In this case the scalar curvature and the dimensionless “external field” parameter $h$, with account for the energy conservation law \eqref{EQ_11}, are written as: $$\label{EQ_16}
R\left(\tau \right)=-\Lambda \cdot \left[4-\frac{\varepsilon _{0} }{\lambda } \cdot \left(\frac{\tilde{a}_{0} }{\tilde{a}\left(\tau \right)} \right)^{5} \right];$$ $$\label{EQ_17}
h\left(\tau \right)\equiv \frac{2\xi R\left(\tau \right)}{\mu ^{2} } =-\frac{\tilde{\xi }\cdot \beta }{\left(1+\tilde{\xi }\right)} \cdot \left[4-\frac{\tilde{\varepsilon }_{0} }{\beta } \left(\frac{\tilde{a}_{0} }{\tilde{a}\left(\tau \right)} \right)^{5} \right],$$ where $\tilde{a}_{0} ={a_{0} \mathord{\left/ {\vphantom {a_{0} l_{P} }} \right. \kern-\nulldelimiterspace} l_{P} } $, $\tilde{\xi }=2\xi \kappa \phi _{0}^{2} $, $\tilde{\varepsilon }_{0} ={\varepsilon _{0} \mathord{\left/ {\vphantom {\varepsilon _{0} \mu ^{2} \phi _{0}^{2} }} \right. \kern-\nulldelimiterspace} \mu ^{2} \phi _{0}^{2} } $ and $\beta ={\lambda \mathord{\left/ {\vphantom {\lambda \mu ^{2} \phi _{0}^{2} }} \right. \kern-\nulldelimiterspace} \mu ^{2} \phi _{0}^{2} } $ are the dimensionless parameters of the present model.
In order to describe the dynamics of the Universe we use the first of equations \eqref{EQ_10} for the expansion velocity (together with the energy conservation law \eqref{EQ_11}): $$\label{EQ_18}
\frac{d\tilde{a}}{d\tau } =\left\{b\left[1+\frac{\tilde{\varepsilon }_{0} }{\beta } \cdot \left(\frac{\tilde{a}_{0} }{\tilde{a}\left(\tau \right)} \right)^{5} \right]\cdot \tilde{a}^{2} \left(\tau \right)-1\right\}^{1/2}.$$ The quantity $b=\tilde{\kappa }\lambda l_{P}^{2} /3$ is a function of $\tilde{\xi }$ due to the renormalization of the Einstein gravitation constant: $$\label{EQ_19}
b\left(\tilde{\xi }\right)=\frac{\beta }{3} \frac{\Omega _{P} }{\tilde{\varepsilon }_{P} } \frac{1}{1+\tilde{\xi }},$$ where $\Omega _{P} =\kappa \varepsilon _{P} l_{P}^{2} =25.1327...$ is a universal constant expressed in terms of the world constants, while $\tilde{\varepsilon }_{P} =\varepsilon _{P} /\mu ^{2} \phi _{0}^{2} $ is an additional dimensionless model parameter, depending on the parameters $\mu $ and $\phi _{0} =\mu /g$ of the universal scalar field. The domain of applicability of the solutions of the classical equation \eqref{EQ_18} is limited by the conditions $\tilde{a}(\tau )>1$ and $\tau >1$.
The requirement that the expansion velocity of the early Universe be real is equivalent to the non-negativity of the expression under the square root in \eqref{EQ_18}, which, with account for \eqref{EQ_19}, translates into the following conditions: $$\label{EQ_20}
\tilde{\varepsilon }_{0 \min } \left(\xi \right) =
\begin{cases}
\frac{1}{b\left(\tilde{\xi }\right)\tilde{a}_{0}^{2} } -\beta & \text{\hspace{-0.9em} if } \tilde{\xi }\le \frac{5}{9} \beta \frac{\Omega _{P} }{\tilde{\varepsilon }_{P} } \tilde{a}_{0}^{2} ;
\\
\frac{2}{3} \beta \left(\frac{5}{3} \beta b\left(\tilde{\xi }\right)\tilde{a}_{0}^{2} \right)^{-\frac{2}{5} } & \text{\hspace{-0.9em} if } \tilde{\xi }>\frac{5}{9} \beta \frac{\Omega _{P} }{\tilde{\varepsilon }_{P} } \tilde{a}_{0}^{2} .
\end{cases}$$
The described scenario assumes that the Universe’s expansion may continue only until the time $\tau _{c} $ when the parameter $h\left(\tau \right)$ reaches its critical value $h_{c} \equiv h\left(\tau _{c} \right)=-2$ at the point $A$ on the phase trajectory $ABC$ (see Fig. \[Fig2\]) and the dimensionless radius reaches the limiting value $\tilde{a}_{c} \equiv \tilde{a}\left(\tau _{c} \right)$.
From \eqref{EQ_17} with $h=-2$ we obtain the ratio $\tilde{a}_{c} /\tilde{a}_{0} $, which depends on the model parameters $\beta $, $\tilde{\varepsilon }_{0} $ and $\tilde{\xi }$: $$\label{EQ_21}
\frac{\tilde{a}_{c} }{\tilde{a}_{0} } =\left[\frac{\tilde{\xi }}{2\left[\left(2\beta -1\right)\tilde{\xi }-1\right]} \tilde{\varepsilon }_{0} \right]^{1/5}.$$
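For the reader’s convenience, \eqref{EQ_21} is obtained by setting $h(\tau _{c})=-2$ in \eqref{EQ_17}: $$4-\frac{\tilde{\varepsilon }_{0} }{\beta } \left(\frac{\tilde{a}_{0} }{\tilde{a}_{c} } \right)^{5} =\frac{2\left(1+\tilde{\xi }\right)}{\tilde{\xi }\beta } \quad \Longrightarrow \quad \left(\frac{\tilde{a}_{c} }{\tilde{a}_{0} } \right)^{5} =\frac{\tilde{\xi }\,\tilde{\varepsilon }_{0} }{2\left[\left(2\beta -1\right)\tilde{\xi }-1\right]}.$$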
The inequality $\tilde{a}_{c} >\tilde{a}_{0} $, which is necessary for the Universe to expand, leads to the following restriction on the parameters $\tilde{\varepsilon }_{0} $ and $\tilde{\xi }$: $$\label{EQ_22}
\tilde{\varepsilon }_{0} >2\frac{\left(2\beta -1\right)\tilde{\xi }-1}{\tilde{\xi }}; \quad \tilde{\xi }\ge 1/\left(2\beta -1\right).$$
According to \eqref{EQ_21}, when $\tilde{\xi }$ tends to its minimal value $\tilde{\xi }_{\min } =1/(2\beta -1)$, the radius of the Universe at the point of the phase transition goes to infinity, $a_{c} \to \infty $.
On the other hand, the initial energy density $\varepsilon _{0} $ of the matter, born as the result of the quantum fluctuation of the vacuum on a time scale of about $t_{P} $, cannot exceed the Planck energy density $\varepsilon _{P} $. Thus the total initial energy of the matter $E_{0} =\varepsilon _{0} a_{0}^{3} $ should be bounded from above by the Planck energy $\varepsilon _{P} l_{P}^{3} $, whence we have the inequality: $$\label{EQ_23}
\tilde{\varepsilon }_{0} \le \tilde{\varepsilon }_{P} /\tilde{a}_{0}^{3}.$$
The conditions \eqref{EQ_20}, \eqref{EQ_22} and \eqref{EQ_23}, together with $\tilde{a}_{0} >1$, form the complete set of restrictions imposed on the parameters of the proposed model.
As follows from \eqref{EQ_13}, the dimensionless parameter $\beta $ equals $\beta =6.75$, so the conditions \eqref{EQ_22} may be written as $$\label{EQ_24}
\tilde{\varepsilon }_{0} >25\frac{(\tilde{\xi }-0.08)}{\tilde{\xi }}; \quad \tilde{\xi }\ge \tilde{\xi }_{\min } =0.08.$$
On the other hand, the total potential energy of scalar field, which is released in the first-order phase transition, is determined by the drop of the scalar field potential $\Delta U=6.75\mu ^{2} \phi _{0}^{2} $ and is equal to: $$\label{EQ_25}
E_{c} =2\pi ^{2} a_{c}^{3} \cdot \Delta U.$$
In this case the relation \eqref{EQ_21} and the ratio of the final $E_{c} $ and initial $E_{0} $ values of the total energy may be represented as: $$\label{EQ_26}
\begin{aligned}
\frac{\tilde{a}_{c} }{\tilde{a}_{0} } &=\left(\frac{0.04\tilde{\xi }}{\tilde{\xi }-0.08} \tilde{\varepsilon }_{0} \right)^{1/5} ; \\ \frac{E_{c} }{E_{0} } &=\frac{\beta }{\tilde{\varepsilon }_{0} } \cdot \left(\frac{\tilde{a}_{c} }{\tilde{a}_{0} } \right)^{3} =\frac{6.75}{\tilde{\varepsilon }_{0}^{2/5} } \left\{\frac{0.04\tilde{\xi }}{\tilde{\xi }-0.08} \right\}^{3/5}.
\end{aligned}$$
Thus, at the point of the phase transition the maximal radius of the early cold Universe diverges, $a_{c} \to \infty $, as does the total released energy, $E_{c} \to \infty $, for all possible initial values of $a_{0} $ and $E_{0} $, if $\tilde{\xi }\to \tilde{\xi }_{\min } =0.08$.
Notice that the value $\tilde{\xi }_{\min } =0.08$ for $\beta =6.75$ corresponds to a certain minimal value of the interaction constant of the scalar and gravitational fields, $\xi _{\min } =\tilde{\xi }_{\min } /2\kappa \phi _{0}^{2} =0.04/\kappa \phi _{0}^{2} $, which depends on the magnitude of the vacuum average of the scalar field. For example, for the Higgs field with vacuum average $\phi _{H} =\mu _{H} /g_{H} \approx 247$ GeV we have, with good accuracy, $\kappa \phi _{H}^{2} \approx 10^{-32} $, which gives the unrealistically large value $\xi _{\min } \approx 4\cdot 10^{30} $ (cf. [@Zee1979; @Smolin1979; @Cervantes-Cota1995]) and indicates the impossibility of a direct unification of the standard model of elementary particles with gravitation.
Nevertheless, if we assume that the value of the vacuum average for the fundamental scalar field in the early Universe satisfied the condition $\kappa \phi _{0}^{2} \approx 1$, which corresponds to the ratio $\phi _{0} /\phi _{H} \approx \sqrt{G_{F} /G} \approx 10^{16} $, then for the constant $\xi _{\min } $ we obtain the reasonable estimate $\xi _{\min } \approx 0.04$.
In this case the renormalizations of the self-interaction constant of the nonlinear scalar field, $g^{2} \to g^{2} \cdot (1+\xi \kappa \phi _{0}^{2} )$, and of the Einstein gravitational constant, $\tilde{\kappa }=\kappa /(1+2\xi \kappa \phi _{0}^{2} )$ (see [@Krive1976]), are about 4% and 8% respectively.
![\[Fig3\] Two-dimensional domain of existence of solutions of equation \eqref{EQ_18} in the space of dimensionless parameters $\tilde{a}_{0} \equiv {a_{0} / l_{P} } $ and $\tilde{\varepsilon }_{0} \equiv {\varepsilon _{0} / \varepsilon _{P} } $, obtained with account for \eqref{EQ_20}, \eqref{EQ_22} and \eqref{EQ_23}, when the dimensionless renormalized constant $\tilde{\xi }$ of the scalar field’s interaction with gravity equals $\tilde{\xi }=2\xi \kappa \phi _{0}^{2} =0.0801$, which is close to the minimal value $\tilde{\xi }_{\min } =0.08$ for $\beta =6.75$ and $\tilde{\varepsilon }_{P} =4660$.](Fig3.pdf){width="\columnwidth"}
Fig. \[Fig3\] represents the domain of existence of solutions of the evolution equations in the space of the parameters $\tilde{\varepsilon }_{0} $ and $\tilde{a}_{0} $ for $\tilde{\xi }=0.0801$ and $\tilde{\varepsilon }_{P} \equiv \varepsilon _{P} /\mu ^{2} \phi _{0}^{2} =4660$, determined by the restrictions \eqref{EQ_20}, \eqref{EQ_22} and \eqref{EQ_23}. The upper boundary is given by the inequality \eqref{EQ_23}, while the lower one is defined by the condition $\tilde{\varepsilon }_{0} =\tilde{\varepsilon }_{0 \min } (\tilde{\xi })$ in \eqref{EQ_20}. The calculation is done for $\beta =6.75$; the choice of the value of $\tilde{\varepsilon }_{P} $ corresponds to the condition $\mu /\mu _{H} =\phi _{0} /\phi _{H} $ (see below). For the chosen parameters solutions exist only for sufficiently large initial values of the radius of the Universe, $a_{0} >4.5l_{P} $, i.e. only for a rather big quantum fluctuation.
![\[Fig4\] The dimensionless radius of the Universe $\tilde{a}_{c} $ at the moment of the phase transition (see \eqref{EQ_21}) as a function of the parameters $\tilde{\xi }$ and $\tilde{\varepsilon }_{0} $ for $\tilde{a}_{0} =5$, $\beta =6.75$ and $\tilde{\varepsilon }_{P} =4660$. The dark region in the plane $\tilde{\xi }$ – $\tilde{\varepsilon }_{0} $ is the domain of allowable values of these parameters, defined by \eqref{EQ_20}, \eqref{EQ_22} and \eqref{EQ_23}.](Fig4.pdf){width="\columnwidth"}
The value of the dimensionless radius of the Universe at the point of the phase transition, $\tilde{a}_{c} $, is shown in Fig. \[Fig4\] as a function of the parameters $\tilde{\xi }$ and $\tilde{\varepsilon }_{0} $ for $\tilde{a}_{0} =5$, $\beta =6.75$ and $\tilde{\varepsilon }_{P} =4660$. We can see that $\tilde{a}_{c} \to \infty $ for $\tilde{\xi }\to 0.08$, in accordance with \eqref{EQ_21}, and that a significant expansion of the Universe, with $\tilde{a}_{c} \gg \tilde{a}_{0} $, is possible only in a narrow range of values of $\tilde{\xi }$, when $(\tilde{\xi }-\tilde{\xi }_{\min } )\ll 1$.
![\[Fig5\] The dependence on the dimensionless time $\tau ={t / t_{P} } $ of the dimensionless radius $\tilde{a}\left(\tau \right)$ of the early cold expanding Universe, up to the moment of the phase transition $h=h_{c} $. The solutions of equation \eqref{EQ_18} are shown for several values of the parameter $\tilde{\varepsilon }_{0} $ near its minimal value $\tilde{\varepsilon }_{0 \min } $, defined by the inequalities \eqref{EQ_20}, for $\tilde{a}_{0} =5$, $\tilde{\xi }=0.0801$, $\beta =6.75$ and $\tilde{\varepsilon }_{P} =4660$.](Fig5.pdf){width="\columnwidth"}
The temporal evolution of the dimensionless radius of the Universe $\tilde{a}(\tau )$ up to the moment of the phase transition is illustrated in Fig. \[Fig5\], where solutions of equation \eqref{EQ_18} are shown for several values of the parameter $\tilde{\varepsilon }_{0} $ in the vicinity of its minimal value $\tilde{\varepsilon }_{0 \min } (\xi )$, determined by the relations \eqref{EQ_20}. In this case the maximal radius of the Universe, which is limited by the phase transition at the moment $t=t_{c} $, is almost independent of $\tilde{\varepsilon }_{0} $ and equals $a_{c} \approx 20l_{P} $, while the duration of the Universe’s evolution changes over a wide range, $25t_{P} <t<120t_{P} $, owing to the growth of the “plateau”. In all cases, however, at the later stage, when $(\tilde{a}_{c} /\tilde{a}_{0} )^{5} \gg 1$, the Universe expands according to an exponential law, typical of inflationary solutions. It should be emphasized that in this case, similarly to the de Sitter model, inflation is driven by the constant vacuum energy density $\lambda =const$, contrary to the scenario of “chaotic inflation” [@Linde1982; @Linde1983; @Linde1983a; @Linde1990], where the expansion of the Universe occurs against the background of the diminishing energy density of the scalar field.
The dimensionless duration of the early Universe’s expansion, $\tau _{c} \equiv t_{c} /t_{P} $, is shown in Fig. \[Fig6\] as a function of $\tilde{\varepsilon }_{0} $ and $\tilde{\xi }$. The unlimited growth of this time, $\tau _{c} \to \infty $ for $\tilde{\varepsilon }_{0} \to \tilde{\varepsilon }_{0 \min } $ (Fig. \[Fig6a\]), is due to the fact that for $\tilde{\varepsilon }_{0} =\tilde{\varepsilon }_{0 \min } (\tilde{\xi })$ the minimal value of the expansion velocity $\dot{a}$ becomes zero, with simultaneous vanishing of the acceleration $\ddot{a}$ and of all higher time derivatives of $a$. As $\tilde{\varepsilon }_{0} $ approaches $\tilde{\varepsilon }_{0 \min } $($\tilde{\xi }$) on the lower boundary of the shaded region in Fig. \[Fig3\], the plateau on the curve $\tilde{a}(\tau )$ (see Fig. \[Fig5\]) may stretch to infinity, provided $\tilde{a}_{c} $ is larger than the value $$\label{EQ_27}
\tilde{a}_{c\min } =\tilde{a}_{0} \cdot (3\tilde{\varepsilon }_{0} /2\beta )^{1/5},$$ which corresponds to the minimal value of $\dot{a}$ according to \eqref{EQ_18}.
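As an illustration, curves of the type shown in Fig. \[Fig5\] can be reproduced by a direct numerical integration of \eqref{EQ_18}. The following minimal Python sketch does this with a simple explicit Euler step; the step size, the stopping rule based on \eqref{EQ_17}, and the particular value $\tilde{\varepsilon }_{0} =33$, chosen inside the allowed region of Fig. \[Fig3\], are our own illustrative choices and are not taken from the text.

```python
import math

# Model parameters quoted in the text; eps0 = 33 is our own choice
# inside the allowed region of Fig. 3.
beta, eps_P, xi_t, a0, eps0 = 6.75, 4660.0, 0.0801, 5.0, 33.0
Omega_P = 8 * math.pi                                 # = 25.1327..., as quoted
b = (beta / 3.0) * (Omega_P / eps_P) / (1.0 + xi_t)   # b as defined by label EQ_19

def h(a):
    """Dimensionless 'external field' parameter (label EQ_17)."""
    return -(xi_t * beta / (1.0 + xi_t)) * (4.0 - (eps0 / beta) * (a0 / a) ** 5)

def dadtau(a):
    """Expansion velocity (label EQ_18); clipped at zero below a turning point."""
    s = b * (1.0 + (eps0 / beta) * (a0 / a) ** 5) * a * a - 1.0
    return math.sqrt(max(s, 0.0))

a, tau, dtau = a0, 0.0, 1e-3
while h(a) > -2.0:            # integrate until the phase transition h_c = -2
    a += dadtau(a) * dtau     # explicit Euler step
    tau += dtau
print(f"a_c ~ {a:.1f} l_P reached at tau_c ~ {tau:.0f} t_P")
```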
\
\[Sec\_Model\] Model parameters’ estimates and evolution of the Universe after the first-order phase transition
===============================================================================================================
Let us make estimates of the scalar field parameters in the framework of the proposed model of the early cold Universe. As was shown in section \[Sec\_Evolution\], a physically reasonable assessment of the dimensionless constant of interaction of scalar and gravitational fields $\xi \approx 0.04$ may be achieved with assumption $\kappa \phi _{0}^{2} \approx 1$, which corresponds to a rather large ratio $\phi _{0} /\phi _{H} \approx 10^{16} $ of the vacuum averages of the fundamental scalar field $\phi _{0} =\mu /g$ and the Higgs field $\phi _{H} =\mu _{H} /g_{H} $.
On the other hand, the choice of the large value of the dimensionless parameter $\tilde{\varepsilon }_{P} \equiv \varepsilon _{P} /\mu ^{2} \phi _{0}^{2} =4660$, with $\varepsilon _{P} =M_{P}^{4} $, corresponds to the ratio $\mu \phi _{0} /\mu _{H} \phi _{H} \approx 10^{32} $. Both these choices may be consistent only if $\mu /\mu _{H} \approx 10^{16} $ and $g/g_{H} \approx 1$, which also gives $\mu \approx 0.1M_{P} $ and $\phi _{0} =\mu /g\approx 0.274M_{P} $ for $g\approx g_{H} \approx 0.364$. The ratio of the potential energy density of the scalar field to the Planck energy density then equals $\Delta U/\varepsilon _{P} =6.75\mu ^{2} \phi _{0}^{2} /M_{P}^{4} \approx 0.00145$.
Accordingly, the fundamental scalar field in the early cold Universe prior to the Big Bang could have the potential energy density $\Delta U=6.75\mu ^{4} /g^{2} $, which is 64 orders of magnitude larger than the corresponding value for the Higgs field, once more emphasizing the inapplicability of the latter to inflationary theories of the early Universe.
In the model of “chaotic inflation” [@Linde1990], with the restriction on the scalar field potential $V(\phi )\le M_{P}^{4} $, for the size of the “inflated” Universe to exceed by many orders of magnitude the size of observable Universe it is necessary to have large initial amplitude of the scalar field $\phi \gg M_{P} $ and anomalously small values of either the effective mass of the scalar field $m\ll M_{P} $ for the quadratic potential $V(\phi )=m^{2} \phi ^{2} /2$, or the nonlinearity coefficient $\gamma \ll 1$ for the potential $V(\phi )=\gamma \phi ^{4} $. On the other hand, it should be stressed that in the framework of the proposed model with $\xi \to \xi _{\min } =0.04$ the Universe’s inflation to arbitrary large sizes in the point of the phase transition is possible for $\phi _{0} <M_{P} $ and $\Delta U\ll \varepsilon _{P} $.
Accordingly, the presently proposed scenario of evolution of the early Universe towards the point of phase transition with unrestricted “inflation” for $\xi \to \xi _{\min } $, may be called “hyperinflation”.
Let us estimate the total energy of the scalar field $E_{c} $, freed in the first-order phase transition, with account for the parameters’ values $a_{c} \approx 20l_{P} $ and $\Delta U\approx 1.45\cdot 10^{-3} \cdot M_{P}^{4} $, obtained above: $$\label{EQ_28}
E_{c} =2\pi ^{2} a_{c}^{3} \cdot \Delta U\approx 2.3\cdot 10^{2} \cdot M_{P} \approx 2.76\cdot 10^{21} ~\text{GeV}.$$
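Numerically, since $l_{P} =1/M_{P} $ in the units used here, $a_{c}^{3} \Delta U\approx (20)^{3} \cdot 1.45\cdot 10^{-3} \, M_{P} $, so that $E_{c} \approx 2\pi ^{2} \cdot 8000\cdot 1.45\cdot 10^{-3} \, M_{P} \approx 2.3\cdot 10^{2} \, M_{P} $; converting with the standard value $M_{P} \approx 1.22\cdot 10^{19} $ GeV (quoted here for the reader’s convenience, as it is not given in the text) reproduces the figure stated in \eqref{EQ_28}.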
The release of such an enormous energy during the phase transition should lead to the birth of a huge number of particle-antiparticle pairs of various kinds and to the rapid heating of the matter to a high temperature of the order of the Planck temperature, $T_{P} \approx M_{P} /k_{B} \approx 1.2\cdot 10^{32} $ K ($k_{B} $ is the Boltzmann constant), thus causing the “Big Bang” that starts the hot phase of our Universe.
On the other hand, the potential for $h=-2$, with zero minimum at the point $x=-2$ (see Fig. \[Fig1\]), may be rewritten in terms of the shifted field amplitude $y=x+2$: $$\label{EQ_29}
V(y)=\frac{9}{2} y^{2} -2y^{3} +\frac{y^{4} }{4}.$$ It may be regarded as an analogue of the various potentials considered in the models of “chaotic inflation” [@Linde1990]. In this case the frequency of oscillations of the scalar field amplitude near the minimum at $y=0$ equals $\omega =3\mu \approx 0.3M_{P} $.
In this way the scenario of evolution of the early cold Universe considered here may be connected to a subsequent stage of evolution described by the model of “chaotic inflation” [@Linde1990], which makes it possible to solve many problems of cosmology. Nevertheless, the dynamics of the scalar field with the potential \eqref{EQ_29} and the further evolution of the Universe, with its heating after the first-order phase transition, go beyond the scope of the present paper.
Conclusions
===========
We have proposed a scenario of evolution of the early cold Universe, which appears as the result of a rather big quantum fluctuation of the vacuum with a subsequent first-order phase transition, driven by an “external field” parameter proportional to the time-dependent scalar curvature $R(t)$. It is assumed that this phase transition occurs in the expanding Universe due to the interaction of the fundamental nonlinear scalar field with the gravitational field, on the one hand, and because of the presence of matter with the equation of state $p=\nu \varepsilon $ with $\nu >1/3$, on the other hand. The solutions of the nonlinear general relativity equations with a finite vacuum energy density may exist only for a rather large initial radius, $a_{0} \ge 5l_{P} $, of the incipient Universe, for which quantum effects may be considered small, justifying the applicability of the classical relativity equations. The probability of such a big fluctuation is by itself rather low, while the simultaneous appearance of spatially close multiple fluctuations is most improbable, which implies the uniqueness of the Universe that develops in accordance with the described scenario.
We have obtained estimates for various parameters of the model: for $\mu $ and $g$, and also for the vacuum average $\phi _{0} =\mu /g$ of the fundamental nonlinear scalar field; for the vacuum energy density, which is determined by the potential energy density of the scalar field, $\lambda =\Delta U$; for the constant of interaction of the scalar and gravitational fields, $\xi $; and for the total energy of the scalar field, $E_{c} =\Delta U\cdot \upsilon _{c} $, which is released during the first-order phase transition in the whole volume of the closed Universe, $\upsilon _{c} =2\pi ^{2} a_{c}^{3} $, where $a_{c} $ is the maximal radius of the early Universe at the point of the phase transition. We have shown that when the parameter $\xi $ approaches a certain limiting value $\xi _{\min } $ (which depends on the value of the vacuum average of the scalar field $\phi _{0} $), the radius $a_{c} $ and, consequently, the energy $E_{c} $ tend to infinity, which may be called a “hyperinflation” regime of evolution of the early Universe.
In conclusion, we would like to express our gratitude to D.S. Gorbunov, G.M. Zinoviev, A.I. Zhuk, I.V. Krive, V.V. Lebedev, V.A. Rubakov and S.M. Ryabchenko for enlightening discussions and useful criticism.
---
abstract: 'The Internet is smoothly migrating from an Internet of people towards an Internet of Things (IoT). By 2020, it is expected to have 50 billion things connected to the Internet. However, such a migration induces a strong level of complexity when handling interoperability between the heterogeneous Internet things, e.g., RFIDs (Radio Frequency Identification), mobile handheld devices, and wireless sensors. In this context, a couple of standards have been already set, e.g., IPv6, 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks), and M2M (Machine to Machine communications). In this paper, we focus on the integration of wireless sensor networks into IoT, and shed further light on the subtleties of such integration. We present a real-world test bed deployment where wireless sensors are used to control electrical appliances in a smart building. Encountered problems are highlighted and suitable solutions are presented.'
author:
-
-
-
-
bibliography:
- 'IEEEabrv.bib'
- 'myrefs.bib'
nocite: '[@*]'
title: Wireless Sensor Network for Internet of Things
---
Introduction
============
The Internet is smoothly migrating from an Internet of people towards an Internet of Things (IoT). According to Cisco [@ref1], 50 billion things will be connected to the Internet in 2020, thus overshadowing the data generated by humans. The latter is limited by the birth rate: in 2020, the world population is expected to be 8 billion people [@ref2]. The things to be connected to the Internet vary largely in terms of characteristics, ranging from very small and static devices (e.g., RFIDs) to large and mobile devices (e.g., vehicles). Such heterogeneity induces complexity and necessitates an advanced middleware that can mask this heterogeneity and promote transparency. In particular, Wireless Sensor Networks (WSNs) connect things to the Internet through a gateway that interfaces the WSN to the Internet. Unlike other networks, WSNs have the particular characteristic of collecting sensed data (temperature, motion, pressure, fire detection, voltage/current, etc.) and forwarding it to the gateway through a one-way communication protocol. Even though most WSN protocols were not designed for two-way communications, they should also be able to receive information and send it to the sensors (in the form of a command for instance), and react on behalf of the commander/user, e.g., by automating home appliances.\
IoT will integrate a rich set of applications into the Internet, e.g., automation, weather sensing, and Smart Grids (SGs). The latter is one of the most promising IoT applications. In SGs, wireless sensors are used to measure and keep track of energy consumption and production in order to optimize energy usage.\
In general, Internet things communicate by producing and consuming information, and they execute “smart” algorithms to interact intelligently with other things in the Internet. Besides, Internet Protocol Version 6 (IPv6) is used to uniquely identify the things in the Internet. To enable the integration of WSNs in the IoT, there are two key points to be addressed in the relevant protocols: first, the IPv6 over Low power Wireless Personal Area Networks (6LoWPAN) protocol should be implemented and deployed in Wireless Sensor Networks (WSNs); second, Machine to Machine communications (M2M) protocols [@ref3] need to be standardized.\
In this paper, we deploy a wireless sensor network (WSN) test bed and use 6LoWPAN to leverage wireless sensors as Internet end-points with a two-way communication capability. The deployed test bed is composed of a WSN, a middleware, and a mobile client for smart home energy monitoring and control. Data is collected from the motes within the WSN and communicated to the middle-ware. The mobile client is able to monitor and visualize the sensed data and control appliances remotely. The two main contributions of this paper are:
1. Identifying the challenges of deploying IPv6 over 6LoWPAN, and ways to interface with IPv4 networks. The paper presents the performance of the deployed network in terms of delay in different segments of the network.
2.  Identifying the challenges of deploying a two-way communication between the wireless sensors and the Internet users, and implementing it in the WSN.
The rest of the paper is organized as follows: Section II presents related work and background. Section III describes the system architecture. In section IV, the deployment of the system is highlighted and section V presents relevant experiments evaluating the system. The paper is concluded by section VI.
Background and Literature Review
================================
IoT is a new Internet paradigm based on the fact that there will be many more things than humans connected to the Internet. This means that machines/things will be able to communicate autonomously, without the need to interact with human beings, thus making them the major entity generating data in the Internet. Currently, there are already over 12.5 billion things connected to the Internet [@ref1], and they will surpass humans in terms of the data they generate. In IoT, M2M will be the main communication standard between the Internet things [@ref3].\
Besides, ubiquitous and pervasive computing are key technologies significantly contributing to the advent of IoT. They bring computing all the way to physical objects, which can communicate in the Internet by producing, consuming, and computing information, through RFID, mobile computing, and WSNs among other technologies [@ref4; @ref5]. RFID tags bear electronic identification data of different physical objects (e.g., goods, cars, and even wearable sensors), and can even be used to identify people. RFIDs consume very little energy by reflecting signals received from RFID readers. On the other hand, mobile and handheld devices (e.g., smartphones and PDAs) are changing the way we access and interact with things in the Internet, and are turning the Internet into a ubiquitous service. Along with cloud computing, the capabilities of these devices will be further boosted by providing storage and computing power in the cloud.\
WSNs are a prevalent instance of ubiquitous computing that enables small things to connect to the Internet. The sensory data will make up a significant portion of the information flowing in the Internet. In particular, Smart Grids are one of the applications where different parts (things) of the grid (e.g., smart meters) communicate in order to optimize energy consumption as well as energy management in the Grid. SGs are heterogeneous by nature, as they feed power to different consumers (homes, commercial buildings, factories, etc.) and therefore use heterogeneous technologies such as WiMax, WiFi, Zigbee, WSN, 6LoWPAN, M2M and IP Multimedia Subsystem (IMS) [@ref6].\
Zigbee is one of the technologies used in WSNs and is being adopted as a standard in SGs for home area networks, connecting appliances, equipment, and producers of energy such as solar panels so that they can communicate information. IPv6, as part of the wireless sensor network, brings numerous advantages. However, there are challenges that had to be addressed for IPv6 to be implemented on top of Zigbee, namely fragmentation, frame size, addressing, security [@ref7], and IPv4/IPv6 translation. This paper introduces a real test bed that includes the whole TCP/IP protocol stack as implemented by the Berkeley Low-power IP stack (BLIP) [@ref7] and that takes into consideration most of those issues. The test bed implements the two-way communication needed by smart grids, and we measure the performance of such a system. El Kouche et al. [@ref8] investigate the widely used WSN architectures and technologies and highlight the most suitable architectures for WSN deployment in IoT. In \[5\], the authors present the requirements for deploying an IoT gateway and propose an architecture for the corresponding system to be deployed in the gateway. An architecture similar to the one presented in this paper uses the Global System for Mobile Communications (GSM) to communicate information [@ref9].\
From the architectural point of view, integrating SGs into IoT imposes the stringent need of addressing heterogeneity. An IoT gateway system based on Zigbee and GPRS protocols helps partly in dealing with the heterogeneity problem and therefore enables the WSNs to communicate with the mobile telecommunication network [@ref10]. Another solution to the heterogeneity problem is proposed with a new light-weight web service transport protocol called Lean Transport Protocol (LTP) [@ref11] that allows transparent exchange of web service messages between all kinds of devices. This protocol is platform-independent and uses low-energy communication. Other researchers claim that the major source of heterogeneity arises from the fact that there are different types of WSN devices (e.g. Micaz, Mica2, and Telosb) that do not use the same standards \[9\]. A proposition has been made to migrate WSN communication towards an ”all-IP” mode. This would eliminate most of the heterogeneity. A relevant architecture is sketched, and is capable of converting all the WSNs, new and legacy, to support IPv6 [@ref12].\
6LoWPAN has been used to connect even the smallest devices to the Internet. 6LoWPAN is based on the idea that all things should support the TCP/IP protocol stack and thus join the IoT. In order to build the TCP/IP protocol stack into these devices, multiple aspects of IP need to be addressed; in particular, the IPv6 Maximum Transmission Unit (MTU) must be at least 1280 bytes, whereas the Zigbee MTU is only 127 bytes. This means that IPv6 packets cannot be directly encapsulated within Zigbee frames. Another issue is related to addressing with 128-bit addresses; in 6LoWPAN, IPv6 addressing is performed hierarchically. The main purpose behind this is to identify the packet’s destination network ID before forwarding it to the network. These are just two instances of a large set of issues that 6LoWPAN solves in order to enable low-power devices to join the IoT. TinyOS [@ref13], which is a common operating system for WSNs, comes with a lightweight implementation of 6LoWPAN called BLIP.\
This project makes use of BLIP to provide the TCP/IP protocol stack to the WSN. 6LoWPAN is used at different parts of the system and more details about these parts will be provided in the system architecture section.
System Architecture
===================
The proposed system for integrating WSNs into IoT is composed of four essential blocks:
- Wireless Sensor Network (WSN)
- Gateway Server
- Middle-ware
- Mobile client
{width="\textwidth" height="7cm"}
The WSN uses Zigbee as the communication medium and uses IPv6 in the network layer. However, the communication between the gateway server, the middle-ware, and the mobile client is based on IPv4 over Wi-Fi. This architecture enables any device within the system to communicate with any other device independently of the communication medium used (e.g., Zigbee or Wi-Fi) or the network protocol used (e.g., IPv4 or IPv6). In Figure \[fig:gen\_architecture\], the system architecture is presented. It depicts the four main components of the system along with the relevant subcomponent. This figure also shows the communication flow between the different components of the system.\
Figure \[fig:network\_diagram\] presents the deployed network diagram, and depicts the different components of the system as well as the interconnections that exist between these different components.\
![Network Diagram[]{data-label="fig:network_diagram"}](network_diagram.jpg)
Wireless Sensor Network
-----------------------
The WSN test bed is composed of seven motes of type Crossbow MPR2600 [@ref14]. From the network topology perspective, the WSN is a multi-hop mesh network that uses the Ad hoc On-Demand Distance Vector (AODV) routing protocol [@ref15]. It is an ad-hoc network, whereby motes can be placed anywhere, without a preset topology, as long as there is at least one wireless link for communication. These communication links are created and refreshed dynamically between the different motes of the WSN, provided that their frames can reach the destination. In addition to the seven motes in the test bed, there is an additional mote that plays the role of a sink connecting the WSN motes to the gateway server machine. The connection between the sink and the gateway is a Universal Serial Bus (USB) connection.
Gateway Server
--------------
The gateway server is a key component of the system. It sniffs and extracts Wi-Fi frames, transforms them into Zigbee frames by replacing the appropriate frame headers, and forwards them to the sink. In the other direction, the gateway server receives Zigbee frames containing IP packets; these are encapsulated in USB frames by the sink and then extracted at the gateway server to fit into Wi-Fi frames.
The gateway server is also responsible for receiving IPv4 packets and transforming them into IPv6 and vice versa. Besides, it has other functionalities such as receiving sensor data from the WSN and forwarding them to the middle-ware. In case the link between the gateway server and the middle-ware is lost, the gateway server stores the received data in a temporary data store and communicates this data once the link is up again.
Middle-ware
-----------
The middle-ware is a software component that is used to mask the heterogeneity in the system, thus rendering it transparent to external users. The middle-ware also provides automation mechanisms in order to control and reduce the energy consumption. Its main features are the ability to receive data, filter it, and transform and store it in a coherent fashion, so that it can be used intelligently to reduce consumption. In addition, the middle-ware provides an interface to end users via a set of web services that enable them to access all needed information (e.g., real-time and periodic consumption levels) and issue commands to control the appliances through the WSN.
Mobile Client
-------------
The mobile client is an application deployed on Android phones that enables users to access the real-time energy consumption of their homes. Besides, it remotely controls the appliances by turning them On and Off. When it wants to turn an appliance On or Off, the mobile client sends a command directly to the mote responsible for controlling the appliance, addressing the mote using its “virtual” IPv4 address. The latter is virtual since only IPv6 addresses are supported within the WSN. A virtual IPv4 address is reserved and assigned for each mote, and the translation is made at the gateway level.\
Now that all the components have been introduced, the flow of information can be explained. As stated above, any component in the system can communicate with any other independently of the data link layer or network layer technology used.
Data Flows
----------
One of the main goals of this paper is to build a two-way communication between the client and sensor nodes.\
Figure 3 depicts the data flow diagram corresponding to a mobile user sending a command to the WSN. The mobile client is connected to a Wi-Fi network that uses IPv4, whereas the WSN uses IPv6. Therefore, there must be a process that controls, tracks and transforms the incoming and outgoing packets. The client starts by sending an IPv4 packet to the virtual IPv4 address of the mote. The gateway receives it and translates the virtual IPv4 destination address into the real IPv6 address of the mote, setting the source address to the virtual IPv6 address of the mobile client. A new IPv6 packet is thus created, carrying the payload of the original packet. This new IPv6 packet is forwarded to the wireless sensor network using an IPv6-over-USB tunnel that encapsulates the packet into a USB frame and communicates it to the mote sink. The latter extracts the IPv6 packet from the USB frame and encapsulates it into a Zigbee frame. Once the Zigbee frame arrives at the destination mote, the TCP datagram is extracted and passed to the TCP server port on the mote, which reads the message and executes it by turning the appliance On/Off using I2C (Inter-Integrated Circuit) [@ref16].\
![Data flow diagram for the mobile client sending On/Off commands[]{data-label="fig:data_flow_command"}](data_flow_command.jpg)
In the other direction, the mote periodically sends sensory data to the middle-ware. The relevant communication passes through several steps, which are depicted in Figure 4: the mote periodically reads sensory data from the sensor, transforms the data and communicates it. To send it, the mote client connects to a TCP server hosted at the gateway server. An IPv6 packet is encapsulated in a Zigbee frame, which is forwarded to the mote sink; the sink extracts the IPv6 packet, encapsulates it into a USB frame and forwards it to the gateway, where the TCP datagram is extracted. Once the sensory data is at the gateway, it is communicated to the middle-ware. If the link is down, the sensory data is temporarily stored in a database hosted in the gateway. Once the link is up again, all the stored data is sent to the middle-ware and cleared from the database.
![Data flow diagram depicting the sending of sensory data[]{data-label="fig:data_flow_sensing"}](data_flow_sensing.jpg)
System Deployment
=================
To meet the constraints of a system capable of providing two-way communication between any host and any mote in the WSN, the following components have been deployed:
- Mote Programming
- Mote sink Packet Forwarding
- IPv4/IPv6 Gateway
- Network Gateway Sensor Data Server
Next sections highlight these components.
Mote Programming
----------------
Each mote within the WSN is equipped with an electric current transformer attached to the data acquisition board, through which data is read, transformed to the appropriate format and sent to the network host. Besides, the appliance is attached to the mote through the relay pins of the data acquisition board on the mote. In other words, the mote can control the electricity going to each appliance and can allow or block it. This means that one can control the appliance by using some of the functionalities provided by the mote. From the mote’s perspective, two parts are implemented within its TinyOS program: a TCP server that is used to receive On/Off requests in order to control the appliance, and a TCP client used to send sensor data. Once the program is installed, an IPv6 address is passed to the installation routine in order to assign a static IPv6 address to the mote on which the program is cross-compiled and installed. The TCP server and the TCP client work in parallel, and each one’s traffic is handled separately:
### TCP Server
It is an important component of the mote’s program. To control the appliance, one must connect to the TCP server and send requests. As a mote may control more than one appliance, we identify the appliance by a unique ID and send a zero to turn it Off or a one to turn it On.
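As an illustration, the exchange with the mote’s TCP server could look like the following Python sketch of a client issuing such a request; the two-byte message layout (one byte for the appliance ID followed by a 0/1 flag) and the port number are our own assumptions, since the exact wire format is not specified here.

```python
import socket

def send_switch_command(mote_addr: str, appliance_id: int, turn_on: bool,
                        port: int = 5000) -> None:
    """Connect to the mote's TCP server and send an On/Off request.

    The payload layout (appliance ID byte followed by a 0/1 flag byte)
    and the port number are illustrative assumptions only.
    """
    payload = bytes([appliance_id, 1 if turn_on else 0])
    with socket.create_connection((mote_addr, port), timeout=5) as sock:
        sock.sendall(payload)

if __name__ == "__main__":
    # Example: ask the mote reachable through its virtual IPv4 address
    # to switch appliance 1 on.
    send_switch_command("192.168.1.101", appliance_id=1, turn_on=True)
```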
### TCP Client
It serves as a means to send sensor data to the gateway in a reliable way. Once the mote is turned On, the client connects to the TCP server at the gateway. The consumption data is then sensed periodically (once per second) and sent to the TCP server, which processes the sensor data.
Mote Sink Packet Forwarding
---------------------------
The mote sink packet forwarding module is a special program installed on a mote equipped with a USB port, which plays the role of a network interface card. The mote is attached to the gateway station and hosts this module. It communicates with the gateway using the USB protocol. On the gateway station, the network interface module is an IPv6-over-USB tunnel. This means that IPv6 packets destined for the sink are encapsulated within USB frames; once they arrive at the sink, the IPv6 packet is extracted and forwarded to the destination mote holding that IPv6 address. The other direction is fairly similar: when a mote wants to send an IPv6 packet to the outside world, the mote creates the packet and sends it to the sink, which forwards it by encapsulating it into a USB frame.
IPv4/IPv6 Gateway
-----------------
This is the most crucial component of the system. It addresses the “gatewaying” issue between IPv4 and IPv6 networks, i.e., between the Internet and the WSN. The WSN supports only IPv6, while other components such as the middle-ware and the mobile client do not necessarily have an IPv6 address, but we still want all the components to communicate independently of the IP technology used. To do so, we have created a network packet transformation program. This program basically converts IPv4 to IPv6 and vice versa. To do so, it assigns virtual IPv4 addresses to IPv6 address holders and virtual IPv6 addresses to IPv4 address holders. With such a program, each player in the network has both an IPv4 and an IPv6 address; still, it is aware only of the one that is assigned to it. The other, “virtual” address is known only at the level of the program installed at the gateway station between the WSN and the outside world.\
The flow of information works as follows: when a station wants to send a request to a mote, it sends an IPv4 packet holding the request to the mote. The *packet transformation program* then extracts the TCP datagram and creates a new IPv6 packet, with the virtual IPv6 address of the host as the source address and the real IPv6 address of the mote as the destination. Afterwards, the TCP datagram is appended to the newly created packet and sent to the mote sink packet forwarding component, which is seen by the gateway as a network interface card. Still, this leads to a complication that needs to be handled separately.\
The issue is that the gateway program would have to keep track of the responses to requests in order to forward them correctly to their destination. To solve this problem, an algorithm has been created whose sole role is to mechanically compute the IPv6 address of a host based on its IPv4 address and vice versa. This algorithm is based on a mapping function whose primary feature is bijectivity. This means that any IPv4 address is uniquely mapped to one and only one IPv6 address and vice versa. Thus, whenever a request arrives, the source and destination addresses are converted using this algorithm, hence avoiding request-response tracking altogether.
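A minimal sketch of such a bijective mapping is given below in Python; the specific /96 IPv6 prefix used to embed the 32-bit IPv4 address (and, conversely, the reuse of the last 32 bits of an address on the IPv6 side as the virtual IPv4 address) is our own illustrative choice, not necessarily the scheme used in the test bed.

```python
import ipaddress

# Illustrative /96 prefix reserved for embedding 32-bit IPv4 addresses
# (an assumption, not the value used in the test bed).
PREFIX = int(ipaddress.IPv6Address("2001:db8:64::"))

def ipv4_to_virtual_ipv6(v4: str) -> str:
    """Embed an IPv4 address in the low 32 bits of the reserved /96 prefix."""
    return str(ipaddress.IPv6Address(PREFIX | int(ipaddress.IPv4Address(v4))))

def virtual_ipv6_to_ipv4(v6: str) -> str:
    """Inverse mapping: recover the IPv4 address from the low 32 bits."""
    return str(ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF))

# The two functions are mutually inverse, so no per-request state is needed:
assert virtual_ipv6_to_ipv4(ipv4_to_virtual_ipv6("192.168.1.42")) == "192.168.1.42"
```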
Evaluation
==========
To evaluate the system, we tracked the extent to which the system is able to operate reliably, and offering an acceptable level of performance. Two experiments were conducted to measure the system’s performance.\
![Delay and Jitter variation with increasing traffic[]{data-label="fig:data_flow_command"}](delay_jitter.JPG)
In the first experiment, the behaviour of the system is recorded while each mote reads sensory data every second and generates traffic in the WSN. We measure the delay and observe how it varies with the traffic intensity. The routing protocol in principle gives priority to routing control packets over data packets; therefore, this priority might affect the network’s delay. In Figure 5, we clearly notice that the number of motes in the network does not significantly affect the average delay. On the other hand, the number of motes in the network significantly affects the jitter. The jitter is more sensitive to the change in traffic because there are time intervals where the network’s load is higher than at other times, which makes the jitter grow while the delay remains constant.\
In the second experiment, we measure the contribution of the Gateway Packet Transformation process to the overall communication delay; in other words, how much delay is added by the packet transformation process? The results present the average delay and jitter computed over the elapsed time from the sniffing of a packet by the Gateway Packet Transformation process to its transformation and sending to the recipient. This was carried out over 200 packets that were sniffed and transformed by the process.\
![Delay Frequency Histogram[]{data-label="fig:data_flow_command"}](histogram.JPG)
The elapsed time of the transformation process is measured in microseconds. The average delay is about 100 microseconds, whereas the jitter is around 30 microseconds. This means that the processing time varies from a few microseconds to at most 150 microseconds, depending on the machine’s load. The distribution of the delay is shown in the histogram depicted in Figure 6. From this figure one can conclude that the delay is normally distributed. In addition, this experiment shows that the Gateway Packet Transformation process does not contribute significantly to the overall delay.
Conclusion
==========
In this paper, we presented the subtleties of integrating wireless sensor networks into the Internet in order to control electrical appliances. We delineated the architecture for deploying a real-world test bed. The presented architecture is simple and can easily be adopted for similar deployments. We highlighted the relevant problems, mainly IPv4-to-IPv6 gatewaying.\
As future work, we intend to further develop the middleware component to support heterogeneous wireless sensor motes, and thus not limit the deployment to specific motes, e.g., TinyOS ones.
---
abstract:
- 'Ce texte est le deuxième article sur une généralisation de système d’Euler de Kato. Il est consacré à la construction d’une famille de systèmes d’Euler de Kato sur la courbe de Hecke, qui interpole les systèmes d’Euler de Kato associés aux formes modulaires paramétrées par la courbe de Hecke cuspidale. Par ailleurs, on explique la construction d’une famille de distributions sur $\Z_p$ sur la courbe de Hecke cuspidale à partir de cette famille de systèmes d’Euler de Kato; cette distribution fournit une fonction L $p$-adique en $2$-variable qui interpole les fonctions L $p$-adiques des formes modulaires précédentes.'
- 'This article is the second one on the generalization of Kato's Euler system. Its main subject is the construction of a family of Kato Euler systems over the cuspidal eigencurve, which interpolates the Kato Euler systems associated to the modular forms parametrized by the cuspidal eigencurve. We also explain how to use this family of Kato Euler systems to construct a family of distributions on $\Z_p$ over the cuspidal eigencurve; this distribution gives us a two-variable $p$-adic L-function which interpolates the $p$-adic L-functions of modular forms.'
author:
- 'Shanwen <span style="font-variant:small-caps;">WANG</span>'
title: 'Le système d’Euler de Kato en famille (II)[^1]'
---
Introduction
=============
Introduction
------------
In a series of articles dating from the $80$s, Hida showed that ordinary modular forms live in $p$-adic families and that the weight varies $p$-adically. In $1995$, Coleman showed that the same holds for non-ordinary overconvergent modular forms of finite slope. Coleman and Mazur [@CM] then constructed a geometric object $\fC$, called the eigencurve, which parametrizes overconvergent modular forms of finite slope. Moreover, they also constructed a family of Galois representations of rank $2$ over the eigencurve. We denote by $\fC^0$ the closed subcurve of $\fC$, called the cuspidal eigencurve, which parametrizes cuspidal overconvergent modular forms of finite slope, and by $\tilde{\fC}^0$ the normalization of $\fC^0$. Our main result (cf. Theorem \[principal\] below) is that the $p$-adic L-function of a modular form $f$ varies analytically with $f$ on $\tilde{\fC}^0$:
We choose a Dirichlet character $\chi$ modulo $N$ with $N$ sufficiently large[^2] and $(N,p)=1$, which allows us to fix the periods by which one must divide the special values of the L-functions that we want to interpolate (cf. §\[cal\] for details). If $f\in \fC^0$ is a classical non-critical eigenform of tame level $\Gamma_1(N)$, one has a distribution $\mu_{f,\chi}$ on $\Z_p^*$ with values in $\ol{\Q}_p$, such that, for all $0\leq j\leq k-2 $ and every Dirichlet character $\eta$ modulo $p^m$ satisfying $\eta\chi(-1)=(-1)^{k-j-1}$, one has $\int_{\Z_p^*}\eta(x) x^{j}\mu_{f,\chi}= L(f\otimes\eta, j+1)$ up to multiplication by explicit factors (Euler factors, periods,$\cdots$), and which provides the $p$-adic L-function attached to $f$, by setting[^3] $$L_{p,\chi}(f,\kappa, s)=\int_{\Z_p^*}\kappa(x)\cdot \langle x\rangle^{s}\cdot \mu_{f,\chi},$$ if $\kappa$ is a locally analytic character of $\Z_p^*$ and $s\in \Z_p$.
\[principal\]If $x$ is a classical non-critical point of $\tilde{\fC}^0$, then there exist an affinoid open $X\subset \tilde{\fC}^0$ containing $x$ and a distribution $\mu_{X,\chi} $ on $\Z_p^*$ with values in $\cO(X)$, such that, for every point $f$ in the intersection of $X$ with the subset $Z$ of classical non-critical eigenforms of $\tilde{\fC}^0$, we have $$\mathrm{Ev}_f(\mu_{X,\chi})= C(f)\mu_{f,\chi},$$ where $C(f)$ is a constant in $\bar{\Q}_p^*$ depending on the form $f$.
\(1) There are at least three ways to construct $\mu_{f,\chi}$, corresponding to the different realizations of the motives associated to modular forms:
$\bullet$ The classical method uses the Betti realization, that is, the theory of modular symbols (Mazur-Swinnerton-Dyer [@MS], Manin [@Ma], Vishik [@Vi], Amice-Vélu [@Av], Mazur-Tate-Teitelbaum [@MTT], Stevens [@St], Pollack-Stevens [@SP] and [@SP1]...);
$\bullet$ A more recent method, corresponding to the de Rham realization, goes through the Rankin-Selberg method (Hida [@HD] and [@HD1] in the ordinary case, Panchishkin [@AP] in the general case);
$\bullet$ Kato’s method [@KK], via the $p$-adic étale realization, goes through the construction of an Euler system and uses Fontaine’s theory of $(\varphi,\Gamma)$-modules [@Fo] to deduce from it, via a variant of Perrin-Riou’s exponential map [@PR] and [@PR1], a distribution. Showing that this distribution is the desired one (i.e. that it interpolates the special values of the complex L-function of the modular form) requires comparing two explicit reciprocity laws and using the Rankin method as in Panchishkin’s approach.
\(2) Two-variable $p$-adic L-functions, in which one variable varies over a piece of $\tilde{\fC}^0$, have already been constructed by different methods, corresponding to the constructions of $\mu_{f,\chi}$ above:
$\bullet$ The strategy of Stevens [@St1] (unpublished work), Pollack-Stevens [@SP1] and Bellaïche [@Be] is to use the theory of overconvergent modular symbols; they succeed in constructing a two-variable $p$-adic L-function $L_p(x,s)$, where $x$ varies in a neighbourhood of a refined modular form (non-critical for Stevens, critical for Bellaïche) on the eigencurve;
$\bullet$ The strategy of Hida [@HD1] (for the ordinary family) and of Panchishkin [@AP] (for a family of fixed finite slope) is to use the Rankin-Selberg method in families;
$\bullet$ Emerton’s strategy [@EM] is to use completed cohomology and the Jacquet functor from the theory of locally analytic representations of $\GL_2(\Q_p)$;
$\bullet$ The strategy of Fukaya [@Fu] and of Delbourgo [@DD], in the ordinary case, goes through the deformation of Kato’s Euler systems via $K$-theory and via the theory of modular symbols respectively, and uses the Coleman series for $K_2$ and a big dual exponential map respectively to deduce a $p$-adic L-function over the ordinary family.
\(3) Our strategy takes as its starting point the work of Kato [@KK] and of Colmez [@PC1] (revisited by the author in $\cite{Wang}$) on Kato’s Euler system.
$\bullet$ In [@WangI], for $c,d\in \Z_p^*$, we constructed a deformation $z_{\Kato,c,d}(\nu_j)$ (cf. §3.3) of Kato’s Euler system over the weight space, following Kato’s construction, and defined a family of dual exponential maps which interpolates Kato’s dual exponential map and sends the family of Kato Euler systems to the product of a family of Eisenstein series with an Eisenstein series.
$\bullet$ In the present article, we construct a family of Kato Euler systems over $\fC^0$ (cf. §4) from $z_{\Kato,c,d}(\nu_j)$; we then use the theory of $(\varphi,\Gamma)$-modules in families ([@BC], [@KJX], [@Liu]) to deduce from it a distribution $\mu_{X,c,d,\chi}$ on $\Z_p$ with values in $\cO(X)$ (cf. Proposition \[principal\_vrai\]), where $X\subset\tilde{\fC}^0$ is an affinoid open as in the theorem. Dividing the restriction of $\mu_{X,c,d,\chi}$ to $\Z_p^*$ by an explicit factor (cf. formula (\[CD\])), we obtain the desired distribution, which is independent of the choice of $c,d$.
\(4) The theorem above shows that there exists a two-variable $p$-adic L-function $L_p(x,s)$, where $x$ varies over $\tilde{\fC}^0$, interpolating the $p$-adic L-functions of modular forms. By analytic continuation, we deduce that Theorem \[principal\] remains valid at critical classical cuspidal points.
The plan of this article is as follows. The proof consists of the two main steps mentioned in the remark above, which correspond to chapters §4 and §5. These two steps rely on two preparatory chapters (§2 and 3): in chapter §2, we recall the theory of $(\varphi,\Gamma)$-modules and triangulations in families; in chapter §3, we recall the construction of the family of Kato Euler systems over the weight space and its variants, to which we will apply the projection of the Kato Euler system onto $\fC^0$ (cf. §4).\
Acknowledgements {#remerciements .unnumbered}
----------------
This work builds on works of Ash-Stevens, Berger-Colmez, Bellaïche, Chenevier, Colmez, Kato, Kedlaya-Pottharst-Xiao and Liu, and I wish to express my gratitude to them. During the preparation of this article I benefited from communications and discussions with F. Andreatta, J. Bellaïche, D. Benois, P. Colmez, A. Iovita, R. Liu, G. Stevens, J. Tong and L. Xiao. I would also like to thank Prof. Y. Tian and Prof. S. Zhang and the Morningside Center in Beijing, as well as Prof. H. Chen and Prof. L. Fu and the CIM in Tianjin, for their hospitality; the talks I gave at conferences at these two places, in August 2012 and June 2013, greatly helped me to clarify my ideas. I also thank the Cariparo Eccellenza Grant, the project SFB 45 and the project SFB 1085 for funding my stays in Padua, Italy, from 2011 to 2013, and in Essen and Regensburg, Germany, in 2014. This article was written during my stays at the IMJ and the IHES in 2013, and at the CRM in Montréal in 2015. I wish to thank these institutions for providing excellent working conditions.
Notation
---------
We denote by $\overline\Q$ the algebraic closure of $\Q$ in $\C$, and we fix, for every prime number $p$, an algebraic closure $\overline\Q_p$ of $\Q_p$, as well as an embedding of $\overline\Q$ into $\overline\Q_p$.
If $N\in\N$, we denote by $\zeta_N$ the $N$-th root of unity $e^{2i\pi/N}\in\overline\Q$, by $\Q^{\rm cycl}$ the cyclotomic extension of $\Q$, the union of the $\Q(\zeta_N)$ for $N\geq 1$, and by $\Q^{\rm cycl}_p$ the cyclotomic extension of $\Q_p$, the union of the $\Q_p(\z_N)$ for $N\geq 1$.
### Adelic objects {#objets-adéliques .unnumbered}
Let $\cP$ be the set of prime numbers of $\Z$ and let $\hat{\Z}$ be the profinite completion of $\Z$, so that $\hat{\Z}=\prod_{p\in\cP}\Z_p$. Let $\A_f=\Q\otimes\hat{\Z}$ be the ring of finite adèles of $\Q$. If $x\in\A_f$, we write $x_p$ (resp. $x^{]p[}$) for the component of $x$ at $p$ (resp. away from $p$). Set $\hat{\Z}^{]p[}=\prod_{l\neq p}\Z_l$, so that $\hat{\Z}=\Z_p\times\hat{\Z}^{]p[}$. This induces the following decompositions: for all $d\geq 1$, $$\bM_d(\A_f)=\bM_d(\Q_p)\times\bM_d(\Q\otimes\hat{\Z}^{]p[})
\text{ and }
\GL_d(\A_f)=\GL_d(\Q_p)\times\GL_d(\Q\otimes\hat{\Z}^{]p[}).$$ We define the following subsets of $\A_f$ and $\bM_2(\A_f)$: $$\begin{aligned}
\hat{\Z}^{(p)}=\Z_p^{*}\times\hat{\Z}^{]p[} &\text{ and }
\bM_{2}(\hat{\Z})^{(p)}=\GL_2(\Z_p)\times\bM_2(\hat{\Z}^{]p[}), \\
\A_f^{(p)}=\Z_p^{*}\times(\Q\otimes\hat{\Z}^{]p[})
&\text{ and }
\bM_{2}(\A_f)^{(p)}=\GL_2(\Z_p)\times\bM_2(\Q\otimes\hat{\Z}^{]p[}).\end{aligned}$$
### Group actions {#actions-de-groupes .unnumbered}
Let $X$ be a locally profinite topological space and let $V$ be a $\Z$-module. We denote by $\LC_c(X,V)$ the module of locally constant functions on $X$ with values in $V$ whose support is compact in $X$. We denote by $\fD_{\alg}(X,V)$ the set of algebraic distributions on $X$ with values in $V$, that is, of $\Z$-linear maps from $\LC_c(X,\Z)$ to $V$. We write $\int_X\phi\mu$ for the value of $\mu$ on $\phi$, where $\mu\in\fD_{\alg}(X,V)$ and $\phi\in \LC_c(X,\Z)$.
Let $G$ be a locally profinite group acting continuously on the right on $X$ and on $V$. We endow $\LC_c(X,\Z)$ and $\fD_{\alg}(X,V)$ with right actions of $G$ as follows:
if $g\in G, x\in X,\phi\in\LC_c(X,\Z), \mu\in\fD_{\alg}(X,V),$ then $$\label{actiondis} (\phi*g)(x)=\phi(x*g^{-1}) \text{ and } \int_{X}\phi(\mu*g)=\bigl(\int_{X}(\phi*g^{-1})\mu\bigr)*g.$$ If $M$ is a topological right $G$-module, we denote by $\rH^i(G,M)$ the $i$-th continuous cohomology group of $G$ with values in $M$. If $X$ is moreover endowed with a left action of $G$ (denoted $(g,x)\mapsto g\star x$) commuting with the right action of $G$, the modules $\rH^i(G, \fD_{\alg}(X,M))$ are naturally left $G$-modules.
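As an elementary check that the formulas (\[actiondis\]) indeed define right actions, note that for $g,h\in G$ and $x\in X$ one has $$\bigl((\phi*g)*h\bigr)(x)=(\phi*g)(x*h^{-1})=\phi\bigl((x*h^{-1})*g^{-1}\bigr)=\phi\bigl(x*(gh)^{-1}\bigr)=\bigl(\phi*(gh)\bigr)(x),$$ and combining this identity with the defining formula for $\mu*g$ gives $(\mu*g)*h=\mu*(gh)$ in the same way.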
### Modular forms {#formes-modulaires .unnumbered}
Let $A$ be a subring of $\C$ and let $\Gamma$ be a finite index subgroup of $\SL_2(\Z)$. We denote by $\cM_k(\Gamma,\C)$ the $\C$-vector space of modular forms of weight $k$ for $\Gamma$, and by $\cM_{k}(\Gamma,A)$ the sub-$A$-module of $\cM_k(\Gamma,\C)$ of modular forms whose $q$-expansion has coefficients in $A$. We set $\cM(\Gamma,A)=\oplus_{k=0}^{+\infty}\cM_k(\Gamma,A)$, and we denote by $\cM_k(A)$ (resp. $\cM(A)$) the union of the $\cM_k(\Gamma,A)$ (resp. $\cM(\Gamma,A)$), where $\Gamma$ runs over all finite index subgroups of $\SL_2(\Z)$. We define in the same way: $$\cM^{\con}_k(A)=\bigcup\limits_{\substack{\Gamma \text{ congruence subgroup} }}\cM_k(\Gamma,A)\text{ and }
\cM^{\con}(A)=\bigcup_k\cM_k^{\con}(A).$$
Let $K$ be a subfield of $\C$ and let $\ol{K}$ be the algebraic closure of $K$. We denote by $\Pi_K$ the group of automorphisms of the graded $K$-algebra $\cM(\bar{K})$ over $\cM(\SL_2(\Z),K)$; it is a profinite group. If $f\in\cM(\ol{K})$, the Galois group $\cG_K$ acts on the coefficients of the $q$-expansion of $f$; this provides a section of $\Pi_K\ra \cG_K$, denoted $\iota_K$.
The group of automorphisms of $\cM^{\con}(\Q^{\cycl})$ over $\cM(\SL_2(\Z),\Q^{\cycl})$ is the group $\SL_2(\hat{\Z})$, the profinite completion of $\SL_2(\Z)$ with respect to its congruence subgroups. On the other hand, if $f\in\cM^{\con}(\Q^{\cycl})$, the group $\cG_{\Q}$ acts on the coefficients of the $q$-expansion of $f$ through its quotient $\Gal(\Q^{\cycl}/\Q)$, which is isomorphic to $\hat{\Z}^{*}$ via the cyclotomic character $\chi_{\cycl}$. We denote by $H$ the group of automorphisms of $\cM^{\con}(\Q^{\cycl})$ over $\cM(\SL_2(\Z),\Q)$. The subalgebra $\cM^{\con}(\Q^{\cycl})$ is stable under $\Pi_{\Q}$, which acts through $H$. The group $H$ is isomorphic to $\GL_2(\hat{\Z})$ and we have the following commutative diagram of groups (cf. for example [@Wang Theorem 2.2]): $$\label{diagram}
\xymatrix{
1\ar[r]&\Pi_{\bar{\Q}}\ar[r]\ar[d]&\Pi_{\Q}\ar[r]\ar[d]&\cG_{\Q}\ar[r]\ar[d]^{\chi_\cycl}\ar@{.>}@/^/[l]^{\iota_\Q}&1\\
1\ar[r]&\SL_{2}(\hat{\Z})\ar[r]&\GL_2(\hat{\Z})\ar[r]^{\det}&\hat{\Z}^{*}\ar[r]\ar@{.>}@/^/[l]^{\iota}&1 },$$ where the section $\iota_\Q$ of $\cG_{\Q}$ into $\Pi_{\Q}$ described above sends $u\in \hat{\Z}^{*}$ to the matrix $(\begin{smallmatrix}1&0\\0&u\end{smallmatrix})\in \GL_2(\hat{\Z})$.
### Laurent series rings {#anneaux-de-séries-de-laurent .unnumbered}
Fix a finite extension $L$ of $\Q_p$. The cyclotomic character $\chi_{\cycl}$ induces an isomorphism from $\Gamma=\Gal(\Q_p(\z_{p^\infty})/\Q_p)$ onto $\Z_p^*$. Let $\cR^{+}$ be the ring of analytic functions on the disc $v_p(T)>0$ with coefficients in $L$, let $\cE^+$ be the subring of $\cR^+$ of bounded elements, let $\cR$ be the ring of analytic functions on an annulus $0<v_p(T)\leq r$, where $r>0$ depends on the element under consideration (the Robba ring), let $\cE^{\dag}$ be the subring of $\cR$ of bounded elements (it is a field), and let $\cE$ be the completion of $\cE^\dag$ for the $p$-adic valuation. We endow these rings with continuous actions of $\Gamma$ and with a Frobenius $\varphi$, commuting with one another, by setting $\varphi(T)=(1+T)^p-1$ and $\gamma(T)=(1+T)^{\chi_{\cycl}(\gamma)}-1$ for $\gamma\in \Gamma$.
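The commutation of $\varphi$ and $\Gamma$ can be seen directly: both are continuous ring endomorphisms determined by their value on $T$, and for $\gamma\in\Gamma$ we have $$\gamma(\varphi(T))=(1+\gamma(T))^{p}-1=(1+T)^{p\chi_{\cycl}(\gamma)}-1=(1+\varphi(T))^{\chi_{\cycl}(\gamma)}-1=\varphi(\gamma(T)),$$ so that $\gamma\circ\varphi=\varphi\circ\gamma$ on each of the rings above.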
Let $C$ be a pro-$p$-group isomorphic to $1+p\Z_p$. If $c$ is a topological generator of $C$, the completed group algebra $\Lambda_C$ of $C$ is isomorphic to $\Z_p[[c-1]]$. We define the ring $\cR^+(C)$ by replacing the variable $T$ appearing in the definition of $\cR^+$ by $c-1$. If $C_n$ is the closed subgroup of $C$ of index $p^n$, we have an isomorphism $\Lambda_C\otimes_{\Lambda_{C_n}}\cR^+(C_n)\cong \cR^+(C)$.
Let $H$ be a group isomorphic to $\Z_p^*$. We denote by $H_d$ the subgroup of $H$ corresponding to $1+p^d\Z_p$. We define the ring $\cR^+(H)$ as the tensor product $\Lambda_H\otimes\cR^+(H_d)$, which is independent of the choice of $H_d$.
[^1]: 2010 Mathematics Subject Classification. 11F85, 11F67, 11G40, 11R33, 11S80, 14G10, 14G35
[^2]: This is a technical condition (cf. §5.2.2 for more details) ensuring that the periods can be fixed using only a single character $\chi$.
[^3]: $\langle\cdot\rangle$ is the projection map $\Z_p^*\ra 1+p\Z_p$.
---
abstract: 'In this paper we study the topological invariant ${{\sf {TC}}}(X)$ reflecting the complexity of algorithms for autonomous robot motion. Here, $X$ stands for the configuration space of a system and ${{\sf {TC}}}(X)$ is, roughly, the minimal number of continuous rules which are needed to construct a motion planning algorithm in $X$. We focus on the case when the space $X$ is aspherical; then the number ${{\sf {TC}}}(X)$ depends only on the fundamental group $\pi=\pi_1(X)$ and we denote it ${{\sf {TC}}}(\pi)$. We prove that ${{\sf {TC}}}(\pi)$ can be characterised as the smallest integer $k$ such that the canonical $\pi\times\pi$-equivariant map of classifying spaces $$E(\pi\times\pi) \to E_{{\mathcal D}}(\pi\times\pi)$$ can be equivariantly deformed into the $k$-dimensional skeleton of $E_{{\mathcal D}}(\pi\times\pi)$. The symbol $E(\pi\times\pi)$ denotes the classifying space for free actions and $E_{{\mathcal D}}(\pi\times\pi)$ denotes the classifying space for actions with isotropy in a certain family ${{\mathcal D}}$ of subgroups of $\pi\times\pi$. Using this result we show how one can estimate ${{\sf {TC}}}(\pi)$ in terms of the equivariant Bredon cohomology theory. We prove that ${{\sf {TC}}}(\pi) \le \max\{3, {{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi)\},$ where ${{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi)$ denotes the cohomological dimension of $\pi\times\pi$ with respect to the family of subgroups ${{\mathcal D}}$. We also introduce a Bredon cohomology refinement of the canonical class and prove its universality. Finally we show that for a large class of [*principal*]{} groups (which includes all torsion free hyperbolic groups as well as all torsion free nilpotent groups) the essential cohomology classes in the sense of Farber and Mescher [@FM] are exactly the classes having Bredon cohomology extensions with respect to the family ${{\mathcal D}}$.'
address:
- |
School of Mathematical Sciences\
Queen Mary, University of London\
London, E1 4NS\
United Kingdom
- |
Institute of Pure and Applied Mathematics\
University of Aberdeen\
Aberdeen AB24 3UE\
United Kingdom
- |
Department of Mathematics\
Cleveland State University\
Cleveland OH 44115\
U.S.A.
- |
Department of Mathematics\
Cleveland State University\
Cleveland OH 44115\
U.S.A.
author:
- Michael Farber
- Mark Grant
- Gregory Lupton
- John Oprea
title: Bredon cohomology and robot motion planning
---
Introduction
============
[*The topological complexity*]{}, ${{\sf {TC}}}(X)$, is a numerical homotopy invariant of a path-connected topological space $X$, originally introduced in [@Far03] (see also [@Finv]) which is motivated by the motion planning problem of robotics. Roughly, ${{\sf {TC}}}(X)$ is the minimal number of continuous rules which are needed to construct an algorithm for autonomous motion planning of a system having $X$ as its configuration space.
To give more detail, assume that a system (robot) has to be programmed to move autonomously from any initial state to any final state. Let $X$ denote the configuration space of the system; points of $X$ represent states of the system and continuous paths in $X$ represent motions of the system. [*A motion planning algorithm*]{} is a function which associates with any pair of states $(A, B)\in X\times X$ a continuous motion of the system starting at $A$ and ending at $B$. In other words, a motion planning algorithm is a section of the path fibration $$\begin{aligned}
\label{fibration}
p: X^I\to X\times X, \quad p(\gamma) = (\gamma(0), \gamma(1)).\end{aligned}$$ Here $X^I$ denotes the space of all continuous paths $\gamma: I=[0, 1]\to X$ equipped with the compact-open topology. It is easy to see that the fibration (\[fibration\]) admits a continuous section if and only if $X$ is contractible [@Far03]. [*The topological complexity*]{} ${{\sf {TC}}}(X)$ is an integer (see Definition \[def1\] below) reflecting the complexity of this fibration. It has several different characterisations, see [@Far06]. Intuitively, ${{\sf {TC}}}(X)$ is a measure of the navigational complexity of $X$ viewed as the configuration space of a system. ${{\sf {TC}}}(X)$ is similar in spirit to the classical Lusternik - Schnirelmann category ${{\sf {cat}}}(X)$. The invariants ${{\sf {TC}}}(X)$ and ${{\sf {cat}}}(X)$ are special cases of a more general notion of genus of a fibration introduced by A. Schwarz [@Sv66]. A recent survey of the concept ${{\sf {TC}}}(X)$ and robot motion planning algorithms in practically interesting configuration spaces can be found in [@Frecent].
One of the main properties of ${{\sf {TC}}}(X)$ is its [*homotopy invariance*]{} [@Far03], i.e. ${{\sf {TC}}}(X)$ depends only on the homotopy type of $X$. This property is helpful for the task of computing ${{\sf {TC}}}(X)$ in various examples since cohomological tools can be employed. In the case when the configuration space $X$ is [*aspherical*]{}, i.e. $\pi_i(X)=0$ for all $i>1$, the number ${{\sf {TC}}}(X)$ depends only on the fundamental group $\pi=\pi_1(X)$ and it was observed in [@Far06] that one should be able to express ${{\sf {TC}}}(X)$ in terms of the algebraic properties of the group $\pi$ alone. This remark justifies the notation ${{\sf {TC}}}(\pi)$ for ${{\sf {TC}}}(K(\pi,1))$.
A similar question for the Lusternik - Schnirelmann category ${{\sf {cat}}}(X)$ was solved by S. Eilenberg and T. Ganea in 1957 in the seminal paper [@EG]. Their theorem relates ${{\sf {cat}}}(X)$ and the cohomological dimension of the fundamental group $\pi$ of $X$.
The problem of computing ${{\sf {TC}}}(\pi)$ as an algebraic invariant of the group $\pi$ has attracted the attention of many mathematicians and many interesting partial results have been obtained. It is easy to see that ${{\sf {TC}}}(\pi)=\infty$ if $\pi$ has torsion; therefore we shall always restrict our attention to torsion free groups $\pi$.
The initial papers [@Far03], [@Far06] contained computations of ${{\sf {TC}}}(X)$ for graphs, closed orientable surfaces and tori. In [@FarberYuzvinsky] the number ${{\sf {TC}}}(X)$ was computed for the case when $X$ is the configuration space of many particles moving on the plane without collisions. D. Cohen and G. Pruidze [@CohenPruidze] calculated the topological complexity of complements of general position arrangements and Eilenberg – MacLane spaces associated to certain right-angled Artin groups.
In a recent breakthrough, the topological complexity of closed non-orientable surfaces of genus $g \geq 2$ was computed by A. Dranishnikov for $g \geq 4$ in [@Dranish] and by D. Cohen and L. Vandembroucq for $g=2, 3$ in [@CohenVandem]. In both these articles it is shown that ${{\sf {TC}}}(\pi)$ attains its maximum, i.e. coincides with $\mathrm{cd}(\pi \times \pi)$, the cohomological dimension of the group $\pi\times\pi$.
The estimates of M. Grant [@Grant] give good upper bounds for ${{\sf {TC}}}(\pi)$ for nilpotent fundamental groups $\pi$. In [@GrantLuptonOprea], M. Grant, G. Lupton and J. Oprea proved that ${{\sf {TC}}}(\pi)$ is bounded below by the cohomological dimension of $A \times B$ where $A$ and $B$ are subgroups of $\pi$ whose conjugates intersect trivially. Using these estimates, M. Grant and D. Recio-Mitter [@GrantRecio] have computed ${{\sf {TC}}}(\pi)$ for certain subgroups of Artin’s braid groups.
Y. Rudyak [@Rudyak] went in the opposite direction by showing that for any pair of positive integers $k, \ell$ satisfying $k\le \ell \le 2k$ there exists a finitely presented group $\pi$ such that ${{\rm {cd}}}(\pi)=k$ and ${{\sf {TC}}}(\pi)= \ell.$
In a recent preprint [@FM] M. Farber and S. Mescher showed that for a large class of groups (including all torsion free hyperbolic groups) the topological complexity ${{\sf {TC}}}(\pi)$ equals either ${{\rm {cd}}}(\pi\times\pi)$ or ${{\rm {cd}}}(\pi\times\pi)-1$. Since hyperbolic groups are typical in many models of random groups this gives an answer with possible error 1 for a typical group. Note that ${{\rm {cd}}}(\pi\times\pi)$ is obviously an upper bound for ${{\sf {TC}}}(\pi)$ for any $\pi$.
In this paper we tackle the general problem of understanding ${{\sf {TC}}}(\pi)$ from a different direction, using the tools of equivariant topology. We are not interested in computing examples, but rather in reformulating the problem itself so that interactions with subjects such as group theory and homological algebra become apparent. This re-interpretation, together with previously computed examples of ${{\sf {TC}}}(\pi)$, provides illustrative examples for Bredon cohomology with respect to a family of subgroups.
Firstly we reduce the problem to a question about classifying spaces of families of subgroups. Namely, we define a special class ${{\mathcal D}}$ of subgroups of $G=\pi\times\pi$ and prove that the number ${{\sf {TC}}}(\pi)$ coincides with the smallest $k$ such that the canonical map of classifying spaces $$E(G)\to E_{{\mathcal D}}(G)$$ can be factored through a $G$-CW-complex of dimension $\le k$. Here $E(G)$ is the classical classifying space for free $G$-actions and $E_{{\mathcal D}}(G)$ is the classifying space for $G$-actions with isotropy subgroups in the class ${{\mathcal D}}$. Using this reduction we establish an upper bound $$\begin{aligned}
{{\sf {TC}}}(\pi)\le \max\{3, {{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi)\}.\end{aligned}$$ where ${{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi)$ denotes the cohomological dimension of $\pi\times\pi$ with respect to the family ${{\mathcal D}}$. Secondly, we use Bredon cohomology to produce lower bounds for ${{\sf {TC}}}(\pi)$. Namely we show that if (for some ${{\mathcal {O_D}}}$-module ${{\underline M}}$) there exists a Bredon cohomology class $$\underline \alpha\in H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}})$$ such that the cohomology class $$\Phi(\underline \alpha)\not=0\in H^n(\pi\times\pi, M)$$ is nonzero, then ${{\sf {TC}}}(X) \ge n$. Here $M$ denotes the principal component of ${{\underline M}}$ and $$\Phi: H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}}) \to H^n(\pi\times\pi, M)$$ is a natural homomorphism from the Bredon cohomology to the usual twisted cohomology. The notions we use here are explained in full detail in the sequel.
We define a *Bredon cohomology generalisation of the canonical class* ${{\mathfrak u}}\in H^1_{{\mathcal D}}(\pi\times\pi;\underline{I})$ which refines the canonical class ${{\mathfrak v}}\in H^1(\pi\times\pi;I)$ introduced in [@CosFar]. We prove a universality theorem for the powers of the Bredon canonical class ${{\mathfrak u}}$ which implies, in particular, that ${{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi) = {\rm {height}}({{\mathfrak u}})$ (the *height* of a cohomology class is the exponent of the maximal non-vanishing power).
In [@FM], Farber and Mescher introduced the notion of [*an essential cohomology class*]{} as a class $\beta\in H^n(\pi\times\pi, A)$ which can be obtained via a coefficient homomorphism $\mu: I^n\to A$ from the power ${{\mathfrak v}}^n$ of the canonical class, i.e. $\beta=\mu_\ast({{\mathfrak v}}^n)$. In this paper we introduce a class of [*principal groups*]{} $\pi$ and we show that for principal groups a cohomology class in $H^n(\pi\times\pi;M)$ is in the image of the map $\Phi$ from Bredon cohomology if and only if it is essential. We also prove that the class of principal groups includes all torsion free hyperbolic groups and all torsion free nilpotent groups.
Curiously, the fundamental group of the Klein bottle is not principal (see Example \[Klein\]) but nevertheless for this group $${{\sf {TC}}}(\pi) = {\rm {height}}({{\mathfrak v}})$$ as follows from the theorem of D. Cohen and L. Vandembroucq [@CohenVandem].
While results of this paper are more conclusive for ${{\sf {TC}}}(\pi)\ge 3$, we mention that ${{\mathbb Z}}$ is the only group satisfying ${{\sf {TC}}}(\pi)=1$ (as follows from [@GLO1]) and groups with ${{\sf {TC}}}(\pi)=2$ are likely quite restricted, see [@BR]. The obvious examples of groups $\pi$ with ${{\sf {TC}}}(\pi)=2$ include ${{\mathbb Z}}^2$ and the non-commutative free group $F$.
The first reduction
===================
The concept of topological complexity
-------------------------------------
We start by recalling the definition of the invariant ${{\sf {TC}}}(X)$.
\[def1\][Given a path-connected topological space $X$, the topological complexity of $X$ is the minimal integer ${{\sf {TC}}}(X)=k$ such that the Cartesian product $X \times X$ can be covered by $k+1$ open subsets $$X \times X = U_0 \cup U_1 \cup \cdots \cup U_k$$ with the property that for any $i = 0, 1,2,\dots ,k$ there exists a continuous section of the fibration (\[fibration\]) $$s_i: U_i \to X^I,\quad
p\circ s_i={\rm incl}_{U_i}$$ over $U_i$. If no such $k$ exists we will set ${{\sf {TC}}}(X)=\infty$. ]{}
Note that in this paper we are using [*the reduced version*]{} of ${{\sf {TC}}}(X)$ which is one less than the original notion used in [@Far03], [@Finv], [@Frecent] and [@FM].
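To illustrate the reduced convention on a standard example, consider the circle $X=S^1\subset {{\mathbb C}}$ (we only sketch the rules): one may take $$S^1\times S^1=U_0\cup U_1,\qquad U_0=\{(A,B):\, A\neq -B\},\quad U_1=\{(A,B):\, A\neq B\},$$ letting $s_0(A,B)$ move along the unique shortest arc from $A$ to $B$ (the constant path if $A=B$) and letting $s_1(A,B)$ move counterclockwise from $A$ to $B$. Both rules depend continuously on $(A,B)$, so ${{\sf {TC}}}(S^1)\le 1$; since $S^1$ is not contractible, no single continuous rule defined on all of $S^1\times S^1$ exists, and hence ${{\sf {TC}}}(S^1)=1$.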
For convenience of the reader we also recall the notion of the Schwarz genus of a fibration (also known as [*the sectional category*]{}).
\[def12\][Let $p: E\to B$ be a Serre fibration over a path-connected topological space $B$. The Schwarz genus of $p$ is defined as the smallest integer $k$ such that the base $B$ admits an open cover $B= U_0\cup U_1\cup \cdots \cup U_k$ such that the fibration $p$ admits a continuous section over $U_i$ for each $i=0, 1, \dots, k$. ]{}
This paper is mainly dedicated to the problem of computing ${{\sf {TC}}}(X)$ in the case when $X$ is an aspherical finite cell complex. Recall that a connected cell complex $X$ is said to be [*aspherical*]{} if $\pi_i(X)=0$ for all $i>1$. The notation $X=K(\pi, 1)$ means that $X$ is aspherical and its fundamental group is $\pi$. A key property of ${{\sf {TC}}}(X)$ is its homotopy invariance, see [@Far03]. The homotopy invariance of the topological complexity implies that the number $${{\sf {TC}}}(\pi)={{\sf {TC}}}(K(\pi, 1))$$ depends only on the group $\pi$.
Many systems of practical interest have aspherical configuration spaces. Consider for example the problem of coordinated collision free motion planning of a set of objects on the plane ${{\mathbf R}}^2$. We may represent the objects by discs of radius $r>0$ and the state of each disc is determined by the position of its centre $A_i\in {{\mathbf R}}^2$ where $i=1, \dots, n$. Thus a state of the system is a configuration of points $(A_1, A_2, \dots, A_n)$, where $A_i\in {{\mathbf R}}^2$, such that $$|A_i-A_j|>2r, \quad i\not= j.$$ Let $F_r({{\mathbf R}}^2, n)$ denote the configuration space of this system. It is common to relax the problem and consider instead the weaker condition $$A_i\not=A_j, \quad i\not= j$$ which leads to the usual configuration space $F({{\mathbf R}}^2, n)$ of $n$ distinct points on the plane. It is easy to see that $F_r({{\mathbf R}}^2, n)$ and $F({{\mathbf R}}^2, n)$ are homeomorphic and, moreover, it is well known that the space $F({{\mathbf R}}^2, n)$ is aspherical as can be seen using the tower of Fadell-Neuwirth fibrations.
{#remarks1}
Consider a continuous partial section $s: U\to X^I$ of the fibration (\[fibration\]) over a subset $U\subset X\times X$. Using the exponential correspondence, the map $s$ can be viewed as a homotopy $h:U\times I\to X$ where $h(u, t)=s(u)(t)$ for $u\in U, t\in I$. Let $p_j: X\times X\to X$ (where $j=1,2$) denote the projections onto the first and the second factors. The property of $s$ being a section can be expressed by saying that the homotopy $h$ connects the projections of $U$ onto the first and second coordinates, i.e. $h(u, 0)=p_1(u)$ and $h(u, 1)=p_2(u)$.
Thus we see that the open sets $U_i\subset X\times X$ which appear in Definition \[def1\] can be equivalently characterised by the property that their two projections $U_i\to X$ on the first and the second factors are homotopic.
In the case when the space $X$ is aspherical we can use the following property: For a connected space $U$ that is homotopy equivalent to a cell complex, the set of homotopy classes of maps $U\to X$ is in a one-to-one correspondence with the set of conjugacy classes of homomorphisms $\pi_1(U, u_0)\to \pi_1(X, x_0)$, see Chapter V, Corollary 4.4 in [@Whi]. Recall that two group homomorphisms $f, g: \pi_1(U, u_0)\to \pi_1(X, x_0)$ are conjugate if there exists $\beta\in \pi_1(X, x_0)$ such that for all $\alpha\in \pi_1(U, u_0)$ one has $f(\alpha)=\beta g(\alpha)\beta^{-1}$.
These remarks lead to the following definition:
\[def2\][Let $X$ be a path-connected topological space with fundamental group $\pi=\pi_1(X, x_0)$. [*The ${{\mathcal D}}$-topological complexity*]{}, ${{\sf {TC}}}^{{\mathcal D}}(X)$, is defined as the minimal number $k$ such that $X \times X$ can be covered by $k+1$ open subsets $X \times X = U_0 \cup U_1 \cup \cdots \cup U_k$ with the property that for any $i = 0, 1,2,\dots ,k$ and for every choice of the base point $u_i\in U_i$ the homomorphism $\pi_1(U_i, u_i)\to \pi_1(X\times X, u_i)$ induced by the inclusion $U_i\to X\times X$ takes values in a subgroup conjugate to the diagonal $\Delta\subset \pi\times\pi$. ]{}
Recall that there is an isomorphism $\pi_1(X\times X, u_i) \to \pi_1(X\times X, (x_0, x_0))=\pi\times \pi$ determined uniquely up to conjugation, and the diagonal inclusion $X\to X\times X$ induces the inclusion $\pi\to \pi\times\pi$ onto the diagonal $\Delta$.
\[lm3\] One has ${{\sf {TC}}}^{{\mathcal D}}(X)={{\sf {TC}}}(X)$ if $X$ is a finite aspherical cell complex.
It follows from the remarks given in §\[remarks1\]. Here we use the known fact that an open subset of a finite CW-complex is homotopy equivalent to a countable CW-complex. Indeed, by Theorem 1 of J. Milnor [@Milnor], a space is homotopy equivalent to a countable CW-complex if and only if it is homotopy equivalent to an absolute neighbourhood retract (ANR). Any finite CW-complex is an ANR and an open subset of an ANR is an ANR. Thus, an open subset of a finite CW-complex is an ANR and hence has the homotopy type of a countable CW-complex.
\[lm4\] Let $X$ be a finite aspherical cell complex with fundamental group $\pi=\pi_1(X, x_0)$. Let $q: \widehat{X\times X}\to X\times X$ be the connected covering space corresponding to the diagonal subgroup $$\Delta\subset \pi\times\pi=\pi_1(X\times X, (x_0, x_0)).$$ Then the ${{\mathcal D}}$-topological complexity ${{\sf {TC}}}^{{\mathcal D}}(X)$ coincides with the Schwarz genus of $q$.
For an open subset $U\subset X\times X$, the condition that the induced map $\pi_1(U, u) \to \pi_1(X\times X, u)$ takes values in a subgroup conjugate to the diagonal $\Delta$ is equivalent to the condition that $q$ admits a continuous section over $U$. Using this remark the Lemma follows by comparing the definitions of ${{\sf {TC}}}^{{\mathcal D}}(X)$ and of Schwarz genus.
[ If we remove the assumption that $X$ is aspherical then the topological complexity ${{\sf {TC}}}(X)$ is greater than or equal to the Schwarz genus of $q$, see [@FTY], Theorem 4.1. ]{}
Next we introduce terminology and notations which will be used in the statement of Theorem \[thm0\].
{#section-1}
Recall that the join $X\ast Y$ of topological spaces $X$ and $Y$ can be defined as the quotient of the product $X\times [0,1]\times Y$ with respect to the equivalence relation $(x, 0, y)\sim (x, 0, y')$ and $(x, 1, y)\sim (x', 1, y)$ for all $x, x'\in X$ and $y, y'\in Y$. We have an obvious embedding $X\to X\ast Y$ given by $x\mapsto (x, 0, y)$ where $y\in Y$ is arbitrary.
One may use the following notation. A point $(x, t, y)\in X\times [0,1]\times Y/\sim$ can be written as a formal linear combination $(1-t)x+ty$. This notation is clearly consistent with the identifications of the join.
Let $\Delta^k$ denote the standard $k$-dimensional simplex, i.e. $$\Delta^k=\{(t_0, t_1, \dots, t_k); t_i\ge 0, \quad \sum_{i=0}^kt_i=1\}.$$ We may define the multiple join $X_0\ast X_1\ast\dots\ast X_k$ of topological spaces $X_0, \dots, X_k$ as the quotient of the product $(\prod_{i=0}^k X_i)\times \Delta^k$ with respect to an equivalence relation $\sim$ described below. The points of the join are written as formal linear combinations $$x=t_0x_0+t_1x_1+\dots+t_kx_k, \quad x_i\in X_i, \quad (t_0, t_1, \dots, t_k)\in \Delta^k,$$ and we say that $x\sim x'$ where $x'=t'_0x'_0+t'_1x'_1+\dots+t'_kx'_k$ iff $t_i=t'_i$ for all $i=0, \dots, k$ and $x_i=x'_i$ provided $t_i\not=0$.
{#sec25}
Let $\pi$ be a discrete group. We shall view $\pi$ as a discrete topological space with the following left $\pi\times\pi$-action: $$\begin{aligned}
\label{action}(x, y)\cdot g=xgy^{-1}.\end{aligned}$$ This action is transitive and the isotropy subgroup of the unit element $1\in \pi$ coincides with the diagonal subgroup $\Delta\subset \pi\times \pi$. The isotropy subgroups of the other elements are the conjugates of $\Delta$.
{#section-2}
For an integer $k\ge 0$, let $E_k(\pi)$ denote the $(k+1)$-fold join $$E_k(\pi) = \pi\ast\pi\ast \dots\ast \pi.$$ We shall equip $E_k(\pi)$ with the left diagonal $\pi\times\pi$-action determined by the $\pi\times\pi$-action on $\pi$ as in §\[sec25\] above. Each $E_k(\pi)$ is naturally a $k$-dimensional equivariant simplicial complex with $k$-dimensional simplexes in 1-1 correspondence with sequences $(g_0, g_1, \dots, g_k)$ of group elements $g_i\in \pi$. Note that $E_k(\pi)$ is $(k-1)$-connected and is in fact homotopy equivalent to a wedge of $k$-dimensional spheres.
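As a small illustration of this last statement (a routine Euler characteristic count), the join of two discrete sets with $d_0$ and $d_1$ points is the complete bipartite graph $K_{d_0,d_1}$, which is connected and satisfies $$b_1(K_{d_0,d_1})=d_0d_1-d_0-d_1+1=(d_0-1)(d_1-1),$$ so it is homotopy equivalent to a wedge of $(d_0-1)(d_1-1)$ circles; for instance $S^0\ast S^0\cong K_{2,2}\cong S^1$. In the same way $E_1(\pi)=\pi\ast\pi$ is the complete bipartite graph on two copies of $\pi$ and is, up to homotopy, a (possibly infinite) wedge of circles.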
{#section-3}
There is a natural equivariant embedding $$E_k(\pi) \hookrightarrow E_{k+1}(\pi)=E_k(\pi)\ast \pi.$$ Using it we may define the simplicial complex $$E(\pi)= \bigcup_{k=0}^\infty E_k(\pi) = \pi\ast \pi\ast \pi\ast \dots,$$ the join of infinitely many copies of $\pi$.
{#section-4}
Furthermore, let $E(\pi\times\pi)$ denote the classical classifying space for free $\pi\times\pi$ actions, i.e. $$E(\pi\times\pi)= (\pi\times\pi)\ast (\pi\times\pi)\ast \dots,$$ the join of infinitely many copies of $\pi\times\pi$. We shall view each copy of $\pi\times \pi$ as a discrete topological space with the left free action of $\pi\times\pi$ given by $(x, y)\cdot(g, h)=(xg, yh)$ for $x, y, g, h\in \pi$. The space $E(\pi\times\pi)$ inherits the diagonal action of the group $\pi\times\pi$.
{#section-5}
The map $F: \pi\times\pi\to \pi$ given by $F(x, y)=xy^{-1}$ is $\pi\times\pi$-equivariant. The natural extension of $F$ to the infinite joins defines a $\pi\times\pi$-equivariant map $$\begin{aligned}
\label{F}
F: E(\pi\times\pi) \to E(\pi).\end{aligned}$$
\[thm0\] Let $X$ be a finite aspherical cell complex and let $\pi=\pi_1(X, x_0)$ be its fundamental group. Then ${{\sf {TC}}}(X)$ coincides with the smallest integer $k$ such that there exists a $\pi\times\pi$-equivariant map $E(\pi\times\pi)\to E_k(\pi)$.
Let $p: \tilde X\to X$ denote the universal cover of $X$. Here $\tilde X$ is an equivariant cell complex with free left $\pi$-action. The map $p\times p: \tilde X\times\tilde X\to X\times X$ is the universal cover of $X\times X$. We shall view $p\times p$ as a principal $G=\pi\times\pi$-bundle and for $k=0, 1, \dots$ construct the associated bundle $$\begin{aligned}
\label{qk}
q_k: (\tilde X\times \tilde X)\times_{G} E_k(\pi)\to X\times X.\end{aligned}$$ Here $(\tilde X\times \tilde X)\times_{G} E_k(\pi)$ denotes the quotient of the product $(\tilde X\times \tilde X)\times E_k(\pi)$ with respect to the following $G=\pi\times\pi$-action: $(g, h)\cdot (x, x', z) = (gx, hx', (g, h)\cdot z)$ where $g, h\in \pi$, $x, x'\in \tilde X$ and $z\in E_k(\pi)$.
First we observe that the fibration $q_0$ coincides with the covering space $q: \widehat{X\times X}\to X\times X$ corresponding to the diagonal subgroup $\Delta\subset \pi\times\pi$ which appears in Lemma \[lm4\]. Indeed, $E_0(\pi)=\pi$ has a transitive $G=\pi\times\pi$-action and the isotropy of the unit element $1\in \pi$ is the diagonal $\Delta\subset G=\pi\times\pi$. Hence we obtain a homeomorphism $$(\tilde X\times \tilde X\times E_0(\pi))/G \to (\tilde X\times \tilde X)/\Delta$$ commuting with the projections onto $X\times X$; thus we see that the fibration $q_0$ is isomorphic to the fibration $p\times p: (\tilde X\times \tilde X)/\Delta \to X\times X$. It is obvious that the latter fibration is isomorphic to the connected covering $q$ corresponding to the diagonal subgroup $\Delta\subset \pi\times\pi$.
Applying Lemma \[lm3\] and Lemma \[lm4\] we obtain that ${{\sf {TC}}}(X)$ coincides with the Schwarz genus of the fibration $q_0$.
Next we apply a theorem of A. Schwarz (see [@Sv66], Theorem 3) stating that genus of a fibration $p: E\to B$ equals the smallest integer $k$ such that the fiberwise join $p\ast p\ast \dots \ast p$ of $k+1$ copies of the fibration $p: E\to B$ admits a continuous section. The fiberwise join of $k+1$ copies of the fibration $q_0$ coincides with the fibration $q_k$. Thus we obtain that ${{\sf {TC}}}(X)$ coincides with the smallest $k$ such that $q_k$ has a continuous section.
Finally we apply Theorem 8.1 from [@Hue], chapter 4, which states that continuous sections of the fibre bundle $q_k$ are in 1-1 correspondence with $G=\pi\times\pi$-equivariant maps $$\begin{aligned}
\label{finally}\tilde X\times \tilde X \to E_k(\pi).\end{aligned}$$ Thus, ${{\sf {TC}}}(X)$ is the smallest $k$ such that a $G=\pi\times\pi$-equivariant map (\[finally\]) exists. Finally we observe that the space $\tilde X\times \tilde X$ is $G=\pi\times\pi$-equivariantly homotopy equivalent to $E(\pi\times\pi)$ (in view of the Milnor construction) and the result follows.
The second reduction
====================
In this section we prove the following statement which gives an intrinsic version of Theorem \[thm0\].
\[thm00\] Let $X$ be a finite aspherical cell complex and let $\pi=\pi_1(X, x_0)$ be its fundamental group. Then ${{\sf {TC}}}(X)$ coincides with the minimal dimension of a $\pi\times\pi$-CW complex $L$ such that the map $F$ (see (\[F\])) can be factored as follows: $$\begin{aligned}
\label{FF}
E(\pi\times\pi) \to L \to E(\pi).\end{aligned}$$
The proof of Theorem \[thm00\] will follow a brief review of the basic material concerning classifying spaces and families of subgroups; we shall mainly follow [@Lue].
{#section-6}
Let $G$ be a discrete group. [*A $G$-CW-complex*]{} is a CW-complex $X$ with a left $G$-action such that for each open cell $e\subset X$ and each $g\in G$ with $ge\cap e\not=\emptyset$, the left multiplication by $g$ acts identically on $e$.
A simplicial complex with a simplicial $G$-action is a $G$-CW-complex (with respect to the barycentric subdivision), see [@Lue], Example 1.5.
[*A family ${{\mathcal {F}}}$ of subgroups*]{} of $G$ is a set of subgroups of $G$ which is closed under conjugation and finite intersections.
{#22}
[*A classifying $G$-CW-complex*]{} $E_{{\mathcal {F}}}(G)$ with respect to a family ${{\mathcal {F}}}$ of $G$ is defined as a $G$-CW-complex $E_{{\mathcal {F}}}(G)$ such that
- the isotropy subgroup of any element of $E_{{\mathcal {F}}}(G)$ belongs to ${{\mathcal {F}}}$;
- For any $G$-CW-complex $Y$ all of whose isotropy subgroups belong to ${{\mathcal {F}}}$ there is up to $G$-homotopy exactly one $G$-map $Y\to E_{{\mathcal {F}}}(G)$.
A $G$-CW-complex $X$ is a model for $E_{{\mathcal {F}}}(G)$ if and only if all its isotropy subgroups belong to the family ${{\mathcal {F}}}$ and for each $H\in {{\mathcal {F}}}$ the set of $H$-fixed points $X^H$ is weakly contractible, i.e. $\pi_i(X^H, x_0)=0$ for any $i=0, 1, \dots$ and for any $x_0\in X^H$. See [@Lue], Theorem 1.9.
{#section-7}
We shall use below the equivariant version of the Whitehead Theorem which we shall state as follows (see [@May], Theorem 3.2 in Chapter 1).
\[thwhite\] Let $f: Y\to Z$ be a $G$-map between $G$-CW-complexes such that for each subgroup $H\subset G$ the induced map $\pi_i(Y^H, x_0)\to \pi_i(Z^H, f(x_0))$ is an isomorphism for $i<k$ and an epimorphism for $i=k$ for any base point $x_0\in Y^H$. Then for any $G$-CW-complex $X$ the induced map on the set of $G$-homotopy classes $$f_\ast: [X, Y]_G \to [X, Z]_G$$ is an isomorphism if $\dim X<k$ and an epimorphism if $\dim X\le k$.
Proof of Theorem \[thm00\] {#sec24}
--------------------------
First note that $G=\pi\times\pi$ acts freely on $E(\pi\times\pi)$ which is the classifying $G$-CW-complex for free $G$-actions (the Milnor construction). We refer to Example 1.5 from [@Lue] which implies that $E(\pi\times\pi)$ is a $G$-CW-complex. Next we examine the isotropy subgroups of $G=\pi\times\pi$ acting on $E_k(\pi)$ and $E(\pi)$. Recall that $G$ acts on $\pi$ according to formula (\[action\]). The isotropy of an element $g\in \pi$ is the subgroup $\{(a, g^{-1}ag); a\in \pi\}\subset \pi\times\pi$ which is conjugate to the diagonal subgroup $\Delta$.
It is easy to see that for a subgroup $H\subset G$ the fixed point set $\pi^H$ is non-empty iff $H$ is contained in a subgroup conjugate to the diagonal $\Delta\subset G$.
For an element $x\in E_k(\pi)$, $$x= t_0x_0+t_1x_1+\dots+t_kx_k,$$ where $x_i\in \pi$, $t_i\in (0,1]$, $i=0, 1, \dots, k$, $t_0+t_1+\dots+t_k=1$, the isotropy subgroup is the intersection of the isotropy subgroups of the elements $x_i$. This intersection can be presented as follows. Let $S$ denote the set $\{x_ix_j^{-1}\, ;\, i, j =0, 1, \dots, k\}$. The symbol $Z(S)$ denotes the centraliser of $S$, i.e. the set of all $a\in \pi$ which commute with any element of $S$. Then the isotropy subgroup of $x$ equals $$\begin{aligned}
\label{hbs}
H_{b, S}=\{(a, bab^{-1}); a\in Z(S)\}\end{aligned}$$ where $b=x_i^{-1}$ for any $i=0, 1, \dots, k$.
If $H\subset \pi\times\pi$ is a subgroup contained in a subgroup of type (\[hbs\]), i.e. $H\subset H_{b, S}$, then the set $\pi^H$ is not empty and $$E_k(\pi)^H = \pi^H \ast \pi^H\ast \dots \ast \pi^H, \quad\quad (k+1 \quad \mbox{times}).$$ We see that the space $E_k(\pi)^H$ is nonempty and is $(k-1)$-connected. At the same time the space $E(\pi)^H=\pi^H\ast\pi^H\ast\cdots$ (the infinite join) is non-empty and contractible. We will use this property below in order to invoke the Whitehead theorem.
We denote by ${{\mathcal D}}$ the family of subgroups of $\pi\times\pi$ containing the trivial subgroup and the groups $H_{b, S}$, for all $b\in \pi$ and all finite subsets $S\subset \pi$.
The above discussion shows that $E(\pi)$ is the classifying $G$-CW-complex $E_{{\mathcal D}}(G)$ with respect to the family ${{\mathcal D}}$, see §\[22\]. In particular, we obtain that any two $G$-maps $X\to E(\pi)$ are $G$-homotopic provided all isotropy subgroups of $X$ are in ${{\mathcal D}}$.
Let $k_1$ denote the minimal $k$ such that there exists an equivariant map $E(\pi\times\pi)\to E_k(\pi)$. We know that $k_1={{\sf {TC}}}(X)$ by Theorem \[thm0\]. Let $k_2$ be the smallest dimension of a $G$-CW complex $L$ admitting a factorisation (\[FF\]). We have $k_2\le k_1$ since $\dim E_k(\pi)=k$ and any two equivariant maps $E(G)\to E(\pi)$ are equivariantly homotopic. On the other hand, suppose we have $$E(\pi\times\pi) \stackrel\alpha\to L \stackrel \beta\to E(\pi)$$ with $\dim L\le k$. We may apply the Whitehead Theorem \[thwhite\] to the inclusion $E_k(\pi)\to E(\pi)$ concluding that for any $G$-CW-complex $L$ of dimension $\le k$ the map $$[L, E_k(\pi)]_G\to [L, E(\pi)]_G$$ is surjective. We then obtain a $G$-map $g: L\to E_k(\pi)$ and its composition $g\circ \alpha: E(\pi\times\pi)\to E_k(\pi)$; clearly the composition $E(\pi\times\pi)\stackrel{g\circ \alpha}\to E_k(\pi) \hookrightarrow E(\pi)$ is $G$-homotopic to $\beta\circ\alpha$. This shows that $k_1\le k_2$ and hence $k_1=k_2$ proving Theorem \[thm00\].
We can restate Theorem \[thm00\] as follows:
\[thm000\] Let $X$ be a finite aspherical cell complex and let $\pi=\pi_1(X, x_0)$ be its fundamental group. Let $G$ denote the group $\pi\times\pi$. Then ${{\sf {TC}}}(X)$ coincides with the minimal integer $k$ such that the canonical map $$\begin{aligned}
\label{eq}
E(G) \to E_{{\mathcal D}}(G)\end{aligned}$$ is $G$-equivariantly homotopic to a map with values in the $k$-dimensional skeleton $E_{{\mathcal D}}(G)^{(k)}$.
If the map (\[eq\]) is $G$-homotopic to a map with values in $E_{{\mathcal D}}(G)^{(k)}$ then we can take $L=E_{{\mathcal D}}(G)^{(k)}$ to obtain a factorisation of Theorem \[thm00\]. Conversely, given a factorisation of Theorem \[thm00\], the map $L\to E_{{\mathcal D}}(G)$ can be deformed into $E_{{\mathcal D}}(G)^{(k)}$ using the $G$-cellular approximation theorem.
Let us recall that the Lusternik - Schnirelmann category of an aspherical space can be characterised in a similar way:
\[propeg\] Let $X$ be a finite aspherical cell complex and let $\pi=\pi_1(X, x_0)$ be its fundamental group. Then the Lusternik - Schnirelmann category ${{\sf {cat}}}(X)$ coincides with the minimal dimension of a $\pi$-CW complex $L$ such that the identity map $E(\pi)\to E(\pi)$ can be $\pi$-equivariantly factored as follows $$\begin{aligned}
\label{categ}
E(\pi) \to L \to E(\pi).\end{aligned}$$
This statement is essentially contained in [@EG], compare [@EG Proposition 1] where, however, there is an assumption $n\ge 2$. The proof of Proposition \[propeg\] in the general case can be obtained similarly to the proof of Theorem \[thm000\] and we shall briefly indicate the main steps. Firstly, one shows that ${{\sf {cat}}}(X)$ equals the Schwarz genus of the universal covering $\tilde X\to X$, compare Lemma \[lm3\] and Lemma \[lm4\]. Secondly, using the theorem of Schwarz about joins we obtain that ${{\sf {cat}}}(X)$ equals the smallest $k$ such that the fibration $\tilde X\times_\pi E_k(\pi)\to X$ admits a continuous section, compare the proof of Theorem \[thm0\]. Here we view the complex $E_k(\pi)$ with its left $\pi$-action, which is free. Thirdly, we find that ${{\sf {cat}}}(X)$ equals the smallest $k$ such that there exists a $\pi$-equivariant map $\tilde X\to E_k(\pi)$, compare Theorem \[thm0\]. And finally, one uses the universal properties of the classifying space $E(\pi)=\tilde X$ and the equivariant Whitehead theorem to restate the result in the form of Proposition \[propeg\].
{#section-8}
Let ${{\mathcal {O_D}}}$ denote the orbit category with respect to the family ${{\mathcal D}}$, see [@Bre]; we shall recall these notions in the following section. Let ${{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi)$ denote the cohomological dimension of the constant ${{\mathcal {O_D}}}$-module ${{\underline {{\mathbb Z}}}}$. Since $E(\pi)$ is a model for the classifying space $E_{{\mathcal D}}(G)$, applying Theorem 5.2 from [@Lue] we obtain that $E(\pi)$ has the equivariant homotopy type of a $G$-CW-complex of dimension $\le \max\{3, {{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi)\}$. Together with Theorem \[thm00\] this gives the following Corollary.
\[upper\] Let $X$ be a finite aspherical cell complex and let $\pi=\pi_1(X, x_0)$ be its fundamental group. Then $$\begin{aligned}
{{\sf {TC}}}(X)\le \max\{3, {{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi)\}.\end{aligned}$$
\[Shapiro\] For any discrete group $\pi$ we have $\mathrm{cd}(\pi)\leq \mathrm{cd}_\mathcal{D}(\pi\times \pi)$.
Recall that $G$ denotes $\pi\times\pi$. Assume first that $k:=\mathrm{cd}_\mathcal{D}(G)\ge 3$, so that there exists a $k$-dimensional model for $E_\mathcal{D}(G)$. Since the trivial subgroup is in $\mathcal{D}$, the space $E_\mathcal{D}(G)$ is contractible. Restricting the $G$-action to the subgroup $\pi\times 1\subseteq G$ gives a free $\pi$-action, since $E_\mathcal{D}(G)$ has isotropy in $\mathcal{D}$ and $(\pi\times 1)\cap H$ is trivial for all $H\in \mathcal{D}$. Hence $E_\mathcal{D}(G)$ is a $k$-dimensional model for $E(\pi)$, and it follows that $\mathrm{cd}(\pi)\leq k$.
The general algebraic result of Proposition \[Shapiro\] follows from Shapiro’s lemma in Bredon cohomology [@Fl Proposition 3.31], which gives isomorphisms $$H^*(\pi;M)\cong H^*_\mathcal{D}(G;\operatorname{coind}_I(M))$$ for each $\pi$-module $M$. Here the co-induction is along the inclusion functor $I:\mathcal{O}_{\{1\}}(\pi\times 1)\to \mathcal{O}_\mathcal{D}(G)$. This argument does not require the assumption $k\ge 3$.
Suppose that $\pi = \Bbb Z^k$. Then $\mathrm{cd}_\mathcal{D}(\pi\times \pi) = \mathrm{cd}(\pi)=k.$
The space $\Bbb R^k$ is a free, contractible $\pi$-CW-complex where the action is given by $(a, x) \mapsto a+x$ for $a\in \Bbb Z^k$ and $x\in \Bbb R^k$. We may promote this $\pi$-action to a $G$-action on $\Bbb R^k$, by setting $((a,b), x)\mapsto a-b+x$ (here we use the assumption that $\pi$ is abelian). It is easily seen that $\Bbb R^k$ with this $G$-action becomes a model for $E_\mathcal{D}(G)$. The inequality $\mathrm{cd}_\mathcal{D}(\pi\times \pi) \leq \mathrm{cd}(\pi)=k$ is now immediate and the inverse inequality $\mathrm{cd}(\pi)\leq \mathrm{cd}_\mathcal{D}(\pi\times \pi)$ is Proposition \[Shapiro\].
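Spelling out this verification: the formula defines a left $G$-action, since for $(a,b),(a',b')\in G$ and $x\in \Bbb R^k$ $$(a,b)\cdot\bigl((a',b')\cdot x\bigr)=a-b+(a'-b'+x)=\bigl((a+a')-(b+b')\bigr)+x=\bigl((a,b)(a',b')\bigr)\cdot x,$$ and $(a,b)\cdot x=x$ if and only if $a=b$, so the isotropy subgroup of every point is the diagonal $\Delta$. Since $\pi$ is abelian, every subgroup $H_{b,S}$ equals $\Delta$, so the members of $\mathcal{D}$ are the trivial subgroup and $\Delta$, and the fixed point sets $(\Bbb R^k)^{\{1\}}=(\Bbb R^k)^{\Delta}=\Bbb R^k$ are contractible; the criterion recalled in §\[22\] therefore confirms that $\Bbb R^k$ is a model for $E_\mathcal{D}(G)$.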
\[AtimesB\] Let $X$ be a finite aspherical complex with fundamental group $\pi=\pi_1(X,x_0)$, and let $K\leq G=\pi\times\pi$ be a subgroup such that $K\cap H = \{1\}$ for all $H\in \mathcal{D}$. Then $\mathrm{cd}(K)\leq{{\sf {TC}}}(X)$.
Under these assumptions any model for $E_\mathcal{D}(G)$ is free and contractible when viewed as a $K$-CW-complex, hence it is a model for $E(K)$. The same is true for $E(G)$. Letting $k:={{\sf {TC}}}(X)$, we get a sequence of $G$-maps $E(G)\to L\to E_\mathcal{D}(G)$, where $L$ is a $G$-CW complex of dimension $k$. Restricting to $K$-actions we get, up to $K$-homotopy, a factorisation of the identity map $E(K)\to L\to E(K)$. This obviously implies that any cohomology class in $H^m(K, M)$ with $m>k$ vanishes, i.e. $\mathrm{cd}(K)\leq k$, as stated.
As a particular case of the above, let $K=A\times B$ where $A$ and $B$ are subgroups of $\pi$ such that $gAg^{-1}\cap B=\{1\}$ for all $g\in \pi$. We obtain that $\mathrm{cd}(A\times B)\leq {{\sf {TC}}}(\pi)$, which recovers the main result of [@GrantLuptonOprea].
Lower bounds for ${{\sf {TC}}}(X)$ via Bredon cohomology
========================================================
In this section we shall give lower bounds for the topological complexity using Bredon cohomology. First we recall the basic constructions.
The family ${{\mathcal D}}$ {#secd}
---------------------------
Let $\pi$ be a discrete group; as before we write $G=\pi\times\pi$. As above, we denote by $\mathcal D$ the smallest family of subgroups $H\subset \pi\times \pi=G$ which contains the diagonal $\Delta\subset \pi\times\pi$ and the trivial subgroup, and which is closed under conjugation and finite intersections. It is easy to see that a nontrivial subgroup $H\subset \pi\times\pi$ belongs to ${{\mathcal D}}$ iff it is of the form $$H_{b, S} \, =\, \{(a, bab^{-1}), \, a\in Z(S)\},$$ where $b\in \pi$ and $Z(S)$ denotes the centraliser of a finite set of elements $S\subset \pi$, i.e. $Z(S) =\{a\in \pi, sa=as\, \, \mbox{for any}\, \, s\in S\}.$
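For instance, if $\pi$ is a free group with basis $\{x, y\}$, then $Z(\{x\})=\langle x\rangle$ (the centraliser of a basis element of a free group is the cyclic subgroup it generates), so that $$H_{1,\{x\}}=\{(a,a):\, a\in\langle x\rangle\}\,\cong\,{{\mathbb Z}}$$ belongs to ${{\mathcal D}}$ and is a proper subgroup of the diagonal $\Delta$.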
We denote by ${{\mathcal {O_D}}}$ [*the orbit category*]{} with objects transitive left $G$-actions having isotropy in ${{\mathcal D}}$ and with $G$-equivariant maps as morphisms, see [@Bre]. Objects of the category ${{\mathcal {O_D}}}$ have the form $G/H$ where $H\in {{\mathcal D}}$.
${{\mathcal {O_D}}}$-modules and their principal components {#pcomp}
-----------------------------------------------------------
[*A (right) ${{\mathcal {O_D}}}$-module*]{} ${{\underline M}}$ is a contravariant functor on the category of orbits ${{\mathcal {O_D}}}$ with values in the category of abelian groups. Such a module is determined by the abelian groups ${{\underline M}}(G/H)$ where $H\in {{\mathcal D}}$, and by a group homomorphism $${{\underline M}}(G/H) \to {{\underline M}}(G/H')$$ associated with any $G$-equivariant map $G/H' \to G/H$ satisfying the usual compatibility conditions, expressing the fact that ${{\underline M}}$ is a functor.
The abelian group $M= {{\underline M}}(G/1)$ is a left ${{{{\mathbb Z}}[\pi\times\pi]}}$-module; an element $(g, h)\in \pi\times \pi$ acts on $\pi\times\pi$ by right translation and applying the functor ${{\underline M}}$ this defines an action on $M$. We shall call the ${{{{\mathbb Z}}[\pi\times\pi]}}$-module $M$ [*the principal component of* ]{} ${{\underline M}}$.
\[free\]
Let $X$ be a left $G$-set. One defines an ${{\mathcal {O_D}}}$-module ${{\underline M}}_X$ by $${{\underline M}}_X(?)={{\mathbb Z}}[?, X]_G.$$ In other words, ${{\underline M}}_X(G/H)$ is the free abelian group generated by the set of $G$-equivariant maps $$[G/H, X]_G \, =\, X^H.$$ The homomorphism associated to a morphism $f: G/H' \to G/H$ is the map $X^H\to X^{H'}$ given by $x\mapsto f(1, 1)\cdot x\in X^{H'}$ for $x\in X^H$.
If the set $X$ is such that the isotropy subgroup of any point $x\in X$ belongs to the family ${{\mathcal D}}$ then the ${{\mathcal {O_D}}}$-module ${{\underline M}}_X$ is [*free and projective*]{}, see [@tomD], chapter 1 or [@Luebook], chapter 2.
The principal component of the ${{\mathcal {O_D}}}$-module ${{\underline M}}_X$ is $M={{\mathbb Z}}[X],$ the free abelian group generated by $X$. The left action of $G$ on ${{\mathbb Z}}[X]$ is induced by the left action of $G$ on $X$.
Any equivariant map between $G$-sets $f:X\to Y$ induces naturally a homomorphism of the ${{\mathcal {O_D}}}$-modules $f_\ast: {{\underline M}}_X\to {{\underline M}}_Y$.
Next we consider a few special cases of the previous example.
\[free0\][ Taking $X=\ast$, the one point orbit, we obtain the module ${{\underline M}}_X$ which will be denoted ${{\underline {{\mathbb Z}}}}$. It associates ${{\mathbb Z}}$ to any orbit $\pi\times\pi/H$ with the identity homomorphism associated to any morphism of the orbit category ${{\mathcal {O_D}}}$. Note that ${{\underline {{\mathbb Z}}}}$ is not a free ${{\mathcal {O_D}}}$-module since $G=\pi\times\pi$ is not in ${{\mathcal D}}$. ]{}
\[free1\]
In Example \[free\] take $X=\pi$, the group $\pi$ viewed as a $G=\pi\times \pi$-set via the action $(x, y)\cdot g=xgy^{-1}$. The isotropy subgroup of an element $g\in \pi$ is $\{(x, g^{-1}xg), x\in \pi\}$ which belongs to the family ${{\mathcal D}}$ and hence the Bredon module ${{\underline M}}_\pi$ is free. Note that ${{\underline M}}_\pi$ associates the abelian group ${{\mathbb Z}}[\pi^H]$ to any orbit $G/H$.
If $H=H_{b, S}$ then $\pi^H$ coincides with $Z(Z(S))\cdot b^{-1}$. In general, $\pi^H$ is not a subgroup.
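This identification is a direct computation: for $g\in\pi$ and $a\in Z(S)$ one has $$(a, bab^{-1})\cdot g \, =\, a\,g\,(bab^{-1})^{-1}\, =\, a\,(gb)\,a^{-1}b^{-1},$$ which equals $g$ if and only if $a(gb)a^{-1}=gb$; hence $g\in\pi^{H_{b,S}}$ if and only if $gb$ commutes with every element of $Z(S)$, that is, if and only if $g\in Z(Z(S))\cdot b^{-1}$.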
\[frees\]
This is a generalisation of the previous example. For an integer $s\ge 1$, consider the $s$-th Cartesian power $\pi^s$ as a $G=\pi\times\pi$-set via the action $(x, y)\cdot (g_1, \dots, g_s)= (xg_1y^{-1}, \dots, xg_sy^{-1}).$ The isotropy subgroup of an element $(g_1, \dots, g_s)$ is the intersection of the isotropy subgroups of $g_i$ for $i=1, \dots, s$, hence it can be presented as $H_{b, S}$ with $b=g_1^{-1}$ and $S=\{g_1g_2^{-1}, g_1g_3^{-1}, \dots, g_1g_s^{-1}\}$. We obtain a free Bredon module ${{\underline M}}_{\pi^s}$, $s\ge 1$. Its principal component is the module ${{\mathbb Z}}[\pi^s]$.
Bredon cohomology
-----------------
Now we recall the construction of Bredon cohomology, see for example [@Mis].
Let $X$ be a $G$-CW-complex such that the isotropy subgroup of every point $x\in X$ belongs to the family ${{\mathcal D}}$. For every subgroup $H\in {{\mathcal D}}$ we may consider the cell complex $X^H$ of $H$-fixed points and its cellular chain complex $C_\ast(X^H)$. A $G$-map $\phi: G/K\to G/L$, where $K, L\in {{\mathcal D}}$, induces a cellular map $X^L\to X^K$ by mapping $x\in X^L$ to $gx\in X^K$ where $g$ is determined by the equation $\phi(K)=gL$ (thus $g^{-1}Kgx=x$ since $g^{-1}Kg\subset L$ and therefore $Kgx=gx$, i.e. $gx\in X^K$). Thus we see that the chain complexes $C_\ast(X^H)$, considered for all $H\in {{\mathcal D}}$, form a chain complex of right ${{\mathcal {O_D}}}$-modules which will be denoted ${\underline C}_\ast(X)$; here ${\underline C}_\ast(X)(G/H) = C_\ast(X^H).$ The principal component of the ${{\mathcal {O_D}}}$-chain complex ${\underline C}_\ast(X)$ is the chain complex $C_\ast(X)$ of left ${{\mathbb Z}}[G]$-modules.
Note that the complex ${\underline C}_\ast(X)$ is free as a complex of ${{\mathcal {O_D}}}$-modules although the complex $C_\ast(X)$ might not be free as a complex of ${{\mathbb Z}}[G]$-modules.
There is an obvious augmentation $\epsilon: {\underline C}_0(X)\to {{\underline {{\mathbb Z}}}}$ which reduces to the usual augmentation $C_0(X^H)\to {{\mathbb Z}}$ on each subgroup $H\in {{\mathcal D}}$.
If ${{\underline M}}$ is a right ${{\mathcal {O_D}}}$-module, we may consider the cochain complex of ${{\mathcal {O_D}}}$-morphisms ${{\rm {Hom}}}_{{\mathcal {O_D}}}({\underline C}_\ast(X), {{\underline M}})$. Its cohomology $$\begin{aligned}
H_{{\mathcal D}}^\ast(X; {{\underline M}}) \, = \, H^\ast({{\rm {Hom}}}_{{\mathcal {O_D}}}({\underline C}_\ast(X), {{\underline M}}))\end{aligned}$$ is [*the Bredon equivariant cohomology of $X$ with coefficients in ${{\underline M}}$.* ]{}
Let $M$ denote the principal component of ${{\underline M}}$. By reducing to the principal components we obtain a homomorphism of cochain complexes $${{\rm {Hom}}}_{{\mathcal {O_D}}}({\underline C}_\ast(X), {{\underline M}})\to {{\rm {Hom}}}_{{{\mathbb Z}}[G]} ({C}_\ast(X), M)$$ and the associated homomorphism on cohomology groups $$\begin{aligned}
\label{red}
H^i_{{\mathcal D}}(X; {{\underline M}}) \, \to \, H^i_G(X, M).\end{aligned}$$
{#sec34}
If the action of $G$ on $X$ is free then obviously the homomorphism (\[red\]) is an isomorphism and $$H^i_{{\mathcal D}}(X; {{\underline M}}) \, \cong \, H^i(X/G, M),$$ where on the right we have the usual twisted cohomology. In particular we obtain $$H^n_{{\mathcal D}}(E(\pi\times\pi), {{\underline M}}) = H^n(\pi\times\pi, M).$$
{#sec35}
Suppose now that $X=E(\pi)$, viewed as a left $G$-CW-complex, where $G=\pi\times\pi$, see §\[sec24\]. We know that $E(\pi)$ is a model for the classifying space $E_{{\mathcal D}}(G)$ (as we established in §\[sec24\]) and the classifying complex $E_{{\mathcal D}}(G)$ is unique up to $G$-homotopy. Hence we may use the notation $$H^\ast_{{\mathcal D}}(E(\pi), {{\underline M}}) = H^\ast_{{\mathcal D}}(\pi\times\pi, {{\underline M}}).$$ We obtain that the number ${{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi)$ coincides with the minimal integer $n$ such that $H^i_{{\mathcal D}}(\pi\times\pi, {{\underline M}}) =0$ for all $i>n$ and for all ${{\mathcal {O_D}}}$-modules ${{\underline M}}$.
{#section-9}
Consider now the effect of the equivariant map $F: E(\pi\times\pi) \to E(\pi)$, see (\[F\]). Note that any two equivariant maps $E(\pi\times\pi) \to E(\pi)$ are equivariantly homotopic. The induced map on Bredon cohomology $$F^\ast: H^i_{{\mathcal D}}(E(\pi), {{\underline M}}) \to H^i_{{\mathcal D}}(E(\pi\times\pi), {{\underline M}})$$ in the notations introduced in §\[sec34\] and §\[sec35\] produces a homomorphism $$\begin{aligned}
\label{Phi}
\Phi: H^i_{{\mathcal D}}(\pi\times\pi, {{\underline M}}) \, \to \, H^i(\pi\times\pi, M)\end{aligned}$$ which connects the Bredon cohomology with the usual group cohomology.
Now we may state a result which gives useful lower bounds for the topological complexity ${{\sf {TC}}}(X)$.
\[lower\] Let $X$ be a finite aspherical cell complex with fundamental group $\pi$. Suppose that for some ${{\mathcal {O_D}}}$-module ${{\underline M}}$ there exists a Bredon cohomology class $$\underline \alpha\in H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}})$$ such that the class $$\Phi(\underline \alpha)\not=0\in H^n(\pi\times\pi, M)$$ is nonzero. Then ${{\sf {TC}}}(X) \ge n$. Here $M$ denotes the principal component of ${{\underline M}}$.
Suppose that ${{\sf {TC}}}(X)<n$. Then by Theorem \[thm00\] the map $F: E(\pi\times\pi) \to E(\pi)$ admits a factorisation $$E(\pi\times\pi) \to L\to E(\pi)$$ where $L$ is a $G$-CW-complex of dimension less than $n$. Then the homomorphism $$\Phi: H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}}) \to H^n(\pi\times\pi, M)$$ factors as $$\Phi: H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}}) \to H^n_{{\mathcal D}}(L, {{\underline M}}) \to H^n(\pi\times\pi, M)$$ and the middle group vanishes since $\dim L <n$. This contradicts our assumption that $\Phi(\underline \alpha)\not=0$ for some $\underline \alpha\in H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}})$.
[ Theorem \[lower\] can be compared to the classical result concerning the Lusternik - Schnirelmann category (see Eilenberg - Ganea [@EG] or Schwarz [@Sv66]) stating that for an aspherical space $X$ the existence of a nonzero cohomology class in $H^n(X, M)$ (with some local coefficient system $M$) implies that ${{\sf {cat}}}(X)\ge n$. It is not true that ${{\sf {TC}}}(X)\ge n$ if $H^n(X\times X, M)\not=0$ for $X$ aspherical. For example, in the case of the circle $X=S^1$ we know that ${{\sf {TC}}}(X)=1$ while $H^2(X\times X,{{\mathbb Z}})\not=0$. Theorem \[lower\] requires the nontrivial class in the usual twisted cohomology to be extendable to a class in Bredon cohomology. We will investigate this property further in §\[essential\]. ]{}
The canonical class in Bredon cohomology and its universality
=============================================================
In this section we define a special Bredon cohomology class which will play an important role in this paper.
The canonical class {#secan}
-------------------
Consider an ${{\mathcal {O_D}}}$-module $M_X(?)={{\mathbb Z}}[?, X]_G$ where $X$ is a $G$-set, see Example \[free\]. Recall that $G$ denotes the group $\pi\times\pi$. The unique map $X\to \ast$ is $G$-invariant and induces a homomorphism of Bredon modules $\epsilon: {{\underline M}}_X \to {{\underline M}}_\ast ={{\underline {{\mathbb Z}}}}$, called [*the augmentation*]{}. We denote by ${{\underline I}}_X$ the kernel of $\epsilon$. Clearly, ${{\underline I}}_X$ is a Bredon module whose value on an orbit $G/H$ is $${{\underline I}}_X(G/H) = \ker[\epsilon: {{\mathbb Z}}[X^H] \to {{\mathbb Z}}].$$
As a special case of the previous construction we obtain the Bredon module ${{\underline I}}_\pi$ (where $X=\pi$, as in Example \[free1\]). Here $$\begin{aligned}
\label{ideal}
{{\underline I}}_\pi(G/H) = \ker[\epsilon: {{\mathbb Z}}[\pi^H] \to {{\mathbb Z}}] \, \equiv\, I (\pi^H).\end{aligned}$$ We shall shorten the notation ${{\underline I}}_\pi$ to $\underline I$. The principal component of ${{\underline I}}$ is the augmentation ideal $I=\ker[\epsilon: {{{{\mathbb Z}}[\pi]}}\to {{\mathbb Z}}]$.
One obtains a short exact sequence of Bredon modules $$\begin{aligned}
\label{sec1}
0\to {{\underline I}}\to {{\underline M}}_\pi\stackrel{\epsilon}\to {{\underline {{\mathbb Z}}}}\to 0.\end{aligned}$$ The latter defines a Bredon cohomology class $${{\mathfrak u}}\in {{\rm {Ext}}}_{{{\mathcal {O_D}}}}^1({{\underline {{\mathbb Z}}}}, {{\underline I}}) \, \equiv \, H^1_{{\mathcal D}}(\pi\times\pi, {{\underline I}}).$$ We shall call ${{\mathfrak u}}$ [*the canonical class in Bredon cohomology*]{}. It is a refinement of [*the ordinary canonical class*]{} $${{\mathfrak v}}\in H^1(\pi\times \pi, I)$$ which was defined in [@CosFar]. In [@FM], §3 it is shown that ${{\mathfrak v}}$ coincides with the class represented by the principal components of the sequence (\[sec1\]), i.e. by the exact sequence of left ${{{{\mathbb Z}}[\pi\times\pi]}}$-modules $$\begin{aligned}
\label{sec2}
0\to I \to {{{{\mathbb Z}}[\pi]}}\to {{\mathbb Z}}\to 0.\end{aligned}$$ Hence, the principal component of the class ${{\mathfrak u}}$ (i.e. the image of ${{\mathfrak u}}$ under the homomorphism (\[Phi\])), coincides with ${{\mathfrak v}}$.
The canonical class ${{\mathfrak v}}$ is closely related to [*the Berstein - Schwarz class*]{} $${\mathfrak b}\in H^1(\pi, I)$$ which is represented by the exact sequence (\[sec2\]) viewed as a sequence of left ${{{{\mathbb Z}}[\pi]}}$-modules.
The classes ${{\mathfrak u}}^n$
-------------------------------
Next we define classes $${{\mathfrak u}}^n\in H^n_{{\mathcal D}}(\pi\times\pi, {{\underline I}}^n), \quad n=1, 2, \dots.$$ In this paper we shall treat these classes formally and call them [*the powers of the canonical class*]{} ${{\mathfrak u}}$ without trying to justify this name. However we shall show that the principal component of the class ${{\mathfrak u}}^n$ is the $n$-fold cup product ${{\mathfrak v}}\cup{{\mathfrak v}}\cup\dots\cup {{\mathfrak v}}={{\mathfrak v}}^n$ of the canonical class ${{\mathfrak v}}\in H^1(\pi\times\pi, I)$.
The Bredon module ${{\underline I}}^n$ is defined by $${{\underline I}}^n(G/H) = I(\pi^H)\otimes_{{\mathbb Z}}I(\pi^H)\otimes_{{\mathbb Z}}\dots\otimes_{{\mathbb Z}}I(\pi^H), \quad H\in {{\mathcal D}}.$$We shall define the class ${{\mathfrak u}}^n$ by describing an explicit exact sequence of ${{\mathcal {O_D}}}$-modules $$\begin{aligned}
\label{derham}
0\to {{\underline I}}^n \to \underline C_{n-1}\stackrel{d}\to \underline C_{n-2}\stackrel{d}\to \dots\stackrel{d} \to \underline C_0\to {{\underline {{\mathbb Z}}}}\to 0\end{aligned}$$ in which the intermediate ${{\mathcal {O_D}}}$-modules $\underline C_0, \underline C_1, \dots, \underline C_{n-1}$ are projective. If $${{\underline P}}_\ast: \quad \cdots {{\underline P}}_2\to {{\underline P}}_1\to {{\underline P}}_0\to {{\underline {{\mathbb Z}}}}\to 0$$ is an ${{\mathcal {O_D}}}$-projective resolution of ${{\underline {{\mathbb Z}}}}$, we obtain a commutative diagram (unique up to chain homotopy) $$\begin{array}{cclccccclcc}
{{\underline P}}_{n+1}&\to &{{\underline P}}_n&\to {{\underline P}}_{n-1}&\to &\cdots& {{\underline P}}_0&\to&{{\underline {{\mathbb Z}}}}&\to& 0\\ \\
\downarrow && \downarrow f &\downarrow &&& \downarrow &&\downarrow = &\\ \\
0&\to &{{\underline I}}^n&\to C_{n-1}&\to &\cdots& C_0&\to&{{\underline {{\mathbb Z}}}}&\to& 0.
\end{array}$$ The ${{\mathcal {O_D}}}$-homomorphism $f$ is a cocycle, and its cohomology class $$\{f\}\in H^n({{\rm {Hom}}}_{{\mathcal {O_D}}}({{\underline P}}_\ast, {{\underline I}}^n))=H^n_{{\mathcal D}}(\pi\times\pi, {{\underline I}}^n)$$ is independent of the choice of the chain map represented by the diagram above. We define the [*$n$-th power of the canonical class*]{} ${{\mathfrak u}}^n$ as the cohomology class $\{f\}$.
The principal components of the exact sequence (\[derham\]) define an exact sequence of left ${{{{\mathbb Z}}[\pi\times\pi]}}={{\mathbb Z}}[G]$-modules $$0\to {{\underline I}}^n(G/1)=I^n \to \underline C_{n-1}(G/1)\stackrel{d}\to \underline C_{n-2}(G/1)\stackrel{d}\to \dots\stackrel{d} \to \underline C_0(G/1)\to {{\mathbb Z}}\to 0.$$ This sequence determines a class in $${{\rm {Ext}}}_{{{{{\mathbb Z}}[\pi\times\pi]}}}^n({{\mathbb Z}}, I^n)=H^n(\pi\times\pi, I^n)$$ which is [*the principal component of the class ${{\mathfrak u}}^n$*]{}. We shall identify the principal component of ${{\mathfrak u}}^n$ with ${{\mathfrak v}}^n$, see Theorem \[thm3\].
Construction of the complex (\[derham\]) {#construction}
----------------------------------------
Here we shall generalise a construction of Dranishnikov and Rudyak [@DranRud]; see also [@FM].
We shall use the operation $\otimes_{{\mathbb Z}}$ of tensor product of ${{\mathcal {O_D}}}$-modules which is defined as follows. For two right ${{\mathcal {O_D}}}$-modules ${{\underline M}}$ and ${{\underline N}}$ we define ${{\underline M}}\otimes_{{\mathbb Z}}{{\underline N}}$ by the formula $$\left({{\underline M}}\otimes_{{\mathbb Z}}{{\underline N}}\right)(G/H) = {{\underline M}}(G/H)\otimes_{{\mathbb Z}}{{\underline N}}(G/H), \quad H\in {{\mathcal D}},$$ with the obvious action on morphisms.
The following obvious remark will be used in the sequel. Suppose that $$0\to {{\underline M}}_1\to {{\underline M}}_2\to {{\underline M}}_3\to 0$$ is an exact sequence of right ${{\mathcal {O_D}}}$-modules and let ${{\underline N}}$ be a right ${{\mathcal {O_D}}}$-module such that for any $H\in {{\mathcal D}}$ the module ${{\underline N}}(G/H)$ is free as an abelian group. Then the sequence $$0\to {{\underline N}}\otimes_{{\mathbb Z}}{{\underline M}}_1 \to {{\underline N}}\otimes_{{\mathbb Z}}{{\underline M}}_2\to {{\underline N}}\otimes_{{\mathbb Z}}{{\underline M}}_3\to 0$$ is also exact.
Let $X$ and $Y$ be left $G$-sets, where $G=\pi\times\pi$. Consider the ${{\mathcal {O_D}}}$-modules ${{\underline M}}_X$ and ${{\underline M}}_Y$, see Example \[free\]. Note that the tensor product ${{\underline M}}_X\otimes_{{\mathbb Z}}{{\underline M}}_Y$ can be naturally identified with ${{\underline M}}_{X\times Y}$. We know that the modules ${{\underline M}}_X$, ${{\underline M}}_Y$ and ${{\underline M}}_{X\times Y}$ are free iff the isotropy subgroups of all elements of $X$ and $Y$ belong to ${{\mathcal D}}$.
Tensoring the short exact sequence $$\begin{aligned}
\label{mxy}
0\to {{\underline I}}_Y \to {{\underline M}}_Y \stackrel{\epsilon}\to {{\underline {{\mathbb Z}}}}\to 0\end{aligned}$$ with ${{\underline M}}_X$ we obtain an exact sequence $$\begin{aligned}
\label{sec5}0\to {{\underline M}}_X \otimes_{{\mathbb Z}}{{\underline I}}_Y\to {{\underline M}}_{X\times Y} \to {{\underline M}}_X\to 0\end{aligned}$$ in which ${{\underline M}}_X$ and ${{\underline M}}_{X\times Y}$ are free and hence the sequence (\[sec5\]) splits. We conclude: [*If the isotropy subgroups of all elements of $X\sqcup Y$ belong to ${{\mathcal D}}$, then the ${{\mathcal {O_D}}}$-module $${{\underline M}}_X\otimes_{{\mathbb Z}}{{\underline I}}_Y$$ is projective*]{}. Taking in the above statement $X=\pi$ and $Y=\pi^r$, where $\pi^r$ is equipped with the left $\pi\times\pi$ action $(x, y)\cdot(a_1, \dots, a_r)= (xa_1y^{-1}, \cdots, xa_ry^{-1}),$ we obtain that [*the ${{\mathcal {O_D}}}$-module $${{\underline M}}_\pi \otimes_{{\mathbb Z}}{{\underline I}}^{r}$$ is projective for any $r\ge 0$.* ]{} Here ${{\underline I}}^r$ denotes the $r$-fold tensor product ${{\underline I}}\otimes_{{\mathbb Z}}{{\underline I}}\otimes_{{\mathbb Z}}\dots \otimes_{{\mathbb Z}}{{\underline I}}$.
Starting from the short exact sequence (\[sec1\]) and tensoring with ${{\underline I}}$ we iteratively obtain short exact sequences of ${{\mathcal {O_D}}}$-modules $$\begin{aligned}
\label{20}
0\to \, {{\underline I}}^r\, \stackrel{i\otimes 1}\to \, {{\underline M}}_\pi\otimes_{{\mathbb Z}}{{\underline I}}^{r-1}\, \stackrel{\epsilon\otimes 1}\to \, {{\underline I}}^{r-1} \to 0,\quad r=1, 2, \dots.\end{aligned}$$ Splicing them for $r=1, 2, \dots, n$ we obtain the long exact sequence of ${{\mathcal {O_D}}}$-modules $$\begin{aligned}
\label{mpi}
0\to {{\underline I}}^n\to {{\underline M}}_\pi\otimes_{{\mathbb Z}}{{\underline I}}^{n-1} \to {{\underline M}}_\pi\otimes_{{\mathbb Z}}{{\underline I}}^{n-2} \to \dots\to {{\underline M}}_\pi\otimes_{{\mathbb Z}}{{\underline I}}\to {{\underline M}}_\pi\to {{\underline {{\mathbb Z}}}}\to 0.\end{aligned}$$ This is a version of the complex (\[derham\]). Naturally, there exist many other chain complexes representing the same cohomology class ${{\mathfrak u}}^n$.
Universality of the canonical class
-----------------------------------
In this subsection we prove the following statement which is a generalisation of the well-known result of A.S. Schwarz (see [@Sv66], Proposition 34).
\[univ\] For any ${{\mathcal {O_D}}}$-module $ {{\underline M}}$ and for any cohomology class $${{\underline \alpha}}\in H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}})$$ there exists an ${{\mathcal {O_D}}}$-morphism $\phi: {{\underline I}}^n\to {{\underline M}}$ such that $\phi_\ast({{\mathfrak u}}^n) = {{\underline \alpha}}$.
One may construct a projective ${{\mathcal {O_D}}}$-resolution of ${{\underline {{\mathbb Z}}}}$ extending (\[derham\]) $$\dots \stackrel{d}\to \underline C_n\stackrel{d}\to \underline C_{n-1}\stackrel{d}\to \underline C_{n-2}\stackrel{d}\to \dots\stackrel{d}\to \underline C_0\to {{\underline {{\mathbb Z}}}}\to 0.$$ The class ${{\underline \alpha}}$ can be viewed as a cohomology class of the cochain complex ${{\rm {Hom}}}_{{\mathcal {O_D}}}(\underline C_\ast, {{\underline M}})$. Let $f: \underline C_n\to {{\underline M}}$ be a cocycle representing ${{\underline \alpha}}$. In the diagram $$\begin{array}{ccccccc}
\underline C_{n+1}& \stackrel{d}\to & \underline C_n& \stackrel{d}\to & {{\underline I}}^n &\stackrel{d}\to & 0\\ \\
&&\downarrow f&\swarrow \phi& &&\\ \\
&&{{\underline M}}&&&&
\end{array}$$ the row is exact and the existence of an ${{\mathcal {O_D}}}$-homomorphism $\phi: {{\underline I}}^n\to {{\underline M}}$ follows from the assumption that $f$ is a cocycle. We claim that $\phi_\ast({{\mathfrak u}}^n) = {{\underline \alpha}}$. Indeed, the class ${{\mathfrak u}}^n$ is represented by a similar diagram $$\begin{array}{ccccccc}
\underline C_{n+1}& \stackrel{d}\to & \underline C_n& \stackrel{d}\to & {{\underline I}}^n &\stackrel{d}\to & 0\\ \\
&&\downarrow g&\swarrow {\rm {id}}& &&\\ \\
&&{{\underline I}}^n&&&&
\end{array}$$ implying that $\phi\circ g =f$. Hence we see that the cocycle representing the class ${{\underline \alpha}}$ is obtained from the cocycle representing ${{\mathfrak u}}^n$ by composing with $\phi$.
Theorem \[univ\] obviously implies:
One has $${{\rm {cd}}}_{{\mathcal D}}(\pi\times\pi)= {\rm {height}}({{\mathfrak u}}),$$ where the integer ${\rm {height}}({{\mathfrak u}})$ is defined as the largest $n$ such that the class ${{\mathfrak u}}^n\in H^n_{{\mathcal D}}(\pi\times\pi, {{\underline I}}^n)$ is nonzero.
\[thm3\] For any integer $n\ge 1$ the image of the class $${{\mathfrak u}}^n\in H^n_{{\mathcal D}}(\pi\times\pi, {{\underline I}}^n)$$ under the homomorphism (\[Phi\]) coincides with the $n$-fold cup-power $$\Phi({{\mathfrak u}}^n) \, =\, {{\mathfrak v}}^n \, =\, {{\mathfrak v}}\cup {{\mathfrak v}}\cup \dots\cup {{\mathfrak v}}\, \in\, H^n(\pi\times\pi, I^n)$$ of the canonical class ${{\mathfrak v}}\in H^1(\pi\times\pi, I)$.
The principal components of the complex (\[mpi\]) form exactly the chain complex (12) from [@FM], and our statement is identical to Lemma 3.1 from [@FM].
Principal ${{\mathcal {O_D}}}$-modules
======================================
In this section $G$ denotes the group $\pi\times\pi$ and ${{\mathcal D}}$ is the family of subgroups of $G$ defined in §\[secd\].
{#section-10}
Let ${{\underline M}}$ be an ${{\mathcal {O_D}}}$-module. [*The principal component*]{} of ${{\underline M}}$ is defined as ${{\underline M}}(G/1) = A$ which (as we noted in §\[pcomp\]) has the structure of a left ${{\mathbb Z}}[G]$-module. Note that for any orbit $G/H$ we have an ${{\mathcal {O_D}}}$-morphism $f_H: G\to G/H$ given by $g\mapsto gH$ which induces a homomorphism $${{\underline M}}(f_H): {{\underline M}}(G/H) \to {{\underline M}}(G/1)=A.$$ For $a\in H$ we have $f_H =f_H\circ r_a$ where $r_a: G\to G$ is the right multiplication by $a$, i.e. $r_{a}(g)=ga$. Applying the functor ${{\underline M}}$ we see that the homomorphism $\phi_H\equiv {{\underline M}}(f_H)$ takes values in $A^H$, i.e. $$\begin{aligned}
\label{ah}
\phi_H: {{\underline M}}(G/H) \to A^H. \end{aligned}$$
\[defprin\] We shall say that an ${{\mathcal {O_D}}}$-module ${{\underline M}}$ is principal if for any subgroup $H\in {{\mathcal D}}$ the homomorphism $$\begin{aligned}
\label{isom}\phi_H={{\underline M}}(f_H)\, :\, {{\underline M}}(G/H)\to A^H\end{aligned}$$ is an isomorphism.
Let ${{\underline M}}$ be a principal ${{\mathcal {O_D}}}$-module. Let $H, K\in {{\mathcal D}}$ and let $a\in G$ be such that $a^{-1}Ha\subset K$. Then we have an ${{\mathcal {O_D}}}$-morphism $f_a: G/H\to G/K$ where $f_a(gH) = gaK$ for any $g\in G$. We obtain the commutative diagram $$\begin{array}{ccc}
G& \stackrel{r_a}\to & G\\
f_H \downarrow & & \downarrow f_K\\
G/H & \stackrel{f_a}\to & G/K.
\end{array}$$ of orbits and applying the functor ${{\underline M}}$ we obtain the commutative diagram $$\begin{array}{ccc}
A^K& \stackrel{r_a^\ast}\to & A^H\\
\phi_K \uparrow \simeq & & \simeq \uparrow \phi_H\\
{{\underline M}}(G/K)& \stackrel{f_a^\ast}\to & {{\underline M}}(G/H).
\end{array}$$ where $r_a^\ast$ is multiplication by $a$. Thus we see that the structure of a principal ${{\mathcal {O_D}}}$-module ${{\underline M}}$ is fully determined by the left ${{\mathbb Z}}[G]$-module $A$ (the principal component of ${{\underline M}}$). Viewing $A$ as a left $G$-set we may write $${{\underline M}}(G/H) = [G/H, A] = A^H.$$
Principal modules appear in the book of G. Bredon [@Bre] as Example (2), page I-10.
{#section-11}
As an example consider the ${{\mathcal {O_D}}}$-module ${{\underline M}}_X(?)={{\mathbb Z}}[?, X]_G$ (see Example \[free\]) where $X$ is a left $G$-set. In this case the principal component is ${{\mathbb Z}}[X]$ viewed as a left ${{\mathbb Z}}[G]$-module. For an orbit $G/H$ with $H\in {{\mathcal D}}$ we have ${{\underline M}}_X(G/H) ={{\mathbb Z}}[X^H]$ and the map $f_H: G/1 \to G/H$ induces a homomorphism $$\begin{aligned}
\label{xh}
{{\mathbb Z}}[X^H]\to ({{\mathbb Z}}[X])^H\end{aligned}$$ which in general is an inclusion.
\[prin1\] The homomorphism (\[xh\]) is an isomorphism if and only if for any $H\in {{\mathcal D}}$, the set $X$, viewed as an $H$-set, has the following property: any $H$-orbit contained in $X$ is either infinite or a single point.
Suppose that $X$ satisfies the condition of the Lemma. For $H\in {{\mathcal D}}$ we may split $X$ into a disjoint union of $H$-orbits $X=\sqcup_j X_j$ where each $X_j$ is either a single point or infinite. Then ${{\mathbb Z}}[X] =\oplus_j {{\mathbb Z}}[X_j]$ and ${{\mathbb Z}}[X]^H =\oplus_j {{\mathbb Z}}[X_j]^H$ with ${{\mathbb Z}}[X_j]^H = {{\mathbb Z}}[X_j]$ if $X_j$ is a single point and ${{\mathbb Z}}[X_j]^H = 0$ if $X_j$ is infinite. On the other hand the set $X^H$ is the union of the sets $X_j$ which are single points. Hence (\[xh\]) is an isomorphism.
The converse statement follows similarly. Namely, suppose that $X_j\subset X$ is a finite $H$-orbit which is not a single point. Then the element $$\sum_{x\in X_j} x\, \, \in\, {{\mathbb Z}}[X]$$ is invariant with respect to $H$, i.e. it lies in $\left({{\mathbb Z}}[X]\right)^H$ but not in ${{\mathbb Z}}[X^H]$.
We want to restate Lemma \[prin1\] in terms of the isotropy subgroups of points of $X$. For a point $x\in X$ denote by $I(x)\subset \pi\times\pi$ its isotropy subgroup. For a subgroup $H\subset G=\pi\times\pi$ one has $x\in X^H$ iff $H\subset I(x)$. The orbit of $x$ with respect to $H$ is finite iff $H$ contains $I(x)\cap H$ as a finite index subgroup. Thus we obtain the following Corollary:
\[cor34a\] The ${{\mathcal {O_D}}}$-module ${{\underline M}}_X$ is principal if and only if for any $x\in X$ and any subgroup $H\in {{\mathcal D}}$ the index $[H:H\cap I(x)]$ is either $1$ or $\infty$.
For free ${{\mathcal {O_D}}}$-modules ${{\underline M}}_X$ the set $X$ has all isotropy subgroups in ${{\mathcal D}}$. This leads to the following Corollary:
\[cor34\] Suppose that for any two subgroups $H, H'\in {{\mathcal D}}$ the index $[H:H\cap H']$ is either 1 or $\infty$. Then any free ${{\mathcal {O_D}}}$-module is principal.
Note that the property of the family of subgroups ${{\mathcal D}}$ described in Corollary \[cor34\] is in fact a property of the group $\pi$ since the family ${{\mathcal D}}$ depends on the group $\pi$ alone.
\[defprincipal\] We shall say that a group $\pi$ is principal if any of the following equivalent conditions is satisfied:
(a) Any free ${{\mathcal {O_D}}}$-module is principal,
(b) For any two subgroups $H, H'\in {{\mathcal D}}$, the index $[H:H\cap H']$ is either 1 or infinity,
(c) For any two finite subsets $S, S'\subset \pi$ the group $Z(S)/Z(S\cup S')$ is either infinite or trivial.
Recall that the symbol $Z(S)$ denotes the centraliser of $S$, i.e. the set of all elements $g\in \pi$ which commute with every element of $S$. The equivalence between (a) and (b) follows from Corollaries \[cor34a\] and \[cor34\]. The equivalence $(b)\sim (c)$ follows from the structure of the groups $H\in {{\mathcal D}}$.
[ Let $\pi={{\mathbb Z}}^n$. Then the class ${{\mathcal D}}$ contains only two subgroups, the trivial subgroup and the diagonal $\Delta$. The condition of Corollary \[cor34\] is clearly satisfied, i.e. ${{\mathbb Z}}^n$ is a principal group. ]{}
Other examples of principal groups will be described in §\[sec:examples\].
\[lm622\] Let $0\to {{\underline M}}_1\stackrel{\alpha}\to {{\underline M}}_2\stackrel{\beta}\to {{\underline M}}_3$ be an exact sequence of ${{\mathcal {O_D}}}$-modules such that the modules ${{\underline M}}_2$ and ${{\underline M}}_3$ are principal. Then the module ${{\underline M}}_1$ is also principal.
Denote $G=\pi\times\pi$ for short. For any $H\in {{\mathcal D}}$ we have the following commutative diagram $$\begin{array}{ccccccc}
0&\to & M_1(G/H)& \stackrel{\alpha}\to & M_2(G/H)& \stackrel{\beta}\to & M_3(G/H)\\ \\
&& \downarrow \phi^1_H && \downarrow \phi^2_H && \downarrow \phi^3_H \\ \\
0 & \to & A_1^H & \stackrel{\alpha}\to & A_2^H & \stackrel{\beta}\to & A_3^H
\end{array}$$ The rows are exact and $\phi^2_H$ and $\phi^3_H$ are isomorphisms. By the 5-lemma we obtain that $\phi_H^1$ is also an isomorphism. Hence ${{\underline M}}_1$ is principal.
Lemma \[lm622\] can also be stated as saying that the kernel of an ${{\mathcal {O_D}}}$-morphism of principal Bredon modules is principal.
\[cor48\] Assume that the group $\pi$ is principal. Then the ${{\mathcal {O_D}}}$-module ${{\underline I}}^n$ is principal for any $n\ge 1$.
First let us make the following general remark. Let $X$ and $Y$ be left $\pi\times\pi$-sets with all isotropy subgroups in ${{\mathcal D}}$. Then the ${{\mathcal {O_D}}}$-module ${{\underline M}}_X\otimes_{{\mathbb Z}}{{\underline I}}_Y$ is principal as follows by applying Lemma \[lm622\] to the exact sequence (\[sec5\]) and noting that the free modules ${{\underline M}}_{X\times Y}$ and ${{\underline M}}_X$ are principal.
The statement of Corollary \[cor48\] now follows by inductively applying the above remark to the exact sequence (\[20\]).
Morphisms between principal modules are determined by their effect on the principal components:
\[thm4\] Let ${{\underline M}}_1$ and ${{\underline M}}_2$ be principal ${{\mathcal {O_D}}}$-modules. Let $A_1$ and $A_2$ be their principal components. Then the map $$\begin{aligned}
\label{mor}
{{\rm {Hom}}}_{{\mathcal {O_D}}}({{\underline M}}_1, {{\underline M}}_2) \to {{\rm {Hom}}}_{{{{\mathbb Z}}[\pi\times\pi]}}(A_1, A_2),\end{aligned}$$ associating with any morphism its effect on the principal components, is an isomorphism.
Let $f:{{\underline M}}_1\to {{\underline M}}_2$ be an ${{\mathcal {O_D}}}$-morphism. The map (\[mor\]) associates with $f$ the ${{{{\mathbb Z}}[\pi\times\pi]}}$-homomorphism $f_1: {{\underline M}}_1(G/1)=A_1\to {{\underline M}}_2(G/1)=A_2$. We have the following commutative diagram $$\begin{array}{ccc}
{{\underline M}}_1(G/H)& \stackrel{\phi^1_H}\to & A_1^H\\ \\
\downarrow f_H&&\downarrow f_1^H\\ \\
{{\underline M}}_2(G/H)& \stackrel{\phi^2_H}\to & A_2^H\end{array}$$ in which $\phi_H^1$ and $\phi_H^2$ are isomorphisms. Thus, we see that the homomorphism $f_H$ is uniquely determined by the restriction $f_1^H$ of $f_1$ onto $A_1^H$, i.e. the map (\[mor\]) is injective. Conversely, any ${{{{\mathbb Z}}[\pi\times\pi]}}$-homomorphism $f_1: A_1\to A_2$ maps $A_1^H$ into $A_2^H$ for every $H\in {{\mathcal D}}$, and the homomorphisms $(\phi^2_H)^{-1}\circ f_1^H\circ \phi^1_H$ assemble into an ${{\mathcal {O_D}}}$-morphism ${{\underline M}}_1\to {{\underline M}}_2$ with principal component $f_1$; hence (\[mor\]) is also surjective.
\[cor49\] Let ${{\underline C}}_\ast$ be a chain complex of principal ${{\mathcal {O_D}}}$-modules and let ${{\underline M}}$ be a principal ${{\mathcal {O_D}}}$-module. Then the canonical map $${{\rm {Hom}}}_{{\mathcal {O_D}}}({{\underline C}}_\ast, {{\underline M}}) \to {{\rm {Hom}}}_{{{{\mathbb Z}}[\pi\times\pi]}}(C_\ast, M)$$ is an isomorphism of chain complexes. Here $C_\ast = {{\underline C}}_\ast(G/1)$ is the principal component of ${{\underline C}}_\ast$ and $M={{\underline M}}(G/1)$ is the principal component of ${{\underline M}}$.
This follows from Lemma \[thm4\].
\[isomorphism\] Suppose that the group $\pi$ is principal. Let $C_\ast$ be the chain complex of left ${{{{\mathbb Z}}[\pi\times\pi]}}$-modules consisting of principal components of a projective ${{\mathcal {O_D}}}$-resolution of ${{\underline {{\mathbb Z}}}}$. Then the natural map $$H^n_{{\mathcal D}}(\pi\times\pi, {{\underline I}}^n)\to H^n({{\rm {Hom}}}_{{{{\mathbb Z}}[\pi\times\pi]}}(C_\ast, I^n)),$$ is an isomorphism.
We apply Corollary \[cor49\] to an ${{\mathcal {O_D}}}$-free resolution of ${{\underline {{\mathbb Z}}}}$, noting that under our assumptions the Bredon module ${{\underline I}}^n$ is principal (by Corollary \[cor48\]).
{#section-12}
Note that the complex $C_\ast$ which appears in Corollary \[isomorphism\] is a resolution of ${{\mathbb Z}}$ over the ring ${{{{\mathbb Z}}[\pi\times\pi]}}$ but it is neither free nor projective. Any projective resolution $P_\ast$ admits a chain map $P_\ast\to C_\ast$ and for any left ${{{{\mathbb Z}}[\pi\times\pi]}}$-module $A$ we have a chain map ${{\rm {Hom}}}_{{{{\mathbb Z}}[\pi\times\pi]}}(C_\ast, A) \to {{\rm {Hom}}}_{{{{\mathbb Z}}[\pi\times\pi]}}(P_\ast, A)$ (which is unique up to homotopy) inducing a well-defined homomorphism $$H^\ast({{\rm {Hom}}}_{{{{\mathbb Z}}[\pi\times\pi]}}(C_\ast, A))\to H^\ast({{\rm {Hom}}}_{{{{\mathbb Z}}[\pi\times\pi]}}(P_\ast, A))= H^\ast(\pi\times\pi, A).$$
Essential cohomology classes {#essential}
============================
The following notion was introduced and studied in [@FM].
Let $A$ be a left ${{{{\mathbb Z}}[\pi\times\pi]}}$-module. A cohomology class $\beta\in H^n(\pi\times \pi, A)$ is said to be [*essential*]{} if there exists a homomorphism of ${{\mathbb Z}}[\pi\times \pi]$-modules $\mu: I^n \to A$ such that $$\mu_\ast({{\mathfrak v}}^n)=\beta.$$ Here ${{\mathfrak v}}^n\in H^n(\pi\times\pi, I^n)$ denotes the $n$-th power of the canonical class ${{\mathfrak v}}$.
In [@FM] the authors constructed a spectral sequence giving a full set of obstructions for a cohomology class to be essential. The first such obstruction is the requirement for the class $\beta\in H^n(\pi\times \pi, A)$ to be [*a zero-divisor*]{}, i.e. $$\begin{aligned}
\label{divisor}\beta|_\pi=0\in H^n(\pi, A|_\pi)\end{aligned}$$ where $\pi\subset \pi\times\pi$ denotes the diagonal subgroup; see [@FM], §5. The condition (\[divisor\]) is clearly necessary since the canonical class ${{\mathfrak v}}$ and all its powers ${{\mathfrak v}}^n$ are zero-divisors.
Here we characterise the essential cohomology classes as principal components of Bredon cohomology classes.
\[thm8\] Let $A$ be a left ${{{{\mathbb Z}}[\pi\times\pi]}}$-module which is the principal component of an ${{\mathcal {O_D}}}$-module ${{\underline M}}$. Consider the homomorphism $$\begin{aligned}
\Phi: H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}})\to H^n(\pi\times\pi, A)\end{aligned}$$ which associates to a Bredon cohomology class its principal component, see (\[Phi\]).
\(1) Any class $\beta\in H^n(\pi\times \pi, A)$ in the image of $\Phi$ is essential.
\(2) If the group $\pi$ is principal then the set of essential cohomology classes coincides with the image of $\Phi$.
Suppose that $\beta=\Phi(\alpha)$ where $\alpha\in H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}})$. By the Universality Theorem \[univ\], there exists an ${{\mathcal {O_D}}}$-homomorphism $\mu: {{\underline I}}^n\to {{\underline M}}$ such that $\alpha=\mu_\ast({{\mathfrak u}}^n)$. On the principal components we obtain a ${{{{\mathbb Z}}[\pi\times\pi]}}$-homomorphism $\mu: I^n \to A$ such that $\mu_\ast({{\mathfrak v}}^n)=\beta.$ Thus $\beta$ is essential. Here we used Theorem \[thm3\] stating that the principal component of ${{\mathfrak u}}^n$ is ${{\mathfrak v}}^n$. This proves statement (1).
Suppose now that a cohomology class $\beta\in H^n(\pi\times\pi, A)$ is essential, i.e. $\beta=\mu_\ast({{\mathfrak v}}^n)$ where $\mu: I^n\to A$ is a ${{{{\mathbb Z}}[\pi\times\pi]}}$-homomorphism. Let ${{\underline M}}$ denote the ${{\mathcal {O_D}}}$-module $${{\underline M}}(G/H) \, = \, A^H \, =\, [G/H, \, A]_G$$ whose principal component is $A$. Here we view $A$ as a left $G$-set and the brackets $[\, \, , \, \, ]_G$ denote the set of $G$-maps. Since we assume that $\pi$ is principal we know that the ${{\mathcal {O_D}}}$-module ${{\underline I}}^n$ is principal (see Corollary \[cor48\]). Applying Lemma \[thm4\] we obtain an ${{\mathcal {O_D}}}$-morphism $\hat\mu: {{\underline I}}^n\to {{\underline M}}$ having $\mu$ as its principal component. This produces a Bredon cohomology class $$\alpha=\hat\mu_\ast({{\mathfrak u}}^n)\in H^n_{{\mathcal D}}(\pi\times\pi, {{\underline M}}),$$ and using Theorem \[thm3\] we have $\Phi(\alpha)=\mu_\ast({{\mathfrak v}}^n) =\beta$.
This completes the proof.
Examples of principal groups {#sec:examples}
============================
In this section we show that all torsion free hyperbolic groups as well as all torsion free nilpotent groups are principal. Also, we give an example of a non-principal group.
\[def:propN\] We say that a group $\pi$ satisfies Property $N$ if, for any $a\in \pi$ and any finite set $S \subset \pi$, the inclusion $a^n \in Z(S)$, where $n\ge 1$, implies that $a\in Z(S).$
\[prop:propN\] Any group $\pi$ satisfying Property $N$ is principal.
We shall use the property (c) from Definition \[defprincipal\]. To show that the group $\pi$ is principal we need to show that for any two finite subsets $S, S'\subset \pi$ the group $Z(S)/Z(S\cup S')$ is either trivial or infinite. This will follow once we show that this group is torsion free. An element of order $n$ in $Z(S)/Z(S\cup S')$ is represented by an element $a\in Z(S)$ such that $a^n\in Z(S\cup S')$. But then Property N implies $a\in Z(S\cup S')$ i.e. $a$ represents the trivial class in $Z(S)/Z(S\cup S')$.
\[prop:propNnilp\] If $\pi$ is a finitely generated torsion free nilpotent group, then $\pi$ satisfies Property $N$ and therefore $\pi$ is principal.
If $\pi$ is abelian, then $Z(S)=\pi$, so Property $N$ holds tautologically. Suppose inductively that any finitely generated torsion free nilpotent group of class $<r$ satisfies Property $N$. Take $\pi$ of class $r$ and let $a^n \in Z(S)$ for some $S$. Denote the quotient of $\pi$ by its centre by $\bar\pi = \pi/Z(\pi)$ and note that: (1) The class of $\bar\pi$ is $<r$, so $\bar\pi$ satisfies Property $N$ and (2) $Z(S)$ maps into $Z(\bar S)$ under the quotient map $\pi \to \bar\pi$. Then we see that $\bar a^n \in Z(\bar S)$ and, by Property $N$, we have $\bar a \in Z(\bar S)$. Let $g \in S$ so that $\bar g \in \bar S$. Then we see that $[\bar a, \bar g]=1$ and this implies that $[a,g]\in Z(\pi)$. Let’s now employ a basic relation among higher commutators (which holds for any group \[cf. Hall, 10.2.12\]), $$[xy, z] = \big[ x, [y, z] \big]\, [y,z] \, [x,z].$$ Recall that we have $[a^n, g] = 1$. Expanding $[a^n, g]$ using the relation above gives $$[a^n, g] = \big[ a^{n-1}, [a,g] \big]\, [a,g] \, [a^{n-1},g] = [a,g] \, [a^{n-1},g],$$ where the last equality follows because $[a,g] \in Z(\pi)$. Repeating this step eventually leads to $1 = [a^n, g] = [a,g]^{n}$. Since $\pi$ is torsion free, we have $[a, g] = 1$ and $a\in Z(S)$. This completes the inductive step.
\[sur\] Let $\pi$ be a torsion free group such that the centraliser $Z(g)$ of any element $g\in \pi-\{1\}$ is cyclic. Then any two centralisers $Z(g_1)$, $Z(g_2)$, where $g_1, g_2\in \pi-\{1\}$, either coincide $Z(g_1)=Z(g_2)$ or their intersection is trivial, $Z(g_1)\cap Z(g_2)=\{1\}$.
Let $a_i\in Z(g_i)$ be a generator, $i=1, 2$. Assume that the intersection $Z(g_1)\cap Z(g_2)$ is not trivial. Then this intersection is an infinite cyclic group. Let $x\in Z(g_1)\cap Z(g_2)$ denote a generator of the intersection. Then $$\begin{aligned}
\label{xg}
x=a_1^{n_1}=a_2^{n_2}\end{aligned}$$ for some $n_1, n_2\not=0$. Consider the centraliser $Z(x)\subset \pi$. It is an infinite cyclic group (by our assumption) which contains $a_1$ and $a_2$ (because of (\[xg\])) implying that the elements $a_1$ and $a_2$ commute. Hence $Z(g_1)=Z(g_2).$
\[ex2\] Assume that a group $\pi$ is torsion free and the centraliser of any nontrivial element $g\in \pi$ is cyclic. Then $\pi$ satisfies property N and hence it is principal.
Let $S\subset \pi$ be a finite subset. By Lemma \[sur\], if $Z(S)$ is nontrivial then $Z(S)=Z(g)$ for some $g\in \pi-\{1\}$. If $a^n\in Z(S)$ then $a^n\in Z(a)\cap Z(g)$. We know that the centralisers $Z(a)$ and $Z(g)$ either coincide or have trivial intersection. If $Z(a)=Z(g)$ then $a\in Z(g)=Z(S)$. In the case $Z(a)\cap Z(g)=1$ we obtain $a^n=1$ and hence $a=1$ since $\pi$ is torsion free.
Any torsion free hyperbolic group is principal.
This follows from Lemma \[ex2\] since in a torsion free hyperbolic group the centraliser of any non-unit element is cyclic.
As an example of a group that is not principal we have the following:
\[Klein\]
Consider the fundamental group $K$ of the Klein bottle, $$K=\langle c, d; c^2 = d^2\rangle.$$ Denote $z=c^2=d^2;$ this element generates the centre $Z\subset K$. Denote $x=cd$, $y=dc$. Any element of $K$ can be uniquely written in one of the four forms $$x^kz^l, \, y^kz^l,\, x^kz^lc,\, y^kz^ld, \quad k, l \in {{\mathbb Z}}.$$ Relations: $$xy=yx=z^2$$ $$cx=yc$$ $$dx=yd$$ $$cy=xc$$ $$dy=xd$$ We see that the centraliser of $x$ is the subgroup generated by $x, y$ and $z$. Note that $Z(x)\subset K$ is normal. Besides, $c\notin Z(x)$ while $c^2=z\in Z(x)$. This shows that $K$ does not have property $N$.
Besides, the centraliser of $xy=z^2$ is the whole group $K$. In this case the group $K/K\cap Z(x)=K/Z(x)$ is ${{\mathbb Z}}_2$. Consider the following two subgroups $H, H'\subset K\times K$. Let $H=\Delta\subset K\times K$ be the diagonal and let $H'$ be $H'= \{(a, xax^{-1}); a\in K\}$. Then $H\cap H'= \{(a, a); a\in Z(x)\}$ and hence $H/H\cap H'\simeq K/Z(x)\simeq {{\mathbb Z}}_2$. We conclude that the fundamental group of the Klein bottle $K$ is not principal.
[FF]{}
A. Boudjaj and Y. Rami, *On spaces of topological complexity two,* arXiv:1607.05346v2, 25 July 2016.
G. Bredon, *Equivariant cohomology theories*, Lecture Notes in Mathematics, No. 34, Springer-Verlag, Berlin-New York, 1967.
K. S. Brown, *Cohomology of groups*, Graduate Texts in Mathematics, vol. 87, Springer-Verlag, New York, 1982.
D. Cohen and G. Pruidze, *Motion planning in tori*, Bull. Lond. Math. Soc. [**40**]{} (2008), 249–262.
D. Cohen and L. Vandembroucq, *Topological complexity of the Klein bottle*, J Appl. and Comput. Topology (2017). https://doi.org/10.1007/s41468-017-0002-0, arXiv:1612.03133.
A. Costa and M. Farber, *Motion planning in spaces with small fundamental groups*, Commun. Contemp. Math. 12 (2010), no. 1, 107-119.
O. Cornea, G. Lupton, J. Oprea, and D. Tanr[é]{}, *Lusternik-[S]{}chnirelmann [C]{}ategory*, Mathematical Surveys and Monographs, vol. 103, American Mathematical Society, Providence, RI, 2003.
A. Dranishnikov and Y. Rudyak, *On the Berstein-Svarc theorem in dimension 2*, Math. Proc. Cambridge Philos. Soc. 146 (2009), no. 2, 407-413.
A. Dranishnikov, *The topological complexity and the homotopy cofiber of the diagonal map for non-orientable surfaces,* Proc. Amer. Math. Soc. 144 (2016), no. 11, 4999–5014.
S. Eilenberg and T. Ganea, *On the [L]{}usternik–[S]{}chnirelmann category of abstract groups*, Ann. of Math. (2) **65** (1957), 517–518.
M. Farber, *Topological complexity of motion planning*, Discrete Comput. Geom. **29** (2003), no. 2, 211–221.
M. Farber, *Instabilities of robot motion*, Topology Appl. **140** (2004), no. 2-3, 245–266.
M. Farber, S. Tabachnikov and S. Yuzvinsky, *Motion planning in projective spaces*, International Mathematics Research Notices 2003, No. 34, pp. 1853 - 1870.
M. Farber, *Topology of robot motion planning*, in: Morse theoretic methods in nonlinear analysis and in symplectic topology, NATO Sci. Ser. II Math. Phys. Chem., vol. 217, Springer, Dordrecht, 2006, pp. 185–230.
M. Farber, *Invitation to topological robotics*, Zurich Lectures in Advanced Mathematics, European Mathematical Society (EMS), Zürich, 2008.
M. Farber, *Configuration Spaces and Robot Motion Planning Algorithms*, in: Combinatorial and Toric Topology, A. Darby, J. Grbic, Z. Lü and J. Wu editors, Lecture Notes Series, IMS, National University of Singapore, 2017, pp. 263–303.
M. Farber, S. Mescher, *On the topological complexity of aspherical spaces*, preprint arXiv:1708.06732.
M. Farber, S. Yuzvinsky, *Topological robotics: subspace arrangements and collision free motion planning*, Geometry, topology, and mathematical physics, 145–156, Amer. Math. Soc. Transl. Ser. 2, 212, Adv. Math. Sci., 55, Amer. Math. Soc., Providence, RI, 2004.
M. Fluch, *PhD Thesis: On Bredon (Co-)Homological Dimensions of Groups*, University of Southampton (2011).
M. Grant, *Topological complexity, fibrations and symmetry,* Topology Appl. 159 (2012), no. 1, 88–97.
M. Grant, G. Lupton and J. Oprea, *New lower bounds for the topological complexity of aspherical spaces*, Topology Appl. 189 (2015), 78–91.
M. Grant, G. Lupton and J. Oprea, *Spaces of topological complexity one,* Homology, Homotopy and Applications, vol. 15(2) (2013) 73-81.
M. Grant, G. Lupton, and J. Oprea, *A mapping theorem for topological complexity*, Algebr. Geom. Topol. [**[15]{}**]{}(2015), no. 3, 1643–1666.
M. Grant and D. Recio-Mitter, *Topological complexity of subgroups of Artin’s braid groups*, arXiv:1607.04830.
A. Hatcher, *Algebraic topology*, Cambridge University Press, Cambridge, 2002.
D. Husemoller, *Fibre Bundles*, McGraw-Hill Company, 1966.
W. Lück, *Survey on classifying spaces for families of subgroups*, Infinite groups: geometric, combinatorial and dynamical aspects, 269-322, Progr. Math., 248, Birkhäuser, Basel, 2005.
W. Lück, *Transformation groups and algebraic K-theory*, Lecture Notes in Math, v. 1408, Springer - Verlag, Berlin, 1989.
S. Mac Lane, *Homology*, Springer-Verlag, 1995.
J.P. May, *Equivariant homotopy and cohomology theory*, AMS Regional Conference Series in Mathematics [**[91]{}**]{},1996.
G. Mislin, *Equivariant $K$-Homology of the Classifying Space for Proper Actions*, in G. Mislin and A. Valette, *Proper group actions and the Baum-Connes Conjecture*, Birkhauser, 2003.
J. Milnor, *On spaces having the homotopy type of a CW complex*, Trans. Amer. Math. Soc. [**[90]{}**]{} (1959), 272–280.
Y. Rudyak, *On topological complexity of Eilenberg-MacLane spaces.* Topology Proceedings [**48**]{} (2016), 65– 67.
A. Schwarz, *The genus of a fiber space*, Amer. Math. Soc. Transl. Ser. 2 **55** (1966), 49–140.
T. tom Dieck, *Transformation groups*, De Gruyter Studies in Math. 8, 1987.
G. Whitehead, *Elements of Homotopy Theory*, Grad. Texts in Math. 61 Springer-Verlag (1978).
---
abstract: 'Many quantum algorithms make use of oracles which evaluate classical functions on a superposition of inputs. In order to facilitate implementation, testing, and resource estimation of such algorithms, we present quantum circuits for evaluating functions that are often encountered in the quantum algorithm literature. This includes Gaussians, hyperbolic tangent, sine/cosine, inverse square root, arcsine, and exponentials. We use insights from classical high-performance computing in order to optimize our circuits and implement a quantum software stack module which allows to automatically generate circuits for evaluating piecewise smooth functions in the computational basis. Our circuits enable more detailed cost analyses of various quantum algorithms, allowing to identify concrete applications of future quantum computing devices. Furthermore, our resource estimates may guide future research aiming to reduce the costs or even the need for arithmetic in the computational basis altogether.'
author:
- Thomas Häner
- Martin Roetteler
- 'Krysta M. Svore'
bibliography:
- 'references.bib'
title: Optimizing Quantum Circuits for Arithmetic
---
\[sec:intro\]Introduction
=========================
Quantum computers are expected to excel at certain computational tasks with an asymptotic advantage over their classical counterparts. Examples for such tasks include factoring [@shor1994algorithms] and the simulation of quantum chemical processes [@reiher2016elucidating; @babbush2016exponentially]. While new quantum algorithms tackling these problems offer favorable asymptotic behavior, exact runtime estimates are often lacking due to the absence of reversible implementations for functions such as the ones considered in this paper. However, the implementation details of these functions greatly influence the constant overheads involved and, thus, also the crossover points at which the choice of quantum/classical algorithm changes.
We address this issue by presenting circuits for arithmetic which can be added to a quantum software stack such as LiQ$Ui\Ket{}$ [@wecker2014liqui], Quipper [@green2013quipper], ScaffCC [@javadiabhari2014scaffcc], Q\# [@svore2018q], and ProjectQ [@steiger2016projectq] to name a few. In particular, we discuss the implementation of general smooth functions via a piecewise polynomial approximation, followed by functions that are used in specific applications. Namely, we analyze the costs of implementing an inverse square root ($1/\sqrt x$) using a reversible fixed-point version of the method used in the computer game Quake III Arena [@quake3] and we then combine this with our evaluation scheme for smooth functions in order to arrive at an implementation of $\arcsin(x)$.
Having reversible implementations of these functions available enables more detailed cost analyses of various quantum algorithms such as HHL [@harrow2009quantum], where the inverse square root can be used to arrive at $x\mapsto 1/x$ and $\arcsin(x)$ can be used to get $1/x$ from the computational basis state into the amplitude. Similar use cases arise in Quantum Metropolis sampling [@KOV+:2011], Gibbs state preparation [@PW:2009] and in the widely applicable framework of Quantum Rejection Sampling [@ORR:2013] to transform one or more samples of a given quantum state into a quantum state with potentially different amplitudes, while maintaining relative phases. In all these examples the computation of $\arcsin(x)$ is useful for the rejection sampling step. Further applications of numerical functions can be anticipated in quantum machine learning, where sigmoid functions may need to be evaluated on a superposition of values employing $\tanh(x)$, and $1/\sqrt x$ can be used for (re-)normalization of intermediate results [@CNW:2010]. In quantum algorithms for chemistry, further examples for numerical functions arise for on-the-fly computation of the one- and two-body integrals [@babbush2016exponentially]. There, $1/\sqrt x$ as well as the evaluation of smooth functions such as Gaussians is needed. Similarly, on-the-fly computation of finite element matrix elements often involves the evaluation of functions such as $\sin(x)$ and $\cos(x)$ [@Scherer2017].
#### Related work.
As a result of the large impact that the implementation details of such functions may have on the practicality of a given quantum algorithm, there is a vast amount of literature available which provides circuits for various low-level arithmetic functions such as addition [@takahashi2009quantum; @draper2000addition; @cuccaro2004new; @Draper:2006:LQC:2012086.2012090]. Furthermore, Refs. [@cao2013; @Bhaskar:2016:QAC:3179448.3179450; @MT:2018] discuss implementations of higher-level arithmetic functions such as $\sin(x)$, $\arcsin(x)$ and $\sqrt{x}$ which we also consider in the present work, although using different approaches. In particular, our piecewise polynomial evaluation circuit enables evaluating piecewise smooth functions to high accuracy using polynomials of very low degree. As a result, we require only a small number of additions and multiplications, and few quantum registers to hold intermediate results in order to achieve reversibility. While Ref. [@cao2013] employs several evaluations of the $\sin(x)$ function in order to hone in on the actual value of its inverse, our implementation of $\arcsin(x)$ features costs that are similar to just one invocation of $\sin(x)$ for $x\in [-0.5,0.5]$. Otherwise, if $x\in[-1,1]$, our implementation also requires an evaluation of the square root. For evaluating inverse square roots, we optimize the initial guess which was also used in [@Bhaskar:2016:QAC:3179448.3179450] in order to reduce the number of required Newton iterations by 1 (which corresponds to a reduction by $20$-$25\%$). In contrast to the mentioned works, we implement all our high-level arithmetic functions at the level of Toffoli gates in the quantum programming language LIQ$Ui\Ket{}$. As a result, we were able to test our circuits on various test vectors using a Toffoli circuit simulator, ranging up to several hundreds of qubits.
Throughout this paper, we adapt ideas from classical high-performance computing in order to reduce the required resources in the quantum setting. While these methods allow to reduce the Toffoli and qubit counts significantly, the resulting circuits are still quite expensive, especially in terms of the number of gates that are required. We hope that this highlights the fact that more research in the implementation of quantum algorithms is necessary in order to further reduce the cost originating from arithmetic in the computational basis.
Learning from Classical Arithmetic Libraries
============================================
While there is no need for computations to be reversible when using classical computers, a significant overlap of techniques from reversible computing can be found in vectorized high-performance libraries. In quantum computing, having an if-statement collapses the state vector, resulting in a loss of all potential speedup. Similarly, if-statements in vectorized code require a read-out of the vector, followed by a case distinction and a read-in of the handled values, which incurs a tremendous overhead and results in a deterioration of the expected speedup or even an overall slowdown. Analogous considerations have to be taken into account when dealing with, e.g., loops. Therefore, classical high-performance libraries may offer ideas and insights applicable to quantum computing, especially for mathematical functions such as (inverse) trigonometric functions, exponentials, logarithms, etc., of which highly-optimized implementations are available in, e.g., the Cephes math library [@moshier2000cephes] or games such as Quake III Arena (their fast inverse square root [@quake3] is reviewed in [@lomont2003fast]).
Although some of these implementations rely on a floating-point representation, many ideas carry over to the fixed-point domain, and remain efficient enough even when requiring reversibility. Specifically, we adapt implementations of the arcsine function from [@moshier2000cephes] and the fast inverse square root from [@lomont2003fast] to the quantum domain by providing reversible low-level implementations. Furthermore, we describe a parallel version of the classical Horner scheme [@knuth1962evaluation], which enables the conditional evaluation of many polynomials in parallel and, therefore, efficient evaluation of piecewise polynomial approximations.
![Our parallel polynomial evaluation circuit. NEXT$_a$ changes the register to hold the next set of coefficients (in superposition) $\sum_l\Ket l\Ket{a_{l,i-1}}\mapsto \sum_l\Ket l\Ket{a_{l,i}}$. MUL and ADD perform a multiplication and an addition, respectively. The small triangle indicates the output of the ADD and MUL gates.[]{data-label="fig:ppoly"}](parallelpoly){width="\linewidth"}
Evaluation of piecewise polynomial approximations
=================================================
A basic scheme to evaluate a single polynomial on a quantum computer in the computational basis is the classical Horner scheme, which evaluates $$P(x) = \sum_{i=0}^d a_ix^i$$ by iteratively performing a multiplication by $x$, followed by an addition of $a_i$ for $i\in\{d,d-1,...,0\}$. This amounts to performing the following operations: $$\begin{aligned}
a_d x + a_{d-1}&\mapsto a_dx^2+a_{d-1}x+a_{d-2}\\
&\cdots\\
&\mapsto a_dx^d+\cdots+a_0\;.\end{aligned}$$
A reversible implementation of this scheme simply stores all intermediate results. At iteration $i$, the last iterate $y_{i-1}$ is multiplied by $x$ into a new register $y_{i}$, followed by an addition of the (classically known) constant $a_i$, which may make use of, e.g., the addition circuit by Takahashi [@takahashi2009quantum] (if there is an extra register left), or the in-place constant adder by Häner et al. [@haner2016factoring], which does not require an ancilla register but is more costly in terms of gates. Since each iterate depends only on its predecessor, a pebbling strategy can be employed in order to optimize the space/time trade-offs according to some chosen metric [@parent2015reversible].
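To make the register bookkeeping concrete, the following Python sketch emulates this reversible Horner evaluation classically: every iterate $y_i$ is written into a fresh register (here, a list entry), so the computation can be run backwards to uncompute intermediate results. It only illustrates the data flow; the fixed-point adders and multipliers of Appendix \[sec:basiccircuits\] and their truncation behavior are not modelled.

```python
def reversible_horner(coeffs, x):
    """Emulate Horner's rule y_i = y_{i-1} * x + a_i while keeping every
    intermediate iterate, mirroring the register layout of the reversible circuit.

    coeffs: [a_d, a_{d-1}, ..., a_0] (classically known constants)
    Returns all iterates [y_0, ..., y_d]; the last entry equals P(x).
    """
    iterates = [coeffs[0]]                         # y_0 = a_d lives in the first register
    for a in coeffs[1:]:
        iterates.append(iterates[-1] * x + a)      # MUL by x into a new register, then ADD a
    return iterates

# Example: P(x) = 2x^3 - x + 5 at x = 0.25; the last iterate is P(0.25) = 4.78125
print(reversible_horner([2, 0, -1, 5], 0.25))
```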
Oftentimes, the degree $d$ of the minimax approximation over a domain $\Omega$ must be chosen to be very high in order to achieve a certain $L_\infty(\Omega)$-error. In such cases, it makes sense to partition $\Omega$, i.e., find $\Omega_i$ such that $$\Omega = \bigcup_{i=0}^M \Omega_i\;,\;\Omega_i\cap\Omega_j=\emptyset\;\forall i\neq j\;,$$ and to then perform a case distinction for each input, evaluating a different polynomial for $x\in\Omega_i$ than for $y\in\Omega_j$ if $i\neq j$. A straight-forward generalization of this approach to the realm of quantum computing would loop over all subdomains $\Omega_i$ and, conditioned on a case-distinction or label register $\Ket l$, evaluate the corresponding polynomial. Thus, the cost of this inefficient approach grows linearly with the number of subdomains.
In order to improve upon this approach, one can parallelize the polynomial evaluation if the degree $d$ is constant over the entire domain $\Omega$. Note that merely adding the label register $\Ket l$ mentioned above and performing $$\begin{aligned}
\label{eqn:init2}
\Ket{y_{l,i-1}x}\Ket0\Ket l&\mapsto \Ket{y_{l,i-1}x}\Ket{a_{l,i}}\Ket l\\
&\mapsto \Ket{y_{l,i-1}x+a_{l,i}}\Ket{a_{l,i}}\Ket l\\
&\mapsto\Ket{y_{l,i}}\Ket0\Ket l\;,\end{aligned}$$ enables the evaluation of multiple polynomials in parallel. The impact on the circuit size is minor, as will be shown in Appendix \[sec:polycost\]. The depth of the circuit remains virtually unaltered, since the initialization step can be performed while multiplying the previous iterate $y_{i-1}$ by $x$, see Fig. \[fig:ppoly\]. An illustration of the circuit computing the label register $\Ket l$ can be found in Fig. \[fig:label\]. A slight drawback of this parallel evaluation is that it requires one extra ancilla register for the last iteration, since the in-place addition circuit [@haner2016factoring] can no longer be used. Resource estimates of a few functions which were implemented using this approach can be found in Table \[tbl:funcs\]. The small overhead of using many intervals allows to achieve good approximations already for low-degree polynomials (and thus using few qubit registers).
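Classically, the effect of the label register and of the coefficient selection can be emulated as follows; the names `boundaries` and `coeff_table` are illustrative, and the comparisons that set the label stand in for the circuit of Fig. \[fig:label\].

```python
import bisect

def piecewise_horner(x, boundaries, coeff_table):
    """Evaluate a piecewise polynomial of fixed degree in Horner form.

    boundaries:  sorted interior borders b_1 < ... < b_M of the subdomains Omega_l
    coeff_table: coeff_table[l] = [a_{l,d}, ..., a_{l,0}] for subdomain Omega_l
    """
    label = bisect.bisect_right(boundaries, x)   # comparisons against the borders set |l>
    y = coeff_table[label][0]
    for a in coeff_table[label][1:]:             # the same MUL/ADD sequence for every label
        y = y * x + a
    return y

# Example: |x| realised as -x on [-1, 0) and as x on [0, 1)
print(piecewise_horner(-0.3, boundaries=[0.0], coeff_table=[[-1.0, 0.0], [1.0, 0.0]]))  # 0.3
```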
Using reversible pebble games [@Bennett:89], it is possible to trade the number of registers needed to store the iterates with the depth of the resulting circuit. The parameters are: the number $n$ of bits per register, the total number $m$ of these $n$-qubit registers, the number $r$ of Horner iterations, and the depth $d$ of the resulting circuit. The trade-space we consider involves $m$, $r$, and $d$. In particular, we consider the question of what the optimal circuit depth is for a fixed number $m$ of registers and a fixed number $r$ of iterations. As in [@Knill:95; @PRS:2015] we use dynamic programming to construct the optimal strategies as the dependency graph is just a line which is due to the sequential nature of Horner’s method (the general pebbling problem is much harder to solve, in fact finding the optimal strategy for general graphs is known to be PSPACE complete [@Chan:2013]). The optimal number of pebbling steps as a function of $m$ and $r$ can be found in Table \[tab:pebbling\].
| $m \backslash r$ |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 | 16 | 32 | 64 |
|------------------|----|----|----|----|----|----|----|----|----|----|----|
| 1 | 1 | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ |
| 2 | 1 | 3 | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ |
| 3 | 1 | 3 | 5 | 9 | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ |
| 4 | 1 | 3 | 5 | 7 | 11 | 15 | 19 | 25 | $\infty$ | $\infty$ | $\infty$ |
| 5 | 1 | 3 | 5 | 7 | 9 | 13 | 17 | 21 | 71 | $\infty$ | $\infty$ |
| 6 | 1 | 3 | 5 | 7 | 9 | 11 | 15 | 19 | 51 | 193 | $\infty$ |
| 7 | 1 | 3 | 5 | 7 | 9 | 11 | 13 | 17 | 49 | 145 | 531 |
| 8 | 1 | 3 | 5 | 7 | 9 | 11 | 13 | 15 | 47 | 117 | 369 |

  : Optimal (minimal) number of pebbling steps as a function of the number of registers $m$ and the number of Horner iterations $r$; $\infty$ indicates that $r$ iterations cannot be pebbled with $m$ registers.[]{data-label="tab:pebbling"}
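For small parameters, the entries of Table \[tab:pebbling\] can be reproduced by exhaustive search over the configurations of the reversible pebble game on a line: a pebble may be placed on or removed from node $i$ only if node $i-1$ carries a pebble (the input is always available, so node $1$ can always be toggled), at most $m$ pebbles may be present at any time, and the goal is the configuration in which only node $r$ is pebbled. The breadth-first search below is a sketch under exactly these assumptions; it is not the dynamic program used for the table, but it recovers, e.g., the value $9$ for $m=3$, $r=4$.

```python
from collections import deque

def min_pebble_steps(m, r):
    """Minimal number of moves in the reversible pebble game on a line of r nodes
    with at most m pebbles; state = bitmask of pebbled nodes. None if impossible."""
    start, goal = 0, 1 << (r - 1)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return dist[state]
        for i in range(r):
            # node i+1 may be toggled iff its predecessor is pebbled (node 1 always may)
            if i == 0 or (state >> (i - 1)) & 1:
                new = state ^ (1 << i)
                if bin(new).count("1") <= m and new not in dist:
                    dist[new] = dist[state] + 1
                    queue.append(new)
    return None

print(min_pebble_steps(3, 4))   # 9, as in the table
print(min_pebble_steps(2, 3))   # None: three iterations cannot be pebbled with two registers
```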
Software Stack Module for piecewise smooth functions
====================================================
In order to enable automatic compilation of an oracle which implements a piecewise smooth function, the Remez algorithm [@remez1934determination] can be used in a subroutine to determine a piecewise polynomial approximation, which can then be implemented using the circuit described in the previous section.
In particular, we aim to implement the oracle with a given precision, accuracy, and number of available quantum registers (or, equivalently, the polynomial degree $d$ if no pebbling is employed) over a user-specified interval $\Omega=[a,a+L)$. Our algorithm proceeds as follows: In a first step, run the Remez algorithm which, given a function $f(x)$ over a domain $\Omega\subset\mathbb R$ and a polynomial degree $d$, finds the polynomial $P(x)$ which approximates $f(x)$ with minimal $L_\infty(\Omega)$-error, and check whether the achieved error is low enough. If it is too large, reduce the size of the domain $\Omega_1:=[a,a+\frac L2)$ and check again. Repeating this procedure and carrying out binary search on the right interval border will eventually lead to the first subdomain $\Omega_1=[a,b_1)$ which is the largest interval such that the corresponding degree $d$ polynomial achieves the desired accuracy. Next, one determines the next subdomain $\Omega_2=[b_1,b_2)$ using the same procedure. This is iterated until $b_i\geq a+L$, meaning that all required subdomains and their corresponding polynomials have been determined and $f(x)$ can be implemented using a parallel polynomial evaluation circuit. This algorithm was implemented and then run for various functions, target accuracies, and polynomial degrees in order to determine approximate resource estimates for these parameters, see Table \[tbl:funcs\] in the appendix.
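A classical prototype of this subdivision procedure is sketched below. As a stand-in for the Remez exchange algorithm it fits a degree-$d$ polynomial through Chebyshev nodes and measures the error on a dense grid, which slightly overestimates the true minimax error but leaves the subdivision logic unchanged; all function and parameter names are illustrative.

```python
import numpy as np

def fit_error(f, a, b, d, grid=2000):
    """Degree-d polynomial fit of f on [a, b] through Chebyshev nodes and its
    maximum error on a dense grid -- a simple stand-in for a true Remez fit."""
    k = np.arange(d + 1)
    nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * (d + 1)))
    coeffs = np.polyfit(nodes, f(nodes), d)
    xs = np.linspace(a, b, grid)
    return coeffs, float(np.max(np.abs(np.polyval(coeffs, xs) - f(xs))))

def subdivide(f, a, L, d, eps, bisections=40):
    """Determine subdomains [b_{i-1}, b_i) on which degree-d polynomials reach
    accuracy eps, using binary search on the right border of each subdomain."""
    pieces, left, end = [], a, a + L
    while left < end:
        coeffs, err = fit_error(f, left, end, d)
        if err <= eps:                               # the remaining interval is good enough
            pieces.append((left, end, coeffs))
            break
        lo, hi = left, end
        for _ in range(bisections):                  # largest right border still meeting eps
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if fit_error(f, left, mid, d)[1] <= eps else (lo, mid)
        pieces.append((left, lo, fit_error(f, left, lo, d)[0]))
        left = lo
    return pieces

pieces = subdivide(np.tanh, a=0.0, L=4.0, d=3, eps=1e-5)
print(len(pieces), "degree-3 subdomains for tanh on [0, 4) at accuracy 1e-5")
```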
Inverse square root
===================
For quantum chemistry or machine learning applications, also non-smooth functions are required. Most notably, the inverse square root can be used in both examples, namely for the calculation of the Coulomb potential and to determine the reciprocal when employing HHL [@harrow2009quantum] for quantum machine learning.
In classical computing, inverse square roots appear in computer graphics and the term “fast inverse square root” is often used: It labels the procedure to approximate the inverse square root using bit-operations on the floating-point representation of the input, as it was done in Quake III Arena [@quake3] (see [@lomont2003fast] for a review). The code ultimately performs a Newton-Raphson iteration in order to improve upon a pretty accurate initial guess, which it finds using afore-mentioned bit-operations. Loosely speaking, the bit-operations consist of a bit-shift to divide the exponent by two in order to approximate the square root, followed by a subtraction of this result from a *magic number*, effectively negating the exponent and correcting the mantissa, which was also shifted together with the exponent. The *magic number* can be chosen using an auto-tuning procedure and varies depending on the objective function being used [@lomont2003fast]. This provides an extremely good initial guess for the Newton iteration at very low cost.
In our reversible implementation, we use a similar procedure to compute the inverse square root using fixed-point arithmetic. While we cannot make use of the floating-point representation, we can still find a low-cost initial guess which allows for a small number of Newton iterations to be sufficient (i.e., 2-4 iterations). This includes determining the position of the first one in the bit-representation of the input, followed by an initialization which involves a case distinction on the *magic number* to use. Our three magic constants (see Appendix \[sec:newton\]) were tuned such that the error peaks near powers of two in Fig. \[fig:invsqrterrornoopt\] vanish. The peaks appear due to the fact that the initial guess takes into account the location of the first one but completely ignores the actual magnitude of the input. For example, all inputs in $[1,2)$ yield the same initial guess. The error plot with tuned constants is depicted in Fig. \[fig:invsqrterror\]. One can clearly observe that an entire Newton iteration can be saved when aiming for a given $L_\infty$-error.
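For reference, the classical floating-point trick that inspired this construction reads as follows in Python: the bit shift and the magic constant 0x5f3759df produce the cheap initial guess, and every Newton-Raphson step $y\leftarrow y\,(3-xy^2)/2$ roughly doubles the number of correct bits. Our reversible fixed-point version replaces the float-bit manipulation by locating the leading one of the input and selecting one of the three tuned constants; that part is not shown here.

```python
import struct

def fast_inv_sqrt(x, newton_steps=2):
    """Classical 'fast inverse square root': bit-trick initial guess for 1/sqrt(x)
    in single precision, refined by Newton-Raphson iterations."""
    i = struct.unpack('<i', struct.pack('<f', x))[0]   # reinterpret the float32 bits as int32
    i = 0x5f3759df - (i >> 1)                          # magic constant minus halved bit pattern
    y = struct.unpack('<f', struct.pack('<i', i))[0]   # reinterpret back: the initial guess
    for _ in range(newton_steps):
        y = y * (1.5 - 0.5 * x * y * y)                # Newton step y <- y (3 - x y^2) / 2
    return y

print(fast_inv_sqrt(2.0), 2.0 ** -0.5)   # ~0.7071 already after two iterations
```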
Arcsine
=======
Following the implementation used in the classical math library Cephes [@moshier2000cephes], an arcsine can be implemented as a combination of polynomial evaluation and the square root. Approximating the arcsine using only a polynomial allows for a good approximation in $[-0.5,0.5]$, but not near $\pm1$ (where its derivative diverges). The Cephes math library remedies this problem by adding a case distinction, employing a “double-angle identity” for $|x|\geq 0.5$. This requires computing the square root, which can be achieved by first calculating the inverse square root, followed by $x\cdot\frac 1{\sqrt x}=\sqrt x$. Alternatively, the new square root circuit from Ref. [@MT:2018] can be used.
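The case distinction can be illustrated classically as follows. Instead of hard-coding the Cephes coefficients, the sketch fits a low-degree polynomial to $\arcsin$ on $[-0.5,0.5]$ as a placeholder for the minimax coefficients, and uses the identity $\arcsin(x)=\pi/2-2\arcsin\!\big(\sqrt{(1-x)/2}\big)$ for $x\ge 0.5$ (and its mirror image for $x\le -0.5$), which is where the square root enters.

```python
import numpy as np

# Low-degree polynomial fit of arcsin on [-0.5, 0.5] through Chebyshev points
# (a placeholder for the minimax coefficients used in the actual implementation).
_nodes = 0.5 * np.polynomial.chebyshev.chebpts1(24)
_poly = np.polyfit(_nodes, np.arcsin(_nodes), 9)

def asin_piecewise(x):
    """arcsin via a polynomial on [-0.5, 0.5] and, for |x| >= 0.5, the identity
    arcsin(x) = pi/2 - 2*arcsin(sqrt((1 - x)/2))."""
    if abs(x) <= 0.5:
        return np.polyval(_poly, x)
    z = np.sqrt((1.0 - abs(x)) / 2.0)       # the only square root evaluation
    return np.sign(x) * (np.pi / 2.0 - 2.0 * np.polyval(_poly, z))

for v in (-0.9, -0.3, 0.2, 0.8):
    print(v, asin_piecewise(v), np.arcsin(v))
```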
We have implemented our circuit for arcsine and we show the resulting error plot in Fig. \[fig:arcsinerror\]. The oscillations stem from the minimax polynomial which is used to approximate the arcsine on $[-0.5,0.5]$. More implementation details and resource estimates can be found in Appendix \[sec:arcsine\].
Note that certain applications may allow to trade off error in the arcsine with, e.g., probability of success by rescaling the input such that the arcsine needs to be computed only for values in $[-0.5,0.5]$. This would allow one to remove the case-distinction and the subsequent calculation of the square root: One could evaluate the arcsine at a cost that is similar to the implementation costs of sin/cos. Estimates for the Toffoli and qubit counts for this case can also be found in the appendix, see Table \[tbl:funcs\].
Summary and Outlook
===================
We have presented efficient quantum circuits for the evaluation of many mathematical functions, including (inverse) square root, Gaussians, hyperbolic tangent, exponential, sine/cosine, and arcsine. Our circuits can be used to obtain accurate resource estimates for various quantum algorithms and the results may help to identify the first large-scale applications as well as bottlenecks in these algorithms where more research is necessary in order to make the resource requirements practical. When embedded in a quantum compilation framework, our general parallel polynomial evaluation circuit can be used for automatic code generation when compiling oracles that compute piecewise smooth mathematical functions in the computational basis. This tremendously facilitates the implementation of quantum algorithms which employ oracles that compute such functions on a superposition of inputs.
Basic circuit building blocks for fixed-point arithmetic {#sec:basiccircuits}
========================================================
In fixed-point arithmetic, one represents numbers $x$ using $n$ bits as $$x=\underbrace{x_{n-1}\cdots x_{n-p}}_p.\underbrace{x_{n-p-1}\cdots x_0}_{n-p}\;,$$ where $x_i\in\{0,1\}$ is the $i$-th bit of the binary representation of $x$, and the point position $p$ denotes the number of binary digits to the left of the binary point. We choose both the total number of bits $n$ and the point position $p$ to be constant over the course of a computation. As a consequence, over- and underflow errors are introduced, while keeping the required bit-size from growing with each operation.
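As a purely classical illustration of this number format, the following sketch encodes and decodes values with $n$ bits and point position $p$; the modular wrap-around mimics the over- and underflow behavior described above (the parameter values in the example are arbitrary).

```python
def to_fixed(x: float, n: int, p: int) -> int:
    """Encode x as an n-bit two's-complement word with p integer bits.
    Values outside the representable range simply wrap around (over-/underflow)."""
    return int(round(x * 2 ** (n - p))) % (1 << n)

def from_fixed(word: int, n: int, p: int) -> float:
    """Decode an n-bit two's-complement fixed-point word back to a float."""
    if word >= 1 << (n - 1):          # negative values in two's complement
        word -= 1 << n
    return word / 2 ** (n - p)

w = to_fixed(3.14159, 16, 4)          # n = 16 bits, p = 4 integer bits
print(w, from_fixed(w, 16, 4))        # resolution is 2**-(16-4) = 1/4096
```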
#### Fixed-point addition.
We use a fixed-point addition implementation, which keeps the bit-size constant. This amounts to allowing over- and underflow, while keeping the registers from growing with each operation.
\[sec:mult\]
#### Fixed-point multiplication.
Multiplication can be performed by repeated-addition-and-shift, which can be seen from $$x\cdot y=x_{n-1}2^{n-1} y+\cdots+x_02^0y\;,$$ where $x=\sum_i x_i2^i$ with $x_i\in\{0,1\}$ denotes the binary expansion of the $n$-bit number $x$. Thus, for $i\in\{0,...,n-1\}$, $2^{i-(n-p)}y$ is added to the result register (which is initially zero) if $x_i=1$. This can be implemented using $n$ controlled additions on $1,2,...,n$ bits if one allows for pre-truncation: Instead of computing the $2n$-bit result and copying out the first $n$ bits before uncomputing the multiplication again, the additions can be executed on a subset of the qubits, ignoring all bits beyond the scope of the $n$-bit result. Thus, each addition introduces an error of at most $\varepsilon_A=\frac 1{2^{n-p}}$. Since there are (at most) $n$ such additions, the total error is $$\varepsilon=\frac n{2^{n-p}}\;,$$ a factor $n$ larger than using the costly approach mentioned above.
Negative multipliers are dealt with by substituting the controlled addition by a controlled subtraction when conditioning on the most significant bit [@wakerly2000digital] because it has negative weight $w_{MSB}=-2^{n-1}$ in two’s-complement notation. The multiplicand is assumed to be positive throughout, which removes the need for conditional inversions of input and output (for every multiplication), thus tremendously reducing the size of circuits that require many multiplications such as, e.g., polynomial evaluation.
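A minimal classical model of this repeated-addition-and-shift scheme with pre-truncation is sketched below for non-negative operands; it only illustrates where the truncation error enters, not the reversible circuit itself.

```python
def fixed_mul_truncated(x: int, y: int, n: int, p: int) -> int:
    """Shift-and-add product of two unsigned n-bit fixed-point words with point
    position p.  Right-shifting y pre-truncates low-order bits, so each conditional
    addition contributes an error of at most one unit in the last place."""
    acc = 0
    for i in range(n):
        if (x >> i) & 1:
            shift = i - (n - p)                        # weight of bit i is 2**(i-(n-p))
            term = y << shift if shift >= 0 else y >> -shift
            acc = (acc + term) % (1 << n)              # n-bit result register (may overflow)
    return acc

# n = 8, p = 4 (ULP = 1/16): 1.5 -> 24, 2.5 -> 40, exact product 3.75 -> 60
print(fixed_mul_truncated(24, 40, 8, 4))               # 60 (no bits lost in this example)
```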
#### Fixed-point squaring.
The square of a number can be calculated using the same approach as for multiplication. Yet, one can save (almost) an entire register by only copying out the bit being conditioned on prior to performing the controlled addition. Then, the bit can be reset using another CNOT gate, followed by copying out the next bit and performing the next controlled addition. The gate counts are identical to performing $$\Ket x\Ket0\Ket0\mapsto\Ket x\Ket x\Ket 0\mapsto \Ket x\Ket x\Ket{x^2}\mapsto\Ket x\Ket{x^2}\Ket 0\;,$$ while allowing to save $n-1$ qubits.
Resource estimates for polynomial evaluation {#sec:polycost}
============================================
The evaluation of a degree $d$ polynomial requires an initial multiplication $a_d \cdot x$, an addition of $a_{d-1}$, followed by $d-1$ multiply-accumulate instructions. The total number of Toffoli gates is thus equal to the cost of $d$ multiply-accumulate instructions. Furthermore, $d+1$ registers are required for holding intermediate and final result(s) if no in-place adder is used for the last iteration (and no non-trivial pebbling strategy is applied). Other strategies may be employed in order to reduce the number of ancilla registers, at the cost of a larger gate count, see Table \[tab:pebbling\] for examples.
Note that all multiplications can be carried out assuming $x>0$, i.e. $x$ can be conditionally inverted prior to the polynomial evaluation (and the pseudo-sign bit is copied out). The sign is then absorbed into the coefficients: Before adding $a_i$ into the $\Ket{y_{i-1}x}$-register, it is inverted conditioned on the sign-bit of $x$ being set if the coefficient corresponds to an odd power. This is done because it is cheaper to implement a fixed-point multiplier which can only deal with $y_{i-1}$ being negative (see Sec. \[sec:basiccircuits\]).
The Toffoli gate count of multiplying two $n$-bit numbers (using truncated additions as described in Sec. \[sec:mult\]) is $$\begin{aligned}
T_\text{mul}(n,p)&=\sum_{i=0}^{p-1} T_\text{cadd}(n-i)+\sum_{i=1}^{n-p}T_\text{cadd}(n-i)\\
&=\sum_{i=0}^{p-1} 3(n-i)+\sum_{i=1}^{n-p}3(n-i)+3n\\
&=\frac 32n^2 + 3np+\frac 32 n - 3p^2+3p\end{aligned}$$ if one uses the controlled addition circuit by Takahashi et al. [@takahashi2009quantum], which requires $3n+3$ Toffoli gates to (conditionally) add two $n$-bit numbers. The subsequent addition can be implemented using the addition circuit by Takahashi et al. [@takahashi2009quantum], featuring $2n-1$ Toffoli gates. Thus, the total cost of a fused multiply-accumulate instruction is $$T_\text{fma}(n,p)=\frac 32n^2 + 3np+\frac{7}2 n - 3p^2+3p-1\;.$$ Therefore, the total Toffoli count for evaluating a degree $d$ polynomial is $$T_\text{poly}(n,d,p)=\frac 32n^2d + 3npd+\frac{7}2 nd - 3p^2d+3pd-d\;.$$
Evaluating $M$ polynomials in parallel for piecewise polynomial approximation requires only $n+\lceil\log_2 M\rceil$ additional qubits (since one $n$-qubit register is required to perform the addition in the last iteration, which is no longer just a constant) and $2M$ $\lceil\log_2M\rceil$-controlled NOT gates, which can be performed in parallel with the multiplication. This increases the circuit size by $$T_\text{extra}(M)=2M(4\lceil\log_2M\rceil - 8)$$ Toffoli gates per multiply-accumulate instruction, since a $k$-controlled NOT can be achieved using $4(k-2)$ Toffoli gates and $k-2$ dirty ancilla qubits [@barenco1995elementary], which are readily available in this construction.
The label register $\Ket l$ can be computed using 1 comparator per subinterval $$I_i = [a_i,a_{i+1}),\;a_0<a_1<...<a_{M-1}\;.$$ The comparator stores its output into one extra qubit, flipping it to $\Ket 1$ if $x \leq a_{i+1}$. The label register is then incremented from $i-1$ to $i$, conditioned on this output qubit still being $\Ket 0$ (indicating that $x > a_i$). Incrementing $\Ket l$ can be achieved using CNOT gates applied to the qubits that correspond to ones in the bit-representation of $(i-1)\oplus i$. Finally, the comparator output qubit is uncomputed again. This procedure is carried out $M$ times for $i=0,...,M-1$ and requires 1 additional qubit. The number of extra Toffoli gates for this label initialization is $$\begin{aligned}
T_\text{label}(M,n) &= M\cdot 2T_\text{cmp}(n)\\
&=4Mn\;,\end{aligned}$$ where, as a comparator, we use the CARRY-circuit from [@haner2016factoring], which needs $2n$ Toffoli gates to compare a classical value to a quantum register, and another $2n$ to uncompute the output and intermediate changes to the $n$ required dirty ancilla qubits.
In total, the parallel polynomial evaluation circuit thus requires $$\begin{aligned}
T_\text{pp}(n,d,p,M)&=T_\text{poly}(n,d,p)+d\cdot T_\text{extra}(M)\\
&\phantom{={}}+T_\text{label}(M,n)\\
&=\frac 32n^2d + 3npd+\frac{7}2 nd - 3p^2d+3pd-d\\
&\phantom{={}}+2Md(4\lceil\log_2M\rceil-8)+4Mn\end{aligned}$$Toffoli gates and $(d+1)n+\lceil\log_2M\rceil + 1$ qubits.
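The closed-form counts above are straightforward to tabulate; the following sketch simply evaluates them (the example parameters are illustrative only).

```python
from math import ceil, log2

def T_poly(n: int, d: int, p: int) -> int:
    """Toffoli count for evaluating one degree-d polynomial (d fused multiply-accumulates)."""
    # (3/2)n^2 d + 3npd + (7/2)nd - 3p^2 d + 3pd - d, with exact integer arithmetic
    return n * d * (3 * n + 7) // 2 + 3 * n * p * d - 3 * p * p * d + 3 * p * d - d

def T_pp(n: int, d: int, p: int, M: int) -> int:
    """Toffoli count for the parallel evaluation of M piecewise polynomials."""
    extra = 2 * M * (4 * ceil(log2(M)) - 8)   # multi-controlled copies, per multiply-accumulate
    label = 4 * M * n                         # label-register initialization
    return T_poly(n, d, p) + d * extra + label

def qubits_pp(n: int, d: int, M: int) -> int:
    """Qubit count of the parallel piecewise-polynomial circuit."""
    return (d + 1) * n + ceil(log2(M)) + 1

print(T_pp(32, 4, 6, 8), qubits_pp(32, 4, 8))
```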
(Inverse) Square root {#sec:newton}
=====================
The inverse square root, i.e., $$f(x)=\frac 1{\sqrt x}$$ can be computed efficiently using Newton’s method. The iteration looks as follows: $$x_{n+1} = x_n\left(1.5-\frac{ax_n^2}2\right)\;,$$ where $a$ is the input and $x_n\overset{n\rightarrow\infty}{\longrightarrow} \frac 1{\sqrt a}$ if the initial guess is sufficiently close to the true solution.
Reversible implementation
-------------------------
#### Initial guess and first round.
Finding a good initial guess $x_0\approx \frac 1{\sqrt a}$ for Newton’s zero-finding routine is crucial for (fast) convergence. A crude approximation which turns out to be sufficient is the following: $$\frac 1{\sqrt a} = \left(2^{\log_2 a}\right)^{-\frac 12}=2^{-\frac{\log_2a}2}\approx 2^{\lfloor -\frac{\lfloor\log_2a\rfloor}2\rceil}=\tilde x_0\;,$$ where $\lfloor\log_2 a\rfloor$ can be determined by finding the first “1” when traversing the bit-representation of $a$ from left to right (MSB to LSB). While the space requirement for $\tilde x_0$ is in $\mathcal O(\log_2 n)$, such a representation would be impractical for the first Newton round. Furthermore, noting that the first iteration on $\tilde x_0=2^k$ leads to $$\label{eq:initialguess}
\tilde x_1 = 2^k\left(1.5-\frac{a2^{2k}}2\right)=:x_0\;,$$ one can directly choose this $x_0$ as the initial guess. The preparation of $x_0$ can be achieved using $(n-1)+n+1$ ancilla qubits, which must be available due to the space requirements of the subsequent Newton steps. The one ancilla qubit is used as a flag indicating whether the first “1” from the left has already been encountered. For each iteration $i\in \{n-1,...,1,0\}$, one determines whether the bit $a_{i}$ is 1 and stores this result $r_i$ in one of the $n$ work qubits, conditioned on the flag being unset. Then, conditioned on $r_i=1$, the flag is flipped, indicating that the first “1” has been found. If $r_i=1$, the $x_0$-register is initialized to the value in $\eqref{eq:initialguess}$ as follows: Using CNOTs, the $x_0$-register can be initialized to the value $1.5$ shifted by $k=\frac{p-2i}2$, where $p$ denotes the binary point position of the input, followed by subtracting the $(3k-1)$-shifted input $a$ from $x_0$, which may require up to $n-1$ ancilla qubits.
In order to improve the quality of the first guess for numbers close to $2^k$ for some $k\in\mathbb Z$, one can tune the constant $1.5$ in $\eqref{eq:initialguess}$, i.e., turn it into a function $C(k)$ of the exponent $k$. This increases the overall cost of calculating $x_0$ merely by a few CNOT gates but allows to save an entire Newton iteration even when only distinguishing three cases, namely $$\label{eq:initguessconstant}
C(k):=\left\{\begin{matrix} 1.613, & k < 0\\
1.5,& k = 0\\
1.62,& k > 0\end{matrix}\right.\;.$$
#### The Newton iteration.
Computing $x_{n+1}$ from $x_n$ by $$x_{n+1} = x_n\left(1.5 - \frac{ax_n^2}2\right)\;,$$ can be achieved as follows:
1. Compute the square of $x_n$ into a new register.
2. Multiply $x_n^2$ by the shifted input to obtain $ax_n^2/2$.
3. Initialize another register to 1.5 and subtract $ax_n^2/2$.
4. Multiply the result by $x_n$ to arrive at $x_{n+1}$.
5. Uncompute the three intermediate results.
The circuit of one such Newton iteration is depicted in Fig. \[fig:newtonit\].
![Circuit for the $n$-th Newton iteration of computing the inverse square root of $a$, given in a quantum superposition in $\Ket a$. SQR computes the square of the previous iterate $x_n$ into an empty result-register, which is then multiplied by the input $a$ (MUL), followed by subtracting (SUB) this intermediate result from the value $1.5$ (initialized using the SET$_{1.5}$-gate). Finally, the next iterate, i.e., $x_{n+1}=x_n(1.5-\frac 12ax_n^2)$ can be computed by multiplying this intermediate result by $x_n$. All temporary results are then cleared by running the appropriate operations in reverse order.[]{data-label="fig:newtonit"}](newtoniteration){width="\linewidth"}
Therefore, for $m$ Newton iterations, this requires $m+3$ $n$-qubit registers if no pebbling is done on the Newton iterates, i.e., if all $x_i$ are kept in memory until the last Newton iteration has been completed.
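The following floating-point sketch mirrors the procedure just described; floats stand in for the fixed-point registers, and ties in the rounding $\lfloor\cdot\rceil$ of the exponent are broken downwards, which is an assumption of this sketch rather than a statement about the circuit.

```python
import math

def initial_guess(a: float) -> float:
    """x0 = 2^k (C(k) - a 2^{2k} / 2), one Newton step applied to 2^k with the tuned
    constant C(k) from above.  Ties in the rounding are broken downwards (an assumption)."""
    e = math.floor(math.log2(a))
    k = -((e + 1) // 2)
    C = 1.5 if k == 0 else (1.613 if k < 0 else 1.62)
    return 2.0 ** k * (C - a * 2.0 ** (2 * k) / 2.0)

def inv_sqrt(a: float, iters: int = 3) -> float:
    """Newton's iteration x <- x (1.5 - a x^2 / 2) starting from the tuned guess."""
    x = initial_guess(a)
    for _ in range(iters):
        x = x * (1.5 - a * x * x / 2.0)
    return x

for a in (0.07, 1.0, 3.0, 700.0):
    print(a, inv_sqrt(a), 1.0 / math.sqrt(a))   # agreement improves quadratically with iters
```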
Resource estimates
------------------
Computing the initial guess for the fast inverse square root requires $n$ controlled additions of two $n$-bit numbers plus $2n$ Toffoli gates for checking/setting the flag (and uncomputing it again). Thus, the Toffoli count for the initial guess is $$T_\text{init}(n)=nT_\text{cadd}(n)+2n=3n^2+5n\;.$$ Each Newton iteration features squaring, a multiplication, a subtraction, a final multiplication (yielding the next iterate), and then an uncomputation of the three intermediate results. In total, one thus employs 5 multiplications and 2 additions (of which 2 multiplications and 1 addition are run in reverse), which yields the Toffoli count $$\begin{aligned}
T_\text{iter}(n,p)&=5T_\text{mul}(n,p)+2T_\text{add}(n)\\
&=\frac{15}2 n^2+15 n p+\frac{23}2 n-15 p^2+15 p-2\;.\end{aligned}$$ The number of Toffoli gates for the entire Newton procedure (without uncomputing the iterates) for $m$ iterations thus reads $$\begin{aligned}
T_\text{invsqrt}(n,m,p)&=T_\text{init}(n)+mT_\text{iter}(n,p)\\
&= n^2(\frac{15}2 m+3)+15 n p m+ n (\frac{23}2m+5)\\
&\phantom{={}}-15 p^2 m+15 p m-2m\;.\end{aligned}$$ Since each Newton iteration requires $3$ ancilla registers (which are cleaned up after each round) to produce the next iterate, the total number of qubits is $n(m+4)$, where one register holds the initial guess $x_0$.
Note that this is an upper bound on the required number of both qubits and Toffoli gates. Since Newton's method converges quadratically, there is no need to perform full additions and multiplications at each iteration. Rather, the number of bits $n$ used for the fixed-point representation should be an (increasing) function of the Newton iteration.
The square root can be calculated using $$\sqrt{x} = x\cdot\frac 1{\sqrt x}\;,$$ i.e., at a cost of an additional multiplication into a new register. Note that this new register would be required anyway when copying out the result and running the entire computation in reverse, in order to clear registers holding intermediate results. Thus, the total number of logical qubits remains unchanged.
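As before, these counts can be tabulated directly; the example parameters below are arbitrary.

```python
def T_init(n: int) -> int:
    """Toffoli count of the initial-guess preparation."""
    return 3 * n * n + 5 * n

def T_iter(n: int, p: int) -> int:
    """Toffoli count of one Newton iteration (5 multiplications, 2 additions)."""
    # (15/2)n^2 + 15np + (23/2)n - 15p^2 + 15p - 2, as derived above
    return n * (15 * n + 23) // 2 + 15 * n * p - 15 * p * p + 15 * p - 2

def T_invsqrt(n: int, m: int, p: int) -> int:
    """Total Toffoli count for m Newton iterations; the qubit count is n*(m+4)."""
    return T_init(n) + m * T_iter(n, p)

print(T_invsqrt(32, 4, 8), 32 * (4 + 4))
```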
Arcsine {#sec:arcsine}
=======
While $\sin(x)$ and $\cos(x)$ are very easy to approximate using, e.g., polynomials, their inverses are not. The main difficulty arises near $\pm 1$, where $$\frac {d\arcsin(x)}{dx}=\frac 1{\sqrt{1-x^2}}$$ diverges. Therefore, it makes sense to use an alternative representation of $\arcsin(x)$ for larger values of $x$, e.g., $$\begin{aligned}
\arcsin(x) &= \frac{\pi}{2} - \arccos(x)\\
&=\frac{\pi}{2} - \arcsin\left(\sqrt{1-x^2}\right)\;.\end{aligned}$$ Applying the double-argument identity to the last expression yields $$\label{eqn:arcsin}
\arcsin(x) = \frac{\pi}{2} - 2\arcsin\left(\sqrt{\frac{1-x}2}\right)\;,$$ a very useful identity which was already used in a classical math library called Cephes [@moshier2000cephes]. We use the same partitioning of the interval, using a minimax polynomial to approximate $\arcsin(x)$ for $x\in[0,0.5)$, and the transformation in $\eqref{eqn:arcsin}$ for $x\in[0.5,1]$. We use our inverse square root implementation to compute $\sqrt{z}$ for $$z = \frac{1-x}2\;,$$ which satisfies $z\in[0,0.25]$, for $x\in[0.5,1]$. Therefore, the fixed point position has to be chosen large, as the inverse square root diverges for small $x$. Luckily, the multiplication by $x$ after this computation takes care of the singularity and, since most low-significance bits of $\frac 1{\sqrt x}$ will cause underflow for small $x$, we can get away with computing a shifted version of the inverse square root. This optimization reduces the number of extra bits required during the evaluation of the inverse square root.
It is worth noting that in many applications, evaluating $\arcsin(x)$ only on the interval $[0,0.5]$ may be sufficient. In such cases, the cost is much lower since this can be achieved using our parallel polynomial evaluation circuit. The Toffoli counts for this case can be found in Table \[tbl:funcs\].
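The case split and the identity above are easy to check classically; in the sketch below, math.asin merely stands in for the degree-$d$ minimax polynomial on $[0,0.5]$ and math.sqrt for the product $z\cdot\frac{1}{\sqrt z}$, so this is a reference model rather than the circuit.

```python
import math

def arcsin_split(x: float) -> float:
    """Reference model of the evaluation strategy: direct approximation for |x| < 0.5
    and the double-angle identity for 0.5 <= |x| <= 1."""
    s, x = math.copysign(1.0, x), abs(x)
    if x < 0.5:
        return s * math.asin(x)                                  # polynomial region
    z = (1.0 - x) / 2.0                                          # z lies in [0, 0.25]
    return s * (math.pi / 2.0 - 2.0 * math.asin(math.sqrt(z)))

for x in (-0.99, -0.3, 0.49, 0.5, 0.87, 1.0):
    print(x, arcsin_split(x), math.asin(x))                      # both columns agree
```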
Reversible implementation
-------------------------
The Arcsine is implemented as a combination of polynomial evaluation and the inverse square root to extend the polynomial approximation on $[0,0.5]$ to the entire domain $[0,1]$ employing the double-argument identity above. First, the (pseudo) sign-bit of $x$ is copied out and $x$ is conditionally inverted (modulo two’s-complement) to ensure $x\geq 0$. Since there are plenty of registers available, this can be achieved by conditionally initializing an extra register to $\Ket 1$ and then using a normal adder to increment $\overline x$ by one, where $\overline x$ denotes the bit- or one’s-complement of $x$. Since $x\in[0,1]$, one can determine whether $x<0.5$ using just one Toffoli gate (and 4 NOT gates). The result of this comparison is stored in an ancilla qubit denoted by $\Ket a$. $z=(1-x)/2$ can be computed using an adder (run in reverse) acting on $x$ shifted by one and a new register, after having initialized it to $0.5$ using a NOT gate. Then, conditioned on $\Ket{\overline a}$ (i.e., on $a$ being 0), this result is copied into the polynomial input register $\Ket{p_{\text{in}}}$ and, conditioned on $\Ket a$, $x$ is squared into $\Ket{p_\text{in}}$. After having applied our polynomial evaluation circuit (which uncomputes intermediate results) to this input, $\Ket{p_\text{in}}$ can be uncomputed again, followed by computing the square root of $z$. Then, the result of the polynomial evaluation must be multiplied by either $\sqrt z$ or $x$, which can be achieved using $2n$ controlled swaps and one multiplier. The final transformation of the result consists of an initialization to $\pi/2$ followed by a subtraction, both conditioned on $\Ket{\overline a}$, and a copy conditioned on $\Ket a$. Finally, the initial conditional inversion of $x$ can be undone after having (conditionally) inverted the output.
Resource estimates
------------------
Following this procedure, the Toffoli count for this arcsine implementation on $n$-bit numbers using $m$ Newton iterations for calculating $\sqrt z$ and a degree-$d$ polynomial to approximate $\arcsin(x)$ on $[0,0.5]$ can be written as $$\begin{aligned}
T_\text{arcsin}&=3T_{\text{inv}}+(2T_\text{poly}-T_\text{fma})\\
&\phantom{={}} +2T_\text{csquare}+T_\text{mul}+T_\text{cadd}\\
&\phantom{={}} +(2T_\text{invsqrt}+T_\text{mul})+5n+2\\
&\phantom{={}} +T_\text{add}\\
&=3T_\text{add}+2T_\text{poly}+3T_\text{mul}\\
&\phantom{={}} +T_\text{cadd}+2T_\text{invsqrt}+9n+2\\
&=d (3 n^2+n (6 p+7)-6 (p-1) p-2)\\
&\phantom{={}}+m (n (15 n+30 p+23)-30p(p-1) -4)\\
&\phantom{={}}+9 (n+1) p+\frac 92 n (n+1)\\
&\phantom{={}}+6 n^2+28n-9 p^2+2\end{aligned}$$
where $T_\text{inv}(n)$ denotes the Toffoli count for computing the two’s-complement of an $n$-bit number and $T_\text{csquare}(n,p)=T_\text{mul}(n,p)+2n$ is the number of Toffoli gates required to perform a conditional squaring operation. Furthermore, $2n$ Toffoli gates are needed to achieve the conditional $n$-bit swap operation (twice), and another $3n$ are used for (conditional) copies.
Results of the Reversible Simulation
====================================
All circuits were implemented at the gate level and tested using a reversible simulator extension to LIQ$Ui\Ket{}$. The results are presented in this section.
Piecewise polynomial approximation
----------------------------------
A summary of the required resources for implementing $\tanh(x)$, $\exp(-x^2)$, and $\sin(x)$ can be found in Table \[tbl:funcs\]. For each function, one set of parameters was implemented reversibly at the level of Toffoli gates in order to verify the proposed circuits.
(Inverse) Square root {#inverse-square-root-1}
---------------------
The convergence of our reversible fast inverse square root implementation with the number of Newton iterations can be found in Fig. \[fig:invsqrterror\], where the bit sizes and point positions have been chosen such that the roundoff errors do not interfere significantly with the convergence. For all practical purposes, choosing between $3$ and $5$ Newton iterations should be sufficient. The effect of tuning the constants in the initial guess (see Eqn. \[eq:initguessconstant\]) can be seen when comparing Fig. \[fig:invsqrterrornoopt\] to Fig. \[fig:invsqrterror\]: The initial guess is obtained from the location of the first non-zero in the bit-representation of the input, which results in large rounding effects for inputs close to an integer power of two. Tuning the initial guess results in almost uniform convergence, which allows one to save an entire Newton iteration for a given $L_\infty$-error.
The square root converges better than the inverse square root for small values, which can be expected, since $$\sqrt x = x\cdot \frac 1{\sqrt x}$$ has a regularizing effect for small $x$. The error after $m$ Newton iterations when using $n$ bits for the fixed point representation is depicted in Fig. \[fig:sqrterror\]. Additionally, the initial guess could be improved by tuning the constants in Eqn. \[eq:initialguess\] such that the error is minimal after multiplying $x\cdot\frac 1{\sqrt x}$, instead of just optimizing for the inverse square root itself.
Arcsine {#arcsine-1}
-------
Our implementation of the arcsine uses both the polynomial evaluation and square root subroutines. The oscillatory behavior which can be seen in Fig. \[fig:arcsinerror\] is typical for minimax approximations. For $x>0.5$, the resolution is lower due to the wider range of $\frac 1{\sqrt x}$, which was accounted for by calculating a shifted version of the inverse square root. While this allows one to save a few qubits (to the left of the binary point), the reduced number of qubits to the right of the binary point fails to resolve the numbers as well, which manifests itself as bit-noise for $x>0.5$ in Fig. \[fig:arcsinerror\]. The degrees of the minimax approximation were chosen to be $7$, $13$, and $17$ for $m=3,4,5$, respectively. Since $\arcsin(x)$ is an odd function, this amounts to evaluating a degree $3$, $6$, and $8$ polynomial in $x^2$, followed by a multiplication by $x$.
[c|c|c|c|c|c]{} Function & $L_\infty$ error & & & &\
$\tanh(x)$ & & & & &\
& $10^{-5}$ & & & &\
& & 3 & 15 & 136 & 12428\
& & 4 & 9 & 169 & 13768\
& & 5 & 7 & 201 & 15492\
& & 6 & 5 & 234 & 17544\
& $10^{-7}$ & & & &\
& & 3 & 50 & 166 & 27724\
& & 4 & 23 & 205 & 23095\
& & 5 & 14 & 244 & 23570\
& & 6 & 10 & 284 & 26037\
& $10^{-9}$ & & & &\
& & 3 & 162 & 192 & 77992\
& & 4 & 59 & 236 & 41646\
& & 5 & 30 & 281 & 35460\
& & 6 & 19 & 327 & 36578\
$\exp(-x^2)$ & & & & &\
& $10^{-5}$ & & & &\
& & 3 & 11 & 132 & 10884\
& & 4 & 7 & 163 & 12141\
& & 5 & 5 & 195 & 14038\
& & 6 & 4 & 226 & 15863\
& $10^{-7}$ & & & &\
& & 3 & 32 & 161 & 20504\
& & 4 & 15 & 199 & 19090\
& & 5 & 10 & 238 & 21180\
& & 6 & 7 & 276 & 23254\
& $10^{-9}$ & & & &\
& & 3 & 97 & 187 & 49032\
& & 4 & 36 & 231 & 32305\
& & 5 & 19 & 275 & 30234\
& & 6 & 12 & 319 & 31595\
$\sin(x)$ & & & & &\
& $10^{-5}$ & & & &\
& & 3 & 2 & 113 & 6188\
& & 4 & 2 & 141 & 7679\
& & 5 & 2 & 169 & 9170\
& & 6 & 2 & 197 & 10661\
& $10^{-7}$ & & & &\
& & 3 & 3 & 142 & 9444\
& & 4 & 2 & 176 & 11480\
& & 5 & 2 & 211 & 13720\
& & 6 & 2 & 246 & 15960\
& $10^{-9}$ & & & &\
& & 3 & 7 & 167 & 13432\
& & 4 & 3 & 207 & 15567\
& & 5 & 2 & 247 & 18322\
& & 6 & 2 & 288 & 21321\
$\exp(-x)$ & & & & &\
& $10^{-5}$ & & & &\
& & 3 & 11 & 116 & 8106\
& & 4 & 6 & 143 & 8625\
& & 5 & 5 & 171 & 10055\
& & 6 & 4 & 198 & 11245\
& $10^{-7}$ & & & &\
& & 3 & 31 & 149 & 17304\
& & 4 & 15 & 184 & 15690\
& & 5 & 9 & 220 & 16956\
& & 6 & 7 & 255 & 18662\
& $10^{-9}$ & & & &\
& & 3 & 97 & 175 & 45012\
& & 4 & 36 & 216 & 28302\
& & 5 & 19 & 257 & 25721\
& & 6 & 12 & 298 & 26452\
$\arcsin(x)$ & & & & &\
& $10^{-5}$ & & & &\
& & 3 & 2 & 105 & 4872\
& & 4 & 2 & 131 & 6038\
& & 5 & 2 & 157 & 7204\
& & 6 & 2 & 183 & 8370\
& $10^{-7}$ & & & &\
& & 3 & 3 & 134 & 7784\
& & 4 & 2 & 166 & 9419\
& & 5 & 2 & 199 & 11250\
& & 6 & 2 & 232 & 13081\
& $10^{-9}$ & & & &\
& & 3 & 6 & 159 & 11264\
& & 4 & 3 & 197 & 13138\
& & 5 & 3 & 236 & 15672\
& & 6 & 2 & 274 & 17938\
---
abstract: 'Polar codes have attracted increasing attention from researchers in recent years owing to their capacity-achieving property. However, their error-correction performance under successive cancellation (SC) decoding is inferior to that of other modern channel codes at short or moderate blocklengths. The SC-Flip (SCF) decoding algorithm achieves higher performance than SC decoding by identifying possibly erroneous decisions made in the initial SC decoding and flipping them in subsequent decoding attempts. However, it does not perform well when a codeword contains more than one erroneous decision. In this paper, we propose a path metric aided bit-flipping decoding algorithm to identify and correct more errors efficiently. In this algorithm, the bit-flipping list is generated based on both a log likelihood ratio (LLR) based path metric and a bit-flipping metric. The path metric is used to verify the effectiveness of bit-flipping. In order to reduce the decoding latency and computational complexity, a corresponding pipeline architecture is designed. By applying this decoding algorithm and pipeline architecture, an error-correction performance improvement of up to 0.25 dB over SCF decoding is obtained at a frame error rate of $10^{-4}$, with low average decoding latency.'
author:
-
bibliography:
- './PMA\_SCF.bib'
title: |
Algorithm and Architecture for Path Metric Aided Bit-Flipping Decoding of Polar Codes\
[^1]
---
successive cancellation flip, path metric, bit-flipping metric, pipeline architecture, polar codes.
Introduction
============
Polar codes [@Arikan2008Channel] are the first channel codes proven to achieve the capacity of various communication channels and have been selected for the control channel in the 5G enhanced Mobile BroadBand (eMBB) scenario [@3GPPstandard]. However, for short to moderate blocklengths, the performance of successive cancellation (SC) decoding is worse than that of Turbo codes or low-density parity-check (LDPC) codes. To overcome this limitation, SC list (SCL) decoding [@Tal2012List] was introduced to improve the performance at the cost of increased computational complexity and decoding latency.
In recent research, the successive cancellation flip (SCF) decoding proposed in [@Afisiadis2014A] was shown to be capable of providing error-correction performance close to that of SCL decoding with a small list size, while keeping the computational complexity close to that of SC. The idea of the SCF decoder is to allow multiple subsequent decoding attempts to opportunistically correct the erroneous decision made in the initial SC decoding by flipping the most unreliable bit. Modifications to SCF decoding are proposed in [@Giard2017Fast; @Condo2018Improved; @Furkan2018Bit-Flipping] to reduce the decoding latency and implementation complexity. However, these decoding methods focus on correcting the first error and cannot identify more than one erroneous decision, which limits their error-correction performance.
In order to enhance the performance of SCF decoding, several improvements have been proposed to correct more erroneous decisions [@Furkan2017Partitioned; @Zhang2017Progressive; @Fazeli2017Viterbi; @Chandesris2017An; @Chandesris2018Dynamic]. In [@Furkan2017Partitioned], the codeword is subdivided into several partitions, on which SCF is run individually. However, for this method to correct more errors, the erroneous bits must happen to be spread evenly over different partitions, which limits its correcting capability. In [@Zhang2017Progressive], the distribution of the first erroneous bit is investigated and the search scope of flipping bits is restricted to a subset of the information bits. By iteratively modifying the subset, this method can identify multiple incorrect bits. However, erroneous decisions are caused not only by the transmission capability of the subchannel itself, but also by the current channel noise. Therefore, the positions of the flipping bits cannot be determined by considering the codeword alone.
The dynamic SCF (DSCF) decoding algorithm proposed in [@Chandesris2018Dynamic] shows a promising way to identify multiple erroneous decisions, but it is not efficient at correcting them. In contrast to that work, our method generates the bit-flipping list based on both a path metric and a bit-flipping metric. The path metric of each decoding attempt is used as feedback to verify the effectiveness of the bit-flipping attempt. Building on effective bit-flipping attempts, we can identify multiple erroneous decisions step by step. In order to reduce the decoding latency and implementation complexity, a pipeline decoding architecture is designed.
The remainder of this work is organized as follows: in Section II, an overview of polar codes, SCF decoding, and DSCF decoding is presented. In Section III, the proposed decoding algorithm and its corresponding pipeline architecture are detailed. Section IV reports the simulation results, and then conclusions are drawn in Section V.
Preliminaries
===========
Polar Codes
-----------
Polar codes characterized by $(N,K,\mathcal{I})$ can achieve channel capacity via the phenomenon of channel polarization [@Arikan2008Channel]. The channel polarization theorem states that, as the blocklength $\textit{N}$ goes to infinity, a polarized subchannel becomes either a noiseless channel or a pure noise channel. By transmitting information bits over the reliable subchannels and transmitting frozen bits, which are known by both transmitter and receiver, over the unreliable subchannels, polar codes can achieve the channel capacity. Hence, constructing a polar code is equivalent to finding the $\textit{K}$ most reliable subchannels over which the information bits are transmitted, with a set $\mathcal{I}$ indicating the locations of these subchannels. Many construction methods have been proposed to calculate the reliability of the subchannels. In our work, we use the Gaussian approximation (GA) based density evolution method proposed in [@Wu2014Construction], since it is popular in the construction of polar codes for its good tradeoff between complexity and performance.
The encoding process of a polar code can be represented with a matrix multiplication like: $$\begin{aligned}
\label{PASCF_EQ1}
x = u{G_N},\;{\rm{where}}\;{G_N} = B{\left[ {\begin{array}{*{20}{c}}
1 & 0 \\
1 & 1 \\
\end{array}} \right]^{ \otimes n}}
\end{aligned}$$ The vector $\textbf{u}$, which holds the information bits, denotes the source codeword to be encoded, the vector $\textbf{x}$ denotes the encoded codeword, and ${G_{N}}$ is the generator matrix, where $\otimes$ denotes the Kronecker product and $\textit{B}$ is a bit-reversal permutation matrix.
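As a quick illustration of the encoding rule above, the following numpy sketch builds $G_N$ explicitly and encodes one arbitrary example vector (frozen positions are assumed to be already set to zero); a practical encoder would use the $O(N\log N)$ butterfly structure instead of the full matrix.

```python
import numpy as np

def polar_encode(u: np.ndarray) -> np.ndarray:
    """Encode u (length N = 2^n) as x = u G_N over GF(2), with
    G_N = B F^{(x)n}, F = [[1,0],[1,1]] and B the bit-reversal permutation."""
    N = len(u)
    n = int(np.log2(N))
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)                                    # F^{(x)n}
    bitrev = [int(format(i, '0{}b'.format(n))[::-1], 2) for i in range(N)]
    G = G[bitrev, :]                                         # left-multiplication by B
    return (u @ G) % 2

u = np.array([0, 0, 0, 1, 0, 1, 1, 1], dtype=np.uint8)        # N = 8 example word
print(polar_encode(u))
```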
As for the decoding, we denote by $\textbf{y}$ the data received from the channel and use them as the decoder inputs. The decoder’s output is denoted by $\hat{u}^{N}_1$, where $\hat{u}_i$ is the estimate of the bit $u_i$ by hard decision. This hard decision is made according to the log likelihood ratio (LLR) $L_i=\log\left(\frac{\Pr(\textbf{y},\hat{u}_{1}^{i-1}\mid u_{i}=0)}{\Pr(\textbf{y},\hat{u}_{1}^{i-1}\mid u_{i}=1)}\right)$ by using the hard decision function $\textit{h}$: $$\begin{aligned}
\label{PASCF_EQ2}
{\hat u_i} = h({L_i}) = \left\{ {\begin{array}{ll}
{u_i}\;&{\rm{if}}\;i \notin \mathcal{I} \\
\frac{{1 - {\mathop{\rm sign}\nolimits} ({L_i})}}{2}\;&{\rm{if}}\;i \in \mathcal{I} \\
\end{array}} \right.
\end{aligned}$$ where $\mathop{\rm sign}(L_i) = \pm1$. At the same time, the LLRs at different calculation stage *l* are computed iteratively by follows: $$\label{PASCF_EQ3}
L_{l,i}= \left\{
\begin{array}
{l@{\quad \quad}l}
f(L_{l+1,i};L_{l+1,i+2^{l}}) & \text{if } \frac{i}{2^{l}}\text{is even} \\
g(\hat{s}_{l,i-2^{l}};L_{l+1,i-2^{l}};L_{l+1,i}) & \text{otherwise} \\
\end{array}
\right.$$ where $\hat{s}$ denotes the partial sum of $\hat{u}^{i-1}_{1}$. And in the LLR domain, the function $\textit{f}$ and $\textit{g}$ perform the following calculation for given inputs LLRs $L_a$ and $L_b$:
$$\begin{aligned}
\label{PASCF_EQ4}
f({L_a},{L_b})&=\log (\frac{{{e^{{L_a} + {L_b}}} + 1}}{{{e^{{L_a}}} + {e^{{L_b}}}}}) \\
\label{PASCF_EQ5}
g({L_a},{L_b},{u_s})&={( - 1)^{{u_s}}}{L_a} + {L_b}
\end{aligned}$$
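A direct floating-point transcription of the $f$ and $g$ functions is shown below; hardware implementations typically replace $f$ by the min-sum approximation to avoid the exponentials, and the example inputs are arbitrary.

```python
import numpy as np

def f_llr(La: float, Lb: float) -> float:
    """Check-node update f(La, Lb) in the LLR domain (exact form)."""
    return float(np.log((np.exp(La + Lb) + 1.0) / (np.exp(La) + np.exp(Lb))))
    # min-sum approximation often used in hardware:
    # np.sign(La) * np.sign(Lb) * min(abs(La), abs(Lb))

def g_llr(La: float, Lb: float, us: int) -> float:
    """Variable-node update g(La, Lb, us), conditioned on the partial sum us."""
    return (-1.0) ** us * La + Lb

print(f_llr(1.2, -0.7), g_llr(1.2, -0.7, 1))   # exact f is about -0.37 here; min-sum would give -0.7
```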
Successive Cancellation Flip Decoding
-------------------------------------
The SCF decoding is a slightly modified SC decoding algorithm, characterized by a number of extra decoding attempts in which unreliable bits from the initial SC decoding are flipped. The decoding procedure of SCF is as follows: after the first SC decoding pass, the concatenated cyclic redundancy check (CRC) is verified. In case it matches, the decoding procedure stops and the estimate $\hat{u}^{N}_1$ is output. Otherwise, a list of positions of the least reliable estimated bits is built and then another SC decoding pass is launched. In this pass, once the location of the information bit that corresponds to the least reliable bit is reached, that estimated bit is flipped before the subsequent SC decoding. Once an SC decoding pass has finished, the CRC is verified again. This procedure is repeated until the CRC check passes or a predetermined maximum number $\textit{T}$ of decoding attempts is reached. However, since the concatenated CRC cannot indicate the number of erroneous decisions and their positions, the performance of SCF decoding is limited by a hypothetical decoder, called the SC-oracle decoder [@Afisiadis2014A], which can accurately avoid all first wrong decisions.
Dynamic SCF Decoding
--------------------
The DSCF decoding algorithm, aimed at correcting multiple erroneous bits, was proposed in [@Chandesris2018Dynamic]. This decoding method is characterized by a bit-flipping list $\mathcal{L}_{flip}$, which is updated after every decoding attempt. It contains $\textit{T}$ bit-flipping sets $\{ {\mathcal{E} _1}, \cdots ,{\mathcal{E} _{{\omega}}}\}$ with the highest probability of correcting the trajectory of the SC decoding. Here $\omega$ denotes the maximum number of bits that can be corrected by this list; it is called the noise order and indicates the correcting capability of the bit-flipping list. With this definition, SCF decoding is an order-1 decoder.
The DSCF decoding algorithm builds the bit-flipping list by using a new bit-flipping metric $M_\alpha$ [@Chandesris2017An], which takes into account the serial nature of the SC decoder. This metric has a much higher probability of finding the first error that occurred during the sequential decoding process than the absolute value of the LLRs does. The method of calculating the probability of a flip set $\mathcal{E}_{\omega}$ with $\omega$ flipping bits correcting the trajectory of SC decoding is close to that used to calculate $P({\mathcal{C}_i})=P({\hat u_i} \ne {u_i}|\hat u_1^{i - 1} = u_1^{i - 1})$ in [@Wu2014Construction]. By using this method, the probability $P(\mathcal{E}_{\omega})$ can be computed by the following expression:
$$\begin{aligned}
\label{PASCF_EQ6}
P(\mathcal{E}_{\omega}) = \prod\limits_{j \in {\mathcal{E} _\omega }} {{p_e}(\hat u{{[{\mathcal{E} _{\omega - 1}}]}_j}) \cdot \prod\limits_{\scriptstyle j < {i_\omega } \atop
\scriptstyle j \in \mathcal{I}\backslash {\mathcal{E} _\omega }} {(1 - {p_e}(\hat u{{[{\mathcal{E} _{\omega - 1}}]}_j}))} }
\end{aligned}$$
However, since the computation of ${p_e}(\hat u{{[{\mathcal{E} _{\omega - 1}}]}_j})$ is a hard task, it can be approximately replaced by ${q_e}(\hat u{{[{\mathcal{E} _{\omega - 1}}]}_j})= \frac{1}{1 + \exp (|{\rm{L}}{{[{\mathcal{E} _{\omega - 1}}]}_j}|)}$. In this way, the bit-flipping metric is defined as: $$\begin{gathered}
\label{PASCF_EQ7}
{M_\alpha }({\mathcal{E} _\omega }) = \prod\limits_{j \in {\mathcal{E} _\omega }} {\left( {\frac{1}{{1 + \exp (\alpha |{\rm{L}}{{[{\mathcal{E} _{\omega - 1}}]}_j}|)}}} \right)} \cdot \\
\prod\limits_{\scriptstyle j < {i_\omega } \atop
\scriptstyle j \in \mathcal{I}\backslash {\mathcal{E} _\omega }} {\left( {\frac{1}{{1 + \exp ( - \alpha |{\rm{L}}{{[{\mathcal{E} _{\omega - 1}}]}_j}|)}}} \right)}
\end{gathered}$$ Specifically, for the initial SC decoding pass, the ${M_\alpha }(i)$ of each information bit can be calculated as: $${M_\alpha }(i) = \frac{1}{{1 + \exp (\alpha |{{\rm{L}}_i}|)}} \cdot \prod\limits_{\scriptstyle j < i \atop
\scriptstyle j \in \mathcal{I}} {\left( {\frac{1}{{1 + \exp ( - \alpha |{{\rm{L}}_j}|)}}} \right)}$$
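For concreteness, the metric for the initial pass can be evaluated in the log domain as sketched below; the value of $\alpha$ and the example LLRs are arbitrary illustrative choices.

```python
import numpy as np

def log_flip_metric_first_pass(L_info: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """log M_alpha(i) for every information bit after the initial SC pass.
    L_info holds the decision LLRs of the information bits in decoding order."""
    a = np.abs(L_info)
    penalty = -np.logaddexp(0.0, -alpha * a)                    # log 1/(1+exp(-alpha|L_j|))
    prefix = np.concatenate(([0.0], np.cumsum(penalty)[:-1]))   # sum over j < i
    return -np.logaddexp(0.0, alpha * a) + prefix               # log of the product formula

L = np.array([3.1, -0.4, 2.2, 0.9, -5.0])
print(np.argsort(-log_flip_metric_first_pass(L)))               # most suspicious bits first
```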
The difference in procedure between SCF decoding and DSCF decoding lies in the updating of the bit-flipping list. For DSCF decoding, after each decoding attempt, a new flipping bit is added to the current flip set and its corresponding ${M_\alpha }$ is computed. If this ${M_\alpha }$ is greater than the smallest one in the list, the bit-flipping set is inserted into the list.
Improved Successive Cancellation Flip Decoding
==============================================
In this section, we propose a path metric aided bit-flipping decoding to improve the performance of SCF decoder to correct multiple erroneous decisions. And its corresponding pipeline architecture is designed to reduce decoding latency.
Path Metric Aided SCF Decoding Algorithm
----------------------------------------
Comparing the different bit-flipping decoding algorithms proposed in the literature, we find that for many wrongly estimated codewords the first error is already contained in the initial bit-flipping list. However, the correct codeword is still not obtained in the end, either because channel noise causes more than one error in a codeword or because the decoding algorithm cannot find all of the errors within the limited number of attempts. Based on this observation, we run simulations to evaluate the correct ratio when the first erroneous bit is in the initial bit-flipping list built using the bit-flipping metric proposed in [@Chandesris2018Dynamic].
![The correct ratio of DSCF when the first error bit is in the bit-flipping list and the average rank of the first error bit in the bit-flipping list.[]{data-label="Fig.aver_correct"}](average_and_correct-eps-converted-to.pdf){width="8cm"}
From Fig.\[Fig.aver\_correct\], we can observe that the correct ratio is unsatisfactory in the low $E_b/N_0$ regime and that the performance of DSCF decoding depends on the ranking of the correct position in the initial bit-flipping list. This means that a bit-flipping set at the top of the list has a higher probability of trying more than one bit-flipping position within the limited number of decoding attempts. When the bit-flipping set containing the accurate bit-flipping bits lies at the bottom of the list, it may be removed from the list during subsequent list updates, or it may not get enough attempts to find all of the errors before the predetermined $T$ is reached.
In the above simulations, we also found that if the first error position is in the initial bit-flipping list, its corresponding codeword almost always has the smallest LLR-based path metric [@Balatsoukas2014LLR]. As shown in Fig.\[Fig.aver\_correct\], the set containing the accurate bit-flipping bit ranks at the top of the list sorted by the LLR-based path metric. Hence, we introduce the LLR-based path metric as feedback for the generation of the bit-flipping list.
The procedure of our Path Metric Aided SCF (PMA-SCF) decoding algorithm is as follows: after the first SC decoding pass, the initial bit-flipping list is built based on the bit-flipping metric $M_\alpha$, and then the bit-flipping sets are attempted one by one until the CRC matches or all bit-flipping sets have been attempted. During these decoding attempts, the LLR-based path metric of each attempt is calculated, and a new order-2 bit-flipping list is built from the corresponding former bit-flipping set. This list is sorted first by the path metric and then by the $M_\alpha$ value. That is to say, whether a set has priority as a starting point for correcting multiple errors is determined by the path metric, while whether a bit should be added to the bit-flipping set is determined by the $M_\alpha$ value. A new decoding attempt is then launched according to this new list. The details of this decoding algorithm are described in Algorithms \[alg:PMA-SCF\], \[alg:bitflipdecode\] and \[alg:genlist\].
$(\hat u_1^N,L(y_1^N,\hat u_1^{i - 1}|{u_i}),PM_{init}) \leftarrow {\rm{SC}}(y_1^N,\mathcal{I},\varnothing)$ $(\mathcal{L}_{init},\mathcal{M}_{init}) \leftarrow$ Init$({L_{i \in \mathcal{I}}(y_1^N,\hat u_1^{i - 1}|{u_i})},T)$ $(\mathcal{L},\mathcal{M},\mathcal{P})\leftarrow(\mathcal{L}_{init},\mathcal{M}_{init},PM_{init})$ $(\mathcal{L},\mathcal{M},\mathcal{P})\leftarrow$BitFlip\_Decode$(y_1^N,\mathcal{I},\mathcal{L},\mathcal{M},\mathcal{P})$
$T\leftarrow$size\_of$(\mathcal{L})$ $(\hat u_1^N,\{ L{[{\mathcal{E}_j}]_i}\},PM_j) \leftarrow {\rm{SC}}(y_1^N,\mathcal{I},\mathcal{E}_j \in \mathcal{L})$ $(\mathcal{L},\mathcal{M}) \leftarrow$ Sort$(\mathcal{L}^{T}_{1},\mathcal{M}^{T}_{1},PM^{T}_{1})$ return $(\mathcal{L},\mathcal{M},PM^{T}_{1})$
$\mathcal{E} = {\mathcal{E}_j} \cup \{ i\} ;m = {M_\alpha }(\mathcal{E})$ return ($\mathcal{L},\mathcal{M}$)
In Algorithm \[alg:bitflipdecode\], $\mathcal{P}_m$ denotes the path metric of the decoding pass whose bit-flipping set the current decoding attempt extends. The function Sort first sorts the lists $\mathcal{L}_j$ by their $PM_{j}$, and then sorts the bit-flipping sets $\mathcal{E}$ within each list $\mathcal{L}_j$ by their ${M_\alpha }(\mathcal{E})$. All these sorted $\mathcal{L}_j$ constitute the new bit-flipping list $\mathcal{L}$.
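The overall control flow described above can be summarized by the following schematic sketch, in which the decoder internals (SC decoding, CRC check, list construction and re-sorting) are abstract callables; it illustrates the scheduling only and is not a complete decoder.

```python
def pma_scf_decode(llr_channel, info_set, T, sc_decode, crc_ok, build_list, resort):
    """Schematic PMA-SCF control flow; sc_decode, crc_ok, build_list and resort
    are placeholders for the routines described in the text."""
    u_hat, decision_llrs, pm = sc_decode(llr_channel, info_set, flip_set=())
    if crc_ok(u_hat):
        return u_hat
    flip_list = build_list(decision_llrs, T)        # initial list, ranked by M_alpha
    for _ in range(T):
        if not flip_list:
            break
        flip_set = flip_list.pop(0)
        u_hat, decision_llrs, pm = sc_decode(llr_channel, info_set, flip_set)
        if crc_ok(u_hat):
            return u_hat
        # extend this attempt's flip set via M_alpha, then re-rank all candidate
        # sets first by the LLR-based path metric pm and then by M_alpha
        flip_list = resort(flip_list, flip_set, decision_llrs, pm)
    return u_hat                                     # decoding failure
```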
Pipeline Architecture for Bit-flipping Decoding
-----------------------------------------------
By using the path metric as feedback, the decoding latency would inevitably increase if we still used the decoding architecture of SC. In order to reduce the decoding latency, we design a pipeline architecture to realize parallel decoding of different attempts. Different from the parallel architecture adopted by the SCL decoder, our pipeline architecture does not need as many processing elements to calculate the LLR data of different decoding attempts at the same time, since there is no data dependency between different attempts. As a result, the usage ratio of processing elements in our pipeline is much higher than that of the SCL decoder.
The pipeline decoding is realized by splitting the command stream from its corresponding data. Since the different decoding attempts follow the same decoding schedule, we can use a single set of commands with several pointers to control the different decoding attempts. As shown in Fig.\[Fig.pipeline\], there are four pointers indicating the current decoding stage of their corresponding decoding attempts. Each command in the command stream contains the decoding stage information and the type of decoding function ($f$ or $g$).
According to the pointers, the commands are fetched into the command FIFO, while the corresponding data of the different attempts are fetched into the data FIFO. Then the processing elements, controlled by the current commands, process the data in the data FIFO. Each processing element can execute the $f$ or $g$ calculation and the hard decision function $h$ according to the stage and function type. Since the data buffered in the data FIFO may not be processed completely in one calculation cycle, the remaining data stay in the FIFO. Meanwhile, new data are pushed into the FIFO once the data of the former command have been processed, which leads to a high peak-to-average ratio of the data FIFO usage. In order to reduce the size and peak-to-average ratio of the data FIFO, launch intervals are arranged among the sequential decoding attempts to avoid large blocks of data being pushed into the data FIFO at the same time.
The calculation results of the processing elements are sent to corresponding internal LLR memory, while the hard decision results are sent to the path memory. Based on the hard decisions, the partial sums are calculated. Then they are fetched to the partial sum FIFO. An insertion sorter is adopted to calculate and sort the bit-flipping metric of each bit. Meanwhile, the path metrics and the CRC check are computed based on the hard decisions on-the-fly. Based on the bit-flip metrics and path metrics, the flip list is generated.
Besides, by applying the latency-saving technique proposed in [@Giard2017PolarBear], the decoding latency can be further reduced, since the different decoding attempts have the same starting point in the decoding command stream. Due to page limitations, the trade-off among decoding latency, processing-element usage ratio and memory requirements is omitted here. A more detailed presentation of this architecture will be given in the full version of this paper.
Simulation Results
==================
In this section, the frame error rate (FER) performance and the decoding latency of the proposed PMA-SCF decoding algorithm are evaluated via Monte-Carlo simulations. The simulations are based on the AFF3CT [@aff3ct] software, which we extended with our proposed decoding algorithm. Specifically, the transmissions use binary phase-shift keying (BPSK) modulation over an additive white Gaussian noise (AWGN) channel. All polar codes are constructed targeting an $E_b/N_0$ of 3.0 dB, and all CRC-aided polar codes are concatenated with a 16-bit CRC with generator polynomial $g(D)\; = \;{D^{16}} + {D^{12}} + {D^5} + 1$. In this regard, the coding rate of these polar codes is ${R} = {(K + 16)}/{N}$.
![FER performance comparison of our proposed decoding algorithm for polar codes of length $N\in\{ 256,512,1024\}$ and $R \in \{ 1/6,1/3,1/2,2/3\}$, with predetermined maximum attempts $T=10$.[]{data-label="Fig.coderate_compare"}](coderate-eps-converted-to.pdf){width="8cm"}
In Fig.\[Fig.coderate\_compare\], we compare the performance of the PMA-SCF decoder for polar codes with different blocklengths and rates. One can observe that the polar codes with low code rate perform much better than those with high code rate, especially in the low $E_b/N_0$ regime. The performance gap between different code rates narrows in the high $E_b/N_0$ regime, which demonstrates the effectiveness of the proposed decoding algorithm at correcting erroneous decisions. However, the performance gap between the $R=1/2$ curve and the $R=2/3$ curve does not narrow much, since there are too many erroneous bits in a codeword of code rate $R=2/3$, which is beyond the correction capability of the PMA-SCF decoder. Besides, the error-correction performance increases as the blocklength increases. It can also be observed that the gaps between different blocklengths narrow quickly in the high $E_b/N_0$ regime, since more wrongly estimated codewords can be corrected by PMA-SCF decoding.
![FER performance of our proposed decoding algorithm for polar code PC(1024-512), compared with other bit-flipping methods and oracle-assisted SC-oracle decoder with order 1 (SCO-O1).[]{data-label="Fig.flip_compare"}](performance-eps-converted-to.pdf){width="8cm"}
Fig.\[Fig.flip\_compare\] depicts the FER performance of our proposed decoding algorithm for the polar code PC(1024-528) against other bit-flipping decoding algorithms, where PSCF-P2 denotes the partitioned SCF decoding [@Furkan2017Partitioned] with P=2 partitions. In order to keep the same code rate, the concatenated CRC code for PSCF-P2 is an 8-bit CRC, and a 4-bit CRC for PSCF-P4. In Fig.\[Fig.flip\_compare\], all incarnations of SCF decoding have the same predetermined maximum number of attempts $T=10$. The FER performance of the oracle-assisted SC-oracle decoder (SCO-O1) and the SC decoder are used as baselines for comparison, where SCO-O$k$ means that the decoder always corrects the first $k$ erroneous decisions met by the SC decoder, but no further errors. It can be observed that our PMA-SCF decoding algorithm performs slightly better than the SCO-O1 decoder in almost all cases, owing to its ability to correct higher-order errors. However, the performance gap between the PMA-SCF and SCO-O1 curves narrows in the high $E_b/N_0$ regime, and SCO-O1 outperforms PMA-SCF at $E_b/N_0=3.5$ dB. This is due to the fact that the number of codewords with multiple errors decreases as $E_b/N_0$ increases, while PMA-SCF cannot always identify the first error accurately. Besides, one can observe that our proposed decoding algorithm outperforms SCF decoding by 0.25 dB at a FER of $10^{-4}$ with the same $T$ value and performs better than DSCF decoding at all $E_b/N_0$ conditions.
![FER performance of PMA-SCF decoding algorithm with different predetermined maximum attempts $T \in \{ 10,20,50\}$ for PC(1024-512), compared with CA-SCL decoding with list size $L \in \{2,4,8\}$.[]{data-label="Fig.stateofart"}](highper-eps-converted-to.pdf){width="8cm"}
In order to evaluate the performance of the PMA-SCF decoder with different predetermined maximum numbers of attempts $T$, it is compared with the CRC-aided SCL decoding algorithm with list size $L \in \{2,4,8\}$ and a 16-bit CRC for PC(1024-528). As shown in Fig.\[Fig.stateofart\], the FER performance of the oracle-assisted SC-oracle decoder with order 1 (SCO-O1) and order 2 (SCO-O2) is used as the baseline for comparison. One can observe that the FER curves of the PMA-SCF decoding algorithm with different predetermined maximum attempts all lie between the SCO-O1 and SCO-O2 curves, while the CRC-aided SCL decoding with list size $L=2$ (CA-SCL-L2) is the only one worse than SCO-O1 and CA-SCL-L8 is better than SCO-O2. The performance of PMA-SCF decoding with $T=20$ is similar to that of CA-SCL-L4 in almost all cases. Considering the decoding latency shown in Fig.\[Fig.decode\_latency\] and the implementation complexity, PMA-SCF decoding is more efficient than CA-SCL-L4 at achieving an equivalent FER.
![Average decoding latency of various decoding algorithm for $PC(1024,512)$. $T=20$ for all SCF based decoders.[]{data-label="Fig.decode_latency"}](latency-eps-converted-to.pdf){width="8cm"}
In Fig.\[Fig.decode\_latency\], the decoding latency of our proposed decoding algorithm is evaluated with respect to that of SCF, PSCF and DSCF. In this comparison, we use clock cycles as the measurement instead of the average number of attempts, because of the adopted pipeline architecture. The decoding latencies of the SC decoder and the SCL decoder, measured as in [@Giard2017PolarBear], are portrayed as reference lines. The average decoding latency at each $E_b/N_0$ point is obtained by simulating $1 \times {10^8}$ frames. It can be observed that the decoding latency of SCF is the highest among all SCF-based decoders in the low $E_b/N_0$ regime, since the absolute value of the LLR is not an efficient bit-flipping metric. Compared with SCF decoding, PSCF decoding has a much lower decoding latency, since it may stop decoding when a partition fails before $T$ iterations. The latency of our proposed PMA-SCF is about 24% above that of PSCF with $P=4$ at the worst-case $E_b/N_0 = 1$ dB, while it is up to $1.8\times$ faster than that of DSCF. It can also be observed that the decoding latency of our algorithm decreases quickly as $E_b/N_0$ increases and approaches that of SCL at moderate $E_b/N_0$. At higher $E_b/N_0$, all SCF-based decoders converge to the decoding latency of the SC algorithm.
Conclusion
==========
In this paper, we propose the PMA-SCF decoding algorithm, which generates the bit-flipping list according to both a bit-flipping metric and a path metric, providing an effective starting point for correcting more erroneous decisions. A corresponding pipeline architecture is designed to reduce the decoding latency. We show that the average latency is much lower than that of current bit-flipping decoding methods, at the cost of increased memory. The simulation results show that our decoding algorithm provides a performance improvement of up to 0.25 dB at a FER of $10^{-4}$ compared to SCF decoding, while decoding up to $1.8\times$ faster than DSCF decoding at the $E_b/N_0 = 1$ dB point.
[^1]: This work was supported in part by the NSF of China (Grant No. 61874140).
---
author:
- |
A. Bajravani\
\
\
\
A. Rastegar$^*$\
\
---
> **Abstract**
>
> In this paper we will try to introduce a good smoothness notion for a functor. We consider properties and conditions from geometry and algebraic geometry which we expect a smooth functor to have.\
> [[**[Key words:]{}**]{} Abelian Category, First Order Deformations, Multicategory, Tangent Category, Topologizing Subcategory.\
> [**[Mathematics Subject Classification:]{}**]{} 14A20, 14A15, 14A22.]{}
Introduction
============
Nowadays noncommutative algebraic geometry is at the focus of many basic topics in mathematics and mathematical physics. In these fields, any space under consideration is an abelian category and a morphism between noncommutative spaces is a functor between abelian categories. So one may ask how to generalize some aspects of morphisms between commutative spaces to morphisms between noncommutative ones. One of the important aspects in the commutative case is the notion of smoothness of a morphism, which can be stated in several languages, for example: by the lifting property as a universal language, by projectivity of relative cotangent sheaves as an algebraic language, and by inducing a surjective morphism on tangent spaces as a geometric language.
In this paper, in order to generalize the notion of a smooth morphism to a functor, we propose three different approaches. A brief description of the first one is as follows: linear approximations of a space are important and powerful tools. They have geometric meaning and algebraic structure, such as the vector space of first order deformations of a space. So it is legitimate to consider functors which preserve linear approximations. On the other hand, first order deformations are good candidates for linear approximations in categorical settings. These observations make it reasonable to consider functors which preserve first order deformations.\
The second one is motivated by both Schlessinger’s approach and simultaneous deformations. Briefly speaking, a simultaneous deformation is a deformation which deforms several ingredients of an object simultaneously. Deformations of morphisms with nonconstant target, and deformations of a couple $(X,\mathcal{L})$, in which $X$ is a scheme and $\mathcal{L}$ is a line bundle on $X$, are examples of such deformations. We also see that by this approach one can obtain a morphism between the moduli spaces of some moduli families. We obtain this by fixing a universal ring for objects which correspond to each other under a smooth functor. Theorem \[Th2\] connects this notion to the universal ring of an object. In $3.1$ and $3.2$ we describe the geometric setting and the usage of this approach, respectively.\
The third notion of smoothness comes from a basic reconstruction theorem of A. Rosenberg, influenced by ideas of A. Grothendieck. We think that this approach can be a source for translating other notions from the commutative case to the noncommutative one. In Remarks \[rem2\] and \[rem3\] we note that these three smoothness notions are independent of each other.\
Throughout this paper $\mathbf{Art}$ will denote the category of Artinian local $k$-algebras with quotient field $k$. By $\mathbf{Sets}$ we denote the category of sets, whose morphisms are maps between sets. For two functors $F,G: \mathbf{Art} \rightarrow \mathbf{Sets}$, the following notion of smoothness for morphisms between $F$ and $G$ has been introduced in [@M.; @Sch.]:\
\
A morphism $D:F\rightarrow G$ between covariant functors $F$ and $G$ is said to be a smooth morphism of functors if for any surjective morphism $\alpha:B\rightarrow A$, with $\alpha \in \operatorname{Mor}(\textbf{Art})$, the morphism $$F(B)\rightarrow F(A)\underset{G(A)}{\times}G(B)$$ is a surjective map in $\mathbf{Sets}$.\
Note that this notion of smoothness is a notion for morphisms between special functors, i.e. functors from the category $\mathbf{Art}$ to the category $\mathbf{Sets}$, while the concepts for smoothness which we introduce in this paper are notions for functors, but not for morphisms between them.\
\
A functor $F:\textbf{Art}\rightarrow \mathbf{Sets}$ is said to be a deformation functor if it satisfies Definition 2.1 of [@M.; @Man.]. For a fixed field $k$, the schemes in this paper are schemes over the scheme $\operatorname{Spec}(k)$ unless otherwise stated.
First Smoothness notion and some examples
=========================================
[**1.1 Definition:**]{} Let $M$ and $C$ be two categories. We say that the category $C$ is a multicategory over $M$ if there exists a functor $T:C\rightarrow M$, in which for any object $A$ of $M$, $T ^{-1}(A)$ is a full subcategory of $C$.\
Let $C$ and $\overline{C}$ be two multicategories over $M$ and $\overline{M}$ respectively. A morphism of multicategories $C$ and $\overline{C}$ is a couple $(u,\nu)$ of functors, with $u:C \rightarrow \overline{C}$ and $\nu:M\rightarrow \overline{M}$ such that the following diagram is commutative:\
$$\begin{array}{ccccc}
C &\overset{T}\rightarrow&M \\
u \downarrow& & \downarrow \nu\\
\overline{C}& \rightarrow & \overline{M}\\
\end{array}$$\
The category of modules over the category of rings and the category of sheaves of modules over the category of schemes are examples of multicategories.\
[**1.2 Definition:**]{} For a $S$-scheme $X$ and $A\in \mathbf{Art}$, we say that $\mathcal{X}$ is a $S$-deformation of $X$ over $A$ if there is a commutative diagram: $$\begin{array}{ccccc}
X & \rightarrow & \mathcal{X}\\
\downarrow & & \downarrow \\
S & \rightarrow & S\underset{k}{\times}A \\
\end{array}$$ in which $X$ is a closed subscheme of $\mathcal{X}$, the scheme $\mathcal{X}$ is flat over $S\underset{k}{\times}A$ and one has $X \cong S\underset{S\underset{k}{\times}A}{\times}\mathcal{X}$.\
Note that in the case $S=\operatorname{Spec}(k)$, we would have the usual deformation notion and as in the usual case the set of isomorphism classes of first order $S$-deformations of $X$ is a $k$-vector space. The addition of two deformations $(\mathcal{X}_{1},\mathcal{O}_{\mathcal{X}_{1}})$ and $(\mathcal{X}_{2},\mathcal{O}_{\mathcal{X}_{2}})$ is denoted by $(\mathcal{X}_{1}\underset{X}{\bigcup}\mathcal{X}_{2},\mathcal{O}_{\mathcal{X}_{1}}\underset{\mathcal{O}_{X}}{\times}\mathcal{O}_{\mathcal{X}_{2} })$.\
[**1.3 Definition:**]{} [**i)**]{} Let $C$ be a category. We say $C$ is a category with enough deformations, if for any object $c$ of $C$, one can associate a deformation functor. We will denote the associated deformation functor of $c$, by $D_{c}$. Moreover for any $c\in \operatorname{Obj}(C)$ let $D_{c}(k[\epsilon])$ be the tangent space of $c$, where $k[\epsilon]$ is the ring of dual numbers.\
[**ii)**]{} Let $C_{1}$ and $C_{2}$ be two multicategories with enough deformations over $\operatorname{Sch}/k$, and $(F,id)$ be a morphism between them. We say $F$ is a smooth functor if it has the following properties:\
[**1 :**]{} For any object $M$ of $C_{1}$, if $M_{1}$ is a deformation of $M$ over $A$ in $C_{1}$, then $F(M_{1})$ is a deformation of $F(M)$ over $A$ in $C_{2}$.\
[**2 :**]{} The map $$\begin{array}{ccc}
D_{M}(k[\varepsilon])&\rightarrow&D_{F(M)}(k[\varepsilon])\\
\mathcal{X}\!\!\!\!\!\!\!\!\!\!&\mapsto&\!\!\!\!\!\!\!\!\!\!F(\mathcal{X})
\end{array}$$ is a morphism of tangent spaces.\
The following are examples of categories with enough deformations:\
1) Category of schemes over a field $k$.\
2) Category of coherent sheaves on a scheme $X$.\
3) Category of line bundles over a scheme.\
4) Category of algebras over a field $k$.\
We will need the following lemma to present an example of smooth functors:
\[lem1.1\] Let $X$, $X_{1}$, $X_{2}$ and $\mathcal{X}$ be schemes over a fixed scheme $S$. Assume that the following diagram of morphisms between schemes is a commutative diagram.\
$$\begin{array}{ccc}
X & \overset{i_1}{\longrightarrow} & X_1\\
\downarrow & & \downarrow{\scriptstyle g}\\
X_2 & \overset{i_2}{\longrightarrow} & \mathcal{X}
\end{array}$$
\
If $i_{1}$ is a homeomorphism onto its image, then so is $i_2$.
[ See Lemma $(2.5)$ of [@K.; @Sch.]. ]{}
\[exam0\] Let $Y$ be a flat scheme over $S$. Then the fibered product by $Y$ over $S$ is smooth. More precisely, the functor: $$\begin{array}{ccc}
F:\operatorname{Sch}/S&\rightarrow&\operatorname{Sch}/Y\\
F(X)\!\!\!\!\!\!\!\!\!\!\!\!\!&=&X\underset{S}{\times}Y
\end{array}$$ is smooth.
Let $X$ be a closed subscheme of $\mathcal{X}$. Then $X\underset{S}{\times}Y$ is a closed subscheme of $\mathcal{X}\underset{S}{\times}Y$. To get the flatness of $\mathcal{X}\underset{S}{\times}Y$ over $S\underset{k}{\times}A$, it suffices to have flatness of $Y$ over $S$. It can also be verified easily that the isomorphism: $$(\mathcal{X}\underset{S}{\times}Y)\underset{S\underset{k}{\times}A}{\times}S\cong X\underset{S}{\times}Y$$ is valid. Therefore $\mathcal{X}\underset{S}{\times}Y$ is an $S$-deformation of $X\underset{S}{\times}Y$ if $\mathcal{X}$ is such a deformation of $X$. This verifies the first condition of item $(\mathbf{ii})$ of definition 1.3. To prove the second condition we need the following:
\[lem1.2\] Let $Y$, $X_{1}$ and $X_{2}$ be $S$-schemes. Assume that $X$ is a closed subscheme of $X_{1}$ and $X_{2}$. Then we have the following isomorphism:
$(X_{1}\underset{X}{\bigcup} X_{2})\underset{S}{\times}Y\cong
(X_{1}\underset{S}{\times}Y)\underset{X\underset{S}{\times}Y}{\bigcup}(X_{2}\underset{S}{\times}Y)$.
For simplicity we set: $$X_{1}\underset{X}{\cup}X_{2}=\mathcal{X} \qquad , \qquad
(X_{1}\underset{S}{\times}Y)\underset{X\underset{S}{\times}Y}{\bigcup}(X_{2}\underset{S}{\times}Y)=\mathcal{Z}$$ By universal property of $\mathcal{Z}$ we have a morphism $\theta:\mathcal{Z}\rightarrow\mathcal{X}\underset{S}{\times}Y$. We prove that $\theta$ is an isomorphism. Let $i_{1}:X_{1}\rightarrow \mathcal{X}$, $i_{2}: X_{2}\rightarrow \mathcal{X}$, $j_{1}:X_{1}\underset{S}{\times}Y\rightarrow \mathcal{Z}$ and $j_{2}:X_{2}\underset{S}{\times}Y\rightarrow \mathcal{Z}$ be the inclusion morphisms. Set theoretically we have: $$\begin{array}{cccc}
j_{1}(X_{1}\underset{S}{\times}Y)\bigcup j_{2}(X_{2}\underset{S}{\times}Y)&=&\mathcal{Z}& \qquad(\mathbf{\operatorname{I}})\\
i_{1}(X_{1})\bigcup i_{2}(X_{2})&=&\mathcal{X} & \qquad(\operatorname{II})
\end{array}$$ Now consider the following commutative diagrams:
$$\begin{array}{ccc}
X & \overset{f}{\longrightarrow} & X_1\\
{\scriptstyle g}\downarrow & & \downarrow{\scriptstyle i_1}\\
X_2 & \overset{i_2}{\longrightarrow} & \mathcal{X}
\end{array}
\qquad\qquad
\begin{array}{ccc}
X\underset{S}{\times}Y & \overset{g_1}{\longrightarrow} & X_1\underset{S}{\times}Y\\
{\scriptstyle g_2}\downarrow & & \downarrow{\scriptstyle j_1}\\
X_2\underset{S}{\times}Y & \overset{j_2}{\longrightarrow} & \mathcal{Z}
\end{array}$$
together with the morphisms $e:X_1\underset{S}{\times}Y\rightarrow\mathcal{X}\underset{S}{\times}Y$, $h:X_2\underset{S}{\times}Y\rightarrow\mathcal{X}\underset{S}{\times}Y$ and $\theta:\mathcal{Z}\rightarrow\mathcal{X}\underset{S}{\times}Y$.
Let $z\in \mathcal{X}\underset{S}{\times}Y$, $\alpha=P_{\mathcal{X}}(z)\in \mathcal{X}$ and $\beta=P_{Y}(z)\in Y$ in which $P_{\mathcal{X}}$ and $P_{Y}$ are the first and second projections from $\mathcal{X} \underset{S}{\times}Y$ to $\mathcal{X}$ and $Y$ respectively. Then by relation $(\operatorname{II})$ one has $ \alpha\in i_{1}(X_{1})$ or $ \alpha\in i_{2}(X_{2})$. If $\alpha=i_{1}(\alpha_{1})\in i_{1}(X_{1})$, then $\alpha_{1}$ and $\beta$ go to the same element in S by $\eta_{X_{1}}$ and $\eta_{Y}$ in which $\eta_{X_{1}}:X_{1}\rightarrow S$ and $\eta_{Y}:Y\rightarrow S$ are the maps which make $X_{1}$ and $Y$ schemes over $S$. Therefore there exists an element $\gamma$ in $X_{1}\underset{S}{\times}Y$ such that $\overline{P}_{X_{1}}(\gamma)=\alpha_{1}$ and $\overline{P}_{Y}(\gamma)=\beta$ in which $\overline{P}_{X_{1}}$ and $\overline{P}_{Y}$ are the first and second projections from $X\underset{S}{\times}Y$ to $X_{1}$ and $Y$ respectively. By universal property of fibered products $\gamma$ belongs to $\mathcal{X}\underset{S}{\times}Y$ and $\theta(\gamma)=z$. The proof for the case $\alpha \in i_{2}(X)$ is similar. This implies that $\theta$ is surjective.\
For injectivity of $\theta$ assume that $\theta(z_{1})=\theta(z_{2})$. The relation $(\operatorname{I})$ implies that $z_{1}$ and $z_{2}$ belong to $\operatorname{im}(j_{1})\bigcup \operatorname{im}(j_{2})$. Set $z_{1}=j_{1}(c_{1})$ and $z_{2}=j_{2}(c_{2})$. There are two cases: if $z_{1}, z_{2} \in \operatorname{im}(j_{1})\cap \operatorname{im}(j_{2})$, then the lemma \[lem1.1\] implies $e(c_{1})\neq e(c_{2})$ when $c_{1}\neq c_{2}$. Now by commutativity of the subdiagram:
$$\begin{array}{ccc}
X_1\underset{S}{\times}Y & \overset{e}{\longrightarrow} & \mathcal{X}\underset{S}{\times}Y\\
 & {\scriptstyle j_1}\searrow & \uparrow{\scriptstyle\theta}\\
 & & \mathcal{Z}
\end{array}$$
we have $\theta(z_{1})\neq \theta(z_{2})$ when $z_{1}\neq z_{2}$.\
Otherwise assume that $z_{1}\in \operatorname{im}(j_{1})$ and $z_{2}\in \operatorname{im}(j_{2})- \operatorname{im}(j_{1})$. In this case one can see easily that $i_{1}\overline{P}_{X_{1}}(c_{1})=i_{2}q_{2}(c_{2})$ in which $q_{2}$ is the first projection from $X_{2}\underset{S}{\times}Y$ to $X_{2}$. Since $\mathcal{X}$ is the fibered sum of $X_{1}$ and $X_{2}$, there exists an element $x\in X$ such that $i_{1}f(x)=i_{2}g(x)$, $f(x)=\overline{P}_{X_{1}}(c_{1})$ and $g(x)=q_{2}(c_{2})$.\
Set $y=p_{2}e(c_{1})$ in which $p_{2}$ is the second projection from $\mathcal{X}\underset{S}{\times}Y$ to $Y$. By a diagram chasing we see that $x$ and $y$ go to the same element in S. This implies that there exists an element $\epsilon$ in $X\underset{S}{\times}Y$ which is mapped to $x$ and $y$ by first and second projections, respectively. Also it is easy to see that the equalities $g_{1}(x,y)=c_{1}$ and $g_{2}(x,y)=c_{2}$ are valid. Since $\mathcal{Z}$ is the fibered sum of $X_{1}\underset{S}{\times}Y$ and $X_{2}\underset{S}{\times}Y$ on $X\underset{S}{\times}Y$, we have $z_{1}=z_{2}$ which means that $\theta$ is injective. This together with the surjectivity of $\theta$ implies that $\theta$ is bijective. Continuity of $\theta$ and its inverse, follow by a diagram chasing.\
Finally we should prove that $\mathcal{O}_{\mathcal{X}\underset{S}{\times}Y}\cong \mathcal{O}_{Z}$. Since the claim is local, it is sufficient to prove it for affine schemes. Let $\mathcal{X}$ be an affine scheme, so $X_{1}$, $X_{2}$ and $X$ are affine schemes, since they are closed subschemes of $\mathcal{X}$ each one defined by a nilpotent sheaf of ideals. Set $\mathcal{X}=\operatorname{Spec}(A)$, $X_{1}=\operatorname{Spec}(A_{1})$, $X_{2}=\operatorname{Spec}(A_{2})$, $X=\operatorname{Spec}(A_{0})$, $Y=\operatorname{Spec}(B)$ and $S=\operatorname{Spec}(C)$. The isomorphism $\mathcal{O}_{\mathcal{X}\underset{S}{\times}Y}\cong \mathcal{O}_{Z}$ reduces to the following isomorphism: $$(A_{1}\underset{A_{0}}{\times}A_{2})\underset{C}{\otimes}B\cong
(A_{1}\underset{C}{\otimes}B)\underset{A_{0}\underset{C}{\otimes}B}{\times}(A_{2}\underset{C}{\otimes}B).$$ Define a morphism as follows: $$\begin{array}{ccc}
d:(A_{1}\underset{A_{0}}{\times}A_{2})\underset{C}{\otimes}B&\rightarrow&
(A_{1}\underset{C}{\otimes}B)\underset{A_{0}\underset{C}{\otimes}B}{\times}(A_{2}\underset{C}{\otimes}B)\\
d((a_{1},a_{2})\otimes b)\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!&=&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\!\!(a_{1}\otimes b,a_{2}\otimes b).
\end{array}$$ By a simple commutative algebra argument it can be shown that this is in fact an isomorphism. This completes the proof of lemma.
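For the reader’s convenience, here is one way to carry out the commutative algebra argument mentioned above (a sketch in our own wording, using the flatness of $B$ over $C$ coming from the hypothesis that $Y$ is flat over $S$): the fibered product of rings sits in the exact sequence
$$0\longrightarrow A_{1}\underset{A_{0}}{\times}A_{2}\longrightarrow A_{1}\oplus A_{2}\longrightarrow A_{0},$$
where the last map sends $(a_{1},a_{2})$ to $\overline{a}_{1}-\overline{a}_{2}$. Tensoring over $C$ with the flat $C$-algebra $B$ preserves this exact sequence, so $(A_{1}\underset{A_{0}}{\times}A_{2})\underset{C}{\otimes}B$ is identified with the kernel of $(A_{1}\underset{C}{\otimes}B)\oplus(A_{2}\underset{C}{\otimes}B)\rightarrow A_{0}\underset{C}{\otimes}B$, which is exactly $(A_{1}\underset{C}{\otimes}B)\underset{A_{0}\underset{C}{\otimes}B}{\times}(A_{2}\underset{C}{\otimes}B)$, and this identification agrees with the map $d$ defined above.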
This lemma shows that the fibered product functor, induces an additive homomorphism on tangent spaces. To check linearity with respect to scalar multiplication, take an element $a$ in the field $k$. Multiplication by $a$ is a ring homomorphism on $D$. This homomorphism induces a morphism from $S\underset{k}{\times}D$ to $S\underset{k}{\times}D$ and scalar multiplication on $t_{D_{X}}$, comes from composition of this map with $\pi$. In other words this gives a map from $\mathcal{X}\underset{S}{\times}Y$ into $\mathcal{X}\underset{S}{\times}Y$. These together give the linearity of homomorphism induced from $F$ with respect to scalar multiplication.\
This observation together with the lemma \[lem1.2\], give the smoothness of the fibered product functor.
\[lem1.3\] Let $X$ and $Y$ be arbitrary schemes and assume that there exist morphisms $h:\eta_{1}\rightarrow\eta$ and $g:\eta_{2}\rightarrow\eta$, where $\eta$, $\eta_{1}$, $\eta_{2}$ are sheaves of $\mathcal{O}_{X}$-modules on the scheme $X$, and similarly morphisms $\rho_{1}\rightarrow\rho$ and $\rho_{2}\rightarrow\rho$ of sheaves of $\mathcal{O}_{Y}$-modules on $Y$. Then for any morphism $f:X\rightarrow Y$ we have the following isomorphisms: $$\begin{array}{ccc}
f_{*}(\eta_{1}\underset{\eta}{\times}\eta_{2})\!\!\!\!\!&\cong &\!\!\!\!\!f_{*}(\eta_{1}) \underset{f_{*}(\eta)}{\times}
f_{*}(\eta_{2}) \\
f^{*}(\rho_{1}\underset{\rho}{\times}\rho_{2})\!\!\!\!\!&\cong&\!\!\!\!\!
f^{*}(\rho_{1})\underset{f^*(\rho)}{\times}
f^{*}(\rho_{2}).
\end{array}$$
[ For the first isomorphism, it is enough to consider the definition of direct image of sheaves.\
To prove the second one, assume that $(M_{i})_{i\in I}, (N_{i})_{i\in I}$ and $(P_{i})_{i \in I}$ are direct systems of modules over a directed set $I$. We have to prove that $$\lim_{i\in I}(M_{i}\underset{P_{i}}{\times}N_{i})\cong (\lim_{i\in
I}(M_{i}))\underset{(\lim_{i\in I}(P_{i}))}{\times}(\lim_{i\in I}(N_{i})).$$ The above isomorphism can be proved by elementary calculations and using elementary properties of direct limits.]{}
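One way to organize these elementary calculations (a sketch in our own wording): the fibered product of modules fits into the exact sequence
$$0\longrightarrow M_{i}\underset{P_{i}}{\times}N_{i}\longrightarrow M_{i}\oplus N_{i}\longrightarrow P_{i},\qquad (m,n)\mapsto \overline{m}-\overline{n}.$$
Direct limits over a directed set are exact and commute with finite direct sums, so passing to the limit identifies $\lim_{i\in I}(M_{i}\underset{P_{i}}{\times}N_{i})$ with the kernel of $(\lim_{i\in I}M_{i})\oplus(\lim_{i\in I}N_{i})\rightarrow \lim_{i\in I}P_{i}$, which is the desired fibered product.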
\[exam1\] Let $f:X\rightarrow Y$ be a flat morphism of schemes. Then $f_{*}$ and $f^{*}$ are smooth functors.
In fact let $\eta$ be a coherent sheaf on $X$ and $\eta_{1}\in \operatorname{Coh}(X\underset{k}{\times}D)$ be a deformation of $\eta$. By these assumptions we would have: $$(f_{*}(\eta_{1}))\underset{D}{\otimes}k=f_{*}(\eta_{1}\underset{D}{\otimes}k)=f_{*}(\eta).$$ Moreover $f_{*}(\eta_{1})$ is flat over $D$, because $\eta_{1}$ is flat over $D$. This implies that $f_{*}$ satisfies the first condition of smoothness. The second one is the first isomorphism of lemma \[lem1.3\]. Therefore $f_{*}$ is smooth. Smoothness of $f^{*}$ is similar to that of $f_{*}$.
Assuming this notion of smoothness we can generalize another aspect of geometry to categories.
[**1.9 Definition:**]{} Let $C$ be a category with enough deformations. We define the tangent category of $C$, denoted by $TC$, as follows: $$\begin{array}{ccc}
\operatorname{Obj}(TC)\!\!\!\!\!\!\!\!\!\!&:=&\underset{c\in\operatorname{Obj}(C)}{\bigcup} T_{c}C\\
\operatorname{Mor}_{TC}(\upsilon,\omega)&:=&\!\!\!\!\!\!\!\!\!\!\operatorname{Mor}(V,W)
\end{array}$$ where by $T_{c}C$ we mean the tangent space of $D_{c}$. Moreover $\upsilon$ and $\omega$ are first order deformations of $V$ and $W$.
\[rem1\] (i) It is easy to see that a smooth functor induces a covariant functor on the tangent categories.\
(ii) Let $C$ be an abelian category. Then its tangent category is also abelian.
The following is a well known suggestion of A. Grothendieck: Instead of working with a space, it is enough to work on the category of quasi coherent sheaves on this space. This suggestion was formalized and proved by P. Gabriel for noetherian schemes and in its general form by A. Rosenberg. To do this, Rosenberg associates a locally ringed space to an abelian category $A$. In a special case he gets the following:
\[Th1.1\] Let $(X,\mathcal{O}_{X})$ be a locally ringed space and let $A=\operatorname{QCoh}(X)$. Then $$(\operatorname{Spec}(A),\mathcal{O}_{ \operatorname{Spec}(A)})=(X,\mathcal{O}_{X})$$ where $\operatorname{Spec}(A)$ is the ringed space which is constructed from an abelian category by A. Rosenberg.
[ See Theorem $(A.2)$ of [@A.; @L.; @R]. ]{}
The definition of the tangent category and Theorem 4 motivate the following questions, to which the authors have not yet found any positive or negative answer.\
[**Question 1:**]{} For a fixed scheme $X$ consider $T\operatorname{QCoh}(X)$ and $TX$, the tangent category of category of quasi coherent sheaves on $X$ and the tangent bundle of $X$ respectively. Can $TX$ be recovered from $T\operatorname{QCoh}(X)$ by Rosenberg construction?\
**Question 2:** Let $\mathcal{M}$ be a moduli family with moduli space $M$. Consider $\mathcal{M}$ as a category and consider its tangent category $T\mathcal{M}$. Is there a reconstruction from $T\mathcal{M}$ to $TM$?
Second Smoothness Notion
========================
**Definition 3.1 :** Let $F:\operatorname{Sch}/k\rightarrow \operatorname{Sch}/k$ be a functor with the following property:\
For any scheme X and an algebra $A\in \operatorname{Obj}(\textbf{Art})$, $F(\mathcal{X})$ is a deformation of $F(X)$ over $A$ if $\mathcal{X}$ is a deformation of $X$ over $A$.\
We say $F$ is smooth at $X$, if the morphism of functors $$\Theta_{X}:D_{X}\rightarrow D_{F(X)}$$ is a smooth morphism of functors in the sense of Schlessinger (See [@M.; @Sch.]). $F$ is said to be smooth if for any object $X$ of $\operatorname{Sch}/k$, the morphism of functors $\Theta_{X}$ is smooth.\
The following lemma describes more properties of smooth functors.
\[lem2.1\] $(a)$ Assume that $C_{1}$, $C_{2}$ and $C_{3}$ are multicategories over the category $\operatorname{Sch}/k$. Let $F_{1}:C_{1}\rightarrow C_{2}$ and $F_{2}:C_{2}\rightarrow C_{3}$ be smooth functors with the first notion. Then so is their composition.\
$(b)$ Let $F_{1}:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ and $F_{2}:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ be smooth functors with second notion. Then so is their composition.\
$(c)$ Let $F:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ and $G:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ be functors such that $F$ and $GoF$ are smooth with the second notion. Then $G$ is a smooth functor.\
$(d)$ Let $F,G,H: \operatorname{Sch}/k\rightarrow \operatorname{Sch}/k$ be smooth functors in the sense of the second notion with morphisms of functors $F\rightarrow G$ and $H\rightarrow G$ between them. Then the functor $F\underset{G}{\times}H$ is a smooth functor with the second one.
Part $(a)$ of lemma is trivial.\
$(b)$ Let $X\in \operatorname{Sch}/k$ and $B\rightarrow A$ be a surjective morphism in $\mathbf{Art}$. By smoothness of $F_{1}$, $F_{2}$ and by remark $2.4$ of [@M.; @Sch.], there exists a surjective map $$\Theta_{F_{2}(X),F_{2}oF_{1}(X)}: D_{F_{2}oF_{1}(X)}(B)\underset{D_{F_{2}oF_{1}(X)}(A)}{\times}D_{X}(A)\rightarrow D_{F_{1}(X)}(B)\underset{D_{F_{1}(X)}(A)}{\times}D_{X}(A)$$ such that we have $$\Theta_{X,F_{2}oF_{1}(X)}=\Theta_{F_{2}(X),F_{2}oF_{1}(X)}o\Theta_{X,F_{2}(X)}$$ in which $\Theta_{X,F_{2}(X)}$ is the surjective map induced by smoothness of $F_{2}$. From this equality it follows the map $\Theta_{X,F_{2}oF_{1}(X)}$ is surjective immediately.\
$(c)$ For a scheme $X$ in the category $\operatorname{Sch}/k$ consider a surjective morphism $B\rightarrow A$ in $\mathbf{Art}$. By smoothness of $F$, the morphism $D_{X}\rightarrow D_{F(X)}$ is a surjective morphism of functors. Now apply Proposition $(2.5)$ of [@M.; @Sch.] to finish the proof.\
$(d)$ Let $X\in \operatorname{Sch}/k$ and $B\rightarrow A$ be a surjective morphism in $\mathbf{Art}$. Consider the following commutative diagram:
$$\begin{array}{ccc}
D_{X} & \longrightarrow & D_{F(X)}\\
 & \searrow & \downarrow\\
 & & D_{G(X)}
\end{array}$$
Since the morphisms of functors $D_{X}\rightarrow D_{F(X)}$ and $D_{X}\rightarrow D_{G(X)}$ are smooth morphisms of functors, proposition $2.5(iii)$ of [@M.; @Sch.] implies that $D_{F(X)}\rightarrow D_{G(X)}$ is a smooth morphism of functors. Similarly $D_{H(X)}\rightarrow D_{G(X)}$ is a smooth morphism of functors. Again by $2.5(iv)$ of [@M.; @Sch.], the morphism of functors: $$D_{H(X)}\underset{D_{G(X)}}{\times}D_{F(X)}\rightarrow D_{H(X)}$$ is a smooth morphism of functors. Since in the diagram:
$$\begin{array}{ccc}
D_{X} & \longrightarrow & D_{H(X)}\underset{D_{G(X)}}{\times}D_{F(X)}\\
 & \searrow & \downarrow\\
 & & D_{H(X)}
\end{array}$$
the morphisms $D_{X}\rightarrow D_{H(X)}$ and $D_{H(X)}\underset{D_{G(X)}}{\times}D_{F(X)}$ are smooth morphisms of functors, part $(c)$ of this lemma implies that $D_{H(X)}\underset{D_{G(X)}}{\times}D_{F(X)}$ is smooth. This completes the proof.
\[rem22\] $\textbf{(i)}$ The same proof works to generalize part $(c)$ of lemma \[lem2.1\] as follows:\
$(\acute{c})$ Let $F:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ and $G:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ be functors with $GoF$ smooth and $F$ surjective in the level of deformations in the sense that for any $X\in \operatorname{Sch}/k$ and any $A\in \operatorname{Obj}(\mathbf{Art})$ the morphism $D_{X}(A)\rightarrow D_{F(X)}(A)$ is surjective in $\mathbf{Art}$. Then $G$ is smooth.\
$\textbf{(ii)}$ One may ask to find a criterion to determine smoothness of a functor. We could not get a complete answer to this question. But by the following fact, one may answer the question at least partially:\
A functor $F:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ is not smooth at $X$ if there exists an algebra $A\in \mathbf{Art}$ such that the map $D_{X}(A)\rightarrow D_{F(X)}(A)$ is not surjective (see [@M.; @Sch.]).
Theorem \[Th2\] relates the second smoothness notion to the hull of deformation functors. Recall the hull of a functor is defined in [@M.; @Sch.]. We need the following:
\[lem2.2\] Let $F: \mathbf{Art} \rightarrow \mathbf{Sets}$ be a functor. Then its hulls are non-canonically isomorphic if they exist.
[ See Proposition $2.9$ of [@M.; @Sch.]. ]{}
\[Th2\] Let $F:\operatorname{Sch}/k\rightarrow \operatorname{Sch}/k$ be a functor and for a scheme $X$ the functor $F$ has the following properties:\
$(a)$ $F(\mathcal{X})$ is a deformation of $F(X)$ if $\mathcal{X}$ is a deformation of $X$.\
$(b)$ The functor $F$ induces isomorphism on tangent spaces.\
Then $F$ is smooth at $X$ if and only if $(R,F(\xi))$ is a hull of $D_{F(X)}$ whenever $(R,\xi)$ is a hull of $D_{X}$.
[ To prove the Theorem it is enough to apply $(b), (c)$ of lemma \[lem2.1\], and lemma \[lem2.2\] to the functors $$\Theta_{X}:D_{X}\rightarrow D_{F(X)} \quad,\quad h_{R,X}:h_{R}\rightarrow D_{X} \quad,\quad h_{R,F(X)}:h_{R}\rightarrow D_{F(X)}.$$ ]{}
For a scheme $X$ let:
{pairs $(\mathcal{X},\Omega_{\mathcal{X}/k})$ in which $\mathcal{X}$ is an infinitesimal deformation of $X$ over $A$}
be the isomorphism classes of fibered deformations of $X$.\
In the following example we use this notion of deformations of schemes.
\[exam2\] The functor defined by: $$\begin{array}{ccc}
F:\operatorname{Sch}/k &\rightarrow& \operatorname{QCoh}\\
F(X)\!\!\!\!\!\!\!\!\!\!&=&\Omega_{X/k}
\end{array}$$ is a smooth functor.
Note that if one considers deformations of $\Omega_{X/k}$ as in the usual case, the above functor will not be smooth. The usual deformation of $\Omega_{X/k}$ can be described as a simultaneous deformation of an object and of the differential forms on that object. This observation is also valid for $TX$ and $\omega_{X}$ instead of $\Omega_{X/k}$.
\[rem2\] The first and second smoothness notions are in general different. Note that a functor which is smooth in the sense of the second notion induces surjective maps on tangent spaces. Since the morphism induced on tangent spaces by the first notion of smoothness is not necessarily surjective, a functor which is smooth in the sense of the first notion is not necessarily smooth in the sense of the second notion. Conversely, a functor which is smooth in the sense of the second notion is not necessarily smooth in the sense of the first notion, since the map induced on tangent spaces by the second notion is not necessarily a linear map. It is easy to see that example \[exam2\] is smooth with both of the notions, but examples \[exam0\] and \[exam1\] are smooth just in the sense of the first one.
A Geometric interpretation
--------------------------
Let $F$ be a smooth functor at $X$. By theorem \[Th2\], $X$ and $F(X)$ have the same universal rings and this can be interpreted as we are deforming $X$ and $F(X)$ simultaneously. Therefore we have an algebraic language for simultaneous deformations. The example \[exam2\] can be interpreted as follows: we are deforming a geometric space and an ingredient of that space, e.g. the structure sheaf of the space or its sheaf of relative differential forms, and these operations are smooth.
Relation with smoothness of a morphism
--------------------------------------
Let $\mathcal{M}$ be a moduli family of algebro-geometric objects with a variety $M$ as its fine moduli space and suppose $Y(m)\rightarrow M$ is the fiber over $m\in M$. With these assumptions we would have the following bijections: $$\begin{array}{ccc}
T_{m,M}&\cong&\mbox{Hom}(\operatorname{Spec}(k[\epsilon]),M)\\
&\cong&\{\mbox{classes of first order deformations of X over A} \}
\end{array}$$ In fact these bijections explain why deformations are important in geometric applications. Now suppose we have two moduli families $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ with varieties $M_{1}$ and $M_{2}$ as their fine moduli spaces. Regard $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ as categories and suppose there exists a smooth functor $F$ between them. In this setting, if we have a morphism between the moduli spaces induced from $F$, then it is a smooth morphism.
Third Smoothness Notion
=======================
This notion of smoothness is completely motivated from Rosenberg’s reconstruction theorem, Theorem $(A.2)$ of [@A.; @L.; @R]. For this notion of smoothness we do not use deformation theory.
**3.1 Definition:** Let $F:C_{1} \rightarrow C_{2}$ be a functor between abelian categories such that there exists a morphism $$f:\operatorname{Spec}(C_{1})\rightarrow \operatorname{Spec}(C_{2})$$ induced by the functor $F$. We say $F$ is a smooth functor if $f$ is a smooth morphism of schemes.
\[rem3\] $(a)$ Since this smoothness notion uses a language completely different from the two previous ones, it implies neither of them, and vice versa. We have not verified this claim in detail, but it is not reasonable to expect that this smoothness implies the previous ones, because deformation theory is not consistent with the Rosenberg construction. This observation together with remark \[rem2\] shows that these three notions are independent of each other, each having a nice geometric and algebraic meaning in its own right.\
$(b)$ It seems that a functor of abelian categories induces a morphism of schemes only in rare cases. But the cases in which this happens are important enough to be considered. Here we mention some cases in which this happens.\
**(i)** Let $f:X \rightarrow \operatorname{Spec}(k)$ be a morphism of finite type between schemes. Then it can be shown that $f$ is induced by $$f_{*}:\operatorname{QCoh}(X) \rightarrow
\operatorname{QCoh}(\operatorname{Spec}(k))$$ by Rosenberg’s construction. This example is important because it can be a source of motivation to translate notions from the commutative case to the noncommutative one.\
**(ii)** Also the following result of Rosenberg is worth noting:\
Let $A$ be an abelian category.\
(a) For any topologizing subcategory $T$ of $A$, the inclusion functor $T\rightarrow A$ induces an embedding $\operatorname{Spec}(T) \rightarrow
\operatorname{Spec}(A)$.\
(b) For any exact localization $Q:A \rightarrow A/S $ and for any $P \in
\operatorname{Spec}(A)$, either $P \in \operatorname{Obj}(S)$ or $Q(P)\in \operatorname{Spec}(A/S)$; hence $Q$ induces an injective map from $\operatorname{Spec}(A)-\operatorname{Spec}(S)$ to $\operatorname{Spec}(A/S)$.
[ See Proposition $(A.0.3)$ of [@A.; @L.; @R]. ]{}
**Acknowledgements:** The authors are grateful to the referee(s) for carefully reading the paper, and for their notable remarks and valuable suggestions.
[99]{} J. Harris, I. Morrison, Moduli of Curves, Graduate Texts in Mathematics, Springer-Verlag, 1994.
R. Hartshorne, Deformation Theory, Springer-Verlag, 2010.
R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics, Springer-Verlag, 1977.
W. Lowen, M. V. Bergh, Deformation theory of Abelian categories, Trans. AMS, v.358, n.12, p.5441-5483, 2006.
M. Manetti, Extended deformation functors, arXiv:math.AG/9910071v2, 16 Mar 2001.
H. Matsumura, Commutative Ring Theory, Cambridge University Press, 1986.
A. L. Rosenberg, Noncommutative schemes, Compositio Mathematica **112**: 93-125, 1998.
M. Schlessinger, Functors of Artin rings, Trans. AMS **130**, 1968, 208-222.
K. Schwede, Gluing schemes and a scheme without closed points, unpublished, math.stanford.edu.
E. Sernesi, An Overview of Classical Deformation Theory, Notes from seminars Algebraic geometry 2000/2001, Univ. La Sapienza.
E. Sernesi, Deformations of schemes, Grundlehren der mathematischen Wissenschaften, Vol. 334, Springer-Verlag, 2006.
---
author:
- |
Fei Miao, Quanyan Zhu, Miroslav Pajic, and\
George J. Pappas, [^1][^2].
title: '**Coding Schemes for Securing Cyber-Physical Systems Against Stealthy Data Injection Attacks**'
---
[^1]: This material is based on research sponsored by DARPA under agreement number FA8750-12-2-0247. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. Part of the results in this work appeared at the 53rd Conference on Decision and Control, Los Angeles, CA, USA, December 2014 [@code_cdc14].
[^2]: F. Miao and G. J. Pappas are with the Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA 19014. Q. Zhu is with the Department of Electrical and Computer Engineering, New York University, Brooklyn, NY, USA 11201. M. Pajic is with the Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA 27708. Email: {miaofei, pappasg}@seas.upenn.edu,{quanyan.zhu}@nyu.edu, {miroslav.pajic}@duke.edu
---
abstract: 'A mean field SDW analysis of the pseudogap in the underdoped cuprates is proposed on the basis of the $t-t^{\prime}-U$ Hubbard model. The result of our theory is consistent with the experiment quite well within the accuracy of the present experimental measurements. Therefore we conclude that the pseudogap phenomenon in the underdoped cuprates can be well explained within the mean field approximation.'
address:
- 'Department of Physics, Anhui University, Hefei 230039, People’s Republic of China'
- 'Department of Physics, University of Science and Technology of China, Hefei 230026, People’s Republic of China'
author:
- Ping Lou
- 'Hang-sheng Wu'
title: Study of pseudogap in underdoped cuprate
---
22 cm 15.5 cm
[**1. Introduction**]{}
A large body of experimental investigations has indicated that underdoped superconductors exhibit intriguing properties at temperatures above the superconducting transition temperature $T_c$. Most notably, the underdoped cuprates exhibit a pseudogap behavior below a characteristic temperature $T^\ast$ which can be well above the superconducting transition temperature $T_c$. The so-called pseudogap means a partial gap. “An example of such a partial gap would be a situation where, within the band theory approximation, some regions of the Fermi surface become gapped while other parts retain their conducting properties and, with increased doping, the gapped portion diminishes and the materials become more metallic” (quoted from Ref. [@1]). What is the origin behind it? A number of scenarios like pairing well above $T_c$ [@Randeria92a; @Emery; @Ranninger], spin–charge separation [@Lee; @Dai], spin-density wave (SDW) order or fluctuations [@Pines; @Kampf] have been proposed as possible origins of these pseudogap phenomena. However, no consensus has been reached so far on which of these microscopic theories is correct. It should be noted that these theories of the pseudogap are all beyond the mean-field approximation.
In the present paper, we propose a mean field SDW analysis for understanding the pseudogap phenomena in the underdoped cuprates. Our aim is to examine to what extent we can interpret this phenomenon within mean field theory.
[**2. Electronic band structure**]{}
Soon after the discovery of the cuprate superconductors, the electronic band structure of the cuprates was calculated by the local density approximation (LDA) band calculation [@Massidda; @Hamada; @Freeman; @Pickett]. The LDA band structure of the cuprates is consistent with the later angle resolved photoemission experiments [@Campuzanoet; @Dessau; @King; @Shen]. The electronic band structure of the cuprates can be well fitted by a tight-binding model, which is written as $$\overline{\varepsilon}_{\bf k}=-2t(\cos k_x+\cos k_y)-4t^{\prime}\cos k_x\cos k_y.
\label{1}$$ Here $t$ is the nearest-neighbor and $t^\prime$ the next-to-nearest-neighbor hopping. In this paper we consider $t>0$ and $t^{\prime}<0$ only. Energy contour lines for the electronic band structure (\[1\]) are shown in Fig.$\;$\[fig1\]. There are two different saddle points located at the $\overline{M}$ points \[($\pm\pi$, 0) and (0, $\pm\pi$)\] of the Brillouin zone. The energy contour line with energy $\overline{\varepsilon}_{s}=4t^{\prime}$ passes through the saddle points.
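To make the saddle-point statement explicit, one can verify it directly from Eq.$\;$(\[1\]) (a short check in our own notation): at ${\bf k}=(\pi,0)$,
$$\overline{\varepsilon}_{(\pi,0)}=-2t(\cos\pi+\cos 0)-4t^{\prime}\cos\pi\cos 0=4t^{\prime},\qquad
\left.\frac{\partial\overline{\varepsilon}_{\bf k}}{\partial k_x}\right|_{(\pi,0)}
=\left.\bigl(2t\sin k_x+4t^{\prime}\sin k_x\cos k_y\bigr)\right|_{(\pi,0)}=0,\qquad
\left.\frac{\partial\overline{\varepsilon}_{\bf k}}{\partial k_y}\right|_{(\pi,0)}=0,$$
while the second derivatives at this point are
$$\left.\frac{\partial^{2}\overline{\varepsilon}_{\bf k}}{\partial k_x^{2}}\right|_{(\pi,0)}=-2t-4t^{\prime},\qquad
\left.\frac{\partial^{2}\overline{\varepsilon}_{\bf k}}{\partial k_y^{2}}\right|_{(\pi,0)}=2t-4t^{\prime},\qquad
\left.\frac{\partial^{2}\overline{\varepsilon}_{\bf k}}{\partial k_x\partial k_y}\right|_{(\pi,0)}=0.$$
For $t>0$ and $-t/2<t^{\prime}<0$ (which covers the values $t^{\prime}/t=-0.16$ and $-0.18$ used below) the two curvatures have opposite signs, so $(\pi,0)$ is indeed a saddle point with energy $4t^{\prime}$; by symmetry the same holds at the other $\overline{M}$ points.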
For convenience, we choose $\overline{M}$ as the new origin of the ${\bf k}$-space and take the energy $\overline{\varepsilon}_{s}=4t^{\prime}$ at the saddle point $\overline{M}$ as zero. Then the dispersion (\[1\]) is reexpressed in the form $$\begin{aligned}
\varepsilon_{\bf k}&=&\overline{\varepsilon}_{\bf k}-4t^{\prime}\nonumber\\
&=&-2t(-\cos k_x+\cos k_y)+4t^{\prime}(\cos k_x\cos k_y-1).
\label{2} \end{aligned}$$ Unless stated otherwise, we keep this usage in the following.
We replot in the period Brillouin zone the energy contour lines passing through the saddle points. As shown in Fig.$\;$\[fig2\], there are two different regions: ${\bf I+I}^\prime$ and ${\bf II}$. In the region ${\bf II}$, $\varepsilon_{\bf k}<0$; in the regions ${\bf I+I}^\prime$, $\varepsilon_{\bf k}>0$. The area of the region ${\bf I+I}^\prime$ is larger than that of the region ${\bf II}$. When the region ${\bf II}$ is shifted by the vector ${\bf Q}=(\pi,\pi)$, it coincides with the region ${\bf I}$. The region ${\bf I}'$ is called the necklace region, which has the following features. Firstly, when ${\bf k}$ is located in a bubble, ${\bf k+Q}$ will be located in another one, and both $\varepsilon_{\bf k}$ and $\varepsilon_{\bf k+Q}$ are larger than zero. On the other hand, in the regions outside the necklace region ${\bf I}'$, the signs of $\varepsilon_{\bf k}$ and $\varepsilon_{\bf k+Q}$ are always opposite. For example, when ${\bf k}$ is located in ${\bf I}$, $\varepsilon_{\bf k}>0$, then ${\bf k+Q}$ will be located in ${\bf II}$, $\varepsilon_{\bf k+Q}<0$. Secondly, in the overdoping regime, the Fermi surface entirely lies outside the necklace region (as shown in Fig.$\;$\[fig3\]). But for the underdoping case, only part of the Fermi surface lies outside the necklace region, and further, with decreased doping the portion outside the necklace region increases (as shown in Fig.$\;$\[fig3\]).
It is of interest to note that when $t^\prime=0$, the necklace region of the band structure of the cuprates disappears.
[**3. Mean-field theory**]{}
The starting point of our calculation is the Hubbard model. In the momentum representation, the $t-t^{\prime}-U$ Hubbard model can be written as [@Fradkin] $$H=\sum_{{\bf k}\sigma}(\varepsilon_{{\bf k}}-\mu)a_{{\bf k}\sigma}^{\dagger}a_{{\bf k}\sigma}-\;\frac{U}{2N}\sum_{\bf q}\sum_{{\bf k}\sigma{\bf k'}\sigma'}a_{{\bf k+q}\sigma}^{\dagger}\sigma a_{{\bf k}\sigma}a_{{\bf k'}\sigma'}^{\dagger}\sigma' a_{{\bf k'+q}\sigma'}.
\label{3}$$ Here a term $\frac{1}{2}NU$ has been omitted. $U$ is the local Coulomb repulsion. $a_{{\bf k}\sigma}(a_{{\bf k}\sigma}^{\dagger})$ is the annihilation (creation) operator for the electron with momentum ${\bf k}$ and spin $\sigma$. $\mu$ is the chemical potential. $\varepsilon_{{\bf k}}$ is given by Eq.$\;$(\[2\]). All the momentum summations extend over the Brillouin zone. Considering commensurate SDW state and using the mean-field approximation, the Hamiltonian reduces $$H={\sum_{{\bf k}\sigma}}'(\varepsilon_{{\bf k}}-\mu)a_{{\bf k}\sigma}^{\dagger}a_{{\bf k}\sigma}+{\sum_{{\bf k}\sigma}}'(\varepsilon_{\bf k+Q}-\mu)a_{{\bf k+Q}\sigma}^{\dagger}a_{{\bf k+Q}\sigma}-\Delta {\sum_{{\bf k}\sigma}}'(a_{{\bf k+Q}\sigma}^{\dagger}\sigma a_{{\bf k}\sigma}+h.c).
\label{4}$$ Here $\sum_{\bf k}'$ means that the sum extends over the magnetic Brillouin zone (shown in Fig.$\;$\[fig4\] by the thick square). The term $\frac{N}{2U} \Delta^{2}$ has been omitted. The order parameter $\Delta$ is given by $$\Delta=\frac{2U}{N}{\sum_{{\bf k}\sigma}}'<a_{{\bf k}+{\bf Q}\sigma}^{\dagger}\sigma a_{{\bf k}\sigma}>.$$
By the following canonical transformation
$$\begin{aligned}
\alpha_{\bf k\sigma} & = & u_{\bf k}a_{{\bf k}\sigma}-v_{\bf k}\sigma a_{{\bf k+Q}\sigma}\ , \nonumber \\
\gamma_{{\bf k}\sigma}& = & v_{\bf k}\sigma a_{{\bf k}\sigma}+u_{\bf k} a_{{\bf k+Q}\sigma}\ ,
\label{canon}\end{aligned}$$
the Hamiltonian (\[4\]) is diagonalised as $$H={\sum_{{\bf k}\sigma}}'(\varepsilon_{1}({\bf k})\alpha^{\dagger}_{{\bf k}\sigma}\alpha_{{\bf k}\sigma}+\varepsilon_{2}({\bf k})\gamma^{\dagger}_{{\bf k}\sigma}\gamma_{{\bf k}\sigma}),$$ in which, $$\varepsilon_{1}({\bf k})=\frac{\varepsilon_{{\bf k}}+\varepsilon_{\bf k+Q}}{2}-\mu+\sqrt{(\frac{\varepsilon_{{\bf k}}-\varepsilon_{\bf k+Q}}{2})^2+\Delta^2},\label{8}$$ $$\varepsilon_{2}({\bf k})=\frac{\varepsilon_{{\bf k}}+\varepsilon_{\bf k+Q}}{2}-\mu-\sqrt{(\frac{\varepsilon_{{\bf k}}-\varepsilon_{\bf k+Q}}{2})^2+\Delta^2},\label{9}$$ $$\Delta=\frac{U}{N}{\sum_{{\bf k}}}'
\frac{\Delta}{E({\bf k})}
(\tanh (\frac{\varepsilon_{1}({\bf k})}{2T})-
\tanh (\frac{\varepsilon_{2}({\bf k})}{2T}))\label{10}$$ and $$E({\bf k})=\sqrt{(\frac{\varepsilon_{{\bf k}}-\varepsilon_{\bf k+Q}}{2})^2+\Delta^2}.
\label{11}$$ Here $\varepsilon_{1}({\bf k})$ and $\varepsilon_{2}({\bf k})$ are the energy dispersions of the quasiparticles. For the hole-doped system, the Fermi surface lies inside the lower band ($\varepsilon_{2}({\bf k})$). The pseudogap is given by
$$\begin{aligned}
\Delta_{PS}(\phi)&=&|\varepsilon_{2}({\bf k})|\nonumber\\
&=&
\mu-4t^{\prime}(\cos k_x\cos k_y-1)+\sqrt{4t^2 (\cos k_x-\cos k_y)^2+\Delta^2}.
\label{12}\end{aligned}$$
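For completeness, the algebra behind Eq.$\;$(\[12\]) is elementary (a short intermediate step in our own notation): from Eq.$\;$(\[2\]),
$$\varepsilon_{\bf k+Q}=-2t(\cos k_x-\cos k_y)+4t^{\prime}(\cos k_x\cos k_y-1),$$
so that
$$\frac{\varepsilon_{\bf k}+\varepsilon_{\bf k+Q}}{2}=4t^{\prime}(\cos k_x\cos k_y-1),\qquad
\frac{\varepsilon_{\bf k}-\varepsilon_{\bf k+Q}}{2}=2t(\cos k_x-\cos k_y).$$
Inserting these two identities into Eq.$\;$(\[9\]) and taking $|\varepsilon_{2}({\bf k})|$ for ${\bf k}$ on the Fermi surface gives Eq.$\;$(\[12\]).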
In Fig.$\;$\[fig5\] we plot the part of the magnetic Brillouin zone of the Fig.$\;$\[fig4\]. The light curve represents the Fermi surface. $k_x$- and $k_y$-axis are parallel with $\overline{M}\Gamma$ and $\overline{M}$X, respectively. In Eq.$\;$(\[12\]), ${\bf k}=(k_x, k_y)$ is the wave vector of the Fermi surface, i.e. $\varepsilon_{{\bf k}}-\mu=0$. $\phi=\arctan (k_x/k_y)$ is polar angle of the wave vector ${\bf k}$. For convenience, we take $\phi^{\prime}=\arctan(k_x/(\pi-k_y))$ as variable instead of the $\phi$ in the following calculations.
[**4. Results**]{}
In this section, we analyse the angular dependence of the pseudogap $\Delta_{PS}(\phi')$ along the Fermi surface. Owing to the symmetry of the energy spectrum $\varepsilon_{2}({\bf k})$, our analysis can be limited to the interval $0\leq\phi'\leq45^\circ$.
By solving Eqs.$\;$(\[8\]), (\[9\]), (\[10\]), (\[11\]) and (\[12\]) numerically, we compute $\Delta_{PS}(\phi')$ at $T=0$ K in the underdoping regime ($\mu>0$). In the computation, we choose $t'/t=-0.18$, $U/t=0.8$ and the hole doping concentration x=0.13. The results are plotted as a $\Delta_{PS}(\phi')$ versus $\phi^{\prime}$ curve in Fig.$\;$\[fig6\]. It shows that there is a strong angular dependence of $\Delta_{PS}(\phi')$ as one moves along the Fermi surface from $\phi^{\prime}=0$ (i.e. near the saddle point $\overline{M}$, or at the hot spot) to $\phi^{\prime}=45^\circ$ (i.e. the cold spot). At first, we see the maximum pseudogap at $\phi^{\prime}=0$. As we go into the necklace region, the pseudogap drops quickly and, at approximately $18^\circ$, drops down to 2 meV. And then, the pseudogap decreases monotonously to $\Delta_{PS}(45^\circ)$. Experimental measurements indicate that only a portion of the Fermi surface near the saddle point $\overline{M}$ becomes gapped while in other parts, the pseudogap is equal to zero [@1; @Norman]. However, the uncertainty of the pseudogap data is rather large[^1]$^)$. It is impossible to say with certainty whether, along the part of the Fermi surface near the cold spot, the pseudogap is really zero or only a small quantity. Keeping this fact in mind, we conclude that the general structure of the pseudogap along the Fermi surface, shown in Fig.$\;$\[fig6\], captures the main feature of the experiment [@Norman][^2]$^)$.
The dependence of $\Delta_{PS}(0)$ on the hole doping concentration is shown in Fig.$\;$\[fig7\]. It shows that $\Delta_{PS}(0)$ increases with the decrease of hole doping. In Fig.$\;$\[fig8\], we plot the $\Delta\phi'$ versus hole doping concentration curve. Here, $\Delta\phi'$ is the interval of $\phi^{\prime}$ (measured from $\phi^{\prime}=45^\circ$) defined by the requirement that the value $\Delta_{PS}(\phi')$ is less than a properly chosen value (say, 2 meV in Figs.$\;$\[fig6\] and \[fig8\]). It can be seen from Fig.$\;$\[fig8\] that the length of the Fermi arc, along which the pseudogap is less than 2 meV, increases with increasing doping. It implies that as doping increases, the portion of the Fermi surface destroyed by the pseudogap decreases. The prediction described above is consistent with the experiment [@Norman]$^{2)}$.
[**5. Concluding remarks**]{}
It is of interest to note that the situation is entirely different if $t^{\prime}=0$. For in this case, Eq.$\;$(\[12\]) reduces to $$\Delta_{PS}(\phi^{\prime})=\mu+\sqrt{\mu^{2}+\Delta^{2}}.\label{13}$$ It is in contradiction with the experiment [@1; @Norman], for the pseudogap along the Fermi surface, according to (\[13\]), is constant.
Now, it is clear that the peculiarity of the band structure of the cuprates plays an important role in understanding the pseudogap phenomenon in the underdoped cuprates. This is the reason why our mean field SDW analysis of the pseudogap, based on the $t-t^{\prime}-U$ Hubbard model, meets with success.
The mean-field solution has antiferromagnetic long-range order. At sufficient doping concentration, the spin long-range order will be removed by fluctuations, but there are still short-range orderings. We assume implicitly in our theory that the gap structure, at least near the saddle point ($\pi,0$), is not sensitive to the long-range order and will survive in the underdoped region, leading to the pseudogap behavior.
One of the authors (H. S. Wu) would like to thank Prof. Z. Y. Weng for very valuable discussion.
T. Timusk and B. Statt, Rep. Prog. Phys. [**61**]{}, 61 (1999).
M. Randeria et al., Phys. Rev. Lett. [**69**]{}, 2001 (1992).
V. J. Emery and S. A. Kivelson, Nature [**374**]{}, 434 (1995).
J. Ranninger and J. M. Robin, Phys. Rev. B [**53**]{}, 11961 (1996).
P. A. Lee, N. Nagaosa, T. K. Ng and X.-G. Wen, Phys. Rev. B [**57**]{}, 6003 (1998).
Xi Dai and Zhao-Bin Su, Phys. Rev. Lett. [**81**]{}, 2136 (1997).
D. Pines, preprint cond–mat/9702187.
A. Kampf and J. R. Schrieffer, Phys. Rev. B [**41**]{}, 6399 (1990).
J. Yu, S. Massidda, A. J. Freeman, and D. D. Koelling, Phys. Lett. A [**122**]{}, 203 (1987).
S. Massidda, N. Hamada, J. Yu, and A. J. Freeman, Physica C [**157**]{}, 571 (1989).
A. J. Freeman and J. Yu, Physica B [**150**]{}, 50 (1988).
W. E. Pickett, Rev. Mod. Phys. [**61**]{}, 433 (1989).
J. C. Campuzano et al., Phys. Rev. Lett. [**64**]{}, 2308 (1990).
D. S. Dessau et al., Phys. Rev. Lett. [**71**]{}, 2781 (1993).
D. M. King et al., Phys. Rev. Lett. [**73**]{}, 3298 (1994).
Z.-X. Shen and D. S. Dessau, Physics Reports [**253**]{}, 1 (1995).
E. Fradkin, Field Theories of Condensed Matter Systems (Addison-Wesley, 1991), Chapter 3.
M. R. Norman et al., preprint cond–mat/9710163.
[**Figure Captions:**]{}
[Fig.$\,$\[fig1\].]{} The Brillouin zone and energy contour lines: The $\Gamma$ point is at the middle of the Brillouin zone and the $\overline{M}$ points \[($\pm\pi$, 0) and (0, $\pm\pi$)\] are midway along the edges. The curves are the energy contour lines ($t^{\prime}/t=-0.16$) with energy , which are from inside to outside.
[Fig.$\;$\[fig2\].]{} The period Brillouin zone: The solid curves are the energy contour lines with $\varepsilon_{\bf k}=0$. When the region ${\bf II}$ is shifted by the vector ${\bf Q}=(\pi,\pi)$, it coincides with the region ${\bf I}$. In regions ${\bf I+I'}$, $\varepsilon_{\bf k}>0$. In the region ${\bf II}$, $\varepsilon_{\bf k}<0$. The region ${\bf I}'$ is called as the necklace region.
[Fig.$\;$\[fig3\].]{} The Fermi surfaces ($t^{\prime}/t=-0.16$) in the quarter of Brillouin zone: The light curve represents the Fermi surface. 1 and 2 for the overdoping. 3 and 4 for the underdoping. The heavy dashed and the solid curve represent the necklace region boundary.
[Fig.$\;$\[fig4\].]{} Our choice of the Brillouin zone. The heavy rectangle is our choice of the Brillouin zone boundary and the origin is at the $\overline{M}$. The heavy square is the magnetic Brillouin zone boundary.
[Fig.$\;$\[fig5\].]{} This figure is the part of the magnetic Brillouin zone of the Fig. 4. The light curve represents the Fermi surface for the underdoping region ($t^{\prime}/t=-0.16$). The dashed and the solid curve represent the necklace region boundary. The heavy solid lines are the magnetic Brillouin zone boundary. $k_x$- and $k_y$-axis are parallel with $\overline{M}\Gamma$ and $\overline{M}$X, respectively. $\phi=\arctan (k_x/k_y)$. $\phi^{\prime}=\arctan(k_x/(\pi-k_y))$.
[Fig.$\;$\[fig6\].]{} The angle dependence of the pseudogap located at $\phi^{\prime}$, $\Delta_{PS}(\phi')$, for the hole doping concentration x=0.13 ($t^{\prime}/t=-0.18$ and $U/t=0.8$). The $\Delta\phi'$ is the region where the values of $\Delta_{PS}(\phi')$ are all smaller than 2 meV.
[Fig.$\;$\[fig7\].]{} The hole doping concentration dependence of the pseudogap located at $\phi^{\prime}=0$, $\Delta_{PS}(0)$, for $t^{\prime}/t=-0.18$ and $U/t=0.8$. The x indicates the hole doping concentration. The solid curve represents the pseudogap in the underdoping region.
[Fig.$\;$\[fig8\].]{} The hole doping concentration dependence of the region where the values of $\Delta_{PS}(\phi')$ are all smaller than 2 meV, $\Delta\phi'$, for the underdoping region ($t^{\prime}/t=-0.18$ and $U/t=0.8$). The x indicates the hole doping concentration.
[^1]: $^)$See, for example, Fig.$\;$\[fig8\] of paper [@1]
[^2]: $^)$See also the review article [@1] and the papers listed in it
---
abstract: 'We consider a single Brownian particle in a spatially symmetric, periodic system far from thermal equilibrium. This setup can be readily realized experimentally. Upon application of an external static force $F$, the average particle velocity is negative for $F>0$ and positive for $F<0$ (absolute negative mobility).'
address: 'Universität Augsburg, Theoretische Physik I, Universitätsstr. 1, D-86135 Augsburg, Germany'
author:
- 'Ralf Eichhorn, Peter Reimann, and Peter Hänggi'
title: Brownian motion exhibiting absolute negative mobility
---
PACS: 05.40.-a, 05.60-k, 02.50Ey
When a system at rest is perturbed by a static force, we expect that it responds by moving into the direction of that force. The rather surprising opposite behavior in the form of a permanent motion against a (not too large) static force of whatever direction is called [*absolute negative mobility*]{} (ANM) [@ban72; @rei99]. If the unperturbed system is at thermal equilibrium then ANM is impossible since it could be exploited to construct a perpetuum mobile of the second kind. Legendary but quite complex non-equilibrium systems which do exhibit ANM are “donkeys” [@cle01]. In this Letter we demonstrate that even a simple, structureless Brownian particle can exhibit ANM away from thermal equilibrium in a setup which can be easily realized experimentally — a scenario hitherto commonly considered as impossible.
Going [*in medias res*]{}, let us consider the overdamped 2-dim. Brownian motion $$\begin{aligned}
\eta\,\dot x(t) & = & -\partial_x V(x(t),y(t))+\xi_x(t)\nonumber\\
\eta\,\dot y(t) & = & -\partial_y V(x(t),y(t))+\xi_y(t)+F\ ,
\label{10}\end{aligned}$$ where $\eta$ is the viscous friction coefficient and $V(x,y)$ is the hard-wall potential from Fig. 1a, confining the Brownian motion to a “corridor” along the $y$-axis with [*symmetric*]{}, [*periodic*]{} obstacles. Further, $\xi_x(t)$ are thermal fluctuations, modeled by unbiased Gaussian white noise with $\langle\xi_x(t)\,\xi_x(s)\rangle = 2\eta\, k_B T\,\delta(t-s)$, where $k_B$ denotes Boltzmann’s constant and $T$ the temperature. If $\xi_y(t)$ is an independent, second thermal white noise source then (\[10\]) is an equilibrium system and the average particle current $$\langle\dot y\rangle :=
\left\langle \lim_{t\to\infty}\frac{y(t)-y(t_0)}{t-t_0}\right\rangle
\label{20}$$ always runs into the direction of the static force $F$. Fig. 1b depicts the corresponding $\langle\dot y\rangle$-$F$-characteristics for the simplest non equilibrium model, namely a [*symmetric*]{} dichotomous noise $\xi_y(t)$, with the promised ANM as its most outstanding feature.
Our first remark is that the so-called ratchet effect [@rei00] is characterized by a current $\langle\dot y\rangle$ which is non-zero for $F=0$ and does not change its direction within an entire neighborhood of $F=0$. This effect thus inevitably involves some kind of asymmetry (for $F=0$), whereas our present system is perfectly symmetric. Second, so-called [*differential*]{} negative mobility (or resistance) [@bal95] is typified by a negative slope of the $\langle\dot y\rangle$-$F$-characteristics [*away*]{} from $F=0$. Thus, in both cases the salient feature of ANM is absent, namely a current $\langle\dot y\rangle$ which is always opposite to the (not too large) force $F$, independently of whether $F$ is positive or negative. Finally, we mention that ANM has also been observed in semiconductor devices [@ban72] and in models for coupled Brownian motors [@rei99; @cle01]. However, in the first case it has an entirely quantum mechanical origin and in the second case it is a genuine collective effect, without leaving room for any kind of [*classical, single-particle*]{} counterpart. Accordingly, the respective physical mechanisms are completely different from ours.
Returning to the ANM in Fig. 1b, its origin can be understood as follows: Consider a (moderately large) time interval $\tau$ during which the dichotomous noise $\xi_y(t)$ in (\[10\]) is constant and $F_{tot}:=\xi_y(t)+F>0$. When starting in one of the “corners” between the right “corridor wall” and any of the adjacent obstacles (see Fig. 1a), the particle first closely follows the right “corridor wall” until it hits the next obstacle, then “slides down the back” of that obstacle, and afterwards performs a “free fall” in the $y$-direction. Since the lateral extension of the obstacles $b$ exceeds half the corridor width $B/2$ the particle then hits with a high probability $q$ the next obstacle on its way and ends up by being trapped in the corresponding (left) “corner”. In order to avoid this trap, the particle has to thermally diffuse at least over a distance $b-(B-b)=2b-B$ in the positive $x$-direction during its “free fall” in the $y$-direction. With [*increasing*]{} force $F_{tot}$, the available time and therefore the probability $p:=1-q$ of such a diffusive displacement [decreases]{}, implying that the particle travels on the average a [*shorter*]{} distance along the $y$-axis during the time $\tau$. Since an analogous consideration applies for time-intervals $\tau$ with $F_{tot}=\xi_y(t)+F<0$, the particle motion on the average acquires a bias in the direction opposite to the static force $F$, i.e. it exhibits ANM.
In order to quantify this argument, we first note that the above mentioned probability $p$ of avoiding a trap (for $F_{tot}>0$) can be approximated as [@f1] $$p(F_{tot})=\frac{1}{2}-\frac{1}{2}\,\mbox{erf}
\left(\frac{2b-B}{\sqrt{2Lk_BT}}\sqrt{F_{tot}}\right)
\label{30}$$ where $\mbox{erf}(x):=2\pi^{-1/2}\int_0^x e^{-y^2}\, dy$. With probability $p$, a particle thus covers in addition to the “basic distance” of approximately $3L/2$ another period $L$ (see Fig. 1a). It then avoids the second trap on its way with approximately the same (relative) probability $p$ as in (\[30\]), i.e. a second period $L$ is covered with (absolute) probability $p^2$ etc., see Fig. 1a. If the maximal traveling distance (avoiding all traps) is of the form $(3/2+N)L$ with $N\in{\Bbb N}$, the average traveling distance $\Delta y(\tau,F_{tot})$ thus follows as $L[3/2+p+p^2+...+p^N]$. Neglecting that the “free traveling speed” $v_y:=F_{tot}/\eta$ is slightly reduced when the particle “slides down the back” of an obstacle, we obtain $(3/2+N)L=v_y\tau$ and hence $$\Delta y (\tau,F_{tot}) = L\left\{\frac{1}{2}+
\frac{1-[p(F_{tot})]^{\frac{F_{tot}\tau}{\eta\, L}-\frac{1}{2}}}{1-p(F_{tot})}
\right\} \ .
\label{40}$$ This expression remains a decent interpolation even if $v_y\tau$ is not of the form $(3/2+N)L$. Symmetrically, for $F_{tot}<0$ the average traveling distance is $-\Delta y (\tau, -F_{tot})$, implying for the net average current (\[20\]) the approximation $$\langle\dot y\rangle =
\frac{\int_0^\infty d\tau \, \rho(\tau) [\Delta y (\tau, A+F)- \Delta y (\tau,A-F)]}
{2\, \int_0^\infty d\tau \, \rho(\tau)\, \tau}\ ,
\label{50}$$ where $\pm A$ are the two states of the dichotomous noise $\xi_y(t)$ and $\rho(\tau) = \gamma e^{-\gamma\tau}$ is the distribution of sojourn times, i.e. $\gamma$ is the flip rate.
The agreement of our analytic prediction (\[50\]) with the simulations in Fig. 1b is remarkably good in view of the various underlying approximations. In particular, our assumption that the particle covers at least a distance $3L/2$ during the time $\tau$ renders (\[40\]) doubtful unless $v_y\tau=F_{tot}\tau/\eta>3L/2$. To fulfill this condition for all the forces $F_{tot}=A\pm F$ notably contributing in (\[50\]) requires that $A-|F| > 3\gamma\eta L/2$. Indeed, for $A < 3\gamma\eta L/2$ ANM is found to disappear in numerical simulations of (\[10\]).
On the other hand, ANM is expected to subsist for numerous generalizations of our original model (\[10\]). Fig. 2 exemplifies a setup with a 2-dim. array of obstacles. For symmetry reasons, the current (\[20\]) remains exactly the same as in Fig. 1, but the parallelization now makes it possible to transport a much larger number of particles simultaneously. Such a device can be readily realized by a modification of those studied experimentally in [@exp] and theoretically in [@the]. We emphasize that while these modifications of the experimental setups are straightforward, the physics is completely different. A further step towards a realistic experimental system is achieved by choosing $$\xi_y(t)=\xi_{th}(t)+f(t) \ ,
\label{60}$$ where $\xi_{th}(t)$ is another thermal white noise like $\xi_x(t)$ (but statistically independent) and $f(t)$ switches [*periodically*]{} between $\pm A$ with $\rho(\tau)=\delta(\tau-\tau_{ac})$. For not too weak driving $f(t)$ and with the appropriately adjusted definition $F_{tot}:=f(t)+F$, the corrections in (\[30\]), (\[40\]) due to the thermal noise $\xi_{th}(t)$ are small and thus (\[50\]) remains a valid approximation provided $A-|F| > 3\eta L/2\tau_{ac}$.
An experimental realization of the system (\[10\]), (\[60\]) along the lines of [@exp; @bec] is presently under construction in the labs of C. Bechinger and P. Leiderer. Henceforth, we focus on such experimentally realistic parameter values in our quantitative examples, see Fig. 3. In particular, the agreement of (\[50\]) with the numerical simulations in Fig. 3a for $\tau_{ac}=1\, s$ is again rather good in the parameter range $|F| < 0.14\, pN$ compatible with $A-|F| > 3\eta L/2\tau_{ac}$.
In (\[40\]), we have completely neglected the possibility that a trapped particle may escape from the trap due to the ambient thermal noise. This is justified as long as $\tau$ is much smaller than the mean escape time $\tau_{esc}(F_{tot})$ out of a trap. Turning to the opposite case $\tau\gg\tau_{esc}(F_{tot})$, we start by calculating the time which a particle needs to advance by one period $L$ along the $y$-axis for $F_{tot}>0$: This time is approximately $L/v_y$ if the trap within such a period is avoided and $L/v_y+\tau_{esc} (F_{tot})$ otherwise. The respective probabilities are $p$ and $1-p$, approximated by (\[30\]), i.e. the average time to cover one period $L$ is $p\, L/v_y + [1-p]\, [L/v_y+\tau_{esc} (F_{tot})]$. The resulting average traveling distance during the (large) time $\tau\gg\tau_{esc}(F_{tot})$ is $$\Delta y(\tau, F_{tot})=
\frac{\tau \,L}{\frac{L\eta}{F_{tot}}+[1-p(F_{tot})]\tau_{esc}(F_{tot})} \ .
\label{70}$$ Thus, if those large $\tau$ dominate, $\Delta y/\tau$ becomes independent of $\tau$ and (\[50\]) can be rewritten as $$\langle\dot y\rangle = \frac{1}{2}\left[
\frac{\Delta y (\tau, A+F)}{\tau}- \frac{\Delta y (\tau,A-F)}{\tau}\right] \ ,
\label{80}$$ independent of $\rho(\tau)$. For small $F_{tot}>0$, the first term in the denominator of (\[70\]) dominates and hence $\Delta y$ [*increases*]{} in the expected linear response manner with increasing $F_{tot}$. As $F_{tot}$ gets larger, $1-p$ approaches $1$ (cf. (\[30\])) and the escape time $\tau_{esc}$ increases very fast (cf. (\[90\]) below), implying the existence of a maximum and a subsequent [*decay*]{} of $\Delta y$. As a consequence of this increasing “stickiness” of the traps with increasing $F_{tot}$ [@ban72; @bal95], we recover once again ANM in (\[80\]) provided $A$ is sufficiently large.
Focusing on the model (\[10\]), (\[60\]), one can approximate $\tau_{esc}(F_{tot})$ by the mean first passage time from $x=0$ to $x=b/\sin\theta$ of the auxiliary 1-dim. dynamics $\eta \dot x(t)=- F_{tot}\cos\theta + \xi_x(t)$ with a reflecting boundary at $x=0$, reading [@han90] $$\begin{aligned}
& & \tau_{esc}(F_{tot})
= \frac{b^2\, \eta}{k_BT}\,\frac{e^\alpha -\alpha-1}{\alpha^2 \sin^2\theta}
\nonumber\\
& & \alpha := b\, F_{tot}\cot\theta/k_BT \ .
\label{90}\end{aligned}$$ The agreement of (\[70\])-(\[90\]) with the numerical simulations in Fig. 3a for $\tau_{ac}=25\, s$ is quite satisfactory. In particular, the predicted ANM is indeed recovered.
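In the same illustrative spirit, the opposite limit $\tau\gg\tau_{esc}$ amounts to only a few lines of code (again a sketch rather than the original implementation; the obstacle angle $\theta$ and all other parameters must be supplied by the user):

``` python
# Illustrative evaluation of Eqs. (70)-(90) in the limit tau >> tau_esc.
import math

def tau_escape(F_tot, b, theta, kT, eta):
    # Eq. (90): mean escape time out of a trap, with alpha = b F_tot cot(theta) / kT
    alpha = b * F_tot / (math.tan(theta) * kT)
    return (b**2 * eta / kT) * (math.exp(alpha) - alpha - 1.0) / (alpha**2 * math.sin(theta)**2)

def drift_velocity(F_tot, b, B, L, theta, kT, eta):
    # Eq. (70) divided by tau: one period L per average waiting time
    p = 0.5 - 0.5 * math.erf((2.0 * b - B) * math.sqrt(F_tot) / math.sqrt(2.0 * L * kT))
    return L / (L * eta / F_tot + (1.0 - p) * tau_escape(F_tot, b, theta, kT, eta))

def mean_current_large_tau(F, A, b, B, L, theta, kT, eta):
    # Eq. (80): independent of the sojourn-time distribution
    return 0.5 * (drift_velocity(A + F, b, B, L, theta, kT, eta)
                  - drift_velocity(A - F, b, B, L, theta, kT, eta))
```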
We emphasize that the basic physical origins of ANM are completely different in the small- and large-$\tau$ regimes as quantified by (\[40\]), (\[50\]) and (\[70\]), (\[80\]), respectively: In the former case, escapes out of the traps are negligible, while transient “first-trapping events” after each jump of $F_{tot}$ provide the crucial mechanism for ANM. In the latter case, these transients are negligible, while the “re-escape events” are now at the origin of ANM. The remarkable fact that two completely different physical mechanisms support one and the same phenomenon suggests that ANM will also be present in the so far disregarded intermediate-$\tau$ regime. Furthermore, on the basis of the above physical insight, an immediate educated guess is to add up (\[40\]) and (\[70\]) and then evaluate (\[50\]). Both these predictions are nicely confirmed by Fig. 3b.
A more sophisticated analysis can again be based on our usual assumption that a particle always closely passes by the leftmost edge of any obstacle attached to the “right corridor wall” when $F_{tot}>0$, as indicated in Fig. 1a. Consequently, the traveling times through any period $L$ are governed by one and the same probability distribution $\psi (t)$, independent of the particle’s past (Markov property). Similarly as in (\[70\]), this distribution is approximately given by $$\psi(t)=p\,\delta(t-\tau_1)+
(1-p)\,\Theta (t-\tau_1)
\frac{e^{-(t-\tau_1)/\tau_{esc}}}{\tau_{esc}} \ ,
\label{110}$$ where $\Theta(t)$ is the Heaviside function and $\tau_1:= L/v_y$. In this way, the original 2-dim. problem can be approximately reduced to a 1-dim., uni-directional hopping process characterized by $\psi(t)$. Such processes have been analyzed in detail in the context of renewal theory [@cox]. Along these lines, we obtain for the Laplace transformed displacement $\Delta \tilde y(s,F_{tot})
:=\int_0^\infty dt\, \Delta y(t,F_{tot})\, e^{-ts}$ the result $$\Delta \tilde y(s,F_{tot})=\frac{L}{s}\,
\frac{\tilde\psi(s)}{1-\tilde\psi(s)}\,
\frac{e^{\tau_1 s} -1}{\tau_1 s}
\label{100}$$ where $\tilde \psi(s)$ is the Laplace transform of $\psi(t)$. While the first two factors on the right hand side of (\[100\]) are well known [@cox], the last factor accounts for the fact that the particle actually proceeds continuously rather than in discrete jumps of length $L$. Moreover, after the Laplace back-transformation of (\[100\]), a final transformation $\Delta y(\tau,F_{tot})\mapsto \Delta y(\tau -3L/2v_y,F_{tot}) + 3L/2$ is required since the “basic distance” $3L/2$, which the particle covers before encountering the first trap (see Fig. 1a and below (\[30\])), is not yet taken into account by (\[100\]). For very small and large $\tau$-values one then recovers our previous results (\[40\]) and (\[70\]), respectively, while for more general $\tau$-values, a numerical evaluation of the Laplace back-transformation is necessary. A more detailed derivation of these analytical results will be presented elsewhere. A typical example is depicted in Fig. 3b, in good agreement with the numerical simulations.
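The Laplace-domain expressions (\[110\]) and (\[100\]) translate directly into code; a numerical inversion routine (for instance mpmath's `invertlaplace`) is then still needed to recover $\Delta y(\tau,F_{tot})$, followed by the shift by $3L/2v_y$ mentioned above. A minimal sketch, not the authors' implementation:

``` python
# Laplace-domain pieces of Eqs. (110) and (100); purely illustrative.
import cmath

def psi_laplace(s, p, tau1, tau_esc):
    # Laplace transform of the single-period waiting-time density (110)
    return cmath.exp(-tau1 * s) * (p + (1.0 - p) / (1.0 + tau_esc * s))

def delta_y_laplace(s, p, tau1, tau_esc, L):
    # Eq. (100); the last factor accounts for the continuous (non-hopping) motion
    psi = psi_laplace(s, p, tau1, tau_esc)
    return (L / s) * psi / (1.0 - psi) * (cmath.exp(tau1 * s) - 1.0) / (tau1 * s)
```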
In conclusion, we have demonstrated that a single, classical Brownian particle in a periodic, symmetric 2-dim. potential landscape can exhibit the [*prima facie*]{} quite astonishing phenomenon of absolute negative mobility under suitable far from equilibrium conditions. In general, the effect is simultaneously supported by two completely different physical mechanisms and, in contrast to [@ban72], is not restricted to adiabatically slow non-equilibrium perturbations. The phenomenon is moreover robust against modifications of the potential. It can occur in practically any potential landscape that provides “traps” with increasing “stickiness” as external force strengths increase. The setup discussed here is particularly suitable for an experimental realization along the lines of [@exp; @the; @bec].
This work has been supported by the DFG-Sachbeihilfe HA1517/13-4 and the Graduiertenkolleg GRK283.
T. J. Banys, I. V. Parshelyunas, and Y. K. Pozhela, Sov. Phys. Semicond. [**5**]{}, 1727 (1972); V. V. Pavlovich and E. M. Epstein, [*ibid.*]{} [**10**]{}, 1196 (1976); B. J. Keay [*et al.*]{} Phys. Rev. Lett. [**75**]{}, 4102 (1995); R. Aguado and G. Platero, Phys. Rev. B [**55**]{}, 12860 (1997); L. Hartmann, M. Grifoni, and P. Hänggi, Europhys. Lett. [**38**]{}, 497 (1997); I. Goychuk, E. Petrov, and V. May, Phys. Lett. A [**238**]{}, 59 (1998).
P. Reimann, R. Kawai, C. Van den Broeck, and P. Hänggi, Europhys. Lett. [**45**]{}, 545 (1999); P. Reimann, C. Van den Broeck, and R. Kawai, Phys. Rev. E [**60**]{}, 6402 (1999); J. Buceta, J. M. Parrondo, C. Van den Broeck, and F. J. de la Rubia, [*ibid.*]{} [**61**]{}, 6287 (2000); S. E. Mangioni, R. R. Deza, and H. S. Wio, [*ibid.*]{} [**63**]{}, 041115 (2001).
B. Cleuren and C. Van den Broeck, Europhys. Lett. [**54**]{}, 1 (2001).
For reviews see: P. Hänggi and R. Bartussek, Lect. Notes in Phys. [**467**]{}, 294 (1996); R. D. Astumian and M. Bier, Biophys. J. [**70**]{}, 637 (1996); R. D. Astumian, Science [**276**]{}, 917 (1997); F. Jülicher, A. Ajdari, and J. Prost, Rev. Mod. Phys. [**69**]{}, 1269 (1997); P. Reimann, Phys. Rep. (in press), see also cond-mat/0010237.
S. R. White and M. Barma, J. Phys. A [**17**]{}, 2995 (1984); G. A. Griess and P. Serwer, Biopolymers [**29**]{}, 1863 (1990); V. Balakrishnan and C. Van den Broeck, Physica A [**217**]{}, 1 (1995); G. Cecchi and M. O. Magnasco, Phys. Rev. Lett. [**76**]{}, 1968 (1996); G. W. Slater, H. L. Guo, and G. I. Nixon [*ibid.*]{}, [**78**]{}, 1170 (1997).
After drifting for a time $t$ along the $y$-axis with speed $v_y=F_{tot}/\eta$, the thermal diffusion along the $x$-axis is approximately captured (for not too large $t$) by a Gaussian distribution with variance $\sigma^2=2Dt$. With Einstein’s relation $D=k_BT/\eta$ and observing that neighboring obstacles have an overlap $2b-B$ (in $x$-direction) and an approximate distance $L/2$ (in $y$-direction) eq. (\[30\]) follows.
W. D. Volkmuth and R. H. Austin, Nature [**358**]{}, 600 (1992); J. Rousselet, L. Salome, A. Ajdari, and J. Prost, [*ibid*]{} [**370**]{}, 446 (1994); L. P. Faucheux and A. Libchaber, J. Chem. Soc. Faraday Trans. [**91**]{}, 3163 (1995); L. Gorre-Talini, J. P. Spatz, and P. Silberzan, Chaos [**8**]{}, 650 (1998); P. Serwer and G. A. Griess, J. Chromatogr. B [**722**]{}, 179 (1999); A. van Oudenaarden and S. G. Boxer, Science [**285**]{}, 1046 (1999); J. S. Bader [*et al.*]{}, Proc. Natl. Acad. Sci. USA [**96**]{}, 13165 (1999).
G. I. Nixon and G. W. Slater, Phys. Rev. E [**53**]{}, 4969 (1996); D. Ertas, Phys. Rev. Lett. [**80**]{}, 1548 (1998); T. A. J. Duke and R. H. Austin [*ibid.*]{} [**80**]{}, 1552 (1998) 1552; I. Derényi and R. D. Astumian, Phys. Rev. E [**58**]{}, 7781 (1998).
Q.-H. Wei, C. Bechinger, D. Rudhardt, and P. Leiderer, Phys. Rev. Lett. [**81**]{}, 2606 (1998); R. Bubeck, C. Bechinger, S. Neser, and P. Leiderer, [*ibid.*]{} [**82**]{}, 3364 (1999); C. Bechinger, M. Brunner, and P. Leiderer, [*ibid.*]{} [**86**]{}, 930 (2001).
P. Hänggi, P. Talkner, and M. Borkovec, Rev. Mod. Phys. [**62**]{}, 251 (1990).
D. R. Cox, [*Renewal Theory*]{}, Methuen & Co., London 1962
---
abstract: 'We present a novel approach to deformable object manipulation that does not rely on highly-accurate modeling. The key contribution of this paper is to formulate the task as a Multi-Armed Bandit problem, with each arm representing a model of the deformable object. To “pull” an arm and evaluate its utility, we use the arm’s model to generate a velocity command for the gripper(s) holding the object and execute it. As the task proceeds and the object deforms, the utility of each model can change. Our framework estimates these changes and balances exploration of the model set with exploitation of high-utility models. We also propose an approach based on Kalman Filtering for Non-stationary Multi-armed Normal Bandits (KF-MANB) to leverage the coupling between models to learn more from each arm pull. We demonstrate that our method outperforms previous methods on synthetic trials, and performs competitively on several manipulation tasks in simulation.'
author:
- '[Dale M$^c$Conachie]{}'
- Dmitry Berenson
bibliography:
- 'references.bib'
- 'all.bib'
title: 'Bandit-Based Model Selection for Deformable Object Manipulation'
---
Introduction
============
One of the primary challenges in manipulating deformable objects is the difficulty of modeling and simulating them. The most common simulation methods use Mass-Spring models [@Gibson1997; @Essahbi2012], which are generally not accurate for large deformations [@Maris2010], and Finite-Element models [@Muller2002; @Bathe2006], which require significant tuning and are very sensitive to the discretization of the object. Approaches like [@Schulman2016; @Huang2015] bypass this challenge by using offline demonstrations to teach the robot specific manipulation tasks; however, when a new task is attempted a new training set needs to be generated. In our application we are interested in a way to manipulate a deformable object without a high-fidelity model or training set available *a priori*. For instance, imagine a robot encountering a new piece of clothing for a new task. While it may have models for previously-seen clothes or training sets for previous tasks, there is no guarantee that those models or training sets are appropriate for the new task. Also, depending on the state of the clothing different models may be most useful at different times in the manipulation task.
Rather than assuming we have a high-fidelity model of a deformable object interacting with its environment, our approach is to have multiple models available for use, any one of which may be useful at a given time. We do not assume these models are correct, we simply treat the models as having some measurable *utility* to the task. The *utility* of a given model is the expected reduction in task error when using this model to generate robot motion. As the task proceeds, the utility of a given model may change, making other models more suitable for the current part of the task. However, without testing a model’s prediction, we do not know its true utility. Testing every model in the set is impractical, as all models would need to be tested at every step, and performing a test changes the state of the object and may drive it into a local minimum. The key question is then which model should be selected for testing at a given time.
The central contribution of this paper is framing the model selection problem as a Multi-Armed Bandit (MAB) problem where the goal is to find the model that has the highest utility for a given task. An arm represents a single model of the deformable object; to “pull” an arm is to use the arm’s model to generate and execute a velocity command for the robot. The reward received is the reduction in task error after executing the command. In order to determine which model has the highest utility we need to explore the model space, however we also want to exploit the information we have gained by using models that we estimate to have high utility. One of the primary challenges in performing this exploration versus exploitation trade-off is that our models are inherently coupled and non-stationary; performing an action changes the state of the system which can change the utility of every model, as well as the reward of pulling each arm. While there is work that frames robust trajectory selection as a MAB problem [@Koval2015], we are not aware of any previous work which either 1) frames model selection for deformable objects as a MAB problem; or 2) addresses the coupling between arms for non-stationary MAB problems.
In our experiments, we show how to formulate a MAB problem with coupled arms for Jacobian-based models. We perform our experiments on three synthetic systems, and on three deformable object manipulation tasks in the Bullet [@Coumans2010] simulator. We demonstrate that formulating model selection as a MAB problem is able to successfully perform all three manipulation tasks. We also show that our proposed MAB algorithm outperforms previous MAB methods on synthetic trials, and performs competitively on the manipulation tasks.
Related Work
============
*Deformable Object Modeling*: One of the key challenges in manipulating deformable objects is the difficulty inherent in modeling and simulating them. While there has been some progress towards online modeling of deformable objects [@JochenLang2002; @Cretu2008a], these methods rely on a time-consuming training phase for each object to be modeled. Of particular interest are Jacobian-based models such as [@Berenson2013] and [@Navarro-Alarcon2013]. In these models we assume that there is some function $F : {{SE(3)}^{G}}\rightarrow {\mathbb{R}}^N$ which maps a configuration of ${G}$ robot grippers ${{q}}\in {{SE(3)}^{G}}$ to a parameterization of the deformable object ${\mathcal{P}}\in {\mathbb{R}}^N$, where $N$ is the dimensionality of the parameterization of the deformable object. These models are then linearized by calculating an approximation of the Jacobian of $F$: $$\begin{aligned}
{\mathcal{P}}&= F({{q}}) \\
\frac{\partial {\mathcal{P}}}{\partial t} &= \frac{\partial F({{q}})}{\partial {{q}}} \frac{\partial {{q}}}{\partial t} \\
{\dot {\mathcal{P}}}&= J(q) {\dot {{q}}}\enspace .{\addtocounter{equation}{1}\tag{\theequation}}\label{eqn:jacobian}\end{aligned}$$
Computation of an exact Jacobian $J({{q}})$ at a given configuration ${{q}}$ is often computationally intractable and requires high-fidelity models and simulators, so instead approximations are frequently used. A shared characteristic of these approximations is some reliance on tuned parameters. This tuning process can be tedious, and in some cases needs to be done on a per-task basis.
In this paper we consider two types of approximate Jacobian models. The first approximation we use is a *diminishing-rigidity Jacobian* [@Berenson2013] which assumes that points on the deformable object that are near a gripper move “almost rigidly” with respect to the gripper while points that are further away move “less rigidly”. This approximation uses deformability parameters to control how quickly the rigidity decreases with distance. The second approximation we use is an *adaptive Jacobian* [@Navarro-Alarcon2013] which uses online estimation to approximate $J({{q}})$. Adaptive Jacobian models rely on a learning rate to control how quickly the estimation changes from one timestep to the next.
*Model Selection*: In order to accomplish a given manipulation task, we need to determine which type of model to use at the current time to compute the next velocity command, as well as how to set the model parameters. Frequently this selection is done manually, however, there are methods designed to make these determinations automatically. Machine learning techniques such as [@Maron1994; @Sparks2015] rely on supervised training data in order to intelligently search for the best regression or classification model, however, it is unclear how to acquire such training data for the task at hand without having already performed the task. The most directly applicable methods come from the Multi-Armed Bandit (MAB) literature [@Auer2002; @Gittins2011; @Whittle1988]. In this framework there are multiple actions we can take, each of which provides us with some reward according to an unknown probability distribution. The problem then is to determine which action to take (which arm to pull) at each time step in order to maximize reward.
The MAB approach is well-studied for problems where the reward distributions are *stationary*, i.e. the distributions do not change over time [@Auer2002; @Agrawal2012]. This is not the case for deformable object manipulation; consider the situation where the object is far away from the goal versus the object being at the goal. In the first case there is a possibility of an action moving the object closer to the goal and thus achieving a positive reward; however, in the second case any motion would, at best, give zero reward. Recent work [@Granmo2010] on non-stationary MAB problems offers promising results that utilize independent Kalman filters as the basis for the estimation of a non-stationary reward distribution for each arm. This algorithm (KF-MANB) provides a Bayesian estimate of the reward distribution at each timestep, assuming that the reward is normally distributed. KF-MANB then performs Thompson sampling [@Agrawal2012] to select which arm to pull, choosing each in proportion to the belief that it is the optimal arm. We build on this approach in this paper to produce a method that also accounts for dependencies between arms by approximating the coupling between arms at each timestep.
For the tasks we address, the reward distributions are both non-stationary as well as *dependent*. Because all arms are operating on the same physical system, pulling one arm both gives us information about the distributions over other arms, as well as changing the future reward distributions of all arms. While work has been done on dependent bandits [@Pandey2007; @Langford2008], we are not aware of any work addressing the combination of non-stationary and dependent bandits. Our method for model selection is inspired by KF-MANB, however we directly use coupling between models in order to form a joint reward distribution over all models. This enables a pull of a single arm to provide information about all arms, and thus we spend less time exploring the model space and more time exploiting useful models to perform the manipulation task.
Problem Statement
=================
Let the robot be represented by a set of ${G}$ grippers with configuration ${{q}}\in {{SE(3)}^{G}}$. We assume that the robot configuration can be measured exactly; in this work we assume the robot to be a set of free-floating grippers; in practice we can track the motion of these with inverse kinematics on a real robot. We use the Lie algebra [@Murray1994] of ${SE(3)}$ to represent robot gripper velocities. This is the tangent space of ${SE(3)}$, denoted as ${\mathfrak{se}(3)}$. The velocity of a single gripper ${g}$ is then ${\dot {q}}_{g}= \begin{bmatrix}v_g^T & \omega_g^T\end{bmatrix}^T \in {\mathfrak{se}(3)}$ where $v_g$ and $\omega_g$ are the translational and rotational components of the gripper velocity. We define the velocity of the entire robot to be ${\dot {{q}}}= \begin{bmatrix}{\dot {{q}}}_1^T & \dots & {\dot {{q}}}_{G}^T \end{bmatrix}^T \in {{\mathfrak{se}(3)}^{G}}$. We define the inner product of two gripper velocities ${\dot {q}}_1, {\dot {{q}}}_2 \in {\mathfrak{se}(3)}$ to be $\langle {\dot {q}}_1, {\dot {q}}_2 \rangle = \langle {\dot {q}}_1, {\dot {q}}_2 \rangle_{c} = v_1^T v_2 + c \omega_1^T \omega_2$, where $c$ is a non-negative scaling factor relating rotational and translational velocities.
The configuration of a deformable object is a set ${\mathcal{P}}\subset {\mathbb{R}}^3$ of ${P}$ points. We assume that we have a method of sensing ${\mathcal{P}}$. To measure the norm of a deformable object velocity ${\dot {\mathcal{P}}}= \begin{bmatrix} {\dot {\mathcal{P}}}_1^T & \dots & {\dot {\mathcal{P}}}_{P}^T \end{bmatrix}^T \in {{\mathbb{R}}^{{3{P}}}}$ we will use a weighted Euclidean norm $$\| {\dot {\mathcal{P}}}\|^2_{W}= \sum_{i = 1}^{P}w_i {\dot {\mathcal{P}}}_i^T {\dot {\mathcal{P}}}_i = {\dot {\mathcal{P}}}^T \operatorname*{diag}{({W})} {\dot {\mathcal{P}}}$$ where $W = \begin{bmatrix}w_1 & \dots & w_{P}\end{bmatrix}^T \in {\mathbb{R}}^{P}$ is a set of non-negative weights. The rest of the environment is denoted ${\mathcal{O}}$ and is assumed to be both static, and known exactly.
Let a *deformation model* be defined as a function ${\phi}: {{\mathfrak{se}(3)}^{G}}\rightarrow {{\mathbb{R}}^{{3{P}}}}$ which maps a change in robot configuration ${\dot {{q}}}$ to a change in object configuration ${\dot {\mathcal{P}}}$. Let ${\mathcal{M}}$ be a set of ${M}$ deformable models which satisfy this definition. Each model is associated with a robot command function ${\psi}: {{\mathbb{R}}^{{3{P}}}}\times {\mathbb{R}}^{P}\rightarrow {{\mathfrak{se}(3)}^{G}}$ which maps a desired deformable object velocity ${\dot {\mathcal{P}}}$ and weight ${W}$ (Sec. \[sec:desired\_direction\]) to a robot velocity command ${\dot {{q}}}$. ${\phi}$ and ${\psi}$ also take the object and robot configuration $({\mathcal{P}},{{q}})$ as additional input, however this is omitted for clarity. When a model ${m}$ is selected for testing, the model generates a gripper command $${\dot {{q}}}_{{m}}(t) = {\psi}_{m}({\dot {\mathcal{P}}}(t), {W}(t))
\label{eqn:robotvelocity}$$ which is then executed for one unit of time, moving the deformable object to configuration ${\mathcal{P}}(t+1)$.
The problem we address in this paper is which model ${m}\in {\mathcal{M}}$ to select in order to move ${G}$ grippers such that the points in ${\mathcal{P}}$ align as closely as possible with some task-defined set of ${T}$ target points ${\mathcal{T}}\subset {\mathbb{R}}^3$, while avoiding gripper collision and excessive stretching of the deformable object. Each task defines a function ${\rho}$ which measures the alignment error between ${\mathcal{P}}$ and ${\mathcal{T}}$. The method we present is a local method which picks a single model ${m}_{*}$ at each timestep to treat as the true model. This model is then used to reduce error as much as possible while avoiding collision and excessive stretching. $${m}_* = \operatorname*{argmin}_{{m}\in {\mathcal{M}}} {\rho}({\mathcal{T}}, {\mathcal{P}}(t+1))
\label{eqn:modelselection}$$ We show that this problem can be treated as an instance of the multi-arm non-stationary dependent bandit problem.
Bandit-Based Model Selection
============================
The primary difficulty with solving Eq. \[eqn:modelselection\] directly is that the effectiveness of a particular model in minimizing error is unknown. It may be the case that no model in the set produces the optimal option; however, this does not prevent a model from being useful. In particular, the *utility* of a model may change from one task to another, and from one configuration to another as the deformable object changes shape, and moves in and out of contact with the environment. We start by defining the utility ${u}_{m}(t) \in {\mathbb{R}}$ of a model as the expected improvement in task error ${\rho}$ if model ${m}$ is used to generate a robot command at time $t$. If we know which model has the highest utility then we can solve Eq. \[eqn:modelselection\]. This leads to a classic exploration versus exploitation trade-off where we need to explore the space of models in order to learn which one is the most useful, while also exploiting the knowledge we have already gained. The multi-armed bandit framework is explicitly designed to handle this trade-off.
In the MAB framework, each arm represents a model in ${\mathcal{M}}$; to pull arm ${m}$ is to command the grippers with velocity ${\dot {{q}}}_{m}(t)$ (Eq. \[eqn:robotvelocity\]) for 1 unit of time. We then define the *reward* ${r}_{m}(t+1)$ after taking action ${\dot {{q}}}_{m}(t)$ as the improvement in error $${r}_{m}(t+1) = {\rho}(t) - {\rho}(t+1) = {u}_{m}(t) + {w}\label{eqn:observedreward}$$ where ${w}$ is a zero-mean noise term. The goal is to pick a sequence of arm pulls to minimize total expected regret ${R}(T_f)$ over some (possibly infinite) horizon $T_f$ $$E[{R}(T_f)] = \sum_{t=1}^{T_f} (E[{r}^*(t)] - E[{r}(t)])
\label{eqn:totalregret}$$ where ${r}^*(t)$ is the reward of the best model at time $t$. The next section describes how to use bandit-based model selection for deformable object manipulation.
MAB Formulation for Deformable Object Manipulation
==================================================
- $t \gets 0$
- ${D}\gets$ GeodesicDistanceMatrix$({\mathcal{P}}_{relaxed})$; ${\mathcal{M}}\gets$ InitializeModels$({D})$; InitializeBanditAlgorithm()
- ${\mathcal{P}}(0) \gets$ SensePoints(); ${{q}}(0) \gets$ SenseRobotConfig()
- **Loop**:
    - ${m}\gets$ SelectArmUsingBanditAlgorithm()
    - ${\mathcal{T}}\gets$ GetTargets()
    - ${\dot {\mathcal{P}}}_e, {W}_e \gets$ ErrorCorrection$({\mathcal{P}}(t), {\mathcal{T}})$
    - ${\dot {\mathcal{P}}}_s, {W}_s \gets$ StretchingCorrection$({D}, \lambda, {\mathcal{P}}(t))$
    - ${\dot {\mathcal{P}}}_d, {W}_d \gets$ CombineTerms$({\dot {\mathcal{P}}}_e, {W}_e, {\dot {\mathcal{P}}}_s, {W}_s)$
    - ${\dot {{q}}}_d \gets {\psi}_m({\dot {\mathcal{P}}}_d, {W}_d)$
    - ${\dot {{q}}}\gets$ ObstacleRepulsion$({\dot {{q}}}_d, {\mathcal{O}}, \beta)$
    - CommandConfiguration$({{q}}(t) + {\dot {{q}}})$
    - ${\mathcal{P}}(t + 1) \gets$ SensePoints$()$; ${{q}}(t + 1) \gets$ SenseRobotConfig$()$
    - UpdateBanditAlgorithm$()$
    - $t \gets t + 1$
\[alg:mainloop\]
Our algorithm (Alg. \[alg:mainloop\]) can be broken down into four major sections and an initialization block. In the initialization block we pre-compute the geodesic distance between every pair of points in ${\mathcal{P}}$ when the deformable object is in its “natural” or “relaxed” state and store the result in ${D}$. These distances are used to construct the deformation models (Sec. \[sec:jacobian\_models\]), as well as to avoid overstretching the object (Sec. \[sec:desired\_direction\]). At each iteration we: 1) pick a model to use to achieve the desired direction (Sec. \[sec:bandit\_algorithms\]); 2) compute the task-defined desired direction to move the deformable object (Sec. \[sec:desired\_direction\]); 3) generate a velocity command using the chosen model (Sec. \[sec:jacobian\_models\]); 4) modify the command to avoid obstacles (Sec. \[sec:desired\_direction\]); and 5) update bandit algorithm parameters (Sec. \[sec:bandit\_algorithms\]).
Algorithms for MAB {#sec:bandit_algorithms}
------------------
Previous solutions [@Auer2002; @Granmo2010] to minimizing the total regret (Eq. \[eqn:totalregret\]) assume that rewards for each arm are normally and independently distributed and then estimate the mean and variance of each Gaussian distribution. We test three algorithms in our experiments: Upper Confidence Bound for normally distributed bandits (UCB1-Normal), Kalman Filter Based Solution to Non-Stationary Multi-arm Normal Bandits (KF-MANB), and our extension of KF-MANB, Kalman Filter Based Solution to Non-Stationary Multi-arm Normal Dependent Bandits (KF-MANDB).
*UCB1-Normal*: The UCB1-Normal algorithm [@Auer2002] treats each arm (model) as independent, estimating an optimistic Upper Confidence Bound (UCB) for the utility of each model. The model with the highest UCB is used to command the robot at each timestep. This algorithm assumes that the utility of each model is stationary, gradually shifting from exploration to exploitation as more information is gained. While our problem is non-stationary and dependent, we use UCB1-Normal as a baseline algorithm to compare against due to its prevalence in previous work. The algorithm is shown in App. \[apx:ucb1normal\] for reference.
*KF-MANB*: The Kalman Filter Based Solution to Non-Stationary Multi-arm Bandit (KF-MANB) algorithm [@Granmo2010] uses independent Kalman filters to estimate the utility distribution of each model, and then uses Thompson sampling [@Agrawal2012] to choose which model to use at each timestep. Because this algorithm explicitly allows for non-stationary reward distributions, it is able to “switch” between models much faster than UCB1-Normal. The KF-MANB algorithm is shown in App. \[apx:kfmanb\] for reference.
*KF-MANDB*: We also propose a variant of KF-MANB, replacing the independent Kalman filters with a single joint Kalman filter. This enables us to capture the correlations between models, allowing us to learn more from each pull. We start by defining utility as a linear system with Gaussian noise with process model ${u}(t+1) = {u}(t) + {v}$ and observation model ${{r}}(t) = C{u}(t) + {w}$ where ${u}(t)$ is our current estimate of the relative utility of each model, while ${v}$ and ${w}$ are zero-mean Gaussian noise terms. $C$ is a row vector with a 1 in the column of the model we used and zeros elsewhere. The variance on ${w}$ is defined as ${\sigma_{obs}}^2 {\eta}^2$. ${\eta}$ is a tuning parameter to scale the covariance to match the reward scale of the specific task, while ${\sigma_{obs}}$ controls how much we believe each new observation.
To define the process noise ${v}$ we want to leverage correlations between models; if two model predictions are similar, the utility of these models is likely correlated. To measure the similarity between two models $i$ and $j$ we use the angle between their gripper velocity commands ${\dot {{q}}}_{i}$ and ${\dot {{q}}}_{j}$. This similarity is then used to directly construct a covariance matrix for each arm pull: $$\begin{split}
{v}&\sim {\mathcal{N}\left(0,{\sigma_{tr}}^2 {\eta}^2 ({\xi}{\Sigma}+ \left(1 - {\xi}\right) {\mathbf{I}})\right)}\\
{\Sigma}_{i,j} & = \frac{\langle {\dot {{q}}}_{i}, {\dot {{q}}}_{j} \rangle}{\| {\dot {{q}}}_{i} \| \| {\dot {{q}}}_{j} \|} = \cos \theta_{i,j} \enspace.
\label{eqn:processnoise}
\end{split}$$ ${\sigma_{tr}}$ is the standard Kalman Filter transition noise factor tuning parameter. ${\xi}\in [0,1]$ is the correlation strength factor; larger ${\xi}$ gives more weight to the arm correlation, while smaller ${\xi}$ gives lower weight. When ${\xi}$ is zero, KF-MANDB has the same update rule as KF-MANB; thus we can view KF-MANDB as a generalization of KF-MANB, allowing for correlation between arms.
After estimating the utility of each model and the noise parameters at the current timestep, these values are then passed into a Kalman filter which estimates a new joint distribution. The next step is the same as KF-MANB; we draw a sample from the resulting distribution, then use the model that yields the largest sample to generate the next robot command. In this way we automatically switch between exploration and exploitation as the system evolves; if we are uncertain of the utility of our models then we are more likely to choose different models from one timestep to the next. If we believe that we have accurate estimates of utility, then we are more likely to choose the model with the highest utility.
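A rough numerical sketch of one KF-MANDB step may help fix ideas. The variable names below are ours, not the authors'; `commands` collects the gripper command each model would generate at the current timestep, and the constants follow the definitions above.

``` python
import numpy as np

def kf_mandb_step(utility_mean, utility_cov, commands, reward, pulled,
                  sigma_tr, sigma_obs, eta, xi):
    """One predict/update cycle of the joint Kalman filter (illustrative only)."""
    M = len(utility_mean)
    unit = commands / np.maximum(np.linalg.norm(commands, axis=1, keepdims=True), 1e-12)
    Sigma = unit @ unit.T                          # cosine similarity between model commands
    Q = sigma_tr**2 * eta**2 * (xi * Sigma + (1.0 - xi) * np.eye(M))
    R = sigma_obs**2 * eta**2
    P_pred = utility_cov + Q                       # predict step: process noise couples the arms
    C = np.zeros((1, M)); C[0, pulled] = 1.0       # we only observe the pulled arm's reward
    S = float(C @ P_pred @ C.T) + R
    K = (P_pred @ C.T) / S
    mean_new = utility_mean + (K * (reward - utility_mean[pulled])).ravel()
    cov_new = (np.eye(M) - K @ C) @ P_pred
    cov_new = 0.5 * (cov_new + cov_new.T)          # keep the covariance numerically symmetric
    return mean_new, cov_new

def kf_mandb_select(mean, cov, rng=None):
    """Thompson sampling from the joint distribution: the largest sample wins."""
    rng = rng if rng is not None else np.random.default_rng()
    return int(np.argmax(rng.multivariate_normal(mean, cov)))
```

Because the sample is drawn jointly, a reward observed for one arm shifts the belief about every arm whose command points in a similar direction.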
Determining ${\dot {{q}}}$ {#sec:desired_direction}
--------------------------
### Error Correction
\[fig:error\_examples\]
- ${\dot {\mathcal{P}}}_e \gets \boldsymbol 0_{{{3{P}}}\times 1}$, ${W}_e \gets \boldsymbol 0_{{P}\times 1}$
- **for** each target point ${\mathcal{T}}_i$, $i = 1, \dots, {T}$:
    - $k \gets \operatorname*{argmin}_{j \in \{ 1,2,\dots,{P}\}} \| {\mathcal{T}}_i - {\mathcal{P}}_j \|$
    - ${\dot {\mathcal{P}}}_{e,k} \gets {\dot {\mathcal{P}}}_{e,k} + {\mathcal{T}}_i - {\mathcal{P}}_k$
    - ${W}_{e,k} \gets \max ({W}_{e,k}, \| {\mathcal{T}}_i - {\mathcal{P}}_k \|)$
- **return** $\{ \dot {\mathcal{P}}_e, {W}_e \}$
\[alg:error\_correction\]
- $E \gets$ EuclidianDistanceMatrix$({\mathcal{P}})$
- ${\dot {\mathcal{P}}}_s \gets \boldsymbol 0_{3{P}\times 1}$, ${W}_s \gets \boldsymbol 0_{{P}\times 1}$
- $\Delta \gets E - {D}$
- **for** every pair of points $(i,j)$ with $\Delta_{i,j} > \lambda$:
    - $v \gets \Delta_{i,j}({\mathcal{P}}_j - {\mathcal{P}}_i)$
    - ${\dot {\mathcal{P}}}_{s,i} \gets {\dot {\mathcal{P}}}_{s,i} + \frac{1}{2}v$
    - ${\dot {\mathcal{P}}}_{s,j} \gets {\dot {\mathcal{P}}}_{s,j} - \frac{1}{2}v$
    - ${W}_{s,i} \gets \max ({W}_{s,i}, \Delta_{i,j})$
    - ${W}_{s,j} \gets \max ({W}_{s,j}, \Delta_{i,j})$
- **return** $\{ {\dot {\mathcal{P}}}_s, {W}_s \}$
\[alg:stretching\_correction\]
We build on previous work [@Berenson2013], splitting the desired deformable object movement into two parts: an error correction part and a stretching correction part. When defining the direction we want to move the deformable object to minimize error we calculate two values; which direction to move the deformable object points ${\dot {\mathcal{P}}}_e$ and the importance of moving each deformable object point ${W}_e$. This is analogous to computing the gradient of error, as well as an “importance factor” for each part of the gradient. We need these weights to be able to differentiate between points of the object where the error function is a plateau versus points where the error function is at a local minimum (Fig. \[fig:error\_examples\]). Typically this is achieved using a Hessian, however our error function does not have a second derivative at many points. We use the `ErrorCorrection` (Alg. \[alg:error\_correction\]) function to calculate these values. Each target point ${\mathcal{T}}_i \in {\mathcal{T}}$ defines a potential field, pulling the nearest point on the deformable object ${\mathcal{P}}_k$ towards ${\mathcal{T}}_i$. ${W}_e$ is set to the maximum distance ${\mathcal{P}}_k$ is being pulled by any target point. This allows ${W}_e$ to be insensitive to changes in discretization.
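For illustration, the `ErrorCorrection` routine (Alg. \[alg:error\_correction\]) can be written in a few lines of NumPy; `P` and `targets` are arrays of 3D points, and the names are ours rather than the paper's.

``` python
import numpy as np

def error_correction(P, targets):
    """Illustrative re-implementation of Alg. 2: pull the nearest point toward each target."""
    P_dot = np.zeros_like(P)
    W = np.zeros(len(P))
    for t in targets:
        dists = np.linalg.norm(P - t, axis=1)
        k = int(np.argmin(dists))            # nearest point on the deformable object
        P_dot[k] += t - P[k]                 # pull it toward the target point
        W[k] = max(W[k], dists[k])           # importance = largest pull distance
    return P_dot.reshape(-1), W              # flattened 3P vector and per-point weights
```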
### Stretching Correction
Our algorithm for stretching correction is similar to that found in [@Berenson2013], with the addition of a weighting term ${W}_s$, and a change in how we combine the two terms. We use the `StretchingCorrection` function (Alg. \[alg:stretching\_correction\]) to compute ${\dot {\mathcal{P}}}_s$ and ${W}_s$ based on a task-defined stretching threshold $\lambda \geq 0$. First we compute the distance between every two points on the object and store the result in $E$. We then compare $E$ to $D$ which contains the relaxed lengths between every pair of points. If any two points are stretched by more than $\lambda$, we attempt to move the points closer to each other. We use the same strategy for setting the importance of this stretching correction ${W}_s$ as we use for error correction. When combining stretching correction and error correction terms (Alg. \[alg:combine\_terms\]) we prioritize stretching correction, accepting only the portion of the error correction that is orthogonal to the stretching correction term for each point.
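The combination step of Alg. \[alg:combine\_terms\] is a per-point projection; an illustrative sketch (again with our own variable names, not the authors' code):

``` python
import numpy as np

def combine_terms(P_dot_e, W_e, P_dot_s, W_s):
    """Keep only the error-correction component orthogonal to the stretching correction."""
    e = P_dot_e.reshape(-1, 3)
    s = P_dot_s.reshape(-1, 3)
    d = np.empty_like(e)
    for i in range(len(e)):
        ss = float(s[i] @ s[i])
        proj = (e[i] @ s[i]) / ss * s[i] if ss > 1e-12 else np.zeros(3)
        d[i] = s[i] + (e[i] - proj)          # stretching term has priority
    return d.reshape(-1), W_s + W_e
```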
### Obstacle Avoidance
- **for** $i = 1, \dots, {P}$:
    - ${\dot {\mathcal{P}}}_{d,i} \gets {\dot {\mathcal{P}}}_{s,i} + \left( {\dot {\mathcal{P}}}_{e,i} - \operatorname*{Proj}_{{\dot {\mathcal{P}}}_{s,i}} {\dot {\mathcal{P}}}_{e,i} \right)$
    - ${W}_{d,i} \gets {W}_{s,i} + {W}_{e,i}$
- **return** $\{ {\dot {\mathcal{P}}}_d, {W}_d \}$
\[alg:combine\_terms\]
- **for** every gripper ${g}$:
    - $J_{p^g}, \dot x_{p^g}, d_g \gets$ Proximity$({\mathcal{O}}, {g})$
    - $\gamma \gets e^{-\beta d_g}$
    - ${\dot {{q}}}_{c,g} \gets J_{p^g}^+ \dot x_{p^g}$
    - ${\dot {{q}}}_{c,g} \gets \frac{\dot{q}_{\text{max},o}}{\| {\dot {{q}}}_{c,g} \|} {\dot {{q}}}_{c,g}$
    - ${\dot {{q}}}_{{g}} \gets \gamma \left( {\dot {{q}}}_{c,g} + \left( {\mathbf{I}}- J_{p^g}^+ J_{p^g} \right) {\dot {{q}}}_{g}\right) + (1-\gamma){\dot {{q}}}_{g}$
- **return** ${\dot {{q}}}$
\[alg:obstaclerepulsion\]
In order to guarantee that the grippers do not collide with any obstacles, we use the same strategy from [@Berenson2013], smoothly switching between collision avoidance and other objectives (see Alg. \[alg:obstaclerepulsion\]). For every gripper ${g}$ and an obstacle set ${\mathcal{O}}$ we find the distance $d_{g}$ to the nearest obstacle, a unit vector $\dot x_{p_{g}}$ pointing from the obstacle to the nearest point on the gripper, and a Jacobian $J_{p^{g}}$ between the gripper’s DOF and the point on the gripper. The `Proximity` function is shown in Appendix \[apx:obstacle\_proximity\]. $\beta > 0$ sets the rate at which we change between servoing and collision avoidance objectives. $\dot{q}_{\text{max},o} > 0$ is an internal parameter that sets how quickly we move the robot away from obstacles.
Jacobian Models {#sec:jacobian_models}
---------------
Every model must define a prediction function ${\phi}({\dot {{q}}})$ and has an associated robot command function ${\psi}({\dot {\mathcal{P}}}, {W})$. This paper focuses on Jacobian-based models, whose basic formulation (Eq. \[eqn:jacobian\]) directly defines the deformation model ${\phi}$ $${\phi}({\dot {{q}}}) = J {\dot {{q}}}\enspace .$$ When defining the robot command function ${\psi}$, we use the weights ${W}$ to focus the robot motion on the important part of ${\dot {\mathcal{P}}}$. This is done by using a weighted norm in a standard minimization problem $${\psi}({\dot {\mathcal{P}}}, {W}) = \operatorname*{argmin}_{{\dot {{q}}}} \| J {\dot {{q}}}- {\dot {\mathcal{P}}}\|^2_{{W}} \quad \mbox{ s.t. } \quad \| {\dot {{q}}}\|^2 < \dot{q}_{\text{max},e}^2 \enspace .
\label{eqn:jacobianbackwardfunction}$$ We also need to ensure that the grippers do not move too quickly, so we add the constraint that the robot moves no more than $\dot{q}_{\text{max},e} > 0$. To solve Eq. \[eqn:jacobianbackwardfunction\] we use the Gurobi [@Gurobi2016] optimizer. We use two different Jacobian approximation methods in our model set: a diminishing rigidity Jacobian and an adaptive Jacobian, which are described below.
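Before describing the individual Jacobians, the following sketch illustrates the command function ${\psi}$ as a weighted least-squares solve followed by rescaling to satisfy the speed limit. This is a simplification of the constrained problem the authors pass to Gurobi, not their implementation.

``` python
import numpy as np

def robot_command(J, P_dot_desired, W, q_dot_max):
    """Approximate psi: weighted least squares, then scale to honour ||q_dot|| <= q_dot_max."""
    w = np.repeat(np.sqrt(W), 3)             # one weight per point, three coordinates each
    q_dot, *_ = np.linalg.lstsq(w[:, None] * J, w * P_dot_desired, rcond=None)
    n = np.linalg.norm(q_dot)
    if n > q_dot_max:
        q_dot *= q_dot_max / n               # crude substitute for the hard norm constraint
    return q_dot
```

The rescaling step is only an approximation of the true constrained optimum, which is why a proper QP solver is preferable in practice.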
### Diminishing Rigidity Jacobian
The key assumption used by this method [@Berenson2013] is *diminishing rigidity*: the closer a gripper is to a particular part of the deformable object, the more that part of the object moves in the same way that the gripper does (i.e. more “rigidly”). The further away a given point on the object is, the less rigidly it behaves; the less it moves when the gripper moves. Details of how to construct a diminishing rigidity Jacobian are in Appendix \[apx:diminishing\_rigidity\]. This approximation depends on two parameters $k_{trans}$ and $k_{rot}$ which control how the translational and rotational rigidity scales with distance. Small values entail very rigid objects; high values entail very deformable objects.
### Adaptive Jacobian
A different approach is taken in [@Navarro-Alarcon2013], instead using online estimation to approximate $J(q)$. In this formulation we start with some estimate of the Jacobian $\tilde J(0)$ at time $t = 0$ and then use the Broyden update rule [@Broyden1965] to update $\tilde J(t)$ at each timestep $t$ $$\tilde J(t) = \tilde J(t-1) + \Gamma \frac{\left( {\dot {\mathcal{P}}}(t) - \tilde J(t-1) {\dot {{q}}}(t) \right)}{{\dot {{q}}}(t)^T {\dot {{q}}}(t)} {\dot {{q}}}(t)^T \enspace.$$ This update rule depends on a update rate $\Gamma \in (0, 1]$ which controls how quickly the estimate shifts between timesteps.
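The update is a rank-one correction and is easy to state in code (an illustration assuming NumPy arrays; the function name is ours):

``` python
import numpy as np

def adaptive_jacobian_update(J_est, P_dot_observed, q_dot, gamma):
    """Broyden-style update of the estimated Jacobian with learning rate gamma."""
    denom = float(q_dot @ q_dot)
    if denom < 1e-12:
        return J_est                          # no gripper motion, nothing to learn from
    residual = P_dot_observed - J_est @ q_dot
    return J_est + gamma * np.outer(residual, q_dot) / denom
```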
Experiments and Results
=======================
We test our method on three synthetic tests and three deformable object manipulation tasks in simulation. The synthetic tasks show that the principles we use to estimate the coupling between models are reasonable; while the simulated tasks show that our method is effective at performing deformable object manipulation tasks.
Synthetic Tests
---------------
For the synthetic tests, we set up an underactuated system that is representative of manipulating a deformable object with configuration $y \in {\mathbb{R}}^n$ and control input $\dot x \in {\mathbb{R}}^m$ such that $m < n$ and $\dot y = J \dot x$. To construct the Jacobian of this system we start with $J = \begin{bmatrix}{\mathbf{I}}_{m \times m} \\ \mathbf{0}_{(n-m) \times m} \end{bmatrix}$ and add uniform noise drawn from $[-0.1, 0.1]$ to each element of $J$. The system configuration starts at $\begin{bmatrix}10 & \dots & 10\end{bmatrix}^T$ with the target configuration set to the origin. Error is defined as ${\rho}(t) = \| y(t) \|$, and the desired direction to move the system at each timestep is $\dot y_d(t) = - y(t)$. These tasks have no obstacles or stretching, thus $\beta, \lambda,$ and $\dot{q}_{\text{max},o}$ are unused. Rather than setting the utility noise scale ${\eta}$ *a priori*, we use an annealing filter $${\eta}(t+1) = \max(10^{-10}, 0.9 {\eta}(t) + 0.1 |{r}(t+1)|) \enspace.
\label{eqn:jacobian_minimization}$$ This enables us to track the changing available reward as the system gets closer to the target. All other parameters are shown in App \[apx:param\_table\].
To generate a model for the model set we start with the true Jacobian $J$ and add uniform noise drawn from $[-0.025, 0.025]$ to each element of $J$. For an individual trial, each bandit algorithm uses the same $J$ and the same model set. Each bandit algorithm receives the same random number stream during a trial, ensuring that a more favourable stream doesn’t bias results. We ran one small test using a $3 \times 2$ Jacobian with 10 arms in order to yield results that are easily visualised. The second and third tests are representative of the scale of the simulation experiments, using the same number of models and similar sizes of Jacobian as are used in simulation. A single trial consists of 1000 pulls (1000 commanded actions); each test was performed 100 times to generate statistically significant results. Our results in Table \[tab:synthetic\_results\] show that KF-MANDB clearly performs the best for all three tests.
| \# of Models | $n$  | $m$ | UCB1-Normal   | KF-MANB       | KF-MANDB      |
|--------------|------|-----|---------------|---------------|---------------|
| 10           | 3    | 2   | 4.41 \[1.65\] | 3.62 \[1.73\] | 2.99 \[1.40\] |
| 60           | 147  | 6   | 5.57 \[1.37\] | 4.89 \[1.32\] | 4.53 \[1.42\] |
| 60           | 6075 | 12  | 4.21 \[0.64\] | 3.30 \[0.56\] | 2.56 \[0.54\] |

\[tab:synthetic\_results\]
Simulation Trials
-----------------
We now demonstrate the effectiveness of multi-arm bandit techniques on three example tasks, show how to encode those tasks for use in our framework, and discuss experimental results. The first task shows how our method can be applied to a rope, with the goal of winding the rope around a cylinder in the environment. The second and third tasks show the method applied to cloth. In the second task, two grippers manipulate the cloth so that it covers a table. In the third task, we perform a two-stage coverage task, covering portions of two different cylinders. In all three tasks, the alignment error ${\rho}({\mathcal{P}}, {\mathcal{T}})$ is measured as the sum of the distances between every point in ${\mathcal{T}}$ and the closest point in ${\mathcal{P}}$ in meters. Figure \[fig:simulation\_task\_screenshots\] shows the target points in red, and the deformable object in green. The video accompanying this paper shows the task executions.
All experiments were conducted in the open-source Bullet simulator [@Coumans2010], with additional wrapper code developed at UC Berkeley. The rope is modeled as a series of 49 small capsules linked together by springs and is 1.225m long. The cloth is modeled as a triangle mesh of size $0.5\text{m} \times 0.5\text{m}$ for the table coverage task, and size $0.5\text{m} \times 0.625\text{m}$ for the two-stage coverage task. We emphasize that our method does not have access to the model of the deformable object or the simulation parameters. The simulator is used as a “black box” for testing.
We use models generated using the same parameters for all three tasks with a total of 60 models: 49 diminishing rigidity models with rotation and translational deformability values $k_{trans}$ and $k_{rot}$ ranging from 0 to 24 in steps of 4, as well as 11 adaptive Jacobian models with learning rates $\Gamma$ ranging from $1$ to $10^{-10}$ in multiples of 10. All adaptive Jacobian models are initialized with the same starting values; we use the diminishing rigidity Jacobian for this seed with $k_{trans}=k_{rot}=10$ for the rope experiment and $k_{trans}=k_{rot}=14$ for the cloth experiments to match the best model found in [@Berenson2013]. We use the same strategy for setting ${\eta}$ as we use for the synthetic tests. App \[apx:param\_table\] shows all other parameters.
We evaluate results for the MAB algorithms as well as using each of the models in the set for the entire task. To calculate regret for each MAB algorithm, we create copies of the simulator at every timestep and simulate the gripper command, then measure the resulting reward ${r}_{m}(t)$ for each model. The reward of the best model ${r}^*(t)$ is then the maximum of individual rewards. As KF-MANB and KF-MANDB are not deterministic algorithms, each task is performed 10 times for these methods. All tests are run on an Intel Xeon E5-2683 v4 processor with 64 GB of RAM. UCB1-Normal and KF-MANB solve Eq. \[eqn:jacobianbackwardfunction\] once per timestep, while KF-MANDB solves it for every model in ${\mathcal{M}}$. Computation times for each test are shown in their respective sections.
![Sequence of snapshots showing the execution of the simulated experiments using the KF-MANDB algorithm. The rope and cloth are shown in green, the grippers are shown in blue, and the target points are shown in red. The bottom row additionally shows ${\dot {\mathcal{P}}}_d$ as green rays with red tips.[]{data-label="fig:simulation_task_screenshots"}](CombinedImages){width="\textwidth"}
*Winding a Rope Around a Cylinder*: In the first example task, a single gripper holds a rope that is lying on a table. The task is to wind the rope around a cylinder which is also on the table (see Fig. \[fig:simulation\_task\_screenshots\]). Our results (Fig. \[fig:ropecylinder\_results\]) show that at the start of the task all the individual models perform nearly identically, starting to split at 2 seconds (when the gripper first approaches the cylinder) and again at 6 seconds. Despite our model set containing models that are unable to perform the task, our formulation is able to successfully perform the task using all three bandit algorithms. Interestingly, while KF-MANDB outperforms UCB1-Normal and KF-MANB in terms of regret, all three algorithms produce very similar results. Solving Eq. \[eqn:jacobianbackwardfunction\] at each iteration requires an average of 17.3 ms (std. dev. 5.5 ms) for a single model, and 239.5 ms (std. dev. 153.7 ms) for 60 models.
*Spreading a Cloth Across a Table*: The second scenario we consider is spreading a cloth across a table. In this scenario two grippers hold the rectangular cloth at two corners and the task is to cover the top of the table with the cloth. All of the models are able to perform the task (see Fig. \[fig:clothtable\_results\]); however, many single-model runs are slower than the bandit methods at completing the task, showing the advantage of the bandit methods. When comparing between the bandit methods, both error and total regret indicate no performance difference between the methods. Solving Eq. \[eqn:jacobianbackwardfunction\] at each iteration requires an average of 89.5 ms (std. dev. 82.4 ms) for a single model, and 605.1 ms (std. dev. 514.3 ms) for 60 models.
*Two-Part Coverage Task*: In this experiment, we consider a two-part task. The first part of the task is to cover the top of a cylinder, similar to our second scenario. The second part of the task is to cover the far side of a second cylinder. For this task the `GetTargets` function used previously pulls the cloth directly into the second cylinder. The collision avoidance term then negates any motion in that direction, causing the grippers to stop moving. To deal with this, we discretize the free space using a voxel grid, and then use Dijkstra’s algorithm to find a collision free path between each cover point and every point in free space. We use the result from Dijkstra’s algorithm to define a vector field that pulls the nearest (as defined by Dijkstra’s) deformable object point $p_k$ along the shortest collision free path to the target point. This task is the most complex of the three (see Fig. \[fig:clothwafr\_results\]); many models are unable to perform the task at all, becoming stuck early in the task. We also observe that both KF-MANB and KF-MANDB show a preference for some models over others. Two interesting trials using KF-MANDB stand out. In the first, the grippers end up on opposite sides of the second cylinder; in this configuration the physics engine has difficulty resolving the scene and allows the cloth to be pulled straight through the second cylinder. In the other trial the cloth is pulled off of the first cylinder; however, KF-MANDB is able to recover, moving the cloth back onto the first cylinder. KF-MANDB and UCB1-Normal are able to perform the task significantly faster than KF-MANB, though all MAB methods complete the task using our formulation. Solving Eq. \[eqn:jacobianbackwardfunction\] at each iteration requires an average of 102.6 ms (std. dev. 30.6 ms) for a single model, and 565.5 ms (std. dev. 389.8 ms) for 60 models.
Conclusion
==========
We have formulated model selection for deformable object manipulation as a MAB problem. Our formulation enables the application of existing MAB algorithms to deformable object manipulation as well as introduces a novel *utility* metric to measure how useful a model is at performing a given task. We have also presented Kalman Filtering for Non-stationary Multi-arm Normal Dependent Bandits (KF-MANDB) to leverage coupling between dependent bandits to learn more from each arm pull. Our experiments show how to perform several interesting tasks for rope and cloth using our method.
One notable result we observe is that finding and exploiting the best model is less important than avoiding poor models for extended periods of time; in all of the experiments UCB1-Normal never leaves its initial exploration phase, yet it is able to successfully perform each task. We believe this is due to many models being able to provide commands that have a positive dot-product with the correct direction of motion.
One limitation of KF-MANDB is handling bifurcations; when very small differences in the command sent to the robot cause large differences in the result, the assumption of coupling between models in KF-MANDB does not hold. In future work we seek to explore how to overcome this limitation, as well as using the predictive accuracy of each model as an additional measure of model coupling.
Acknowledgements
================
This work was supported in part by NSF grants IIS-1656101 and IIS-1551219. We gratefully acknowledge Calder Phillips-Grafflin for his assistance with Bullet.
MAB Algorithm Blocks
====================
UCB1-Normal {#apx:ucb1normal}
-----------
Reproduced from [@Auer2002].
**Loop**: For each $n = 1, 2, \dots$
- If there is a machine which has been played less than $\ceil*{8 \log n}$ times then play this machine. If multiple machines qualify, we play the machine that has been played less, selecting the machine with the lower index in the case of a tie.
- Otherwise play machine $j$ that maximizes $$\bar x_j + \sqrt{16 \cdot \frac{q_j -n_j \bar x_j^2}{n_j - 1} \cdot \frac{\ln(n-1)}{n_j}}$$ where $\bar x_j$ is the average reward obtained from machine $j$, $q_j$ is the sum of squared rewards obtained from machine $j$, and $n_j$ is the number of times machine $j$ has been played so far.
- Update $\bar x_j$ and $q_j$ with the obtained reward $x_j$.
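For reference, the selection rule above can be rendered compactly as follows (an illustrative sketch, not the implementation used in the experiments):

``` python
import math

def ucb1_normal_select(n, plays, means, sum_squares):
    """n: total plays so far; plays, means, sum_squares: per-arm statistics."""
    threshold = math.ceil(8.0 * math.log(max(n, 2)))
    for j in range(len(plays)):
        if plays[j] < threshold:
            return j                          # play under-sampled machines first
    def index(j):
        var_term = max((sum_squares[j] - plays[j] * means[j]**2) / (plays[j] - 1), 0.0)
        return means[j] + math.sqrt(16.0 * var_term * math.log(n - 1) / plays[j])
    return max(range(len(plays)), key=index)
```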
KF-MANB {#apx:kfmanb}
-------
**Input:** number of bandit arms $L$; observation noise $\sigma_{ob}^2$; transition noise $\sigma_{tr}^2$. **Initialization:** $\mu_1[1] = \mu_2[1] = \dots = \mu_L[1] = A$; $\sigma_1[1] = \sigma_2[1] = \dots = \sigma_L[1] = B$. *\# Typically, $A$ can be set to $0$, with $B$ being sufficiently large*
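The body of the algorithm is not reproduced above; the sketch below reconstructs one step from the description in Sec. \[sec:bandit\_algorithms\] (independent per-arm Kalman filters followed by Thompson sampling) and should be checked against [@Granmo2010] for the exact update rules.

``` python
import numpy as np

def kf_manb_select(mu, var, rng=None):
    """Thompson sampling: draw one sample per arm and play the argmax."""
    rng = rng if rng is not None else np.random.default_rng()
    return int(np.argmax(rng.normal(mu, np.sqrt(var))))

def kf_manb_update(mu, var, pulled, reward, sigma_ob2, sigma_tr2):
    """Per-arm Kalman update: every arm drifts, only the pulled arm is measured."""
    mu, var = mu.copy(), var.copy()
    var += sigma_tr2                               # random-walk growth of uncertainty
    k = var[pulled] / (var[pulled] + sigma_ob2)    # Kalman gain for the observed arm
    mu[pulled] += k * (reward - mu[pulled])
    var[pulled] *= (1.0 - k)
    return mu, var
```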
Diminishing Rigidity Jacobian Construction {#apx:diminishing_rigidity}
==========================================
For every point $p_i \in {\mathcal{P}}$ and every gripper ${g}$ we construct a Jacobian $J_{rigid}(q,i,g)$ such that if $p_i$ was rigidly attached to the gripper $g$ then
$$\dot p_i = J_{rigid}(q,i,g) \, \dot{q}_{g} =
\begin{bmatrix} J_{trans}(q,i,g) & J_{rot}(q,i,g) \end{bmatrix}
\dot{q}_{g} \enspace .$$
Let $D_{i,g}$ be a measure of the distance between gripper $g$ and point $p_i$. Then the translational rigidity of point $p_i$ with respect to gripper $g$ is defined as
$$w_{trans}(i,g) = e^{-k_{trans}D_{i,g}}$$
and the rotational rigidity is defined as
$$w_{rot}(i,g) = e^{-k_{rot}D_{i,g}} \enspace .$$
To construct an approximate Jacobian $\tilde J(q)$ for a single point we combine the rigid Jacobian blocks with their respective rigidity values,
$$\tilde J(q,i,g) = \begin{bmatrix} w_{trans}(i,g) J_{trans}(q,i,g) & w_{rot}(i,g) J_{rot}(q,i,g) \end{bmatrix} \enspace ,$$
and then combine the results into a single matrix
$$\tilde J(q) =
\begin{bmatrix}
\tilde J(q,1,1) & \tilde J(q,1,2) & \dots  & \tilde J(q,1,G) \\
\tilde J(q,2,1) & \ddots          &        &                 \\
\vdots          &                 & \ddots &                 \\
\tilde J(q,P,1) &                 & \dots  & \tilde J(q,P,G)
\end{bmatrix} \enspace .$$
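A NumPy sketch of this block assembly is given below; the per-point rigid Jacobian blocks $J_{trans}$, $J_{rot}$ and the distance measure $D_{i,g}$ are taken as given inputs, and the exact block dimensions are an assumption about the underlying representation.

```python
import numpy as np

def diminishing_rigidity_jacobian(J_trans, J_rot, D, k_trans, k_rot):
    """Assemble the approximate Jacobian tilde-J from rigid Jacobian blocks.

    J_trans[i][g], J_rot[i][g]: rigid Jacobian blocks for point i w.r.t. gripper g.
    D[i][g]: distance measure between point i and gripper g.
    """
    P = len(J_trans)      # number of deformable-object points
    G = len(J_trans[0])   # number of grippers
    rows = []
    for i in range(P):
        blocks = []
        for g in range(G):
            w_t = np.exp(-k_trans * D[i][g])   # translational rigidity weight
            w_r = np.exp(-k_rot * D[i][g])     # rotational rigidity weight
            blocks.append(np.hstack([w_t * np.asarray(J_trans[i][g]),
                                     w_r * np.asarray(J_rot[i][g])]))
        rows.append(np.hstack(blocks))
    return np.vstack(rows)   # one block row per point, one block column pair per gripper
```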
Obstacle Proximity Algorithm {#apx:obstacle_proximity}
============================
1. $d_{g}\gets \infty$
2. $p^{g}, p^o \gets \text{ClosestPoints}({g}, o)$
3. $v \gets p^{g}- p^o$
4. $d_{g}\gets \| v \|$
5. $\dot x_{p^{g}} \gets \frac{v}{\| v \|}$
6. $J_{p^{g}} \gets \text{GripperPointJacobian}({g}, p^{g})$
7. **Return** $\{J_{p^{g}}, \dot x_{p^{g}}, d_{g}\}$
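A minimal Python sketch of these steps follows; `closest_points` and `gripper_point_jacobian` stand in for the geometric queries named above and are assumptions about their interfaces, not part of the original algorithm.

```python
import numpy as np

def obstacle_proximity(gripper, obstacle, closest_points, gripper_point_jacobian):
    """Sketch of the obstacle-proximity computation; the two callables are
    assumed helpers standing in for ClosestPoints and GripperPointJacobian."""
    p_g, p_o = closest_points(gripper, obstacle)   # closest point pair
    v = np.asarray(p_g) - np.asarray(p_o)
    d_g = np.linalg.norm(v)
    x_dot = v / d_g                                # unit direction away from the obstacle
    J_pg = gripper_point_jacobian(gripper, p_g)    # Jacobian of the closest point on the gripper
    return J_pg, x_dot, d_g
```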
Experiment Parameter Values {#apx:param_table}
===========================
  Parameter                                       Symbol                                                      
  ----------------------------------------------- ---------------------------- ------- -------- -------- --------
  ${\mathfrak{se}(3)}$ inner product constant      $c$                              -    0.0025   0.0025   0.0025
  Servoing max gripper velocity                    $\dot{q}_{\text{max},e}$        0.1   0.2      0.2      0.2
  Obstacle avoidance max gripper velocity          $\dot{q}_{\text{max},o}$         -    0.2      0.2      0.2
  Obstacle avoidance scale factor                  $\beta$                          -    200      1000     1000
  Stretching correction scale factor               $\lambda$                        -    0.005    0.03     0.03
  ----------------------------------------------- ---------------------------- ------- -------- -------- --------

  Parameter                                       Symbol                                                      
  ----------------------------------------------- ---------------------------- ------- -------- -------- --------
                                                   $\xi$                           0.9   0.9      0.9      0.9
  Transition noise factor                          $\sigma_{tr}^2$                   1   0.1      0.1      0.1
  Observation noise factor                         $\sigma_{obs}^2$                  1   0.01     0.01     0.01
  ----------------------------------------------- ---------------------------- ------- -------- -------- --------
---
abstract: |
We classify simple games into sixteen “types” in terms of the four conventional axioms: monotonicity, properness, strongness, and nonweakness. We further classify them into sixty-four classes in terms of finiteness (existence of a finite carrier) and algorithmic computability. For each such class, we either show that it is empty or give an example of a game belonging to it. We observe that if a type contains an infinite game, then it contains both computable ones and noncomputable ones. This strongly suggests that computability is logically, as well as conceptually, unrelated to the conventional axioms.
*Journal of Economic Literature* Classifications: C71, C69, D71, D90.
*Keywords:* Voting games, axiomatic method, complete independence, Turing computability, multi-criterion decision-making.
author:
- |
Masahiro Kumabe\
Faculty of Liberal Arts, The Open University of Japan\
2-11 Wakaba, Mihama-ku, Chiba City, 261-8586 Japan
- |
H. Reiju Mihara[^1]\
Kagawa University Library\
Takamatsu 760-8525, Japan
date: February 2011
title: 'Computability of simple games: A complete investigation of the sixty-four possibilities[^2]'
---
Introduction
============
Shortly after proposing four “independent” axioms characterizing simple majority rule [@may52], @may53 made a complete investigation of the axioms. By a “*complete investigation* of the four axioms,” we mean an investigation of all the sixteen ($2^4$) classes (of rules), formed by classifying all the rules in terms of whether they satisfy each axiom.[^3] In particular, May showed that the four axioms are “completely independent” in the sense that each of the sixteen classes is nonempty.
In this paper, we provide a *complete investigation* of six axioms for simple games. A *(simple) game*[^4] is a coalitional game that assigns either 1 or 0 to each coalition—those assigned 1 are winning coalitions and those assigned 0 are losing coalitions. Among the six axioms, four are conventional: *monotonicity*, *properness*, *strongness*, and *nonweakness*. These axioms classify games into sixteen ($2^4$) classes, which we call *(conventional) types*. The other two are *finiteness* (existence of a finite carrier) and *computability*, which is the focus of this paper. The results of the investigation (of all the $2^4\times 2^2=64$ classes) are summarized in Table \[theresults\] in Section \[overview\].[^5]
To present what we can observe from Table \[theresults\], we define what we mean by an axiom (namely computability) being independent of others (namely the four conventional axioms): We say that “computability is *independent of* the four axioms (*within* a class of games)” if for each of the sixteen types, there is a computable game of that type (in that class) if and only if there is a noncomputable game of that type (in that class).[^6] Put differently, if computability is *not* independent of the four axioms within a certain class, then for some type $t$, there are type $t$ games in the class, but they are all computable or all noncomputable.
One of our main findings is (Proposition \[prop:indep\]) that *computability is independent of the four conventional axioms within the class of *infinite* games*. (The analogue of Proposition \[prop:indep\] does not hold for the class of *finite* games. This is because all finite games are computable.) In fact, we come close to saying that computability is independent of the four conventional axioms (within the class of *all* games). The conditions for the independence are satisfied for fifteen out of the sixteen types. The only exception is type 2, consisting exclusively of dictatorial (hence computable) games. This strongly suggests that computability is logically, as well as conceptually, unrelated to the conventional axioms.[^7] In other words, as far as compatibility with the conventional axioms is concerned, computability is almost nonrestrictive.
The rest of the Introduction gives a brief background. The companion paper [@kumabe-m08jme] gives further discussion.
One can think of simple games as representing voting methods or multi-criterion decision rules. They have been central to the study of social choice [e.g., @peleg02hbscw; @kumabe-m1008geb]. For this reason, the paper can be viewed as a contribution to the foundations of *computability analysis of social choice*, which studies algorithmic properties of social decision-making.[^8] The importance of computability in social choice theory would be unarguable. First, the use of the language by social choice theorists suggests the importance: for example, @arrow63 uses words such as “*process or rule*” or “*procedure*.” Second, there is a normative reason: computability of social choice rules formalizes the notion of “due process.”
We consider an infinite set of “players.” Roughly speaking, a simple game is *computable* if there is a Turing program (finite algorithm) that can decide from any description (by integer) of each coalition whether it is winning or losing. Since each member of a coalition should be describable, we assume that the set $N$ of (the names of) players is countable, say, $N={\mathbb{N}}=\{0,1,2, \dots \}$. Each coalition is described by a Turing program that can decide for the name of each player whether she is in the coalition. Note that there are infinitely many Turing programs that describe the same coalition. Since each Turing program has its code number (Gödel number), the coalitions describable in this manner are describable by an integer, as desired. (Such coalitions are called *recursive* coalitions.)
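As a purely informal illustration (Python functions standing in for Turing programs, a simplification that is not part of the formal framework), the following sketch shows two syntactically different membership tests that describe the same coalition, mirroring the fact that a single recursive coalition has infinitely many characteristic indices.

```python
# Two different "programs" deciding membership in the same coalition
# (the even-numbered players). In the paper's terms, both would have
# characteristic indices, and those indices are distinct even though
# they describe the same coalition.

def in_coalition_v1(i: int) -> bool:
    return i % 2 == 0

def in_coalition_v2(i: int) -> bool:
    return (i // 2) * 2 == i

assert all(in_coalition_v1(i) == in_coalition_v2(i) for i in range(1000))
```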
@kumabe-m08jme give three interpretations of *countably many players*: (i) generations of people extending into the indefinite future, (ii) finitely many *persons* facing countably many *states* of the world [@mihara97et], and (iii) attributes or *criteria* in multi-criterion decision-making.
Examples of multi-criterion decisions include (a) forming a team to perform a particular task [@kumabe-m08jme],[^9] (b) granting tenure to junior faculty members at academic institutions [@alnajjar-af06], and (c) deciding whether a certain act is legal [@kumabe-m07csg64]. In these examples, there are potentially infinitely many criteria or contingencies on which decisions can be based.
Framework
=========
Simple games {#notions}
------------
Let $N={\mathbb{N}}=\{0,1,2, \dots \}$ be a countable set of (the names of) players. Intuitively, a simple game describes in a crude manner the power distribution among observable (or describable) subsets of players. Such sets are called *coalitions*. In this paper, we define a **coalition** to be a recursive (algorithmically decidable) set; it is a set of players for which there is a Turing program (algorithm) that can decide for the name of each player whether she is in the set.[^10] Note that **the class ${\mathrm{REC}}$ of (recursive) coalitions** forms a **Boolean algebra**; that is, it includes $N$ and is closed under union, intersection, and complementation.
Formally, a **(simple) game** is a collection $\omega\subseteq{\mathrm{REC}}$ of (recursive) coalitions. We will be explicit when we require that $N\in \omega$. The coalitions in $\omega$ are said to be **winning**. The coalitions not in $\omega$ are said to be **losing**. One can regard a simple game as a function from REC to $\{0,1\}$, assigning the value 1 or 0 to each coalition, depending on whether it is winning or losing.
We introduce from the theory of cooperative games a few basic notions of simple games [@peleg02hbscw; @weber94]. A simple game $\omega$ is said to be **monotonic** if for all coalitions $S$ and $T$, the conditions $S\in \omega$ and $T\supseteq S$ imply $T\in\omega$. $\omega$ is **proper** if for all recursive coalitions $S$, $S\in\omega$ implies $S^c:=N\setminus S\notin\omega$. $\omega$ is **strong** if for all coalitions $S$, $S\notin\omega$ implies $S^c\in\omega$. $\omega$ is **weak** if $\omega=\emptyset$ or the intersection $\bigcap_{S\in\omega}S=\bigcap\omega$ of the winning coalitions is nonempty. The members of $\bigcap_{S\in\omega}S$ are called **veto players**; they are the players that belong to all winning coalitions. (The set $\bigcap_{S\in\omega}S$ of veto players may or may not be observable.) $\omega$ is **dictatorial** if there exists some $i_0$ (called a **dictator**) in $N$ such that $\omega=\{\,S\in{\mathrm{REC}}: i_0\in
S\,\}$. Note that a dictator is a veto player, but a veto player is not necessarily a dictator. It is immediate to prove the following well-known lemmas:
\[weakisproper\] If a simple game is weak, it is proper.
\[strongweakisdic\] A simple game is dictatorial if and only if it is strong and weak.
A **carrier** of a simple game $\omega$ is a coalition $S\subseteq N$ such that for all coalitions $T$, we have $T\in\omega$ iff $S\cap T\in \omega$. We observe that if $S$ is a carrier, then so is any coalition $S'\supseteq S$. Slightly abusing the word, we sometimes say a game is **finite** if it has a finite carrier; otherwise, it is **infinite**.
The computability notion {#comp:notion}
------------------------
**Notation**. A *partial function (of $n$ variables)* is a function (into natural numbers) whose domain is a subset of ${\mathbb{N}}^n$. For a partial function $\psi$, $\psi(x)\downarrow$ means $\psi(x)$ is defined; $\psi(x)\uparrow$ means $\psi(x)$ is undefined. For $k\in {\mathbb{N}}$, let $\varphi_k(\cdot)$ be the *$k$th partial recursive function* (of one variable)—it is the partial function (of one variable) computed by the Turing program with code (Gödel) number $k$.$\|$
First, we represent each recursive coalition by a characteristic index ($\Delta_0$-index). A number $e$ is a **characteristic index** for a coalition $S$ if $\varphi_e$ is the characteristic function for $S$.[^11] Intuitively, a characteristic index for a coalition describes the coalition by a Turing program that can decide its membership.
Next, we introduce an indicator for a game. It assigns the value 1 or 0 to each number representing a coalition, depending on whether the coalition is winning or losing. When a number does not represent a recursive coalition, the value is undefined. Given a simple game $\omega$, its **$\delta$-indicator** is the partial function $\delta_\omega$ on ${\mathbb{N}}$ defined by $$\label{d:eq}
\delta_\omega(e)=\left\{
\begin{array}{ll}
1 & \mbox{if $e$ is a characteristic index for a recursive
set in $\omega$}, \\
0 & \mbox{if $e$ is a characteristic index for a recursive
set not in $\omega$}, \\
\uparrow & \mbox{if $e$ is not a characteristic
index for any recursive set}.
\end{array}
\right.$$ Note that $\delta_\omega$ is well-defined since each $e\in{\mathbb{N}}$ can be a characteristic index for at most one set. If $e$ and $e'$ are characteristic indices for the same coalition, then the definition implies $\delta_\omega(e)=\delta_\omega(e')$.
Finally, we introduce the notion of *($\delta$)-computable* games. The condition requires existence of a Turing program that correctly answers whether a coalition is winning or losing, from any one of infinitely many characteristic indices for the coalition.
A game $\omega$ is ($\delta$)-**computable** if $\delta_\omega$ has an extension to a partial recursive function.[^12]
Among various notions of computability that we could conceive of, this notion is the only one that we find [@mihara04mss] defensible.[^13]
Overview of the Results {#overview}
=======================
This section gives a summary of the results in Sections \[finitecarriers\]–\[nofinitecarriers\].
We classify games into sixty-four ($2^4\times 2^2$) classes as shown in Table \[theresults\], in terms of their **(conventional) types** (with respect to the conventional axioms of monotonicity, properness, strongness, and nonweakness), *finiteness* (existence of a finite carrier), and $\delta$-*computability*. For each of the 64 classes, we ask whether there exists a game in the class. The answers are given in Sections \[finitecarriers\]–\[nofinitecarriers\].[^14] Table \[theresults\] summarizes the answers.[^15]
  ------------- --------------- ------------ --------------- ------------
                       Finite games                Infinite games
  Types          Noncomputable   Computable   Noncomputable   Computable
  1 $(++++)$     no              yes          yes             yes
  2 $(+++-)$     no              *yes*        *no*            *no*
  3 $(++-+)$     no              yes          yes             yes
  4 $(++--)$     no              yes          yes             yes
  5 $(+-++)$     no              yes          yes             yes
  6 $(+-+-)$     no              no           no              no
  7 $(+--+)$     no              yes          yes             yes
  8 $(+---)$     no              no           no              no
  9 $(-+++)$     no              yes          yes             yes
  10 $(-++-)$    no              no           no              no
  11 $(-+-+)$    no              yes          yes             yes
  12 $(-+--)$    no              yes          yes             yes
  13 $(--++)$    no              yes          yes             yes
  14 $(--+-)$    no              no           no              no
  15 $(---+)$    no              yes          yes             yes
  16 $(----)$    no              no           no              no
  ------------- --------------- ------------ --------------- ------------
: Existence of Games in Different Classes
\[theresults\]
We are mainly interested in the relation of computability to the four conventional axioms. What can we observe from Table \[theresults\]? For example, we can see that *there is a computable game of type 2 $(+++-)$, but not a noncomputable game of the same type*. (In fact, type 2 consists of dictatorial games.) This means that computability is *not* “independent of” the four axioms in the following sense: there is a nonempty type consisting only of computable games or only of noncomputable games.
*For each of the other fifteen types, however, there is a computable game of that type if and only if there is a noncomputable game of that type*. Hence, we could almost say that computability is “unrelated to” the four axioms. In fact, if we restrict our attention to the infinite games (games without a finite carrier), we can say this:
\[prop:indep\] The axiom $\delta$-computability is *independent of* monotonicity, properness, strongness, and nonweakness within the class of infinite games in the following sense: for each of the $2^4=16$ types, there exists a computable infinite game of that type if and only if there exists a noncomputable infinite game of that type.
We leave this section with two interesting observations involving the last three (instead of two as in Proposition \[prop:indep\]) columns of the table: From the rows corresponding to types 6, 8, 10, 14, 16, we conclude that *if there does not exist a finite computable game of a particular type, then there does not exist a game of that type*. From the other rows except row 2, we conclude that *if there exists an infinite (non)computable game of a particular type, then there exists a finite computable game of that type*.
Preliminary Results {#prelim}
===================
This section gives a sufficient condition and a necessary condition for a game to be computable. It also introduces notation needed in Sections \[finitecarriers\]–\[nofinitecarriers\].
**Notation**. We identify a natural number $k$ with the finite set $\{0,1,2,\ldots,k-1\}$, which is an initial segment of ${\mathbb{N}}$. Given a coalition $S\subseteq N$, we write $S\cap k$ to represent the coalition $\{i\in S: i<k\}$ consisting of the members of $S$ whose name is less than $k$. We call $S\cap k$ the **$k$-initial segment of $S$**, and view it either as a subset of ${\mathbb{N}}$ or as the string $S[k]$ of length $k$ of 0’s and 1’s (representing the restriction of its characteristic function to $\{0,1,2,\ldots,k-1\}$).$\|$
Consider a simple game. A string $\tau$ (of 0’s and 1’s) of length $k\geq 0$ is **winning determining** if any coalition $G\in{\mathrm{REC}}$ extending $\tau$ (in the sense that $\tau$ is an initial segment of $G$, i.e., $G\cap k=\tau$) is winning; $\tau$ is **losing determining** if any coalition $G\in{\mathrm{REC}}$ extending $\tau$ is losing. A string is **determining** if it is either winning determining or losing determining.
First, *to construct computable games*, we use the following proposition, which simply restates the “if” direction of Theorem 4 in @kumabe-m08jme. In particular, *finite games are computable*. As seen in Section \[overview\], whether a game is finite is an important criterion for classifying games in this paper.
\[delta0det2\] Let $T_0$ and $T_1$ be recursively enumerable sets of (nonempty) strings such that any coalition has an initial segment in $T_0$ or in $T_1$ but not both. Let $\omega$ be the simple game defined by $S\in \omega$ if and only if $S$ has an initial segment in $T_1$. Then $T_1$ consists only of winning determining strings, $T_0$ consists only of losing determining strings (so $S\notin \omega$ if and only if $S$ has an initial segment in $T_0$), and $\omega$ is $\delta$-computable.
Second, *to construct noncomputable games*, we use the following proposition [@kumabe-m08jme Proposition 3]. Here, the number $k-1$ may be greater than the greatest element, if any, of $S$:
\[cutprop\] Suppose that a $\delta$-computable simple game is given. If a coalition $S$ is winning, then it has an initial segment $S[k]$ (for some $k\in {\mathbb{N}}$) that is winning determining. If $S$ is losing, then it has an initial segment $S[k]$ that is losing determining.
**Notation**. Let $\alpha$ and $\beta$ be strings (of 0’s and 1’s). Then $\alpha^c$ denotes the string of the length $|\alpha|$ such that $\alpha^c(i)=1-\alpha(i)$ for each $i<|\alpha|$; for example, $0110100100^c=1001011011$. Occasionally, a string $\alpha$ is identified with the set $\{i: \alpha(i)=1\}$. (Note however that $\alpha^c$ is occasionally identified with the set $\{i: \alpha(i)=0\}$, but never with the set $\{i: \alpha(i)=1\}^c$.) $\alpha\beta$ (or $\alpha*\beta$) denotes the concatenation of $\alpha$ followed by $\beta$. $\alpha[k]$ denotes the prefix (initial segment) of $\alpha$ of length $k$. $\alpha\subseteq \beta$ means that $\alpha$ is a prefix of $\beta$ ($\beta$ extends $\alpha$); $\alpha \subseteq A$, where $A$ is a set, means that $\alpha$ is an initial segment of $A$ (i.e, $\alpha$ is equal to the initial segment $A[k]$, for some $k$.) Strings $\alpha$ and $\beta$ are **incompatible** if neither $\alpha\subseteq \beta$ nor $\beta\subseteq\alpha$ (i.e., there is $k< \min\{|\alpha|,|\beta|\}$ such that $\alpha(k)\neq \beta(k)$).$\|$
Finite Games {#finitecarriers}
============
We start with the class of finite games (games having a finite carrier). Any game in this class is $\delta$-computable.
In the following, for each of the eleven conventional types (with respect to monotonicity, properness, strongness, and nonweakness) not shown to be empty so far (footnote \[emptytypes\]), we give an example of a finite game of that type by exhibiting finite sets $T_0$ and $T_1$ satisfying the condition of Proposition \[delta0det2\].
1. $(++++)$ A monotonic, proper, strong, nonweak game. Let $T_0=\{00, 010, 100\}$ and $T_1=\{11, 011, 101\}$.
2. [$(+++-)$]{} A monotonic, proper, strong, weak game. Let $T_0=\{0\}$ and $T_1=\{1\}$. Player $0$ is a dictator.
3. [$(++-+)$]{} A monotonic, proper, nonstrong, nonweak game. Let $T_0=\{00, 010, 0110, 100, 1010\}$ and $T_1=\{11, 1011, 0111\}$.
4. [$(++--)$]{} A monotonic, proper, nonstrong, weak game. Let $T_0=\{0, 10\}$ and $T_1=\{11\}$.
5. [$(+-++)$]{} A monotonic, nonproper, strong, nonweak game. Let $T_0=\{00\}$ and $T_1=\{1, 01\}$.
6. [$(+--+)$]{} A monotonic, nonproper, nonstrong, nonweak game. Let $T_0=\{00,100,0110,0100\}$ and $T_1=\{11,101,0101,0111\}$.
7. [$(-+++)$]{} A nonmonotonic, proper, strong, nonweak game. Let $T_0=\{1\}$ and $T_1=\{0\}$.
8. [$(-+-+)$]{} A nonmonotonic, proper, nonstrong, nonweak game. Let $T_0=\{1, 01\}$ and $T_1=\{00\}$.
9. [$(-+--)$]{} A nonmonotonic, proper, nonstrong, weak game. Let $T_0=\{1, 00\}$ and $T_1=\{01\}$.
10. [$(--++)$]{} A nonmonotonic, nonproper, strong, nonweak game. Let $T_0=\{10\}$ and $T_1=\{0, 11\}$.
11. [$(---+)$]{} A nonmonotonic, nonproper, nonstrong, nonweak game. Let $T_0=\{01, 10\}$ and $T_1=\{00, 11\}$.
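Since each of these games has a finite carrier, its type can be checked mechanically. The Python sketch below does so by brute force: it realizes “$S$ is winning iff $S$ has a prefix in $T_1$” on all subsets of the carrier and tests the four conventional axioms. The carrier size and the bit-string encoding of coalitions are choices made for the sketch, not definitions from the paper.

```python
from itertools import product

def has_prefix_in(bits, T):
    """bits: 0/1 tuple over the carrier; T: set of determining strings such as '011'."""
    s = "".join(map(str, bits))
    return any(s.startswith(t) for t in T)

def check_type(T0, T1):
    m = max(len(t) for t in T0 | T1)              # carrier: players 0, ..., m-1
    coalitions = list(product((0, 1), repeat=m))
    win = {b: has_prefix_in(b, T1) for b in coalitions}

    comp = lambda b: tuple(1 - x for x in b)
    subset = lambda a, b: all(x <= y for x, y in zip(a, b))

    monotonic = all(win[b2] for b1 in coalitions if win[b1]
                    for b2 in coalitions if subset(b1, b2))
    proper = all(not (win[b] and win[comp(b)]) for b in coalitions)
    strong = all(win[b] or win[comp(b)] for b in coalitions)
    winners = [b for b in coalitions if win[b]]
    weak = (not winners) or any(all(b[i] == 1 for b in winners) for i in range(m))
    return monotonic, proper, strong, not weak    # last entry: nonweak

# Example: the type 1 (+ + + +) game from the list above.
print(check_type({"00", "010", "100"}, {"11", "011", "101"}))  # expected: (True, True, True, True)
```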
Infinite Games {#nofinitecarriers}
==============
We consider infinite games (games without finite carriers) in this section.
Noncomputable games {#infinitenoncomp}
-------------------
We first give examples of infinite *noncomputable* simple games. Proposition \[cutprop\] implies that all *computable* games (that have both winning and losing coalitions) belong to the class of games that have both finite winning coalitions and cofinite losing coalitions. To show that variety is not lost even if we restrict our games to this class, all the examples are chosen from the class. The examples in this section are based on the following lemma.
\[typicalnoncomp\] Let $A$ be a recursive set. Let $T_0$ and $T_1$ be recursively enumerable, nonempty sets of (nonempty) strings such that any coalition has an initial segment in $T_0$ or in $T_1$ but not both. Let $\omega$ be the simple game defined by $S\in \omega$ if and only if either $S=A$ or \[$S \ne A^c$ and $S$ has an initial segment in $T_1$\]. Then we have the following:

(i) $S\notin\omega$ if and only if either $S=A^c$ or \[$S \ne A$ and $S$ has an initial segment in $T_0$\].[^16]

(ii) $\omega$ has a finite winning coalition and a cofinite losing coalition.

(iii) Suppose further that either $A$ is infinite and has an initial segment in $T_0$ or $A^c$ is infinite and has an initial segment in $T_1$. Then $\omega$ is $\delta$-noncomputable (hence infinite).
(i) From the definition of $\omega$ and the assumption that any coalition $S$ has a initial segment in $T_0$ or $T_1$ but not both, we have $$\begin{aligned}
S\notin \omega & \iff & \textrm{$S\neq A$ and [$S=A^c$ or $S$ has no initial segment in~$T_1$]} \\
& \iff & \textrm{[$S\neq A$ and $S=A^c$] or} \\
& & \textrm{[$S\neq A$ and $S$ has no initial segment in~$T_1$]} \\
& \iff & \textrm{[$S=A^c$] or [$S\neq A$ and $S$ has an initial segment in~$T_0$].}\end{aligned}$$
(ii) Choose a string $\alpha$ from the nonempty set $T_1$. Let $\beta=\alpha*A(|\alpha|)$. Then $\beta\ne A^c$ since $\beta(|\alpha|)=A(|\alpha|)\neq A^c(|\alpha|)$. Since $\beta$ has the prefix (initial segment) $\alpha\in T_1$, $\beta\in \omega$ by the definition of $\omega$. We have obtained a finite winning coalition, namely $\beta$. To obtain a cofinite losing coalition, choose $\alpha\in T_0$ and let $\beta=\alpha*A^c(|\alpha|)$. Then by (i), $B:=\{i: \textrm{$\beta(i)=1$ or $\beta(i)\uparrow$} \}$ is a cofinite losing set.
(iii) Suppose $A$ is infinite and has an initial segment $A[k]$ in $T_0$. Suppose $\omega$ is $\delta$-computable. Then, by Proposition \[cutprop\], the winning coalition $A$ has an initial segment $A[k']$ that is a winning determining string. Let $\hat{k}=\max\{k, k'\}$. Then on the one hand, $A[\hat{k}]$, which is different from $A$ and has an initial segment in $T_0$, is losing by (i). On the other hand, $A[\hat{k}]$ is winning since it extends the winning determining string $A[k']$. We have obtained a contradiction. The case where $A^c$ is infinite and has an initial segment in $T_1$ is similar.
For each conventional type $t$ not shown to be empty so far (there are ten such types; footnote \[emptytypes\]), we can construct an example of an infinite noncomputable game $\omega^t$ of that type as follows: Let $T_0$ and $T_1$ be those sets in the example for type $t$ in Section \[finitecarriers\]. Let $A$ be the infinite set represented by $\tau*1111\ldots$ (i.e., $i\notin A$ iff $i<|\tau|$ and $\tau(i)=0$), where $\tau$ is any string belonging to $T_0$. (For $t=7$, we also require $\tau\neq 0100$.) For $t\ne 5$, let $\omega^t$ be the game $\omega$ defined by Lemma \[typicalnoncomp\]. For $t=5$, define $\omega^5$ by $S\in \omega^5$ if and only if $S=A$ or $S$ has an initial segment in $T_1$ (thus $S\notin\omega^5$ if and only if $S\ne A$ and $S$ has an initial segment in $T_0$). It is routine to verify, for each $t$, that $\omega^t$ is indeed of type $t$.[^17]
A class of infinite, computable, type 1 games {#nice_games}
---------------------------------------------
In this section, *we construct for each recursive set $A$, an infinite, computable, monotonic, proper, strong, nonweak simple game $\omega[A]$*. The construction is self-contained, but long and elaborate. One reason that the construction is complicated is that we construct a *family* of type 1 games $\omega[A]$, one for each recursive set $A$, while requiring *additional conditions* that would become useful for constructing other types of games in Section \[examples:nocarrier\].[^18]
Our approach is to construct recursively enumerable sets $T_0$ and $T_1$ of strings (of 0’s and 1’s) satisfying the conditions of Proposition \[delta0det2\]. We first construct certain sets $F_s$ of strings for $s\in\{0, 1, 2, \ldots\}$. We then specify an algorithm for enumerating the elements of $T_0$ and $T_1$ using the sets $F_s$, and construct a simple game $\omega[A]$ according to Proposition \[delta0det2\]. We conclude that the game is computable by checking (Lemma \[k46\]) that $T_0$ and $T_1$ satisfy the conditions of Proposition \[delta0det2\]. Finally, we show (Lemmas \[k47a\], \[k47b\], and \[k48\]) that the game satisfies the desired properties.
Before constructing sets $T_0$ and $T_1$ of determining strings, we introduce the notions of p-strings and d-strings. Roughly speaking, a p-string consists of $10$’s or $01$’s; A d-string is a concatenation of a p-string followed by $00$ or $11$. More formally, a string $\alpha$ is a **p-string** if $|\alpha|$ is even and for each $2k<|\alpha|$, we have $\alpha(2k) \alpha(2k+1)\in \{10, 01\}$ (i.e., $\alpha(2k+1)=1-\alpha(2k)$). Examples of a p-string include the empty string, 01, 0101, 0110, and 1001011010. Note that any prefix (initial substring) of even length of a p-string is a p-string. Denote by $\alpha^-$ the prefix $\alpha[|\alpha|-1]$ of $\alpha$ of length $|\alpha|-1$. In other words, $\alpha=\alpha^-*\alpha(|\alpha|-1)$. A string $\alpha$ (of even length) is a **d-string** if $\alpha^{--}$ is a p-string and $\alpha(|\alpha|-2) \alpha(|\alpha|-1)\in \{00, 11\}$ (i.e., $\alpha(|\alpha|-2)=\alpha(|\alpha|-1)$). In other words, a d-string $\alpha$ is of the form $\alpha^{--}*00$ or $\alpha^{--}*11$ for some p-string $\alpha^{--}$. It is easy to prove [@kumabe-m07csg64] the following lemma:
\[k41\] Any string of even length either is a p-string or extends a d-string. Any two distinct d-strings $\alpha$ and $\beta$ are incompatible. That is, we have neither $\alpha\subseteq \beta$ nor $\beta\subseteq\alpha$ (i.e., there is $k< \min\{|\alpha|,|\beta|\}$ such that $\alpha(k)\neq \beta(k)$).
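As an informal sanity check (the lemma itself is proved in [@kumabe-m07csg64]), both claims can be verified by brute force over all short 0–1 strings; the following Python sketch is offered only as such an illustration.

```python
from itertools import product

def is_p_string(a):
    # Even length, and each pair a(2k)a(2k+1) is "10" or "01".
    return len(a) % 2 == 0 and all(a[2 * k] != a[2 * k + 1] for k in range(len(a) // 2))

def is_d_string(a):
    # a^{--} is a p-string and the last two symbols are equal ("00" or "11").
    return len(a) >= 2 and len(a) % 2 == 0 and is_p_string(a[:-2]) and a[-2] == a[-1]

def extends_d_string(a):
    return any(is_d_string(a[:k]) for k in range(2, len(a) + 1, 2))

strings = ["".join(b) for n in range(0, 10, 2) for b in product("01", repeat=n)]

# Claim (i): every string of even length is a p-string or extends a d-string.
assert all(is_p_string(a) or extends_d_string(a) for a in strings)

# Claim (ii): distinct d-strings are incompatible (neither is a prefix of the other).
d_strings = [a for a in strings if is_d_string(a)]
assert all(not (a.startswith(b) or b.startswith(a))
           for a in d_strings for b in d_strings if a != b)
print("Lemma checks pass on strings of length <= 8")
```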
Let $\{k_s\}_{s=0}^\infty$ be an effective listing (recursive enumeration) of the members of the recursively enumerable set $\{k : \varphi_k(2k)\in \{0,1\} \}$, where $\varphi_k(\cdot)$ is the $k$th partial recursive function of one variable (which is computed by the Turing program with code number $k$). We can assume without loss of generality that $k_0\geq 1$ and all the elements $k_s$ are distinct. Thus, $${\mathrm{CRec}}\subset \{k : \varphi_k(2k)\in \{0,1\}\} = \{k_0, k_1, k_2, \ldots\},$$ where ${\mathrm{CRec}}$ is the set of characteristic indices for recursive sets.
Let $l_{0}=2k_0+2\geq 4$ and for $s>0$, let $l_{s}=\max\{l_{s-1}, 2k_{s}+2\}$. Then $\{l_s\}$ is a nondecreasing sequence of even numbers and $l_s>2k_s+1$ for each $s$. Note also that $l_s\geq l_{s-1}>2k_{s-1}+1$, $l_s\geq l_{s-2}>2k_{s-2}+1$, etc. imply that $l_s> 2k_s+1$, $2k_{s-1}+1$, $2k_{s-2}+1$, …, $2k_0+1$.
For each $s$, let $F_s$ be the finite set of p-strings $\alpha=\alpha(0)\alpha(1)\cdots\alpha(l_s-1)
\supseteq 10$ of length $l_{s}\ge 4$ such that
1. $\alpha(2k_s)=\varphi_{k_s}(2k_s)$ and for each $s'<s$, $\alpha(2k_{s'})=1-\varphi_{k_{s'}}(2k_{s'})$.
Note that (1) imposes no constraints on $\alpha(2k)$ for $k\notin\{k_0,k_1,k_2, \ldots, k_s\}$, while it actually imposes constraints for all $k$ in the set, since $|\alpha|=l_s> 2k_s$, $2k_{s-1}$, $2k_{s-2}$, …, $2k_0$. We observe that if $\alpha\in F_s\cap F_{s'}$, then $s=s'$. Let $F=\bigcup_{s}F_s$. Then $F$ is recursive and we have the following:
\[Fincompatible1\] Any two distinct elements in $F$ are incompatible.
Let $\alpha$, $\beta\in F$ such that $|\alpha|\leq |\beta|$, without loss of generality. If $\alpha$ and $\beta$ have the same length, then the conclusion follows since otherwise they become identical strings. If $l_s=|\alpha|< |\beta|=l_{s'}$, then $s<s'$ and by (1), $\alpha(2k_s)=\varphi_{k_s}(2k_s)$ on the one hand, but $\beta(2k_s)=1-\varphi_{k_s}(2k_s)$ on the other hand. So $\alpha(2k_s)\neq \beta(2k_s)$.
Let $f$ be a recursive bijection from $F$ onto ${\mathbb{N}}$ ($f$ can be obtained by enumerating the elements of $F$ one by one, assigning $0$ to the first element enumerated, $1$ to the second element enumerated, and so on). Regarding $f$ as a partial function on the set of strings, we have $f(\alpha)\downarrow$ (i.e., $f(\alpha)$ is defined) if and only if $\alpha\in F$.
\[k42\] Let $\alpha\supseteq 10$ be a p-string of length $l_s$. Then the following statements are equivalent: (i) no prefix of $\alpha$ is in $F$; (ii) for each $s'\leq s$, $\alpha[l_{s'}]\notin F$; (iii) for each $s'\leq s$, $f(\alpha[l_{s'}])\uparrow$; (iv) for each $s'\leq s$, $\alpha(2k_{s'})=1-\varphi_{k_{s'}}(2k_{s'})$.
The definition of $F$ implies that $\alpha\in F$ only if $|\alpha|=l_s$ for some $s$. Hence the equivalence of (i), (ii), and (iii) is immediate. We next show that (ii) and (iv) are equivalent. The direction from (iv) to (ii) is clear from (1). To see the other direction, suppose that (iv) is not the case; we derive the negation of (ii). For some $s'\leq s$, we have $\alpha(2k_{s'})=\varphi_{k_{s'}}(2k_{s'})$. Choose the least such $s'$. Then ($s'=0$ or) for any $s''<s'$, $\alpha(2k_{s''})=1-\varphi_{k_{s''}}(2k_{s''})$. So $\alpha[l_{s'}]\in F_{s'}$ by (1), since $\alpha[l_{s'}]\supseteq 10$ is a p-string of length $l_{s'}$. Thus (ii) is violated.
Let $A$ be a recursive set. The game $\omega[A]$ will be defined via the sets $T_0:=T_0^A$ and $T_1:=T_1^A$ of strings, constructed by enumerating the elements as follows:
**Construction of $T_0$ and $T_1$**. For each $s$ and $\alpha\in F_s$ (having a length $l_s$ and extending $10$),
(2.i) for each p-string $\alpha'$ that is a proper prefix of $\alpha$, if $s=0$ or $|\alpha'|\geq l_{s-1}$, then enumerate $\alpha'*11$ in $T_1$ and $\alpha'*00$ in $T_0$;

(2.ii) if $f(\alpha)\in A$, enumerate $\alpha$ in $T_1$; if $f(\alpha)\notin A$, enumerate $\alpha$ in $T_0$ (note that $f(\alpha)\downarrow$ since $\alpha\in F$);

(3) if a string $\beta$ is enumerated in $T_1$ (or in $T_0$) above, then enumerate $\beta^c$ in $T_0$ (or in $T_1$, respectively).
Clearly, *$T_0$ and $T_1$ are recursively enumerable* because of this generating algorithm. We observe that the sets $T_0$ and $T_1$ consist of
- d-strings (11, 00, and those extending 10 enumerated at (2.i) and those extending 01 enumerated at (3) via (2.i)) and
- p-strings (those extending $10$ enumerated at (2.ii) and those extending $01$ enumerated at (3) via (2.ii)).
We also observe that $11\in T_1$, $00\in T_0$, $T_0\cap T_1=\emptyset$, and $\alpha\in T_0 \Leftrightarrow \alpha^c\in T_1$.
*Define a game $\omega[A]$ by $S\in \omega[A]$ if and only if $S$ has an initial segment in $T_1$*. Lemma \[k46\] establishes computability of $\omega[A]$ (as well as the assertion that $T_0$ consists of losing determining strings and $T_1$ consists of winning determining strings) by way of Proposition \[delta0det2\].
\[01incompatible\] Let $\alpha$, $\beta$ be distinct strings in $T_0\cup T_1$. Then $\alpha$ and $\beta$ are incompatible. In particular, if $\alpha\in T_0$ and $\beta\in T_1$, then $\alpha$ and $\beta$ are incompatible.
Obviously, neither $\alpha$ nor $\beta$ is an empty string. Since $T_0$ and $T_1$ consist of p-strings and d-strings, there are three cases to consider:
*Case* (pp): *Both $\alpha$ and $\beta$ are p-strings*. Then either $\alpha$ or $\alpha^c$ is enumerated at (2.ii) of the generating algorithm and so $\alpha \in F$ or $\alpha^c\in F$. Similarly, $\beta\in F$ or $\beta^c\in F$. If $\alpha \in F$ and $\beta\in F$, then $\alpha$ and $\beta$ are incompatible, since any two distinct elements of $F$ are incompatible by Lemma \[Fincompatible1\]. If $\alpha \in F$ and $\beta^c\in F$, then $\alpha\supset 10$ and $\beta\supset 01$, so they are incompatible. The other two subcases are similar.
*Case* (pd): *one of $\alpha$ or $\beta$ is a p-string and the other is a d-string*. Without a loss of generality, $\alpha$ is a p-string and $\beta$ is a d-string. Suppose $\alpha$ and $\beta$ are compatible. Then, $\beta\supset \alpha$. In fact, $\beta^{--}\supseteq \alpha$. As in (pp) above, either $\alpha \in F$ or $\alpha^c\in F$. Also, since either $\beta$ or $\beta^c$ is enumerated at (2.i) of the algorithm, we have either (pd.i) $\beta^{--}\subset \tilde{\beta}$ for some $\tilde{\beta}\in F$ or (pd.ii) $(\beta^c)^{--}\subset \hat{\beta}$ for some $\hat{\beta}\in F$. *Subcase*: $\alpha\in F$ and (pd.i). $\alpha$ and $\tilde{\beta}$ and both in $F$. So they are incompatible by Lemma \[Fincompatible1\], contradicting the fact that $\alpha\subseteq \beta^{--}\subset \tilde{\beta}$. *Subcase*: $\alpha\in F$ and (pd.ii). Then $\alpha\supseteq 10$ but $\beta\supset 01$, a contradiction. *Subcase*: $\alpha^c\in F$ and (pd.i). Similar to the second subcase. *Subcase*: $\alpha^c\in F$ and (pd.ii). Similar to the first subcase.
*Case* (dd): *Both $\alpha$ and $\beta$ are d-strings*. Immediate from Lemma \[k41\].
*Notation*. We write $f(\beta)\!\downarrow \, \in A$ if $f(\beta)\in A$ (which requires $f(\beta)\downarrow$); we write $f(\beta)\! \downarrow \, \notin A$ if $f(\beta)\downarrow$ but $f(\beta)\notin A$.
\[k43\] Let $\alpha\supset 1$ be a string of length $l_s$.
(i) $\alpha$ extends a string in $T_1$ if and only if (a) for some $s'\leq s$, $f(\alpha[l_{s'}]) \! \downarrow \, \in A$ (in this case, $\alpha[l_{s'}]\in T_1$), or (b) $\alpha$ extends a d-string $\alpha'=(\alpha')^{--}*11$ such that no prefix of $(\alpha')^{--}$ is in $F$ (in this case, $\alpha'\in T_1$).

(ii) $\alpha$ extends a string in $T_0$ if and only if (a) for some $s'\leq s$, $f(\alpha[l_{s'}]) \! \downarrow\, \notin A$ (in this case, $\alpha[l_{s'}]\in T_0$), or (b) $\alpha$ extends a d-string $\alpha'=(\alpha')^{--}*00$ such that no prefix of $(\alpha')^{--}$ is in $F$ (in this case, $\alpha'\in T_0$).

(iii) $\alpha$ does not extend a string in $T_0\cup T_1$ if and only if $\alpha$ is a p-string and no prefix of $\alpha$ is in $F$.
\(i) ($\Longrightarrow$). Assume $\alpha\supseteq 11$. Then (i.b) is satisfied by letting $\alpha'=11$.
Assume $\alpha\supseteq 10$ extends a string $\alpha'\in T_1$. Suppose first that $\alpha'$ is enumerated in $T_1$ by applying (2.i) of the generating algorithm. (We show (i.b) holds.) Then $\alpha'=(\alpha')^{--}*11$ and $(\alpha')^{--}$ is properly extended by some element in $F_s$. Since any two different elements in $F$ are incompatible by Lemma \[Fincompatible1\], no prefix of $(\alpha')^{--}$ is in $F$. So (i.b) holds. Suppose next that $\alpha'$ is enumerated in $T_1$ by applying (2.ii). Then $f(\alpha')\in A$. Since $\alpha'=\alpha[l_{s'}]$ for some $s'\leq s$, we obtain (i.a). Finally, the case where $\alpha'\supseteq 10$ is enumerated in $T_1$ by applying (3) is impossible, since every string enumerated at (3) extends $0$.
($\Longleftarrow$). Assume $\alpha\supseteq 11$. Since $11\in T_1$, the left hand side of (i) holds.
Assume $\alpha\supseteq 10$ and either (i.a) or (i.b) holds.
Suppose (i.a) first. By the definition of $f$, $\alpha[l_{s'}]\in F_{s'}$. Since $f(\alpha[l_{s'}])\in A$, we have $\alpha[l_{s'}]\in T_1$ by (2.ii). So $\alpha$ extends a string in $T_1$.
Suppose (i.b) next: $\alpha$ extends a d-string $\alpha'=(\alpha')^{--}*11$ such that no prefix of $(\alpha')^{--}$ is in $F$. We show that $\alpha'$ is in $T_1$.
Suppose $(\alpha')^{--}\subset \alpha[l_0]$ first. Since $l_0$ is even and $(\alpha')^{--}$ is a p-string of even length $<l_0$, we have $|(\alpha')^{--}|\leq l_0-2$. Since $l_0:=2k_0+2$, we can find a p-string $\beta$ of length $l_{0}$ that is an extension of $(\alpha')^{--}$ such that $\beta(2k_{0})=\varphi_{k_{0}}(2k_{0})$. Then $\beta\in F_0$ and by (2.i) (for $\beta$ and $(\alpha')^{--}$ instead of $\alpha$ and $\alpha'$, respectively), $\alpha'=(\alpha')^{--}*11\in T_1$.
Otherwise, there is $s''$ such that $0<s''\leq s$ and $\alpha[l_{s''-1}]\subseteq (\alpha')^{--}\subset \alpha[l_{s''}]$. Since $\alpha'$ is a d-string, $(\alpha')^{--}$ is a p-string. As $\alpha[l_{s''-1}]\subseteq (\alpha')^{--}$ and no prefix of $(\alpha')^{--}$ is in $F$, $\alpha[l_{s''-1}]$ is a p-string of which no prefix is in $F$. By Lemma \[k42\], for each $t\leq s''-1$, we have $\alpha[l_{s''-1}](2k_t)=1-\varphi_{k_t}(2k_t)$.
Since $\alpha[l_{s''-1}]\subseteq (\alpha')^{--}\subset \alpha[l_{s''}]$, we have $l_{s''-1}<l_{s''}$. Hence $l_{s''}:=\max\{l_{s''-1}, 2k_{s''}+2\}=2k_{s''}+2$. Since $| (\alpha')^{--}|$ and $l_{s''}$ are even, $|(\alpha')^{--}| \le 2k_{s''}$. We can find a p-string $\beta$ of length $l_{s''}$ that is an extension of $(\alpha')^{--}$ such that $\beta(2k_{s''})=\varphi_{k_{s''}}(2k_{s''})$. Therefore, for each $t\leq s''-1$, we have $\beta[l_{s''-1}](2k_t)=(\alpha')^{--}[l_{s''-1}](2k_t)=1-\varphi_{k_t}(2k_t)$. So $\beta\in F_{s''}$ by (1). Then since $|(\alpha')^{--}|\ge l_{s''-1}$, we have by (2.i) (for $\beta$ and $(\alpha')^{--}$ instead of $\alpha$ and $\alpha'$, respectively), $\alpha'=(\alpha')^{--}*11\in T_1$.
(ii) Similar to (i).
\(iii) ($\Longrightarrow$). Suppose that $\alpha$ does not extend a string in $T_0\cup T_1$. Then the negations of (i.a) and of (ii.a) imply for each $t\leq s$, $f(\alpha[l_{t}])\uparrow$, which implies by Lemma \[k42\] that no prefix of $\alpha$ is in $F$. Furthermore, (since no prefix of $\alpha$ is in $F$) the negations of (i.b) and of (ii.b) imply that $\alpha$ does not extend a d-string. By Lemma \[k41\] (i), $\alpha$ is a p-string.
($\Longleftarrow$). Suppose that $\alpha$ is a p-string and no prefix of $\alpha$ is in $F$. Since $\alpha$ is a p-string, no prefix of $\alpha$ is a d-string. So $\alpha$ does not satisfy (i.b) or (ii.b). Since no prefix $\alpha'$ of $\alpha$ is in $F$, we have for such $\alpha'$, $f(\alpha')\uparrow$. So $\alpha$ does not satisfy (i.a) or (ii.a). Therefore, $\alpha$ does not extend a string in $T_0\cup T_1$.
\[k45\] Let $\alpha\supset 1$ be a string of length $l_s$ such that $\alpha(2k_s)=\varphi_{k_s}(2k_s)$. Then $\alpha$ extends a string in $T_0\cup T_1$.
If $\alpha\supseteq 11$, the conclusion follows immediately, since $11\in T_1$.
Suppose $\alpha\supseteq 10$. We prove the lemma by induction on $s$. Assume $s=0$. If $\alpha$ is a p-string, then $\alpha\in F_0$. By (2.ii) of the generating algorithm for $T_0$ and $T_1$, we obtain $\alpha\in T_0\cup T_1$. Otherwise, by Lemma \[k41\] (i), $\alpha$ extends a d-string $\beta$. Since $|\beta^{--}|<l_0\le l_s$ for all $s$, no prefix of $\beta^{--}$ is in $F$ (because $F$ consists of certain strings of length $l_s$ for some $s$). By Lemma \[k43\] (i.b) or (ii.b), $\alpha$ extends a string (namely $\beta$) in $T_0\cup T_1$.
Assume the lemma holds for $s-1$. If for some $s'<s$, $\alpha(2k_{s'})=\varphi_{k_{s'}}(2k_{s'})$ then by the induction hypothesis, $\alpha[l_{s'}]$ extends a string in $T_0\cup T_1$. So $\alpha$ extends a string in $T_0\cup T_1$. Otherwise, for each $s'<s$, $\alpha(2k_{s'})=1-\varphi_{k_{s'}}(2k_{s'})$. If $\alpha$ is a p-string then $\alpha\in F$ by (1), hence it is in $T_0\cup T_1$ by (2.ii) of the construction. If $\alpha$ is not a p-string then by Lemma \[k41\] (i), $\alpha$ extends a d-string $\beta$. Then $|\beta^{--}|<l_s$. Since $\beta\subseteq\alpha$ and for each $s'<s$, $\alpha(2k_{s'})=1-\varphi_{k_{s'}}(2k_{s'})$, no prefix of $\beta^{--}$ is in $F$ by (1). By Lemma \[k43\] (i.b) or (ii.b), $\alpha$ extends a string (namely $\beta$) in $T_0\cup T_1$.
\[k46\] Any coalition $S\in{\mathrm{REC}}$ has an initial segment in $T_0$ or in $T_1$, but not both.
We show that $S$ has an initial segment in $T_0\cup T_1$. Lemma \[01incompatible\] implies that $S$ does not have initial segments in both $T_0$ and $T_1$. (We can actually show that $S$ has exactly one initial segment in $T_0\cup T_1$, a fact used to construct a type 4 game in Section \[examples:nocarrier\].)
If $S\supseteq 1$, suppose $\varphi_k$ is the characteristic function for $S$. Then $k\in\{k_0,k_1,k_2, \ldots\}$ since this set contains the set ${\mathrm{CRec}}$ of characteristic indices. So $k=k_s$ for some $s$. By Lemma \[k45\], the initial segment $S[l_s]$ (i.e., $\varphi_{k_s}[l_s]$) extends a string in $T_0\cup T_1$. So, $S$ has an initial segment in $T_0\cup T_1$.
If $S\supseteq 0$, then $S^c\supseteq 1$ has an initial segment in $T_0\cup T_1$ by the argument above. So, $S$ has an initial segment in $T_1\cup T_0$.
Next, we show that the game $\omega[A]$ has the desired properties. Before showing monotonicity, we need the following lemma. For strings $\alpha$ and $\beta$ with $|\alpha|\le |\beta|$, we say *$\beta$ properly contains $\alpha$* if for each $k<|\alpha|$, $\alpha(k)\leq\beta(k)$ and for some $k'<|\alpha|$, $\alpha(k')<\beta(k')$; we say *$\beta$ is properly contained by $\alpha$* if for each $k<|\alpha|$, $\beta(k)\le \alpha(k)$ and for some $k'<|\alpha|$, $\beta(k')<\alpha(k')$.
\[k44\] Let $\alpha$ and $\beta$ be strings such that $l_s=|\alpha|\le |\beta|$ for some $s$. If $\alpha$ extends a string in $T_1$ and $\beta$ properly contains $\alpha$, then $\beta$ extends a string in $T_1$. If $\alpha$ extends a string in $T_0$ and $\beta$ is properly contained by $\alpha$, then $\beta$ extends a string in $T_0$.
We only prove (i). The proof for (ii) is similar. Suppose that $\alpha$ extends a string in $T_1$ and that $\beta$ properly contains $\alpha$.
*Case* 1: $\alpha\supseteq 1$. In this case, (i.a) or (i.b) of Lemma \[k43\] holds.
*First assume (i.a) is the case*: we can choose an $s'\leq s$ such that $f(\alpha[l_{s'}])\!\!\downarrow \in A$ (in this case, $\alpha[l_{s'}]\in T_1$). If $\beta$ extends $\alpha[l_{s'}]$, clearly the conclusion holds. Otherwise, since $|\beta|\geq l_s\geq l_{s'}$, $\alpha[l_{s'}]$ and $\beta$ are incompatible; that is, there exists $k< l_{s'}$ such that $\alpha[l_{s'}](k)\neq \beta(k)$. Choose the least such $k$; since $\beta$ properly contains $\alpha$, we have $\alpha[l_{s'}](k)=0$ and $\beta(k)=1$. Let $\beta'=\beta[k] (=\alpha[k])$. Note that $f(\alpha[l_{s'}])\downarrow$ implies $\alpha[l_{s'}]\in F$, which in turn implies $\alpha[l_{s'}]$ is a p-string.
Suppose $k$ is even. We will show that $\beta$ extends $\beta'*11\in T_1$. Since $k< l_{s'}$ and $l_{s'}$ is also even, we have $k+1<l_{s'}$, so that $\alpha[l_{s'}](k+1)\downarrow$. Since $\alpha[l_{s'}]$ is a p-string, $\beta(k+1)\geq\alpha[l_{s'}](k+1)=1-\alpha[l_{s'}](k)=1$. So $\beta(k) \beta(k+1)=11$. Hence $\beta'*11\subseteq\beta[l_s]$. Since $\alpha[l_{s'}]\in F$, no proper prefix of $\alpha[l_{s'}]$ is in $F$. As $\beta'\subset \alpha[l_{s'}]$, no prefix of $\beta'$ is in $F$. So by Lemma \[k43\] (i.b), $\beta[l_s]$ extends a string (namely, $\beta'*11$) in $T_1$.
Suppose $k$ is odd. We will show that $\beta$ extends $(\beta')^-*11\in T_1$. Since $\alpha[l_{s'}]$ is a p-string, $\beta(k-1)=\alpha[l_{s'}](k-1)=1-\alpha[l_{s'}](k)=1$. So $\beta(k-1) \beta(k)=11$. Hence $(\beta')^-*11\subseteq\beta[l_s]$. Since no proper prefix of $\alpha[l_{s'}]$ is in $F$ and $(\beta')^- \subset \alpha[l_{s'}]$, no prefix of $(\beta')^-$ is in $F$. So by Lemma \[k43\] (i.b), $\beta[l_s]$ extends a string (namely, $(\beta')^-*11$) in $T_1$.
*Next assume (i.b) is the case*: $\alpha$ extends a d-string $\alpha'=(\alpha')^{--}*11$ such that no prefix of $(\alpha')^{--}$ is in $F$ (in this case, $\alpha'\in T_1$). Choose the least $k\le |\alpha|$ such that $\alpha(k)\neq \beta(k)$; we have $\alpha(k)=0$ and $\beta(k)=1$. Let $\beta'=\beta[k] (=\alpha[k])$. Since $\alpha'(|\alpha'|-2)=\alpha'(|\alpha'|-1)=1$, either $k>|\alpha'|-1$ or $k<|\alpha'|-2=|(\alpha')^{--}|$. If $k>|\alpha'|-1$, we get $\beta'\supseteq \alpha'$. This implies $\beta\supseteq \beta'\supseteq \alpha'\in T_1$; hence $\beta$ extends a string in $T_1$. Otherwise, we have $k<l:=|(\alpha')^{--}|$ and $\beta'\subset (\alpha')^{--}$.
Suppose $k$ is even. Since $k<l$ and $l$ is also even, we have $k+1<l$, so that $(\alpha')^{--}(k+1)\downarrow$. Since $\alpha$ is a p-string, $\beta(k+1)\geq (\alpha')^{--}(k+1)=1-(\alpha')^{--}(k)=1$. So $\beta(k) \beta(k+1)=11$. Hence $\beta'*11\subseteq \beta[l_s]$. Since no prefix of $(\alpha')^{--}$ is in $F$ and $\beta'\subset (\alpha')^{--}$, no prefix of $\beta'$ is in $F$. So by Lemma \[k43\] (i.b), $\beta[l_s]$ extends a string (namely, $\beta'*11$) in $T_1$.
Suppose $k$ is odd. Since $(\alpha')^{--}$ is a p-string, $\beta(k-1)=(\alpha')^{--}(k-1)=1-(\alpha')^{--}(k)=1$. So $\beta(k-1) \beta(k)=11$. Hence $(\beta')^-*11\leq\beta[l_s]$. Since no prefix of $(\alpha')^{--}$ is in $F$ and $(\beta')^-\subset (\alpha')^{--}$, no prefix of $(\beta')^-$ is in $F$. So by Lemma \[k43\] (i.b), $\beta[l_s]$ extends a string (namely, $(\beta')^-*11$) in $T_1$.
*Case* 2: $\alpha\supseteq 0$. First note that assertion (ii) for Case 1 can be proved by an argument similar to the proof of assertion (i) for Case 1 above (use Lemma \[k43\] (ii) instead of Lemma \[k43\] (i)). By the construction of $T_1$ and $T_0$, $\alpha^c\supseteq 1$ extends a string in $T_0$ and $\beta^c$ is properly contained by $\alpha^c$. Applying assertion (ii) for Case 1, we obtain that $\beta^c$ extends a string in $T_0$. Hence $\beta$ extends a string in $T_1$.
Note that the preceding proof shows that $\beta$ actually extends a *d-string* unless it extends $\alpha[l_{s'}]$.
\[k47a\] The game $\omega[A]$ is monotonic.
Suppose $B\in \omega[A]$ and $B'\supseteq B$. By the definition of $\omega[A]$, $B$ has an initial segment $\alpha\in T_1$. Choose the least $s$ such that $l_s\ge |\alpha|$. Then the initial segment $B[l_s]$ extends $\alpha\in T_1$. Let $\beta=B'[l_s]$. Then either $\beta=B[l_s]$ or $\beta$ properly contains $B[l_s]$.
If $\beta=B[l_s]$, then clearly $\beta$ extends $\alpha\in T_1$ and so does $B'$. Therefore, $B'\in \omega[A]$. Otherwise, $\beta$ properly contains $B[l_s]$, which extends $\alpha\in T_1$. By Lemma \[k44\] (i), $\beta$ extends a string in $T_1$ and so does $B'$. Therefore, $B'\in \omega[A]$.
\[k47b\] The game $\omega[A]$ is proper and strong.
It suffices to show that $S^c\in\omega \Leftrightarrow S\notin\omega$. From the observations that $T_0$ and $T_1$ consist of determining strings and that $\alpha^c\in T_0 \Leftrightarrow \alpha \in T_1$, we have: $S^c \in\omega$ iff $S^c$ has an initial segment in $T_1$ iff $S$ has an initial segment in $T_0$ iff $S\notin\omega$.
\[k48\] The game $\omega[A]$ is nonweak and does not have a finite carrier.
We construct a set $B$ such that for infinitely many $l$, the $l$-initial segment $B[l]$ has an extension that is winning and an extension that is losing. Let $B\supseteq 10$ be a set such that for each $k_s$, $B(2k_s)=1-\varphi_{k_s}(2k_s)$ and any initial segment of $B$ of even length is a p-string. Let $s$ be such that $l_{s+1}>l_s$.
Then $l_{s+1}:=\max\{l_s,2k_{s+1}+2\}=2k_{s+1}+2$ and $2k_{s+1}+2>l_s$ implies (since both sides are even numbers) that $2k_{s+1}\geq l_s$. By the definition of $B$, for each $t\leq s$, we have $B(2k_{t})=1-\varphi_{k_{t}}(2k_{t})$ and $2k_t<l_{s}$ (the last inequality from the observation that $l_s>2k_s+1$, $2k_{s-1}+1$, $2k_{s-2}+1$, …, $2k_0-1$). Then since $2k_{s+1}\geq l_{s}$, there is a p-string $\alpha\supseteq B[l_{s}]$ of length $l_{s+1}$ such that $\alpha(2k_{s+1})=\varphi_{k_{s+1}}(2k_{s+1})$ and for each $t\leq s$, $\alpha(2k_{t})=1-\varphi_{k_{t}}(2k_{t})$. Then by (1), $\alpha\in F_{s+1}$ and $|\alpha^{--}|=|\alpha|-2=l_{s+1}-2= 2k_{s+1}\geq l_{s}$. So by (2.i) of the generating algorithm, $\alpha^{--}*11\in T_1$ and $\alpha^{--}*00\in T_0$.
There are infinitely many such $s$. It follows that any initial segment of $B$ has an extension in $T_1$ and an extension in $T_0$. This means that the game has no finite carrier.
To show nonweakness, we give three (winning) coalitions in $T_1$ whose intersection is empty. First, $10$ (in fact any initial segment of the coalition $B \supseteq 10$) has extensions $\alpha$ in $T_1$ and $\beta$ in $T_0$ by the argument above. So $01$ has the extension $\beta^c$ in $T_1$. Clearly, the intersection of the winning coalitions $11\in T_1$, $\alpha\supseteq 10$, and $\beta^c\supseteq 01$ is empty.
Note that the proof that $\omega[A]$ has no finite carrier depends on (2.i), but not (2.ii) or (3), of the generating algorithm.
Infinite computable games {#examples:nocarrier}
-------------------------
In this section, for each of the ten conventional types not shown to be empty so far (footnote \[emptytypes\]), we give an example of an infinite computable game of that type. Most examples are based on the game $\omega[A]$ in Section \[nice\_games\].
1. $(++++)$ A monotonic, proper, strong, nonweak game. $\omega[A]$ is such a game.
2. [$(++-+)$]{} A monotonic, proper, nonstrong, nonweak game. Let $\omega=\omega[\emptyset]\cap\omega[{\mathbb{N}}]$; that is, $S\in \omega$ if and only if $S\in \omega[\emptyset]$ and $S\in \omega[{\mathbb{N}}]$.
To show $\omega$ is proper, suppose $S\in \omega$ and $S^c\in \omega$. Then $S\in \omega[{\mathbb{N}}]$ and $S^c\in \omega[{\mathbb{N}}]$, contradicting the properness of $\omega[{\mathbb{N}}]$.
To show $\omega$ is nonstrong, let $\alpha\in F$. We show that both $\alpha$ and $\alpha^c$ are losing. On the one hand, we have $\alpha\in T_0^\emptyset$ by (2.ii) of the generating algorithm. Since $T_0^\emptyset$ consists of losing determining strings, $\alpha\notin \omega[\emptyset]$. Hence $\alpha\notin \omega$. On the other hand, we have $\alpha\in T_1^{\mathbb{N}}$ by (2.ii). Hence $\alpha^c\in T_0^{\mathbb{N}}$. Since $T_0^{\mathbb{N}}$ consists of losing determining strings, $\alpha^c\notin \omega[{\mathbb{N}}]$. Hence $\alpha^c\notin \omega$, as desired.
Computability, monotonicity, and nonweakness of $\omega$ are immediate from the corresponding properties of $\omega[A]$. The proof that $\omega$ does not have a finite carrier is similar to the proof for $\omega[A]$.
3. [$(++--)$]{} A monotonic, proper, nonstrong, weak game. In the construction of (the sets $T_0$ and $T_1$ for) $\omega[A]$ in Section \[nice\_games\], replace (2.i), (2.ii), and (3) by
1. for each p-string $\alpha'$ that is a proper prefix of $\alpha$, if $s=0$ or $|\alpha'|\geq l_{s-1}$, then enumerate $1*\alpha'*11$ in $T_1$ and $1*\alpha'*00$ in $T_0$; furthermore, enumerate $0$ in $T_0$;
2. if $f(\alpha)\in A$, enumerate $1*\alpha$ in $T_1$; if $f(\alpha)\notin A$, enumerate $1*\alpha$ in $T_0$;
3. if a string $\beta=1*\beta'$ is enumerated in $T_1$ (or in $T_0$) above, then enumerate $1*(\beta')^c$ in $T_0$ (or in $T_1$, respectively).
Let $T'_0$ and $T'_1$ be the sets $T_0$ and $T_1$ in the original (Section \[nice\_games\]) construction of $\omega[A]$ renamed. We observe that $\beta=1*\beta'\in T_i$ if and only if $\beta'\in T'_i$.
We first show that any coalition $S$ has exactly one initial segment in $T_0\cup T_1$. This is immediate if $S\supseteq 0$. So, suppose $S\supseteq 1$. Define $S'$ by $S'(k)=S(k+1)$ for all $k$. Then, by the proof of Lemma \[k46\] for $\omega[A]$, $S'$ has exactly one initial segment $S'[k]$ in $T'_0\cup T'_1$. From the observation above, $S[k+1]=1*S'[k]\in T_0\cup T_1$ for a unique $k$, which is what we wanted.
To show the game is monotonic, it suffices to show Lemma \[k44\] (i) holds for the newly defined game. Suppose that $\alpha$, $\beta$ satisfy the assumption of the lemma and that $\alpha$ extends a string $\hat{\alpha}$ in $T_1$ and $\beta$ properly contains $\alpha$. Then, $\hat{\alpha}\supseteq 1$; write $\hat{\alpha}=1*\hat{\alpha}'$. Then $\hat{\alpha}'\in T'_1$ from the observation above. We can write $\beta=1*\beta'$. Then $\beta'$ either extends or properly contains $\hat{\alpha}'\in T'_1$. If $\beta'$ extends $\hat{\alpha}'\in T'_1$, then $\beta$ extends $1*\hat{\alpha}'\in T_1$, as desired. Otherwise, $\beta'$ properly contains $\hat{\alpha}'\in T'_1$. By Lemma \[k44\] for the original game $\omega[A]$ (the condition that $l_s=|\alpha|$ can be ignored for our purpose), $\beta'$ extends a string $\hat{\beta}\in T'_1$. So, $\beta=1*\beta'$ extends $1*\hat{\beta}\in T_1$, as desired.
The game is weak (hence proper by Lemma \[weakisproper\]) since every winning coalition extends $1$; in other words, $0$ is a veto player. It is nonstrong since $\{0\}\supseteq 100\in T_0$ implies $\{0\}\notin\omega$, while $\{0\}^c \supseteq 0\in T_0$ implies $\{0\}^c \notin\omega$. The proof that the game is computable and has no finite carrier is similar to the proofs for $\omega[A]$.
4. [$(+-++)$]{} A monotonic, nonproper, strong, nonweak game. Let $\omega=\omega[\emptyset]\cup\omega[{\mathbb{N}}]$; that is, $S\in \omega$ if and only if $S\in \omega[\emptyset]$ or $S\in \omega[{\mathbb{N}}]$.
To show $\omega$ is nonproper, let $\alpha\in F$. We show that both $\alpha$ and $\alpha^c$ are winning. On the one hand, we have $\alpha\in T_1^{\mathbb{N}}$ by (2.ii). So $\alpha\in \omega[{\mathbb{N}}]$, implying $\alpha\in \omega$. On the other hand, we have $\alpha\in T_0^\emptyset$ by (2.ii). Hence $\alpha^c\in T_1^\emptyset$. So $\alpha^c\in \omega[\emptyset]$. Hence $\alpha^c\in \omega$, as desired.
To show $\omega$ is strong, suppose $S\notin \omega$ and $S^c\notin \omega$. Then $S\notin \omega[{\mathbb{N}}]$ and $S^c\notin \omega[{\mathbb{N}}]$, contradicting the strongness of $\omega[{\mathbb{N}}]$.
Computability and monotonicity of $\omega$ are immediate from the corresponding properties of $\omega[A]$. Nonweakness is immediate from nonproperness by Lemma \[weakisproper\]. The proof that $\omega$ does not have a finite carrier is similar to the proof for $\omega[A]$.
5. [$(+--+)$]{} A monotonic, nonproper, nonstrong, nonweak game. Let $A$ be the set of even numbers. In the construction of $\omega[A]$, replace (2.ii) and (3) by
    1. (2\*.ii) if $f(\alpha)\in A$, enumerate $\alpha$ and $\alpha^c$ in $T_1$; if $f(\alpha)\notin A$, enumerate $\alpha$ and $\alpha^c$ in $T_0$;

    2. (3\*) if a string $\beta$ is enumerated in $T_1$ (or in $T_0$) by applying (2.i), then enumerate $\beta^c$ in $T_0$ (or in $T_1$, respectively).
To show the game is monotonic, it suffices to show Lemma \[k44\] (i) holds for the newly defined game. Suppose that $\alpha$, $\beta$ satisfy the assumption of the lemma and that $\alpha$ extends a string $\alpha'$ in $T_1$ and $\beta$ properly contains $\alpha$. Let $T'_0$ and $T'_1$ be the sets $T_0$ and $T_1$ in the original construction of $\omega[A]$ renamed. Note that the replacement of (2.ii) and (3) by (2\*.ii) and (3\*) only affects p-strings, but not d-strings; hence the set of d-strings in $T_1$ is the same as the set of d-strings in $T'_1$, the set of d-strings in $T_0$ is the same as the set of d-strings in $T'_0$, and the set of p-strings in $T_0\cup T_1$ is the same as the set of p-strings in $T'_0\cup T'_1$. If $\alpha'$ is a d-string in $T_1$, it is in $T'_1$. Lemma \[k44\] (i) (for the original game) implies that $\beta$ extends a string in $T'_1$. In fact, an inspection of the proof of Lemma \[k44\] reveals that $\beta$ extends a d-string in $T'_1$, unless $\beta\supseteq \alpha'$, in which case the conclusion is obvious. So assume $\beta\not\supseteq \alpha'$. Then $\beta$ extends a d-string in $T'_1$; hence it extends a d-string in $T_1$, as desired. If $\alpha'$ is a p-string in $T_1$, it is in $T'_1\cup T'_0$. If $\alpha'\in T'_1$, then Lemma \[k44\] (i) implies that $\beta$ extends a string in $T'_1$. So the rest of the proof is similar. If $\alpha'\in T'_0$, then Lemma \[k44\] (ii) implies that $\beta^c$ extends a string in $T'_0$. Assume $\beta\not\supseteq \alpha'$ as before. Then $\beta^c$ extends a d-string in $T'_0$; hence it extends a d-string in $T_0$. By (3\*), $\beta$ extends a d-string in $T_1$, as desired.
The game is nonproper since (2\*.ii) implies that there is a string $\alpha\in F$ such that the coalitions $\{i: \alpha(i)=1\}$ and $\{i: \alpha(i)=1\}^c$ (which extends $\alpha^c$) are winning. Similarly, it is nonstrong since there is a string $\alpha\in F$ such that the coalitions above are losing. It is nonweak by Lemma \[weakisproper\] since it is nonproper. The proof that the game is computable and has no finite carrier is similar to the proofs for $\omega[A]$.
6. [$(-+++)$]{} A nonmonotonic, proper, strong, nonweak game. In the construction of $\omega[A]$, replace (2.i) by
1. for each p-string $\alpha'\neq \emptyset$ that is a proper prefix of $\alpha$, if $s=0$ or $|\alpha'|\geq l_{s-1}$, then enumerate $\alpha'*11$ in $T_1$ and $\alpha'*00$ in $T_0$; furthermore, enumerate $00$ in $T_1$.
By (3) of the construction, $11\in T_0$. (In other words, the game is constructed from the sets $T_0:=T'_0\cup \{11\}\setminus\{00\}$ and $T_1:=T'_1\cup \{00\}\setminus \{11\}$, where $T'_0$ and $T'_1$ are $T_0$ and $T_1$ in the original construction of $\omega[A]$ renamed.) Since $00$ is winning and $11$ is losing, the game is nonmonotonic. It is also nonweak since $00$ (or an empty coalition) is winning. For the remaining properties, the proofs are similar to the proofs for $\omega[A]$.
7. [$(-+-+)$]{} A nonmonotonic, proper, nonstrong, nonweak game. In the construction of $\omega[A]$, replace (2.i) and (3) by
1. for each p-string $\alpha'\neq \emptyset$ that is a proper prefix of $\alpha$, if $s=0$ or $|\alpha'|\geq l_{s-1}$, then enumerate $\alpha'*11$ in $T_1$ and $\alpha'*00$ in $T_0$; furthermore, enumerate $00$ and $11$ in $T_0$;
2. if a string $\beta\notin\{00,11\}$ is enumerated in $T_1$ (or in $T_0$) above, then enumerate $\beta^c$ in $T_0$ (or in $T_1$, respectively).
(In other words, the game is constructed from the sets $T_0:=T'_0\cup \{11\}$ and $T_1:=T'_1\setminus \{11\}$, where $T'_0$ and $T'_1$ are $T_0$ and $T_1$ in the original construction of $\omega[A]$ renamed.)
The game is nonmonotonic since $N$ is losing but there are winning coalitions. It is proper since it is a subset of $\omega[A]$, which is proper. It is nonstrong since $11$, $00\in T_0$ implies that the coalitions $\{0, 1\}$, $\{0,1\}^c$ are losing.
To show nonweakness, find a $\beta\in T_1$ such that $|\beta|=l_{t+1}$ for some $t$ (e.g., let $\beta=\alpha^{--}*11$ in the proof of Lemma \[k48\], with $s$ replaced by $t$). Choose an $s$ such that $l_{t+1}<l_s<l_{s+1}$. Following the proof of Lemma \[k48\], we can find $\alpha\in F_{s+1}$ such that $|\alpha^{--}|\ge l_s$, $\alpha^{--}*11\in T_1$, and $\alpha^{--}*00 \in T_0$. Then $(\alpha^c)^{--}*11\in T_1$. Nonweakness follows since the intersection of winning coalitions $\beta$ (regarded as the coalition $\{i: \beta(i)=1\}$), $\alpha^{--}*11\in T_1$, and $(\alpha^c)^{--}*11$ is empty.
The proofs of computability and nonexistence of a finite carrier are similar to the proofs for $\omega[A]$.
8. [$(-+--)$]{} A nonmonotonic, proper, nonstrong, weak game. Let $A={\mathbb{N}}$. In the construction of $\omega[A]=\omega[{\mathbb{N}}]$, replace (2.i) by
1. for each p-string $\alpha'$ that extends $1010$ or $1001$ and is a proper prefix of $\alpha$, if $s=0$ or $|\alpha'|\geq l_{s-1}$, then enumerate $\alpha'*11$ in $T_1$ and $\alpha'*00$ in $T_0$; furthermore, enumerate d-strings $11$ and $1000$ in $T_1$ and strings $1011$ and $0$ in $T_0$.
and remove (3). To show that any coalition $S$ has an initial segment in $T_0\cup T_1$, suppose that $S$ extends $1010$ or $1001$. (The other cases are immediate.) Let $T'_0$ and $T'_1$ be $T_0$ and $T_1$ in the original construction of $\omega[{\mathbb{N}}]$ renamed. Then, by Proposition \[k46\], $S$ has an initial segment $S[k]$ in $T'_0\cup T'_1$, where $k\geq 4$ without loss of generality. If $S[k]$ is enumerated in $T'_0\cup T'_1$ by applying (2.ii), then it is enumerated in $T_0\cup T_1$ by applying (2.ii). So, the conclusion follows. If $S[k]$ is enumerated in $T'_0\cup T'_1$ by applying (2.i), then $S[k]$ is equal to $\alpha'*11$ or $\alpha'*00$ for some p-string $\alpha'$ satisfying the requirements in (2.i). Clearly, $\alpha'$ extends $1010$ or $1001$. So, $S[k]$ is enumerated in $T_0\cup T_1$ by applying (2\*.i). So the conclusion follows.
To show that no coalition $S$ has initial segments in both $T_0$ and $T_1$, it suffices to show that a string $\alpha$ enumerated in $T_0$ by (2\*.i) and a p-string $\beta$ enumerated in $T_1$ by (2.ii) are incompatible. (Note that all $\alpha\in F$ are enumerated in $T_1$ and none in $T_0$ by (2.ii).) Since $\beta\supset 10$, it is incompatible with $0\in T_0$. All the other strings enumerated by (2\*.i) are d-strings, so $\alpha$ and $\beta$ are compatible only if $\alpha$ extends $\beta$, which in turn extends (since $\beta\in F$ is of length $\geq 4$) $1001$ or $1010$. Then, $\alpha=\alpha'*00$ for some $\alpha'$, so as above, $\alpha\in T'_0$; similarly, $\beta\in T'_1$. This implies that $\alpha$ and $\beta$ are incompatible.
The game $\omega$ defined above is nonmonotonic since $1000$ is winning but $1011$ is not. To see $\omega$ is weak (hence proper by Lemma \[weakisproper\]), note that any winning coalition extends $1$; so the intersection contains a veto player $0$. The game is nonstrong because $0$, $1011\in T_0$ imply that the coalitions $\{1\}$ and $\{1\}^c$ are losing. The proofs of computability and nonexistence of a finite carrier are similar to the proofs for $\omega[A]$.
9. [$(--++)$]{} A nonmonotonic, nonproper, strong, nonweak game. In the construction of $\omega[A]$, replace (2.i) and (3) by
1. for each p-string $\alpha'\neq \emptyset$ that is a proper prefix of $\alpha$, if $s=0$ or $|\alpha'|\geq l_{s-1}$, then enumerate $\alpha'*11$ in $T_1$ and $\alpha'*00$ in $T_0$; furthermore, enumerate $00$ and $11$ in $T_1$;
2. if a string $\beta\notin\{00,11\}$ is enumerated in $T_1$ (or in $T_0$) above, then enumerate $\beta^c$ in $T_0$ (or in $T_1$, respectively).
(In other words, the game is constructed from the sets $T_0:=T'_0\setminus \{00\}$ and $T_1:=T'_1\cup \{00\}$, where $T'_0$ and $T'_1$ are $T_0$ and $T_1$ in the original construction of $\omega[A]$ renamed.)
The game is nonmonotonic since $\emptyset$ is winning but there are losing coalitions. It is nonproper since the coalitions $\{0, 1\}$, $\{0,1\}^c$ are winning. It is strong since its subset $\omega[A]$ is strong. It is nonweak by Lemma \[weakisproper\] since it is nonproper. The proofs of computability and nonexistence of a finite carrier are similar to the proofs for $\omega[A]$.
10. [$(---+)$]{} A nonmonotonic, nonproper, nonstrong, nonweak game. In the construction of $\omega[A]$, replace (2.i) and (3) by
1. for each p-string $\alpha'$ that extends $1010$ or $1001$ and is a proper prefix of $\alpha$, if $s=0$ or $|\alpha'|\geq l_{s-1}$, then enumerate $\alpha'*11$ in $T_1$ and $\alpha'*00$ in $T_0$; furthermore, enumerate d-strings $00$, $1000$, and $0111$ in $T_0$ and d-strings $11$, $1011$ and $0100$ in $T_1$;
2. if a string $\beta\notin\{00, 11, 1000, 0111, 1011, 0100\}$ is enumerated in $T_1$ (or in $T_0$) above, then enumerate $\beta^c$ in $T_0$ (or in $T_1$, respectively).
The game is nonmonotonic since $0100$ is winning but $0111$ is not. The game is nonproper since $1011$, $0100\in T_1$ imply that the coalitions $\{1\}$ and $\{1\}^c$ are winning. It is nonstrong since $1000$, $0111\in T_0$ imply $\{0\}$ and $\{0\}^c$ are losing. It is nonweak by Lemma \[weakisproper\] since it is nonproper. The proofs of computability and nonexistence of a finite carrier are similar to the proofs for $\omega[A]$.
Al-Najjar, N. I., Anderlini, L., Felli, L., 2006. Undescribable events. Review of Economic Studies 73, 849–868.
Arrow, K. J., 1963. Social Choice and Individual Values, 2nd Edition. Yale University Press, New Haven.
Bartholdi, J. J., III, Tovey, C. A., Trick, M. A., 1989. Voting schemes for which it can be difficult to tell who won the election. Social Choice and Welfare 6, 157–165.
Bartholdi, J. J., III, Tovey, C. A., Trick, M. A., 1989. The computational difficulty of manipulating an election. Social Choice and Welfare 6, 227–241.
Kelly, J. S., 1988. Social choice and computational complexity. Journal of Mathematical Economics 17, 1–8.
Kumabe, M., Mihara, H. R., Aug. 2007. Computability of simple games: A complete investigation of the sixty-four possibilities. MPRA Paper 4405, Munich University Library, <http://mpra.ub.uni-muenchen.de/4405/>
Kumabe, M., Mihara, H. R., 2008. Computability of simple games: A characterization and application to the core. Journal of Mathematical Economics 44, 348–366.
Kumabe, M., Mihara, H. R., 2008. The [N]{}akamura numbers for computable simple games. Social Choice and Welfare 31, 621–640.
Kumabe, M., Mihara, H. R., 2010. Preference aggregation theory without acyclicity: The core without majority dissatisfaction. Games and Economic Behavior, Doi:10.1016/j.geb.2010.06.008
Lewis, A. A., 1988. An infinite version of [A]{}rrow’s [T]{}heorem in the effective setting. Mathematical Social Sciences 16, 41–48.
May, K. O., 1952. A set of independent, necessary and sufficient conditions for simple majority decision. Econometrica 20, 680–84.
May, K. O., 1953. A note on the complete independence of the conditions for simple majority decision. Econometrica 21, 172–173.
Mihara, H. R., 1997. [A]{}rrow’s [T]{}heorem and [T]{}uring computability. Economic Theory 10, 257–76.
Mihara, H. R., 1999. [A]{}rrow’s theorem, countably many agents, and more visible invisible dictators. Journal of Mathematical Economics 32, 267–287.
Mihara, H. R., 2004. Nonanonymity and sensitivity of computable simple games. Mathematical Social Sciences 48, 329–341.
Odifreddi, P., 1992. Classical Recursion Theory: The Theory of Functions and Sets of Natural Numbers. Elsevier, Amsterdam.
Peleg, B., 2002. Game-theoretic analysis of voting in committees. In: Arrow, K. J., Sen, A. K., Suzumura, K. (Eds.), Handbook of Social Choice and Welfare. Vol. 1. Elsevier, Amsterdam, Ch. 8, pp. 395–423.
Soare, R. I., 1987. Recursively Enumerable Sets and Degrees: A Study of Computable Functions and Computably Generated Sets. Springer-Verlag, Berlin.
Thomson, W., 2001. On the axiomatic method and its recent applications to game theory and resource allocation. Social Choice and Welfare 18, 327–386.
Weber, R. J., 1994. Games in coalitional form. In: Aumann, R. J., Hart, S. (Eds.), Handbook of Game Theory. Vol. 2. Elsevier, Amsterdam, Ch. 36, pp. 1285–1303.
[^1]: Corresponding author. The mail address is available on [his site](http://www5.atwiki.jp/reiju/).\
*URL:* <http://econpapers.repec.org/RAS/pmi193.htm> (H.R. Mihara).
[^2]: Journal of Mathematical Economics (2011) [doi:10.1016/j.jmateco.2010.12.003](http://dx.doi.org/10.1016/j.jmateco.2010.12.003)
[^3]: \[weak-indep\]Despite Arrow’s endorsement [@arrow63 footnote 27, page 102], complete investigations of a set of axioms are rare in the literature, such as social choice, that adopts the axiomatic method. It is common to say that an axiom (called A1) is “independent” of some other axioms if there are (i) a rule satisfying A1 and the others and (ii) a rule violating A1 but satisfying the others [@thomson01 Section 4.1.3].
[^4]: Sometimes referred to as a “voting game” or a “simple coalitional game” in the literature.
[^5]: @kumabe-m08scw continue the complete investigation, considering only computable games. That paper asks which “degrees of rationality” are achievable in each of the thirty-two classes, while the present paper asks whether each class is empty.
[^6]: This notion of independence generally requires examination of many more cases than that in footnote \[weak-indep\] (which examines just two cases). Note that “complete independence” in May’s sense of the six axioms cannot be achieved, since the four conventional axioms are not “completely independent.” For example, it is well known that there exist no weak, nonproper games.
[^7]: What is behind this terminology is the discussion of logical and conceptual independence by @thomson01. We do not define “conceptual independence” mathematically.
[^8]: This literature includes @kelly88, @lewis88, @bartholdi-tt89vs [@bartholdi-tt89cd], @mihara97et [@mihara99jme; @mihara04mss], and @kumabe-m08jme [@kumabe-m08scw].
[^9]: This example illustrates that the desirability of the (conventional) axioms depends on the context. Monotonicity makes sense here, but may be too optimistic (adding a member may turn an acceptable team into an unacceptable one). Properness may be irrelevant or even undesirable (ensuring that a given task can be performed by two non-overlapping teams may be important from the viewpoint of reliability). These observations suggest the importance of finding games that violate some of the axioms.
[^10]: A set $S$ is *recursive* if there is a Turing machine that halts on any input $i\in N$, yielding output 1 if $i\in S$ and 0 otherwise. @soare87 and @odifreddi92 give a precise definition of recursive sets as well as detailed discussion of recursion theory. Mihara’s papers [@mihara97et; @mihara99jme] contain short reviews of recursion theory.
[^11]: The *characteristic function* for $S$ takes the value 1 if the input belongs to $S$; it takes 0 otherwise. The same coalition has infinitely many characteristic indices.
[^12]: A partial function $\delta'$ is an *extension* of $\delta_\omega$ if whenever $\delta_\omega(e)\downarrow$, we have $\delta'(e)=\delta_\omega(e)$.
[^13]: As long as games are defined for (recursive) coalitions, this notion of computability is equivalent to the following [@kumabe-m07csg64 Corollary 1]: there exists a Turing machine that, given any coalition $S$ encoded as an infinite binary sequence ($i$th term indicating whether $i\in S$), halts and correctly decides whether $S$ is winning.
[^14]: \[emptytypes\] Among the sixteen types, five (types 6, 8, 10, 14, and 16) contain no games; also, the class of type $2$ infinite games is empty (since type 2 games are dictatorial). These results are immediate from Lemmas \[weakisproper\] and \[strongweakisdic\].
[^15]: Some of the games constructed in this paper have the property that an empty coalition is winning. However, one can modify all such computable games so that an empty coalition is losing [@kumabe-m08scw].
[^16]: Let $\hat{\omega}$ be the game defined by Proposition \[delta0det2\]. It follows that (a) $S\in \omega$ if and only if either $S=A$ or \[$S \ne A^c$ and $S\in \hat{\omega}$\], (b) $S\notin\omega$ if and only if either $S=A^c$ or \[$S \ne A$ and $S\notin\hat{\omega}$\], (c) if $\hat{\omega}$ is proper, then $\omega$ is proper, (d) if $\hat{\omega}$ is strong, then $\omega$ is strong.
[^17]: @kumabe-m07csg64 give more detailed proofs for a different set of examples.
[^18]: In @kumabe-m08scw [Appendix A], we construct just one type 1 game, without requiring the additional conditions. Some aspects of the construction thus become more apparent in that construction. The construction there extends the one (not requiring the game to be of a particular type) in the companion paper [@kumabe-m08jme Section 6.2]. The reader might want to consult these papers first.
---
abstract: 'This is a great paper and it has a concise abstract.'
bibliography:
- 'yourbibfile.bib'
title: Full Title of Article
---
List of keywords
Introduction
============
This is where the content of your paper goes. Remember to:
- Include, within the first 12 or less pages, a concise and clear statement and discussion of the paper’s contributions.
- Include, either in the main text or the appendices, all proofs and derivations required to substantiate the results.
- Do not include author names (this is done automatically by inclusion of the “anon” option in documentclass), and to the extent possible, avoid directly identifying the authors. You should still include all relevant references, including your own, and any other relevant discussion, even if this might allow a reviewer to infer the author identities.
- Avoid modifying the default margins, spacing and fonts (either manually or via packages such as fullpage).
My Proof of Theorem 1
=====================
This is a boring technical proof.
My Proof of Theorem 2
=====================
This is a complete version of a proof sketched in the main text.
---
abstract: 'Probabilistic sampling methods have become very popular to solve single-shot path planning problems. Rapidly-exploring Random Trees (RRTs) in particular have been shown to be efficient in solving high dimensional problems. Even though several RRT variants have been proposed for dynamic replanning, these methods only perform well in environments with infrequent changes. This paper addresses the dynamic path planning problem by combining simple techniques in a multi-stage probabilistic algorithm. This algorithm uses RRTs for initial planning and informed local search for navigation. We show that this combination of simple techniques provides better responses to highly dynamic environments than the RRT extensions.'
author:
-
-
bibliography:
- 'IEEEabrv.bib'
- '../biblio.bib'
title: 'A Multi-stage Probabilistic Algorithm for Dynamic Path-Planning'
---
artificial intelligence; motion planning; RRT; Multi-stage; local search; greedy heuristics;
Introduction
============
The *dynamic path-planning* problem consists of finding a suitable plan for each new configuration of the environment by recomputing a collision-free path using the new information available at each time step [@Hwang92]. This kind of problem is faced, for example, by a robot trying to navigate through an area crowded with people, such as a shopping mall or supermarket. The problem has been addressed widely in its several flavors, such as cellular decomposition of the configuration space [@Stentz95], partial environmental knowledge [@Stentz94], high-dimensional configuration spaces [@Kavraki96] or planning with non-holonomic constraints [@Lavalle99]. However, even simple variations of this problem are complex enough that they cannot be solved with deterministic techniques, and are therefore worth studying.
This paper is focused on finding and traversing a collision-free path in two-dimensional space, for a holonomic robot [^1], without kinodynamic restrictions [^2], in two different scenarios:
- several unpredictably moving obstacles or adversaries.
- partially known environment, when at some point in time, a new obstacle is found.
Aside from the new obstacle(s) of the second scenario, we assume that we have perfect information about the environment at all times.
We will focus on continuous-space algorithms and will not consider algorithms that use discretized representations of the configuration space, such as D\* [@Stentz95], because for high-dimensional problems the configuration space becomes intractable in terms of both memory and computation time, and there is the extra difficulty of choosing the discretization size, trading off accuracy against computational cost.
The offline RRT is efficient at finding solutions, but these are far from optimal and must be post-processed for shortening, smoothing or other qualities that might be desirable in each particular problem. Furthermore, replanning RRTs are costly in terms of computation time, as are evolutionary and cell-decomposition approaches. The novelty of this work is therefore the combination of the feasibility benefits of RRTs, the repairing capabilities of local search, and the computational inexpensiveness of greedy algorithms into our lightweight multi-stage algorithm.
In the following sections, we present several path planning methods that can be applied to the problem described above. In section \[sec:RRT\] we review the basic offline, single-query RRT, a probabilistic method that builds a tree along the free configuration space until it reaches the goal state. Afterwards, we introduce the most popular replanning variants of the RRT: ERRT in section \[sec:ERRT\], DRRT in section \[sec:DRRT\] and MP-RRT in section \[sec:MPRRT\]. Then, in section \[sec:hillclimbing\] we present our new hybrid multi-stage algorithm with the experimental results and comparisons in section \[sec:results\]. At last, the conclusions and further work are discussed in section \[sec:conclusions\].
Previous and Related Work {#sec:stateofart}
=========================
Rapidly-Exploring Random Tree {#sec:RRT}
-----------------------------
One of the most successful probabilistic sampling methods for offline path planning currently in use is the Rapidly-exploring Random Tree (RRT), a single-query planner for static environments, first introduced in [@Lavalle98]. RRTs work towards finding a continuous path from a state $q_{init}$ to a state $q_{goal}$ in the free configuration space $C_{free}$ by building a tree rooted at $q_{init}$. A new state $q_{rand}$ is uniformly sampled at random from the configuration space $C$. Then the nearest node in the tree, $q_{near}$, is located, and if $q_{rand}$ and the shortest path from $q_{rand}$ to $q_{near}$ are in $C_{free}$, then $q_{rand}$ is added to the tree. The tree growth is stopped when a node is found near $q_{goal}$. To speed up convergence, the search is usually biased towards $q_{goal}$ with a small probability.
In [@Kuffner00], two new features are added to RRTs. First, the EXTEND function is introduced, which, instead of trying to add directly $q_{rand}$ to the tree, makes a motion towards $q_{rand}$ and tests for collisions. Then a greedier approach is introduced, which repeats EXTEND until an obstacle is reached. This ensures that most of the time, we will be adding states to the tree, instead of just rejecting new random states. The second extension is the use of two trees, rooted at $q_{init}$ and $q_{goal}$, which are grown towards each other. This significantly decreases the time needed to find a path.
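To make the EXTEND and CONNECT steps concrete, the following sketch shows a minimal 2D version of this greedy extension, under simplifying assumptions of our own (a point robot, circular obstacles, a fixed step size and brute-force nearest-neighbor search); it is an illustration of the idea in [@Kuffner00], not the implementation used there.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { double x, y; };
struct Disc { Pt c; double r; };     // circular obstacle (illustrative choice)
struct Node { Pt p; int parent; };   // tree node with the index of its parent

static double dist(Pt a, Pt b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Point-robot collision test for the straight segment a-b, sampled densely.
static bool segmentFree(Pt a, Pt b, const std::vector<Disc>& obs) {
    int steps = std::max(1, static_cast<int>(dist(a, b) / 0.01));
    for (int i = 0; i <= steps; ++i) {
        double t = static_cast<double>(i) / steps;
        Pt p{a.x + t * (b.x - a.x), a.y + t * (b.y - a.y)};
        for (const Disc& o : obs)
            if (dist(p, o.c) < o.r) return false;
    }
    return true;
}

static int nearest(const std::vector<Node>& tree, Pt q) {
    int best = 0;
    for (int i = 1; i < static_cast<int>(tree.size()); ++i)
        if (dist(tree[i].p, q) < dist(tree[best].p, q)) best = i;
    return best;
}

enum Status { TRAPPED, ADVANCED, REACHED };

// EXTEND: take one step of length eps from the nearest node towards q.
static Status extend(std::vector<Node>& tree, Pt q,
                     const std::vector<Disc>& obs, double eps) {
    int near = nearest(tree, q);
    double d = dist(tree[near].p, q);
    double t = (d > eps) ? eps / d : 1.0;
    Pt qnew{tree[near].p.x + t * (q.x - tree[near].p.x),
            tree[near].p.y + t * (q.y - tree[near].p.y)};
    if (!segmentFree(tree[near].p, qnew, obs)) return TRAPPED;
    tree.push_back({qnew, near});
    return (t >= 1.0) ? REACHED : ADVANCED;
}

// CONNECT: the greedier variant, repeating EXTEND until an obstacle is hit.
static Status connect(std::vector<Node>& tree, Pt q,
                      const std::vector<Disc>& obs, double eps) {
    Status s;
    do { s = extend(tree, q, obs, eps); } while (s == ADVANCED);
    return s;
}
```

In a bidirectional planner, the same CONNECT call can be issued from the second tree towards the node just added to the first one, merging the trees when it returns REACHED.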
ERRT {#sec:ERRT}
----
The execution extended RRT presented in [@Bruce02] introduces two RRT extensions to build an on-line planner: the *waypoint cache* and the *adaptive cost penalty search*, which improve re-planning efficiency and the quality of the generated paths. The waypoint cache is implemented by keeping a constant-size array of states, and whenever a plan is found, all the states in the plan are placed in the cache with random replacement. Then, when the tree is no longer valid, a new tree must be grown, and there are three possibilities for choosing the new target state: with probability P\[*goal*\] the goal is chosen as the target; with probability P\[*waypoint*\] a random waypoint is chosen; and with the remaining probability a uniformly random state is chosen as before. The values used in [@Bruce02] are P\[*goal*\]$=0.1$ and P\[*waypoint*\]$=0.6$.
In the other extension, the adaptive cost penalty search, the planner dynamically modifies a parameter $\beta$ to help it find shorter paths. A value of $1$ for $\beta$ will always extend from the root node, while a value of $0$ is equivalent to the original algorithm. Unfortunately, the solution presented in [@Bruce02] lacks implementation details and experimental results on this extension.
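As an illustration of the waypoint-cache biasing, the sketch below picks the target state for the next tree extension; the container, the helper names and the map bounds are our own assumptions, while the probabilities $0.1$ and $0.6$ are the values quoted above from [@Bruce02].

```cpp
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

// Waypoint cache: a constant-size pool of states taken from previous plans,
// filled with random replacement once it is full.
struct WaypointCache {
    std::size_t capacity;
    std::vector<Pt> slots;
    explicit WaypointCache(std::size_t n) : capacity(n) {}
    void insert(Pt p) {
        if (slots.size() < capacity) slots.push_back(p);
        else slots[std::rand() % capacity] = p;
    }
};

static double unitRand() { return static_cast<double>(std::rand()) / RAND_MAX; }

// ERRT-style target selection: the goal with probability 0.1, a cached
// waypoint with probability 0.6, and a uniform random state otherwise.
static Pt chooseTarget(Pt goal, const WaypointCache& cache,
                       double width, double height) {
    double r = unitRand();
    if (r < 0.1) return goal;
    if (r < 0.7 && !cache.slots.empty())
        return cache.slots[std::rand() % cache.slots.size()];
    return Pt{unitRand() * width, unitRand() * height};
}
```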
Dynamic RRT {#sec:DRRT}
-----------
The Dynamic Rapidly-exploring Random Tree (DRRT) described in [@Ferguson06] is a probabilistic analog to the widely used D\* family of algorithms. It works by growing a tree from $q_{goal}$ to $q_{init}$. The principal advantage is that the root of the tree does not have to be changed during the lifetime of the planning and execution. Also, in some problem classes the robot has limited-range sensors, so moving (or new) obstacles are typically near the robot and not near the goal. In general, this strategy attempts to trim branches that are smaller and farther away from the root. When new information concerning the configuration space is received, the algorithm removes the newly-invalid branches of the tree and grows the remaining tree, focusing, with a certain probability (empirically tuned to $0.4$ in [@Ferguson06]), on the vicinity of the recently trimmed branches, using a structure similar to the waypoint cache of the ERRT. In experimental results DRRT vastly outperforms ERRT.
MP-RRT {#sec:MPRRT}
------
The Multipartite RRT presented in [@Zucker07] is another RRT variant which supports planning in unknown or dynamic environments. The MP-RRT maintains a forest $F$ of disconnected sub-trees which lie in $C_{free}$ but which are not connected to the root node $q_{root}$ of $T$, the main tree. At the start of a given planning iteration, any nodes of $T$ and $F$ which are no longer valid are deleted, and any disconnected sub-trees which are created as a result are placed into $F$. With given probabilities, the algorithm tries to connect $T$ to a new random state, to the goal state, or to the root of a tree in $F$. In [@Zucker07], a simple greedy smoothing heuristic is used, which tries to shorten paths by skipping intermediate nodes. The MP-RRT is compared to an iterated RRT, ERRT and DRRT, in 2D, 3D and 4D problems, with and without smoothing. For most of the experiments, MP-RRT modestly outperforms the other algorithms, but in the 4D case with smoothing, the performance gap in favor of MP-RRT is much larger. The authors attribute this to MP-RRT being able to construct much more robust plans in the face of dynamic obstacle motion. Another algorithm that utilizes the concept of forests is the Reconfigurable Random Forests (RRF) presented in [@Li02], but without the success of MP-RRT.
A Multi-stage Probabilistic Algorithm {#sec:hillclimbing}
=====================================
In highly dynamic environments, with many (or a few but fast) relatively small moving obstacles, regrowing trees are pruned too quickly, cutting away important parts of the trees before they can be replaced. This dramatically reduces the performance of the algorithms, making them unsuitable for this class of problems. We believe that better performance can be obtained by slightly modifying an RRT solution, applying simple obstacle-avoidance operations to the newly colliding points of the path through informed local search. The path can then be greedily optimized whenever it satisfies the feasibility condition.
Problem Formulation
-------------------
At each time step, the proposed problem can be defined as an optimization problem with a satisfiability constraint. Given a path, our objective is to minimize an evaluation function (e.g. distance, time, or number of path points) subject to the $C_{free}$ constraint. Formally, let the path $\rho=p_1p_2\ldots p_n$ be a sequence of points, where $p_i \in \mathbb{R}^n$ is an $n$-dimensional point ($p_1 = q_{init}, p_n = q_{goal}$), let $O_t\in \mathcal{O}$ be the set of obstacle positions at time $t$, and let $eval:\mathbb{R}^n \times \mathcal{O} \mapsto \mathbb{R}$ be an evaluation function of the path depending on the obstacle positions. Then, our ideal objective is to obtain the optimal path $\rho^*$ that minimizes our $eval$ function subject to a feasibility restriction, in the form
$$\rho^*=\arg\min_{\rho}\left[eval(\rho,O_t)\right] \quad \textrm{ with } \quad feas(\rho,O_t) = C_{free}
\label{eq:problem}$$
where $feas(\cdot,\cdot)$ is a *feasibility* function that equals $C_{free}$ iff the path $\rho$ is collision-free for the obstacles $O_t$. For simplicity, we use very naive $eval(\cdot,\cdot)$ and $feas(\cdot,\cdot)$ functions, but this could easily be extended to more complex evaluation and feasibility functions. The $feas(\rho,O_t)$ function used here assumes that the robot is a point object (dimensionless) in the space; therefore, if all segments $\overrightarrow{p_i p_{i+1}}$ of the path do not collide with any obstacle $o_j \in O_t$, we say that the path is in $C_{free}$. The $eval(\rho,O_t)$ function will be the number of points of $\rho$, assuming that similar paths with fewer points are shorter. This could easily be changed to the Euclidean distance, time, smoothness, clearance or several other optimization criteria.
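A minimal sketch of these two functions is shown below; it assumes, as in the text, a dimensionless robot and a point-count evaluation, and it additionally approximates the obstacles by discs, which is our choice for illustration only.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { double x, y; };
struct Disc { Pt c; double r; };   // obstacle position at time t (disc model)

static double dist(Pt a, Pt b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Distance from point p to the segment a-b, used for the point-robot test.
static double pointSegDist(Pt p, Pt a, Pt b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    double t = (len2 > 0.0) ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2 : 0.0;
    t = std::max(0.0, std::min(1.0, t));
    return dist(p, Pt{a.x + t * dx, a.y + t * dy});
}

// feas(path, O_t): the path is in C_free iff every segment p_i p_{i+1}
// misses every obstacle.
static bool feas(const std::vector<Pt>& path, const std::vector<Disc>& obstacles) {
    for (std::size_t i = 0; i + 1 < path.size(); ++i)
        for (const Disc& o : obstacles)
            if (pointSegDist(o.c, path[i], path[i + 1]) < o.r) return false;
    return true;
}

// eval(path, O_t): the point count of the path; could be replaced by length,
// time, smoothness, clearance, etc.
static std::size_t eval(const std::vector<Pt>& path) { return path.size(); }
```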
A Multi-stage Probabilistic Strategy
------------------------------------
If solving equation \[eq:problem\] is not a simple task in static environments, solving its dynamic version turns out to be even more difficult. In dynamic path planning we cannot wait until the optimal solution is reached, because we must deliver a “good enough” plan within some time quantum. A heuristic approach must therefore be developed to tackle the on-line nature of the problem. The heuristic algorithms presented in sections \[sec:ERRT\], \[sec:DRRT\] and \[sec:MPRRT\] extend a method developed for static environments, which produces a poor response to highly dynamic environments and an unwanted complexity of the algorithms.
We propose a multi-stage combination of three simple heuristic probabilistic techniques to solve each part of the problem: feasibility, initial solution and optimization.
![**A Multi-stage Strategy for Dynamic Path Planning**. This figure describes the life-cycle of the multi-stage algorithm presented here. The RRT, informed local search, and greedy heuristic are combined to produce an inexpensive solution to the dynamic path planning problem.[]{data-label="fig:diag"}](diag){width="50.00000%"}
### Feasibility
The key point in this problem is the hard constraint in equation \[eq:problem\], which must be met before even thinking about optimizing. The problem is that in highly dynamic environments a path rapidly turns from feasible to infeasible, and vice versa, even if the path itself does not change. We propose a simple *informed local search* to obtain paths in $C_{free}$. The idea is to randomly search for a $C_{free}$ path by modifying the nearest colliding segment of the path. As we include in the search some knowledge of the problem, we use the term *informed* to distinguish it from blind local search. The details of the operators used for the modification of the path are described in section \[sec:implementation\].
### Initial Solution
The problem with local search algorithms is that they repair a solution that is assumed to be near the feasibility condition. Trying to produce feasible paths from scratch with local search (or even with evolutionary algorithms [@Xiao97]) is not a good idea due to the randomness of the initial solution. Therefore, we propose feeding the informed local search with a *standard RRT* solution at the start of the planning, as can be seen in figure \[fig:diag\].
### Optimization
Without an optimization criterion, the path could grow infinitely large in time or size. Therefore, the $eval(\cdot,\cdot)$ function must be minimized when a (temporarily) feasible path is obtained. A simple *greedy* technique is used here: we test each point in the solution to check whether it can be removed while maintaining feasibility; if so, we remove it and check the following point, continuing until reaching the last one.
Algorithm Implementation {#sec:implementation}
------------------------
Algorithm \[alg:main\] (main loop): $q_{robot} \leftarrow$ the current robot position; $q_{goal} \leftarrow$ the goal position; at each time step, the environment is updated $(time)$ and the planner/navigator is invoked $(time)$ until $q_{robot}$ reaches $q_{goal}$.
The multi-stage algorithm proposed in this paper works by alternating environment updates and path planning, as can be seen in Algorithm \[alg:main\]. The first stage of the path planning (see Algorithm \[alg:process\]) is to find an initial path using a RRT technique, ignoring any cuts that might happen during environment updates. Thus, the RRT ensures that the path found does not collide with static obstacles, but might collide with dynamic obstacles in the future. When a first path is found, the navigation is done by alternating a simple informed local search and a simple greedy heuristic as is shown in Figure \[fig:diag\].
Algorithm \[alg:process\] (planning and navigation):

- $q_{robot} \leftarrow$ the current robot position; $q_{start} \leftarrow$ the starting position; $q_{goal} \leftarrow$ the goal position
- $T_{init} \leftarrow$ the tree rooted at the robot position; $T_{goal} \leftarrow$ the tree rooted at the goal position; $path \leftarrow$ the path extracted from the merged RRTs
- initialization: $q_{robot} \leftarrow q_{start}$; $T_{init}.init(q_{robot})$; $T_{goal}.init(q_{goal})$; grow and merge $(T_{init},T_{goal})$
- navigation: firstCol $\leftarrow$ collision point closest to the robot; arc$(path, firstCol)$; mut$(path, firstCol)$; postProcess$(path)$
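To make the alternation of the stages explicit, the following schematic C++ fragment sketches one navigation cycle; all helper functions are assumed to be provided by the other stages, and their names are ours, not those of the MoPa framework.

```cpp
#include <vector>

struct Pt { double x, y; };
struct Disc { Pt c; double r; };

// Assumed to be provided by the other stages (names are ours):
int  firstCollision(const std::vector<Pt>& path, const std::vector<Disc>& obs); // -1 if none
void arc(std::vector<Pt>& path, std::size_t firstCol, double vicinity);
void mut(std::vector<Pt>& path, std::size_t firstCol, double vicinity);
void postProcess(std::vector<Pt>& path);

// One navigation cycle: repair the path with the informed local search while
// it collides; once it is feasible, shorten it greedily.
void navigationCycle(std::vector<Pt>& path, const std::vector<Disc>& obstacles,
                     double vicinity) {
    int c = firstCollision(path, obstacles);
    if (c >= 0) {                               // stage 2: informed local search
        arc(path, static_cast<std::size_t>(c), vicinity);
        mut(path, static_cast<std::size_t>(c), vicinity);
    } else {                                    // stage 3: greedy optimization
        postProcess(path);
    }
}
```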
![**The arc operator**. This operator draws an offset value $\Delta$ over a fixed interval called vicinity. Then, one of the two axises is selected to perform the arc and two new consecutive points are added to the path. $n_1$ is placed at a $\pm \Delta$ of the point $b$ and $n_2$ at $\pm \Delta$ of point $c$, both of them over the same selected axis. The axis, sign and value of $\Delta$ are chosen randomly from an uniform distribution.[]{data-label="fig:arc"}](arc){width="50.00000%"}
![**The mutation operator**. This operator draws two offset values $\Delta_x$ and $\Delta_y$ over a vicinity region. Then the same point $b$ is moved in both axises from $b=[b_x,b_y]$ to $b'=[b_x \pm \Delta_x, b_y\pm \Delta_y]$, where the sign and offset values are chosen randomly from an uniform distribution.[]{data-label="fig:mut"}](mut){width="50.00000%"}
The second stage is the informed local search, which is a two-step function composed of the *arc* and *mutate* operators (Algorithms \[alg:arc\] and \[alg:mut\]). The first one tries to build a square arc around an obstacle by inserting two new points between the two points of the path that form a segment colliding with the obstacle, as shown in Figure \[fig:arc\]. The second step in the function is a mutation operator that moves a point close to an obstacle to a random point in its vicinity, as explained graphically in Figure \[fig:mut\]. The mutation operator is inspired by the ones used in the Adaptive Evolutionary Planner/Navigator (EP/N) presented in [@Xiao97], while the arc operator is derived from the arc operator in the Evolutionary Algorithm presented in [@Alfaro05].
Algorithm \[alg:arc\] (arc operator):

- vicinity $\leftarrow$ some vicinity size; randDev $\leftarrow$ random$(-vicinity, vicinity)$
- point1 $\leftarrow$ path\[firstCol\]; point2 $\leftarrow$ path\[firstCol+1\]
- if the X axis is chosen: newPoint1 $\leftarrow$ (point1\[X\]+randDev, point1\[Y\]); newPoint2 $\leftarrow$ (point2\[X\]+randDev, point2\[Y\])
- if the Y axis is chosen: newPoint1 $\leftarrow$ (point1\[X\], point1\[Y\]+randDev); newPoint2 $\leftarrow$ (point2\[X\], point2\[Y\]+randDev)
- add the new points between point1 and point2; drop new point2
Algorithm \[alg:mut\] (mutation operator):

- vicinity $\leftarrow$ some vicinity size
- path\[firstCol\]\[X\] $+=$ random$(-vicinity, vicinity)$; path\[firstCol\]\[Y\] $+=$ random$(-vicinity, vicinity)$
- accept or reject the new point
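The sketch below gives one possible C++ reading of the two operators, following Figures \[fig:arc\] and \[fig:mut\]; the vicinity handling and the omission of an explicit accept/reject test are simplifications of ours, not the exact implementation used in our framework.

```cpp
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

// Uniform random offset in [-vicinity, vicinity].
static double offset(double vicinity) {
    return (static_cast<double>(std::rand()) / RAND_MAX) * 2.0 * vicinity - vicinity;
}

// Arc operator: insert two consecutive points between the endpoints of the
// first colliding segment, both displaced by the same offset along one
// randomly chosen axis (cf. Figure fig:arc).
static void arc(std::vector<Pt>& path, std::size_t firstCol, double vicinity) {
    double d = offset(vicinity);
    Pt a = path[firstCol], b = path[firstCol + 1];
    Pt n1, n2;
    if (std::rand() % 2 == 0) { n1 = {a.x + d, a.y}; n2 = {b.x + d, b.y}; }  // X axis
    else                      { n1 = {a.x, a.y + d}; n2 = {b.x, b.y + d}; }  // Y axis
    path.insert(path.begin() + firstCol + 1, {n1, n2});
}

// Mutation operator: move the point closest to the collision by independent
// random offsets in both axes (cf. Figure fig:mut).
static void mut(std::vector<Pt>& path, std::size_t firstCol, double vicinity) {
    path[firstCol].x += offset(vicinity);
    path[firstCol].y += offset(vicinity);
}
```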
The third and last stage is the greedy optimization heuristic, which can be seen as a post-processing step for path shortening that eliminates intermediate nodes if doing so does not create collisions, as described in Algorithm \[alg:postProcess\].
Algorithm \[alg:postProcess\] (greedy shortening):

- i $\leftarrow$ 0
- while intermediate points remain: if skipping path\[i+1\] does not create a collision, delete path\[i+1\]; otherwise i $\leftarrow$ i+1
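A compact version of this shortening step is sketched below; the collision test `segmentFree` is assumed to be provided elsewhere (e.g. the same segment test used for $feas$).

```cpp
#include <vector>

struct Pt { double x, y; };

// Assumed available: true iff the straight segment a-b is collision-free.
bool segmentFree(Pt a, Pt b);

// Greedy shortening: drop path[i+1] whenever the robot can go directly from
// path[i] to path[i+2] without collisions; otherwise advance to the next point.
static void postProcess(std::vector<Pt>& path) {
    std::size_t i = 0;
    while (i + 2 < path.size()) {
        if (segmentFree(path[i], path[i + 2]))
            path.erase(path.begin() + i + 1);
        else
            ++i;
    }
}
```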
Experiments and Results {#sec:results}
=======================
The multi-stage strategy proposed here has been developed to navigate highly dynamic environments, and our experiments are aimed at that purpose. We have therefore tested our algorithm in two highly dynamic situations, both of them on a map representing an office building or shopping mall (i.e. with some static walls). We have also run the DRRT and MP-RRT algorithms in the same situations in order to compare the performance of our proposal.
Experimental Setup
------------------
![The dynamic environment. The *green* square is our robot, currently at the start position. The *blue* squares are the moving obstacles. The *blue* cross is the goal.[]{data-label="fig:dynamic"}](dynamic){width="50.00000%"}
The first environment for our experiments consists of a map with 30 moving obstacles of the same size as the robot, with random speeds between 10% and 55% of the speed of the robot. This *dynamic environment* is shown in figure \[fig:dynamic\].
![The partially know environment. The *green* square is our robot, currently at the start position. The *black* squares are the suddenly appearing obstacles. The *blue* cross is the goal.[]{data-label="fig:partial"}](partial){width="50.00000%"}
The second environment uses the same map, but with six obstacles, three to four times the size of the robot, appearing at predefined times and positions. This *partially known environment* is shown in figure \[fig:partial\].
The three algorithms were run a hundred times in each environment. The cutoff time was five minutes for the first environment and one minute for the second, after which the robot was considered not to have reached the goal.
Implementation Details
----------------------
The algorithms were implemented in C++ using a framework [^3] developed by the authors.
There are several variations that can be found in the literature when implementing RRTs. For all our RRT variants, the following are the details on where we departed from the basics:
- We always use two trees rooted at $q_{init}$ and $q_{goal}$.
- If a point cannot be added to a tree without collisions, our EXTEND function adds instead the midpoint between the nearest tree node and the collision point nearest to it.
- In each iteration, we try to add the new randomly generated point to both trees, and if successful in both, the trees are merged, as proposed in [@Kuffner00].
- We found that the success rate was somewhat lower if we allowed the robot to advance towards the node nearest to the goal while the trees are disconnected, as proposed in [@Zucker07]. The problem is that the robot can become stuck if it enters a small concave zone of the environment (like a room in a building) while there are moving obstacles inside that zone. Therefore, our robot only moves when the trees are connected.
In MP-RRT, the forest was handled by simply replacing the oldest tree in it whenever the forest had reached the maximum allowed size.
Concerning the parameter selection, the probability for selecting a point in the vicinity of a point in the waypoint cache in DRRT was set to 0.4 as suggested in [@Ferguson06]. The probability for trying to reuse a sub tree in MP-RRT was set to 0.1 as suggested in [@Zucker07]. Also, the forest size was set to 25 and the minimum size of a tree to be saved in the forest was set to 5 nodes.
Dynamic Environment Results
---------------------------
The results in table \[table:firstresults5min\] show that it takes our algorithm around a third of the time taken by DRRT and MP-RRT to get to the goal, with far fewer collision checks. It was expected that nearest-neighbor lookups would be much lower in the multi-stage algorithm than in the other two, because they are only performed in the RRT phase, not during navigation. However, the multi-stage algorithm seems to be slightly less dependable, as it arrived at the goal 98 out of 100 times, while the other two managed to arrive every time.
Algorithm Success % Coll. Checks Nearest Neigh. Time\[s\]
------------- ----------- -------------- ---------------- -----------
Multi-stage 98 24364 1468 7.08
DRRT 100 92569 4536 19.81
MP-RRT 100 97517 4408 21.53
: **Dynamic Environment Results.** Average results over 100 runs, with 5 minutes cutoff[]{data-label="table:firstresults5min"}
Partially Known Environment Results
-----------------------------------
The results in table \[table:secondresults1min\] show that our multi-stage algorithm is much less dependable in this environment, though faster than the other two when it actually reaches the goal. Because our local search is simple and basically just avoids obstacles by stepping to the side or letting the obstacle move out of the way, it is very prone to getting stuck when the changes to the environment are significant and the obstacles do not move.
Algorithm Success % Coll. Checks Nearest Neigh. Time\[s\]
------------- ----------- -------------- ---------------- -----------
Multi-stage 44 4856 673 5.95
DRRT 100 9845 1037 7.25
MP-RRT 98 17029 1156 8.13
: **Partially Known Environment Results.** Average results over 100 runs, with 1 minute cutoff[]{data-label="table:secondresults1min"}
Conclusions {#sec:conclusions}
===========
The new multi-stage algorithm proposed here has a very good performance in very dynamic environments. It behaves particularly well when several small obstacles are moving around seemingly randomly. Its major shortcoming is that it easily gets stuck when significant changes to the environment are made, such as big static obstacles appearing near the robot, a situation usually considered as a partially known environment.
Future Work
-----------
There are several areas of improvement for the work presented in this paper. First of all, the multi-stage algorithm must recognize a situation where it is stuck, and restart an RRT from the current location, before continuing with the navigation phase. The detection could be as simple as recognizing that the robot has not moved out of a certain vicinity for a given period of time, or that the next collision in the planned path has been against the same obstacle during a given period of time, meaning that the local search has been unable to find a path around it. This will yield a much more dependable algorithm in different kinds of environments.
A second area of improvement is to experiment with different on-line planners, such as the EP/N presented in [@Xiao97], a version of the EvP ([@Alfaro05] and [@Alfaro08]) modified to work in continuous configuration space, or a potential field navigator. Also, the local search presented here could benefit from the use of more sophisticated operators.
A third area of research that could be tackled is extending this algorithm to other types of environments, ranging from totally known and very dynamic to static, partially known or unknown environments. An extension to higher-dimensional problems would be one logical way to go, as RRTs are known to work well in higher dimensions.
Finally, as RRTs are suitable for kinodynamic planning, we only need to adapt the on-line stage of the algorithm to obtain a new multi-stage planner for problems with kinodynamic constraints.
[^1]: A holonomic robot is a robot in which the number of controllable degrees of freedom equals the total number of degrees of freedom.
[^2]: Kinodynamic planning is a problem in which velocity and acceleration bounds must be satisfied.
[^3]: MoPa homepage: https://csrg.inf.utfsm.cl/twiki4/bin/view/CSRG/MoPa
---
abstract: 'The origin of the $90$ K nematic transition in the chalcogenide FeSe, which displays no magnetic order down to $T=0$, remains a major puzzle for a unifying theory for the iron-based superconductors. We analyze this problem in light of recent experimental data which reveal very small Fermi pockets in this material. We show that the smallness of the Fermi energy leads to a near-degeneracy between magnetic fluctuations and fluctuations in the charge-current density-wave channel. While the two fluctuation modes cooperate to promote the same preemptive Ising-nematic order, they compete for primary order. We argue that this explains why in FeSe the nematic order emerges when the magnetic correlation length is smaller than in other Fe-based materials, and why no magnetism is observed. We discuss how pressure lifts this near-degeneracy, resulting in a non-monotonic dependence of the nematic transition with pressure, in agreement with experiments.'
author:
- 'Andrey V. Chubukov$^{1}$, Rafael M. Fernandes$^{1}$, and Joerg Schmalian$^{2}$'
title: The origin of nematic order in FeSe
---
Nematic order in Fe-pnictides and Fe-chalcogenides develops at a temperature $T_{s}$ that is larger than the magnetic transition (for reviews, see [@review]). It spontaneously breaks the tetragonal $C_{4}$ lattice symmetry down to orthorhombic $C_{2}$. The origin of this symmetry breaking is currently one of the most intensely debated issues of the Fe-based superconducting materials [@Fernandes14]. In the Fe-pnictides, nematic order occurs reasonably close to the instability towards stripe magnetic order at the Neel temperature $T_{N}$. Because the stripe order breaks $Z_{2}$ tetragonal symmetry ($C_{4}\to C_{2}$) in addition to the $O(3)$ spin-rotational symmetry and because $T_{s}$ and $T_{N}$ show similar doping dependencies, it seems reasonable to associate the nematic order with magnetism [@Fernandes14]. Indeed, several groups have argued [@Xu08; @Fang08; @Si11; @igor_m; @Batista11; @Lorenzana11; @Brydon11; @Fernandes12; @Dagotto13; @Yamase15] that magnetic fluctuations split the mean-field stripe magnetic transition into two separate $O(3)$ and $Z_{2}$ transitions. The discrete $Z_{2}$ symmetry is broken first at $T_{s}>T_{N}$, resulting in an intermediate phase, dubbed Ising-nematic, where long-range magnetic order is absent but the $C_{4}$ lattice symmetry is broken down to $C_{2}$. Such $Z_{2}$ order triggers orbital and structural order as all three break the same $C_{4}$ symmetry.
The magnetic scenario for nematicity in Fe-pnictides is supported by a variety of experimental observations, such as the doping dependencies of $T_{N}$ and $T_{s}$ [@Fernandes12], the scaling between the shear modulus and the spin-lattice relaxation rate [@Fernandes13], and the sign-change of the in-plane resistivity anisotropy between electron-doped and hole-doped Fe-pnictides [@Blomberg13]. This scenario, however, has been challenged for the Fe-chalcogenide FeSe. This material displays a nematic transition at $T_{s}\approx90$ K. The properties of the nematic phase in FeSe resemble those in Fe-pnictides: similar softening of the shear modulus [@Meingast_FeSe], similar orthorhombic distortion and orbital order [@Nakayama_ARPES_14; @Ding_ARPES_15; @ZXShen_ARPES_15], and similar behavior of the resistivity anisotropy upon applied strain [@Coldea15]. Furthermore, neutron scattering experiments show that spin fluctuations are peaked at the same ordering vectors as in the Fe-pnictides [@INS_FeSe_1; @INS_FeSe_2]. Yet, in distinction to Fe-pnictides, no magnetic order has thus far been observed in FeSe in the absence of external pressure [@McQueen09; @Imai09]. Moreover, NMR measurements were interpreted as evidence that the magnetic correlation length $\xi$ remains small at $T_{s}$ [@Buchner_FeSe; @Meingast_FeSe]. Although in the Ising-nematic scenario $\xi$ *does not have to be large* at $T_{s}$, this seems to be the case for all Fe-pnictides.
Given these difficulties with the Ising-nematic scenario, spontaneous orbital order has been invoked to explain the nematic state in FeSe [@Buchner_FeSe; @Meingast_FeSe]. However, at present, no microscopic theory exists where orbital order appears spontaneously instead of being induced by magnetism [@w_ku10; @devereaux10; @Phillips10; @Phillips12; @Kontani12]. Alternative scenarios for magnetically-driven nematicity in FeSe have also been proposed, involving the formation of a quantum paramagnet [@Kivelson_Lee_15], the onset of spin quadrupolar order [@Si15], and strong frustration of the magnetic fluctuations [@Mazin15]. Yet, the issue of why FeSe does not fit into a “universal” theory for the iron-based superconductors still persists.
In this communication, we present an extension of the spin-nematic scenario which explicitly builds on a unique property of the electronic structure of FeSe, namely, the fact that the Fermi energy $E_{F}$ in this material is small – only a few meV, as seen by ARPES and dH-vA experiments [@Coldea15; @FeSe_dHvA]. For a system with a small $E_{F}$, earlier renormalization-group (RG) calculations have shown that there are two density-wave channels whose fluctuations are strong at momenta $(0,\pi)/(\pi,0)$: a spin density-wave (SDW) channel and a charge-current density-wave (CDW) channel (a CDW with imaginary order parameter, which we denote as iCDW [@zlatko]). The relative strength between the two depends on the sign of the inter-pocket exchange interaction ($u_{2}$ in our notations below). For repulsive $u_{2}$, the coupling in the SDW channel is larger, while for attractive $u_{2}$ the coupling in the iCDW channel is larger. In both cases, however, the RG calculations show that the coupling in the subleading channel approaches the one in the leading channel at small energies. The RG process stops at $E_{F}$, implying that if $E_{F}$ is larger than the highest instability temperature ($T_{s}$ for FeSe) the subleading channel is not a strong competitor and for all practical purposes can be neglected. However, if $E_{F} \sim T_{s}$, as in FeSe, the couplings in the two channels become degenerate within the RG. The degeneracy implies that the order parameter manifold increases from $O(3)\times Z_{2}$, for the three-component SDW, or from $Z_{2}\times Z_{2}$, for the one-component iCDW, to a larger $O(4)\times Z_{2}$. In all cases, the $Z_{2}$ part of the manifold corresponds to selecting either $(0,\pi)$ or $(\pi,0)$ for the density-wave ordering vector. While in both $O(3)\times Z_{2}$ and $O(4)\times Z_{2}$ models the $Z_{2}$ symmetry can be broken before the continuous one, in the latter this happens at a significantly smaller correlation length. As a result, at small $E_{F}$, the nematic order emerges while magnetic fluctuations are still weak. Furthermore, the SDW transition temperature $T_{N}$ in the $O(4)$ model is additionally suppressed due to the competition with iCDW. We argue that these features explain the properties of the nematic state in FeSe, including non-monotonic pressure dependence of $T_{s}$ [@FeSe_pressure1; @FeSe_pressure2]. *The model.* We consider a quasi-2D itinerant band model with two hole pockets at the $\Gamma$ point and two electron pockets at $(0,\pi)$ and $(\pi,0)$ in the 1-Fe Brillouin zone [@Eremin10; @Fernandes12]. This model can be obtained from an underlying 5-orbital model with Hubbard and Hund interactions and hopping between the Fe $3d$ orbitals [@GraserSDDeg; @comm].
![Schematic representation of the SDW and iCDW ordered states with ordering vector $\left(\pi,0\right)$. In the latter, the arrows represent charge currents along the bonds, while in the former, they represent spins on the sites. While fluctuations of both channels support nematicity, they compete for long-range magnetic and charge order. \[fig\_SDW\_iCDW\]](ordered_states){width="0.8\columnwidth"}
The quadratic part of the Hamiltonian in the band basis describes the dispersion of the low-energy fermions, and the information about the orbital content along the Fermi pockets is passed onto inter-pocket and intra-pocket interactions, which are the Hubbard and Hund terms dressed by the matrix elements arising from the change from the orbital to the band basis [@Valenzuela14]. The angular dependence of the matrix elements leads to angle-dependent interactions. The three interactions relevant for Ising-nematic order are the inter-pocket density-density interaction $u_{1}$, the exchange interaction $u_{2}$, and the pair-hopping interaction $u_{3}$ [@Chubukov_RG]. To simplify the analysis, we follow earlier works [@Fernandes12] and analyze the Ising-nematic order within an RG procedure that (i) approximates these three interactions as angle-independent and (ii) restricts the analysis to one hole pocket. The extension to two pockets and angle-dependent interactions makes the calculations more involved but does not modify the RG equations in any substantial way and, moreover, leaves them intact if we treat the hole pockets as circular and neglect the $d_{xy}$ orbital component on the electron pockets.
We label the fermions near the hole pocket as $c_{\mathbf{k}}$ and the fermions near the electron pockets as $f_{1,\mathbf{k}}$ and $f_{2,\mathbf{k}}$. The $O(3)$ magnetic order parameter is given by $${\bf M}_{j}=\frac{1}{N}\sum_{\mathbf{k}\alpha\beta}\left(c_{\mathbf{k},\alpha}^{\dagger}{\bf \boldsymbol{\sigma}}_{\alpha\beta}f_{j,\mathbf{k}+\mathbf{Q}_{j},\beta}+h.c\right),$$ whereas the $Z_{2}$ iCDW order parameter is $$\Phi_{j}=\frac{i}{N}\sum_{\mathbf{k}\alpha}\left(c_{\mathbf{k},\alpha}^{\dagger}f_{j,\mathbf{k}+\mathbf{Q}_{j},\alpha}-h.c.\right),$$ with $j=1,2$ corresponding to the two possible ordering vectors $\mathbf{Q}_{1}=(\pi,0)$ and $\mathbf{Q}_{2}=(0,\pi)$. We show these two ordered states in Fig. \[fig\_SDW\_iCDW\].
*$O(4)$ Ising-nematic action.* In the Ising-nematic scenario, the $C_{4}\to C_{2}$ symmetry breaking implies the appearance of a composite order, quadratic in the density-wave order parameters ${\bf M}_{j}$ and $\Phi_{j}$. To analyze this scenario, we need to know the flow of the couplings that drive SDW order, $\Gamma_{\mathrm{sdw}}=u_{1}+u_{3}$, and iCDW order, $\Gamma_{\mathrm{icdw}}=u_{1}+u_{3}-2u_{2}$ (Ref. [@Chubukov_RG]). The bare coupling $\Gamma_{\mathrm{sdw}}>\Gamma_{\mathrm{icdw}}$ when $u_{2}>0$ and $\Gamma_{\mathrm{icdw}}>\Gamma_{\mathrm{sdw}}$ when $u_{2}<0$. As one integrates out the high-energy degrees of freedom via an RG procedure, the ratio $u_{2}/(u_{1}+u_{3})$ decreases as the system flows to lower energies (or temperatures) and approaches zero at the energy/temperature scale in which the system develops SDW/iCDW order. This holds, however, only if this scale is larger than $E_{F}$. If $E_{F}$ is larger, the RG flow stops at $E_{F}$ and the system develops an instability only in the channel with the largest bare coupling.
To illustrate our point, we plot in Fig. \[fig\_RG\_flow\](a)-(b) the RG flow of $\Gamma_{\mathrm{icdw}}$ and $\Gamma_{\mathrm{sdw}}$ for a particular set of bare couplings $u_{1}\left(0\right)=u_{2}\left(0\right)=10u_{3}\left(0\right)$, chosen deliberately to give a negative bare $\Gamma_{\mathrm{icdw}}$. Under the RG flow, $\Gamma_{\mathrm{icdw}}$ becomes positive and approaches $\Gamma_{\mathrm{sdw}}$ at the scale where the couplings diverge and the system develops a density-wave order. The Fermi energy $E_{F}$ sets the scale at which the RG flow stops. In case I (large $E_{F}$), the RG stops when $\Gamma_{\mathrm{icdw}}$ is still small. In case II (smaller $E_{F}$), the RG stops when $\Gamma_{\mathrm{icdw}}$ is comparable to $\Gamma_{\mathrm{sdw}}$, and in case III (even smaller $E_{F}$), the RG flow reaches the $O(4)$ fixed point already at energies larger than $E_{F}$. We associate case I in Fig. \[fig\_RG\_flow\] with Fe-pnictides, and cases II/III with FeSe based on the values of $E_{F}$ obtained by ARPES and quantum oscillations [@FeSe_dHvA; @Coldea15].
![(a) RG flow of the SDW and iCDW interactions $\Gamma_{\mathrm{sdw}}$ (red curve) and $\Gamma_{\mathrm{icdw}}$ (blue curve) as function of decreasing energy $E$. $W$ is the bandwidth, $u_{0}=u_{1}\left(0\right)=u_{2}\left(0\right)=10u_{3}\left(0\right)$ is the bare interaction parameter, and the dashed line is the energy in which the two degenerate instabilities occur. The RG flow stops at the Fermi energy $E_{F}$: if $E_{F}$ is large (case I, Fe-pnictides), only SDW fluctuations are relevant, whereas if $E_{F}$ is small (cases II/III, FeSe), both SDW and iCDW fluctuations are important. The insets show schematically the Fermi pockets in each case. (b) Ratio $\Gamma_{\mathrm{icdw}}/\Gamma_{\mathrm{sdw}}$ along the RG flow. (c) Electronic manifestation of the Ising-nematic order on the hole pockets. There is a $\cos2\theta$ distortion, with opposite signs for the two pockets, and an overall shift of the chemical potential. \[fig\_RG\_flow\]](RG_flow_withFS){width="0.8\columnwidth"}
We next take the RG results as input and analyze the emergence of a nematic order which spontaneously breaks the symmetry between momenta ${\bf Q}_{1}$ and ${\bf Q}_{2}$ without breaking any other symmetry. The analysis follows the same steps as for pure SDW order [@Fernandes12]: we introduce ${\bf M}_{j}$ and $\Phi_{j}$ ($j=1,2$) as Hubbard-Stratonovich fields which decouple the four-fermion interaction terms, integrate over the fermions, and obtain the effective action in terms of ${\bf M}_{j}$ and $\Phi_{j}$:
$$\begin{aligned}
S_{\mathrm{eff}} & = & \int_{qj}\left(\chi_{s,q}^{-1}\mathbf{M}_{j}^{2}+\chi_{c,q}^{-1}\Phi_{j}^{2}\right)+\frac{u}{2}\int_{xj}\left(\mathbf{M}_{j}^{2}+\Phi_{j}^{2}\right)^{2}\nonumber \\
& - & \frac{g}{2}\int_{x}\left[\left(\mathbf{M}_{1}^{2}+\Phi_{1}^{2}\right)-\left(\mathbf{M}_{2}^{2}+\Phi_{2}^{2}\right)\right]^{2}\label{S_eff}\end{aligned}$$
where $\chi_{s,q}^{-1}=\Gamma_{\mathrm{sdw}}^{-1}-\Pi_{q}$ and $\chi_{c,q}^{-1}=\Gamma_{\mathrm{icdw}}^{-1}-\Pi_{q}$ with $\Pi_{q}=\int_{k}G_{c,k+q}\left(G_{f_{1},k}+G_{f_{2},k}\right)$. Note that the only asymmetry between the two order parameters is due to the interactions $\Gamma_{{\rm sdw}}$ and $\Gamma_{{\rm icdw}}$, respectively. Near ${\bf Q}_{j}$, we can expand $\chi_{s(c),q}^{-1}\approx r_{0,s(c)}+\alpha({\bf q}-{\bf Q}_{j})^{2}$, where $r_{0,s(c)}$ measures the distance to the SDW (iCDW) mean-field instability and $\alpha\sim\mathcal{O}(1)$. The input from the RG analysis is that $r_{0,s}$ and $r_{0,c}$ are close to each other. The quartic coefficients are given by $(u,g)=\pm\frac{1}{2}\int_{k}G_{c,k}^{2}\left(G_{f_{1},k}\pm G_{f_{2},k}\right)^{2}$. At $\Gamma_{\mathrm{sdw}}=\Gamma_{\mathrm{icdw}}$, the action depends on ${\bf M}$ and $\Phi$ only via the combination ${\bf M}^{2}+\Phi^{2}$, and the order parameter manifold is $O(4)\times Z_{2}$. Evaluating the integrals at $E_F \sim T_s$, we find $u>0$ and $g>0$, which implies that long-range order selects either $j=1$ or $j=2$, but not both, i.e. it breaks both $O(4)$ and $Z_{2}$ symmetries.
Within a mean-field approximation, $O(4)$ and $Z_{2}$ are broken at the same temperature. Beyond mean-field, the $Z_{2}$ symmetry is broken first, and *both* ${\bf M}$ and $\Phi$ contribute to it, even if $\Gamma_{\mathrm{sdw}}\neq\Gamma_{\mathrm{icdw}}$. To see this, we treat ${\bf M}$ and $\Phi$ as fluctuating fields, introduce the composite fields $\psi=u\left(\mathbf{M}_{x}^{2}+\Phi_{x}^{2}+\mathbf{M}_{y}^{2}+\Phi_{y}^{2}\right)$ and $\varphi=g\left(\mathbf{M}_{x}^{2}+\Phi_{x}^{2}-\mathbf{M}_{y}^{2}-\Phi_{y}^{2}\right)$ to decouple the quartic terms, integrate over the primary fields ${\bf M}$ and $\Phi$ and obtain the action in terms of $\psi$ and $\varphi$: $$\begin{aligned}
S_{\mathrm{eff}}\left[\varphi,\psi\right] & = & \frac{\varphi^{2}}{2g}-\frac{\psi^{2}}{2u}+\frac{3}{2}\int_{q}\ln\left[\left(\chi_{s}^{-1}+\psi\right)^{2}-\varphi^{2}\right]\nonumber \\
& + & \frac{1}{2}\int_{q}\ln\left[\left(\chi_{c}^{-1}+\psi\right)^{2}-\varphi^{2}\right]\label{e_1}\end{aligned}$$ The field $\psi$ has a non-zero expectation value $\left\langle \psi\right\rangle \neq0$ at any temperature as it does not break any symmetry, but only renormalizes the correlation lengths of the primary fields ${\bf M}$ and $\Phi$ to $\xi_{s(c)}^{-2}=r_{0,s(c)}+\left\langle \psi\right\rangle $. A non-zero $\left\langle \varphi\right\rangle $, on the other hand, breaks the tetragonal $C_{4}$ symmetry. If this happens before the susceptibilities of the primary fields soften at ${\bf Q}_{j}$, then the $Z_{2}$ rotational symmetry breaks prior to other symmetry breakings. We emphasize that the nematic order parameter $\varphi$ involves the combination ${\bf M}^{2}+\Phi^{2}$, hence one cannot separate SDW-induced and iCDW-induced nematic order, even when $\chi_{s}$ and $\chi_{c}$ are not equivalent.
We solve the action in (\[e\_1\]) within the saddle-point approximation, similarly to what was done in Ref. [@Fernandes12]. We find that, for $\xi_{s}\approx\xi_{c}\approx\xi$, a non-zero nematic order parameter $\left\langle \varphi\right\rangle \neq0$ emerges when the correlation length reaches $\xi^{2}=\pi/g$, or, to logarithmic accuracy in $g\ll1$, at $T_{s}=2\pi\rho_{s}/|\log g|$, where $\rho_{s}$ is the stiffness of the $O(4)$ non-linear $\sigma$ model associated with Eq. (\[S\_eff\]). It is instructive to compare this result with the case where only $O(3)$ SDW fluctuations are present. In that case, the nematic order emerges when $3\xi_{O(3)}^{2}=4\pi/g_{O(3)}$, and the transition temperature is $T_{s}=2\pi\rho_{s}/|\log\sqrt{g_{O(3)}}|$, where $g_{O(3)}$ is the coupling in the SDW $O(3)$ model. As a result, to obtain the same $T_{s}$, one needs a much smaller coupling constant $g_{O(3)}\sim g_{O(4)}^{2}$. Consequently, at $T=T_{s}$, the correlation length in the $O(4)$ case scales as $\xi_{O(4)}\sim\sqrt{\xi_{O(3)}}$, i.e., it is much smaller than it would be if nematicity were driven solely by SDW fluctuations. This is consistent with NMR [@Buchner_FeSe; @Meingast_FeSe] and neutron scattering data [@INS_FeSe_2] in the paramagnetic phase of FeSe, which point to the presence of SDW fluctuations, albeit weaker than in the Fe-pnictide compounds. The rapid increase of the correlation lengths below $T_{s}$, obeying $\xi_{s,c}^{-2}=\xi_{s,c}^{-2}\left(T_{s}\right)-\left\langle \varphi\right\rangle $, is also consistent with the increase of $1/T_{1}T$ and the inelastic neutron signal [@Buchner_FeSe; @Meingast_FeSe; @INS_FeSe_2]. The $O(4)$ Ising-nematic scenario also addresses why no magnetic order appears down to the lowest temperatures. The SDW and iCDW orders compete via the bi-quadratic term $\left(u-g\right)\mathbf{M}_{j}^{2}\Phi_{j}^{2}$ in the low-energy action of Eq. (\[S\_eff\]). As a result, for $u_{2}>0$, fluctuations of the sub-leading iCDW channel suppress the transition temperature of the leading SDW channel. Such a suppression is the largest when the difference between the coupling constants $\left|\Gamma_{\mathrm{sdw}}-\Gamma_{\mathrm{icdw}}\right|$ is the smallest, which happens when the system flows towards $O(4)$ symmetry within RG, i.e., when $E_{F}$ is small, such as in FeSe.
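For completeness, the relation between the two couplings needed to reproduce the same $T_{s}$ follows directly from equating the two expressions quoted above: $$\frac{2\pi\rho_{s}}{|\log g_{O(4)}|}=\frac{2\pi\rho_{s}}{|\log\sqrt{g_{O(3)}}|}\;\Rightarrow\;|\log g_{O(3)}|=2\,|\log g_{O(4)}|\;\Rightarrow\;g_{O(3)}\sim g_{O(4)}^{2}.$$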
![Density plot of the nematic transition $T_{\mathrm{nem}}$ as a function of the bare iCDW and SDW transitions $T_{\mathrm{iCDW}}$ and $T_{\mathrm{SDW}}$. To mimic the effect of pressure, they start at the same negative value $-\theta$ at zero pressure, for which the nematic transition temperature is $T_{\mathrm{nem,0}}$, and then vary in opposite ways upon increasing pressure: $T_{\mathrm{iCDW}}<-\theta$ and $T_{\mathrm{SDW}}>-\theta$. \[Fe\_pressure\]](Fe_pressure){width="0.8\columnwidth"}
*Experimental signatures.* We now discuss the experimental consequences of the Ising-nematic order. The breaking of the $Z_{2}$ symmetry between the $j=1$ and $j=2$ components of the $O(4)$ field implies the breaking of $C_{4}$ lattice rotational symmetry down to $C_{2}$. This immediately triggers structural order due to the coupling to the lattice. To investigate how $Z_{2}$ order affects the electronic states, we return to the original four-pocket model (with fermions near the two hole pockets described by the operators $c_{1,\mathbf{k}}$ and $c_{2,\mathbf{k}}$) and include the explicit angle-dependence introduced by the matrix elements for the transformation between the orbital and band bases. This transformation has the particularly simple form $c_{1,\mathbf{k}}=d_{xz}\cos{\theta_{\mathbf{k}}}-d_{yz}\sin{\theta_{\mathbf{k}}}$, $c_{2,\mathbf{k}}=d_{xz}\sin{\theta_{\mathbf{k}}}+d_{yz}\cos{\theta_{\mathbf{k}}}$ if one considers circular hole pockets and neglects the $d_{xy}$ orbital component on the electron pockets [@Vafek_Fernandes].
The feedback effect of the Ising-nematic order on the fermions takes place via the self-energy corrections involving the unequal susceptibilities of the primary SDW and iCDW fields at momenta $\mathbf{Q}_{1}$ and $\mathbf{Q}_{2}$. These corrections not only shift the chemical potentials of the $f_{1}$ and $f_{2}$ electron pockets in opposite directions $\left\langle f_{1,\mathbf{k}}^{\dagger}f_{1,\mathbf{k}}\right\rangle -\left\langle f_{2,\mathbf{k}}^{\dagger}f_{2,\mathbf{k}}\right\rangle \propto\left\langle \varphi\right\rangle $, but also give rise to a $d$-wave like distortion of the $c_{1}$ and $c_{2}$ hole pockets: $\left\langle c_{1,\mathbf{k}}^{\dagger}c_{1,\mathbf{k}}\right\rangle -\left\langle c_{2,\mathbf{k}}^{\dagger}c_{2,\mathbf{k}}\right\rangle \propto\left\langle \varphi\right\rangle \cos2\theta_{\mathbf{k}}$ (see Fig. \[fig\_RG\_flow\](c)). In the orbital basis, the latter corresponds to ferro-orbital order $\left\langle d_{xz}^{\dagger}d_{xz}\right\rangle -\left\langle d_{yz}^{\dagger}d_{yz}\right\rangle \propto\left\langle \varphi\right\rangle $ [@Vafek_Fernandes]. Note that besides the changes in the dispersions proportional to $\left\langle \varphi\right\rangle $, there is an overall shift of the chemical potential, symmetric for the two electron and the two hole pockets. The behavior of the hole pockets in the Ising-nematic scenario is consistent with the existing ARPES data that show a $d$-wave type elongation of one of the hole pockets, whereas the other hole pocket sinks below the Fermi level [@Coldea15]. The behavior of the electron pockets in the 2-Fe Brillouin zone is also consistent with the splitting of the chemical potentials of the $f_{1}$ and $f_{2}$ fermions. We also investigate how pressure affects the nematic transition temperature $T_{s}$. Within our approach, $T_{s}$ is defined by the condition $3\xi_{s}^{2}+\xi_{c}^{2}=4\pi/g$. Under pressure, the Fermi pockets become larger and the Fermi energy increases. As a result, the iCDW channel becomes less competitive and $\xi_{c}$ decreases, while $\xi_{s}$ increases. The combination of these two opposite tendencies in general gives rise to a non-monotonic behavior of $T_{s}$. This is illustrated in Fig. \[Fe\_pressure\] using a simple modeling in which $\xi_{j}^{-2}\approx T-T_{j}$, with $T_{j}$ denoting the bare transition temperatures for SDW and iCDW (see caption).
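As a rough numerical illustration of this construction, the saddle-point condition can be solved directly. The sketch below (in Python, with an assumed value of the coupling $g$ and arbitrary temperature units, i.e., not fitted to FeSe) evaluates $T_{\mathrm{nem}}$ over a grid of bare transition temperatures, in the spirit of the density plot in Fig. \[Fe\_pressure\]:

```python
import numpy as np
from scipy.optimize import brentq

def nematic_tc(T_sdw, T_cdw, g=0.5):
    """Solve 3*xi_s^2 + xi_c^2 = 4*pi/g with xi_j^(-2) = T - T_j (valid for T > max(T_j))."""
    condition = lambda T: 3.0 / (T - T_sdw) + 1.0 / (T - T_cdw) - 4.0 * np.pi / g
    lo = max(T_sdw, T_cdw) + 1e-9     # left-hand side diverges here ...
    hi = lo + 1.0e3                   # ... and tends to zero at large T, so a root is bracketed
    return brentq(condition, lo, hi)

# scan the (T_SDW, T_iCDW) plane; a pressure path starts at (-theta, -theta) and moves
# towards larger T_SDW and smaller T_iCDW, as described in the caption of Fig. [Fe_pressure]
theta = 1.0
for T_sdw in np.linspace(-theta, -theta + 1.0, 5):
    row = [nematic_tc(T_sdw, T_cdw) for T_cdw in np.linspace(-theta - 1.0, -theta, 5)]
    print(" ".join(f"{T:7.4f}" for T in row))
```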
Note that in our analysis so far we considered $u_{2}\left(0\right)>0$. If, on the other hand, this interaction is attractive, $u_{2}(0)<0$, the iCDW phase is the leading instability, and the ground state manifold is $Z_{2}\times Z_{2}$. In this case, the nematic and iCDW transitions are expected to be simultaneous [@Fernandes12]. Although at present no microscopic mechanism is known to give $u_{2}\left(0\right)<0$ [@zlatko], this could be another possibility to explain the existence of nematic order without magnetic order in FeSe. Such an iCDW phase could be detected via its time-reversal symmetry breaking, which would be manifested in, e.g., $\mu$SR measurements. The phase diagram under pressure could then be explained by assuming that $u_{2}$ changes sign with pressure, so that SDW becomes the leading instability.
*Summary.* We propose a natural extension of the Ising-nematic scenario to explain the puzzling nematic state observed in FeSe. Our scenario relies on the smallness of $E_{F}$ and explains the onset of nematic order far from magnetism due to the near degeneracy between the SDW channel and an iCDW (charge-current density-wave) channel. This near-degeneracy could result in the nucleation of local iCDW order in the presence of point-like impurities, which favor iCDW against SDW order [@Schmalian15]. While these fluctuations cooperate with magnetic ones to break the tetragonal symmetry, they compete for long-range order and reduce both $T_{N}$ and the magnetic correlation length at the onset of nematic order. We argue that this Ising-nematic scenario can also explain the observed non-monotonic dependence of the nematic transition temperature $T_{s}$ upon pressure. We thank A. Boehmer, I. Fisher, P. Hirschfeld, U. Karahasanovic, J. Kang, S. Kivelson, I. Mazin, C. Meingast, R. Valenti for useful discussions. This work was supported by the Office of Basic Energy Sciences U. S. Department of Energy under awards DE-FG02-ER46900 (AVC) and DE-SC0012336 (RMF) and the Deutsche Forschungsgemeinschaft through DFG-SPP 1458 *Hochtemperatursupraleitung in Eisenpniktiden* (JS).
[10]{} D. C. Johnston, Adv. Phys., **59**, 803 (2010); D.N. Basov and A.V. Chubukov, Nature Physics **7**, 241 (2011); J. Paglione and R. L. Greene, Nature Phys. **6**, 645 (2010); P. C. Canfield and S. L. Bud’ko, Annu. Rev. Cond. Mat. Phys. **1**, 27 (2010); H. H. Wen and S. Li, Annu. Rev. Cond. Mat. Phys. **2**, 121 (2011); P. Dai, J. Hu, and E. Dagotto, Nature Phys. **8**, 709 (2012).
R. M. Fernandes, A. V. Chubukov, and J. Schmalian, Nature Phys. **10**, 97 (2014).
C. Xu, M. Mueller, and S. Sachdev, Phys. Rev. B **78**, 020501(R) (2008).
C. Fang, H. Yao, W.-F. Tsai, J.P. Hu, and S. A. Kivelson, Phys. Rev. B **77** 224509 (2008).
E. Abrahams and Q. Si, J. Phys.: Condens. Matter **23**, 223201 (2011).
M. D. Johannes and I. I. Mazin, Nature Phys. **5**, 141 (2009).
Y. Kamiya, N. Kawashima, and C. D. Batista, Phys. Rev. B **84**, 214429 (2011).
M. Capati, M. Grilli, and J. Lorenzana, Phys. Rev. B **84**, 214520 (2011).
P. M. R. Brydon, J. Schmiedt, and C. Timm, Phys. Rev. B **84**, 214510 (2011).
R. M. Fernandes, A. V. Chubukov, J. Knolle, I. Eremin and J. Schmalian, Phys. Rev. B **85**, 024534 (2012).
S. Liang, A. Moreo, and E. Dagotto, Phys. Rev. Lett. **111**, 047004 (2013).
H. Yamase and R. Zeyher, arXiv:1503.07646.
R. M. Fernandes, A. E. Böhmer, C. Meingast, and J. Schmalian, Phys. Rev. Lett. **111**, 137001 (2013).
E. C. Blomberg, M. A. Tanatar, R. M. Fernandes, I. I. Mazin, B. Shen, H.-H. Wen, M. D. Johannes, J. Schmalian, and R. Prozorov, Nature Comm. **4**, 1914 (2013).
A.E. Böhmer, T. Arai, F. Hardy, T. Hattori, T. Iye, T. Wolf, H.v. Löhneysen, K. Ishida, and C. Meingast, Phys. Rev. Lett. **114**, 027001 (2015).
K. Nakayama, Y. Miyata, G.N. Phan, T. Sato, Y. Tanabe, T. Urata, K. Tanigaki, and T. Takahashi Phys. Rev. Lett. **113**, 237001 (2014).
P. Zhang, T. Qian, P. Richard, X. P. Wang, H. Miao, B. Q. Lv, B. B. Fu, T. Wolf, C. Meingast, X. X. Wu, Z. Q. Wang, J. P. Hu, and H. Ding, arXiv:1503.01390.
Y. Zhang, M. Yi, Z.-K. Liu, W. Li, J. J. Lee, R. G. Moore, M. Hashimoto, N. Masamichi, H. Eisaki, S. -K. Mo, Z. Hussain, T. P. Devereaux, Z.-X. Shen, and D. H. Lu, arXiv:1503.01556.
M. D. Watson, T. K. Kim, A. A. Haghighirad, N. R. Davies, A. McCollam, A. Narayanan, S. F. Blake, Y. L. Chen, S. Ghannadzadeh, A. J. Schofield, M. Hoesch, C. Meingast, T. Wolf, and A. I. Coldea, arXiv:1502.02917.
M. C. Rahn, R. A. Ewings, S. J. Sedlmaier, S. J. Clarke, and A. T. Boothroyd, arXiv:1502.03838.
Q. Wang, Y. Shen, B. Pan, Y. Hao, M. Ma, F. Zhou, P. Steffens, K. Schmalzl, T. R. Forrest, M. Abdel-Hafiez, D. A. Chareev, A. N. Vasiliev, P. Bourges, Y. Sidis, H. Cao, and J. Zhao, arXiv:1502.07544.
T. M. McQueen *et al*, Phys. Rev. Lett. **103**, 057002 (2009).
T. Imai, K. Ahilan, F.L. Ning, T.M. McQueen, and R.J. Cava, Phys. Rev. Lett. **102**, 177005 (2009).
S.-H. Baek, D. V. Efremov, J. M. Ok, J. S. Kim, J. van den Brink, and B. Büchner, Nat. Mater. **14**, 210 (2014).
C. C. Lee, W. G. Yin, and W. Ku, Phys. Rev. Lett. **103**, 267001 (2009).
C.-C. Chen, J. Maciejko, A. P. Sorini, B. Moritz, R. R. P. Singh, and T. P. Devereaux, Phys. Rev. B **82**, 100504 (2010).
W. Lv, F. Krüger, and P. Phillips, Phys. Rev. B **82**, 045125 (2010).
W.-C. Lee and P. W. Phillips, Phys. Rev. B **86**, 245113 (2012).
S. Onari and H. Kontani, Phys. Rev. Lett. **109**, 137001 (2012).
F. Wang, S. Kivelson, and D.-H. Lee, arXiv:1501.00844.
R. Yu and Q. Si, arXiv:1501.05926.
J. K. Glasbrenner, I. I. Mazin, H. O. Jeschke, P. J. Hirschfeld, and R. Valenti, arXiv:1501.04946.
T. Terashima *et al*, Phys. Rev. B **90**, 144517 (2014).
J. Kang and Z. Tesanovic, Phys. Rev. B **83**, 020505 (2011).
M. Bendele, A. Ichsanow, Y. Pashkevich, L. Keller, T. Strassle, A. Gusev, E. Pomjakushina, K. Conder, R. Khasanov, and H. Keller, Phys. Rev. B **85**, 064517 (2012).
T. Terashima, N. Kikugawa, S. Kasahara, T. Watashige, T. Shibauchi, Y. Matsuda, T. Wolf, A. E. Böhmer, F. Hardy, C. Meingast, H. v. Löhneysen, and S. Uji, arXiv:1502.03548.
I. Eremin and A. V. Chubukov, Phys. Rev. B **81**, 024511 (2010).
S. Graser, T. A. Maier, P. J. Hirschfeld, and D. J. Scalapino, New J. Phys. **11**, 025016 (2009).
The low-energy excitations in the band basis come from three orbitals – $d_{xz}$, $d_{yz}$, and $d_{xy}$. The two hole pockets are composed predominantly of $d_{xz}$ and $d_{yz}$ orbitals, and the two electron pockets of $d_{xy}$ and $d_{xz}$ orbitals ($(0,\pi)$ pocket) or $d_{xy}$ and $d_{yz}$ orbitals ($(\pi,0)$ pocket).
L. Fanfarillo, A. Cortijo, and B. Valenzuela, arXiv:1410.8488.
A. V. Chubukov, D. Efremov, and I. Eremin, Phys. Rev. B **78**, 134512 (2008); A. V. Chubukov, Physica C **469**, 640 (2009); S. Maiti and A. V. Chubukov, Phys. Rev. B **82**, 214515 (2010).
R. M. Fernandes and O. Vafek, Phys. Rev. B **90**, 214514 (2014); V. Cvetkovic and O. Vafek, Phys. Rev. B 88, 134510 (2013).
M. Hoyer, M. S. Scheurer, S. V. Syzranov, and J. Schmalian, Phys. Rev. B **91**, 054501 (2015).
---
abstract: 'Given a nonnegative integer $m$ and a finite collection ${{\mathcal A}}$ of linear forms on ${{\mathbb Q}}^d$, the arrangement of affine hyperplanes in ${{\mathbb Q}}^d$ defined by the equations $\alpha(x) = k$ for $\alpha \in {{\mathcal A}}$ and integers $k \in [-m, m]$ is denoted by ${{\mathcal A}}^m$. It is proved that the coefficients of the characteristic polynomial of ${{\mathcal A}}^m$ are quasi-polynomials in $m$ and that they satisfy a simple combinatorial reciprocity law.'
address: |
Department of Mathematics (Division of Algebra-Geometry)\
University of Athens\
Panepistimioupolis\
15784 Athens, Greece
author:
- 'Christos A. Athanasiadis'
date: 'October 16, 2006'
title: A combinatorial reciprocity theorem for hyperplane arrangements
---
[^1]
Introduction {#intro}
============
Let $V$ be a $d$-dimensional vector space over the field ${{\mathbb Q}}$ of rational numbers and ${{\mathcal A}}$ be a finite collection of linear forms on $V$ which spans the dual vector space $V^*$. We denote by ${{\mathcal A}}^m$ the essential arrangement of affine hyperplanes in $V$ defined by the equations $\alpha(x) = k$ for $\alpha \in {{\mathcal A}}$ and integers $k \in [-m, m]$ (we refer to [@OT; @Sta3] for background on hyperplane arrangements). Thus ${{\mathcal A}}^0$ consists of the linear hyperplanes which are the kernels of the forms in ${{\mathcal A}}$ and ${{\mathcal A}}^m$ is a deformation of ${{\mathcal A}}^0$, in the sense of [@Ath1; @PS].
The characteristic polynomial [@OT Section 2.3] [@Sta3 Section 1.3] of ${{\mathcal A}}^m$, denoted $\chi_{{\mathcal A}}(q, m)$, is a fundamental combinatorial and topological invariant which can be expressed as $$\chi_{{\mathcal A}}(q, m) \, = \, \sum_{i=0}^d \ c_i (m) \, q^i.
\label{eq:qm}$$ We will be concerned with the behavior of $\chi_{{\mathcal A}}(q, m)$ as a function of $m$. Let ${{\mathbb N}}:= \{0, 1,\dots\}$ and recall that a function $f: {{\mathbb N}}{\rightarrow}{{\mathbb R}}$ is called a *quasi-polynomial* with period $N$ if there exist polynomials $f_1, f_2,\dots,f_N: {{\mathbb N}}{\rightarrow}{{\mathbb R}}$ such that $f(m) = f_i (m)$ for all $m \in {{\mathbb N}}$ with $m \equiv
i \ ({\rm mod} \, N)$. The degree of $f$ is the maximum of the degrees of the $f_i$. Our main result is the following theorem.
Under the previous assumptions on ${{\mathcal A}}$, the coefficient $c_i (m)$ of $q^i$ in $\chi_{{\mathcal A}}(q, m)$ is a quasi-polynomial in $m$ of degree at most $d-i$. Moreover, the degree of $c_0 (m)$ is equal to $d$ and $$\chi_{{\mathcal A}}(q, -m) \, = \, (-1)^d \chi_{{\mathcal A}}(-q, m-1).
\label{eq:rec}$$ \[thm0\]
In particular we have $\chi_{{\mathcal A}}(q, -1) = (-1)^d \chi_{{\mathcal A}}(-q)$, where $\chi_{{\mathcal A}}(q)$ is the characteristic polynomial of ${{\mathcal A}}^0$. Let ${{\mathcal A}}_{{\mathbb R}}^m$ denote the arrangement of affine hyperplanes in the real $d$-dimensional vector space $V_{{\mathbb R}}= V \otimes_{{\mathbb Q}}{{\mathbb R}}$ defined by the same equations defining the hyperplanes of ${{\mathcal A}}^m$. Let $r_{{\mathcal A}}(m) = (-1)^d \chi_{{\mathcal A}}(-1, m)$ and $b_{{\mathcal A}}(m) = (-1)^d \chi_{{\mathcal A}}(1,
m)$ so that, for $m \in {{\mathbb N}}$, $r_{{\mathcal A}}(m)$ and $b_{{\mathcal A}}(m)$ count the number of regions and bounded regions, respectively, into which $V_{{\mathbb R}}$ is dissected by the hyperplanes of ${{\mathcal A}}_{{\mathbb R}}^m$ [@Sta3 Section 2.2] [@Za].
Under the previous assumptions on ${{\mathcal A}}$, the function $r_{{\mathcal A}}(m)$ is a quasi-polynomial in $m$ of degree $d$ and, for all positive integers $m$, $(-1)^d r_{{\mathcal A}}(-m)$ is equal to the number $b_{{\mathcal A}}(m-1)$ of bounded regions of ${{\mathcal A}}_{{\mathbb R}}^{m-1}$. \[cor0\]
Theorem \[thm0\] and its corollary belong to a family of results demonstrating some kind of combinatorial reciprocity law; see [@Sta1] for a systematic treatment of such phenomena. Not surprisingly, the proof given in Section \[proof\] is a simple application of the main results of Ehrhart theory [@Sta2 Section 4.6]. More specifically, equation (\[eq:rec\]) will follow from the reciprocity theorem [@Sta2 Theorem 4.6.26] for the Ehrhart quasi-polynomial of a rational polytope. An expression for the coefficient of the leading term $m^d$ of either $c_0 (m)$ or $r_{{\mathcal A}}(m)$ is also derived in that section. Some examples, including the motivating example in which ${{\mathcal A}}_{{\mathbb R}}^0$ is the arrangement of reflecting hyperplanes of a Weyl group, and remarks are discussed in Section \[remarks\]. In the remainder of this section we give some background on characteristic and Ehrhart (quasi-)polynomials needed in Section \[proof\]. We will denote by $\#
S$ or $|S|$ the cardinality of a finite set $S$.
*Arrangements of hyperplanes.* Let $V$ be a $d$-dimensional vector space over a field ${{\mathbb K}}$. An *arrangement of hyperplanes* in $V$ is a finite collection ${{\mathcal H}}$ of affine subspaces of $V$ of codimension one (we will allow this collection to be a multiset). The *intersection poset* of ${{\mathcal H}}$ is the set $L_{{\mathcal H}}=
\{ \cap \, {{\mathcal F}}: {{\mathcal F}}\subseteq {{\mathcal H}}\}$ of all intersections of subcollections of ${{\mathcal H}}$, partially ordered by reverse inclusion. It has a unique minimal element $\hat{0} = V$, corresponding to the subcollection ${{\mathcal F}}= \emptyset$. The *characteristic polynomial* of ${{\mathcal H}}$ is defined by $$\chi_{{\mathcal H}}(q) \, = \sum_{x \in L_{{\mathcal H}}} \mu (x) \, q^{\dim x}
\label{eq:char}$$ where $\mu$ stands for the Möbius function on $L_{{\mathcal H}}$ defined by $$\mu (x) \, = \, \begin{cases}
1, & \text{if \ $x = \hat{0}$} \\
- \sum_{y < x} \mu (y), & \text{otherwise.} \end{cases}$$ Equivalently [@OT Lemma 2.55] we have $$\chi_{{\mathcal H}}(q) \, = \sum_{{{\mathcal G}}\subseteq {{\mathcal H}}} (-1)^{\# {{\mathcal G}}} \,
q^{\dim (\cap \, {{\mathcal G}})}
\label{eq:char2}$$ where the sum is over all ${{\mathcal G}}\subseteq {{\mathcal H}}$ with $\cap \, {{\mathcal G}}\neq
\emptyset$.
In the case ${{\mathbb K}}= {{\mathbb R}}$, the connected components of the space obtained from $V$ by removing the hyperplanes of ${{\mathcal H}}$ are called *regions* of ${{\mathcal H}}$. A region is *bounded* if it is a bounded subset of $V$ with respect to a usual Euclidean metric.
*Ehrhart quasi-polynomials.* A convex polytope $P \subseteq {{\mathbb R}}^n$ is said to be a *rational* or *integral* polytope if all its vertices have rational or integral coordinates, respectively. If $P$ is rational and $P^\circ$ is its relative interior then the functions defined for nonnegative integers $m$ by the formulas $$\begin{tabular}{l}
$i (P, m) \, = \, \# \, (m P \cap {{\mathbb Z}}^n)$ \\
$\bar{i} (P, m) \, = \, \# \, (m P^\circ \cap {{\mathbb Z}}^n)$
\end{tabular}
\label{eq:ehr}$$ are quasi-polynomials in $m$ of degree $d = \dim (P)$, related by the Ehrhart reciprocity theorem [@Sta2 Theorem 4.6.26] $$i (P, -m) \, = \, (-1)^d \ \bar{i} (P, m).
\label{eq:ehrrec}$$ The function $i (P, m)$ is called the *Ehrhart quasi-polynomial* of $P$. The coefficient of the leading term $m^d$ in either $i (P, m)$ or $\bar{i} (P, m)$ is a constant equal to the normalized $d$-dimensional volume of $P$ (meaning the $d$-dimensional volume of $P$ normalized with respect to the affine lattice $V_P \cap {{\mathbb Z}}^n$, where $V_P$ is the affine span of $P$ in ${{\mathbb R}}^n$). If $P$ is an integral polytope then $i (P, m)$ is a polynomial in $m$ of degree $d$, called the *Ehrhart polynomial* of $P$.
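For instance, if $P = [-1/2, 1/2] \subseteq {{\mathbb R}}$ (a rational, non-integral polytope with $d = 1$), then counting lattice points directly gives $$i (P, m) \ = \
\begin{cases} m+1, & \text{if \ $m$ is even} \\
m, & \text{if \ $m$ is odd} \end{cases}
\qquad
\bar{i} (P, m) \ = \
\begin{cases} m-1, & \text{if \ $m$ is even} \\
m, & \text{if \ $m$ is odd} \end{cases}$$ so that both functions are quasi-polynomials with period 2 and $i (P, -m) = (-1) \, \bar{i} (P, m)$, in agreement with (\[eq:ehrrec\]).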
Proof of Theorem \[thm0\] {#proof}
=========================
In this section we prove Theorem \[thm0\] and Corollary \[cor0\] and derive a formula for the coefficient of the leading term $m^d$ of $r_{{\mathcal A}}(m)$. In what follows ${{\mathcal A}}$ is as in the beginning of Section \[intro\]. We use the notation $[a, b] = \{x \in {{\mathbb R}}: a \le x \le b\}$ and $[a, b]_{{\mathbb Z}}= [a, b] \cap {{\mathbb Z}}$ for $a, b \in {{\mathbb Z}}$ with $a \le b$.
*Proof of Theorem \[thm0\] and Corollary \[cor0\].* Using formula (\[eq:char2\]) we get $$\chi_{{{\mathcal A}}^m} (q) \, = \sum_{{{\mathcal G}}\subseteq {{\mathcal A}}^m} (-1)^{\# {{\mathcal G}}} \,
q^{\dim (\cap \, {{\mathcal G}})}
\label{eq:proof1}$$ where the sum is over all ${{\mathcal G}}\subseteq {{\mathcal A}}^m$ with $\cap \, {{\mathcal G}}\neq \emptyset$. Clearly for this to happen ${{\mathcal G}}$ must contain at most one hyperplane of the form $\alpha (x) = k$ for each $\alpha \in {{\mathcal A}}$. In other words we must have ${{\mathcal G}}= {{\mathcal F}}_b$ for some ${{\mathcal F}}\subseteq {{\mathcal A}}$ and map $b: {{\mathcal F}}{\rightarrow}[-m,m]_{{\mathbb Z}}$ sending $\alpha$ to $b_\alpha$, where ${{\mathcal F}}_b$ consists of the hyperplanes $\alpha (x) = b_\alpha$ for $\alpha \in {{\mathcal F}}$. Let us denote by $\dim {{\mathcal F}}$ the dimension of the linear span of ${{\mathcal F}}$ in $V^*$ and observe that $\dim (\cap \, {{\mathcal F}}_b)
= d - \dim {{\mathcal F}}$ whenever $\cap \, {{\mathcal F}}_b$ is nonempty. From the previous observations and (\[eq:proof1\]) we get
$$\begin{aligned}
\chi_{{\mathcal A}}(q, m) & = & \sum_{{{\mathcal F}}\subseteq {{\mathcal A}}} \ \sum_{\substack{b:
{{\mathcal F}}{\rightarrow}[-m,m]_{{\mathbb Z}}\\ \cap \, {{\mathcal F}}_b \neq \emptyset}}
(-1)^{\# {{\mathcal F}}_b} \, q^{\dim (\cap \, {{\mathcal F}}_b)} \\ \\
& = & \sum_{{{\mathcal F}}\subseteq {{\mathcal A}}} (-1)^{\# {{\mathcal F}}} \,
q^{d-\dim {{\mathcal F}}} \ \# \{b: {{\mathcal F}}{\rightarrow}[-m,m]_{{\mathbb Z}}, \, \cap \, {{\mathcal F}}_b \neq
\emptyset \}.\end{aligned}$$
Let us write ${{\mathcal F}}= \{\alpha_1, \alpha_2,\dots,\alpha_n\}$ and $b_i
= b_{\alpha_i}$, so that $b$ can be identified with a column vector in ${{\mathbb Q}}^n$. Then $\cap \, {{\mathcal F}}_b$ is nonempty if and only if the linear system $\alpha_i (x) = b_i$, $1 \le i \le
n$, has a solution in ${{\mathbb Q}}^d$ or, equivalently, if and only if $b$ lies in the image ${{\rm Im}}T_{{\mathcal F}}$ of the linear transformation $T_{{\mathcal F}}:
{{\mathbb Q}}^d {\rightarrow}{{\mathbb Q}}^n$ mapping $x \in {{\mathbb Q}}^d$ to the column vector in ${{\mathbb Q}}^n$ with coordinates $\alpha_1 (x)$, $\alpha_2
(x),\dots,\alpha_n (x)$. It follows that
$$\begin{aligned}
\# \{b: {{\mathcal F}}{\rightarrow}[-m,m]_{{\mathbb Z}}, \, \cap \, {{\mathcal F}}_b \neq \emptyset \} & =
& \# \ {{\rm Im}}T_{{\mathcal F}}\cap ([-m,m]_{{\mathbb Z}})^n \\
& = & \# \ {{\rm Im}}T_{{\mathcal F}}\cap [-m,m]^n \cap {{\mathbb Z}}^n \\
& = & \# \ (m \, ({{\rm Im}}T_{{\mathcal F}}\cap [-1,1]^n) \cap {{\mathbb Z}}^n) \\
& = & \# \ (m P_{{\mathcal F}}\cap {{\mathbb Z}}^n) \\
& = & i (P_{{\mathcal F}}, m)\end{aligned}$$
where $P_{{\mathcal F}}= ({{\rm Im}}T_{{\mathcal F}}\otimes_{{\mathbb Q}}{{\mathbb R}}) \cap [-1,1]^n$, and hence that $$\chi_{{\mathcal A}}(q, m) \, = \sum_{{{\mathcal F}}\subseteq {{\mathcal A}}} (-1)^{\# {{\mathcal F}}} \, q^{d-\dim
{{\mathcal F}}} \ i (P_{{\mathcal F}}, m).
\label{eq:proof2}$$ Equivalently we have $$c_i (m) \ = \sum_{\substack{{{\mathcal F}}\subseteq {{\mathcal A}}\\ \dim {{\mathcal F}}= d-i}}
(-1)^{\# {{\mathcal F}}} \, i (P_{{\mathcal F}}, m)
\label{eq:proof3}$$ for $0 \le i \le d$, where the $c_i (m)$ are as in (\[eq:qm\]). Clearly $P_{{\mathcal F}}$ is a rational convex polytope of dimension $\dim ({{\rm Im}}T_{{\mathcal F}}) = \dim {{\mathcal F}}$ and hence $i (P_{{\mathcal F}}, m)$ is a quasi-polynomial in $m$ of degree $\dim {{\mathcal F}}$. It follows from (\[eq:proof3\]) that $c_i (m)$ is a quasi-polynomial in $m$ of degree at most $d-i$ and that $r_{{\mathcal A}}(m) = \sum_{i=0}^d \, (-1)^{d-i} c_i (m)$ is a quasi-polynomial in $m$ of degree at most $d$. Moreover we have $r_{{\mathcal A}}(m) \ge (2m+2)^d$ for $m \ge 0$ since ${{\mathcal A}}$ contains $d$ linearly independent forms and the corresponding hyperplanes of ${{\mathcal A}}^m_{{\mathbb R}}$ dissect $V_{{\mathbb R}}$ into $(2m+2)^d$ regions. It follows that the degree of $r_{{\mathcal A}}(m)$ is no less than $d$, which implies that the degrees of $r_{{\mathcal A}}(m)$ and $c_0 (m)$ are, in fact, equal to $d$.
It remains to prove the reciprocity relation (\[eq:rec\]). For ${{\mathcal F}}\subseteq {{\mathcal A}}$ with $\# {{\mathcal F}}= n$ let $W_{{\mathcal F}}$ be the real linear subspace ${{\rm Im}}T_{{\mathcal F}}\otimes_{{\mathbb Q}}{{\mathbb R}}$ of ${{\mathbb R}}^n$, so that $P_{{\mathcal F}}=
W_{{\mathcal F}}\cap [-1,1]^n$. We have
$$\begin{aligned}
m P^\circ_{{\mathcal F}}\cap {{\mathbb Z}}^n & = & (W_{{\mathcal F}}\cap [-m,m]^n)^\circ \cap
{{\mathbb Z}}^n \\
& = & W_{{\mathcal F}}\cap [-(m-1), m-1]^n \cap {{\mathbb Z}}^n \\
& = & (m-1) P_{{\mathcal F}}\cap {{\mathbb Z}}^n\end{aligned}$$
and hence $\bar{i} (P_{{\mathcal F}}, m) = i (P_{{\mathcal F}}, m-1)$. The Ehrhart reciprocity theorem (\[eq:ehrrec\]) implies that $$i (P_{{\mathcal F}}, -m) \, = \, (-1)^{\dim {{\mathcal F}}} \, i (P_{{\mathcal F}}, m-1).
\label{eq:proof4}$$ Equation (\[eq:rec\]) follows from (\[eq:proof2\]) and (\[eq:proof4\]).
The following corollary is an immediate consequence of the case $i=0$ of (\[eq:proof3\]), the equation $r_{{\mathcal A}}(m) = \sum_{i=0}^d
\, (-1)^{d-i} c_i (m)$ and the fact that the degree of $c_i (m)$ is less than $d$ for $1 \le i \le d$.
The coefficient of the leading term $m^d$ in $r_{{\mathcal A}}(m)$ is equal to the expression $$\sum_{\substack{{{\mathcal F}}\subseteq {{\mathcal A}}\\ \dim {{\mathcal F}}= d}}
(-1)^{\# {{\mathcal F}}- d} \, {{\rm vol}}_d (P_{{\mathcal F}}),$$ where $P_{{\mathcal F}}$ is as in the proof of Theorem \[thm0\] and ${{\rm vol}}_d
(P_{{\mathcal F}})$ is the normalized $d$-dimensional volume of $P_{{\mathcal F}}$. \[cor1\]
Examples and remarks {#remarks}
====================
In this section we list a few examples, questions and remarks.
[If $V = {{\mathbb Q}}$ and ${{\mathcal A}}$ consists of two forms $\alpha_1, \alpha_2: V {\rightarrow}{{\mathbb Q}}$ with $\alpha_1 (x) = x$ and $\alpha_2 (x) = 2x$ for $x \in V$ then ${{\mathcal A}}^m$ consists of the affine hyperplanes (points) in $V$ defined by the equations $x = k$ and $x = k/2$ for $k \in [-m, m]_{{\mathbb Z}}$. One can check that $$\chi_{{\mathcal A}}(q, m) \ = \
\begin{cases} q-3m-1, & \text{if $m$ is even} \\
q-3m-2, & \text{if $m$ is odd} \end{cases}$$ and that (\[eq:rec\]) holds. Moreover we have $$r_{{\mathcal A}}(m) \ = \
\begin{cases} 3m+2, & \text{if $m$ is even} \\
3m+3, & \text{if $m$ is odd}. \end{cases}$$ Note that ${{\rm vol}}_d (P_{{\mathcal F}})$ takes the values 2, 2 and 1 when ${{\mathcal F}}= \{\alpha_1\},
\{\alpha_2\}$ and $\{\alpha_1, \alpha_2\}$, respectively. ]{} \[ex1\]
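The quasi-polynomial $r_{{\mathcal A}}(m)$ in this example can also be confirmed by a direct enumeration; the following short Python sketch (purely illustrative) counts the distinct points $x = k$ and $x = k/2$ with $k \in [-m, m]_{{\mathbb Z}}$ and the regions they cut out of the real line.

```python
from fractions import Fraction

def regions(m):
    """Regions cut out of the real line by the points x = k and x = k/2, k in [-m, m]."""
    points = {Fraction(k) for k in range(-m, m + 1)} | {Fraction(k, 2) for k in range(-m, m + 1)}
    return len(points) + 1             # n distinct points cut the line into n + 1 regions

for m in range(0, 8):
    expected = 3 * m + 2 if m % 2 == 0 else 3 * m + 3
    assert regions(m) == expected
print("r_A(m) agrees with 3m+2 (m even) and 3m+3 (m odd) for m = 0..7")
```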
[If $V = {{\mathbb Q}}^d$ and ${{\mathcal A}}$ consists of the coordinate functions $\alpha_i (x) = x_i$ for $1 \le i \le d$ then ${{\mathcal A}}^m$ consists of the affine hyperplanes in $V$ given by the equations $x_i = k$ with $1 \le i \le d$, $k \in [-m, m]_{{\mathbb Z}}$ and $\chi_{{\mathcal A}}(q, m) = (q-2m-1)^d$, which is a polynomial in $q$ and $m$ satisfying (\[eq:rec\]). ]{} \[ex2\]
[Let $\Phi$ be a finite, irreducible, crystallographic root system spanning the Euclidean space ${{\mathbb R}}^d$, endowed with the standard inner product $( \ , \, )$ (we refer to [@BB; @Bou; @Hu] for background on root systems). Fix a positive system $\Phi^+$ and let $Q_\Phi$ and $W$ be the coroot lattice and Weyl group, respectively, corresponding to $\Phi$. Let also ${{\mathcal A}}_\Phi^m$ denote the $m$th *generalized Catalan arrangement* associated to $\Phi$ [@Ath1; @Ath2; @PS], consisting of the affine hyperplanes in ${{\mathbb R}}^d$ defined by the equations $(\alpha, x) = k$ for $\alpha \in \Phi^+$ and $k \in [-m, m]_{{\mathbb Z}}$ (so that ${{\mathcal A}}_\Phi^0$ is the real reflection arrangement associated to $\Phi$). If $V$ is the ${{\mathbb Q}}$-span of $Q_\Phi$ then there exists a finite collection ${{\mathcal A}}$ of linear forms on $V$ (one for each root in $\Phi^+$) such that, in the notation of Section \[intro\], ${{\mathcal A}}_{{\mathbb R}}^m$ coincides with ${{\mathcal A}}_\Phi^m$. In [@Ath2 Theorem 1.2] a uniform proof was given of the formula $$\chi_{{\mathcal A}}(q, m) \, = \, \prod_{i=1}^d \, (q-mh-e_i)
\label{eq:mcat}$$ for the characteristic polynomial of ${{\mathcal A}}_\Phi^m$, where $e_1,
e_2,\dots,e_d$ are the exponents and $h$ is the Coxeter number of $\Phi$. Thus the reciprocity law (\[eq:rec\]) in this case is equivalent to the well-known fact [@Bou Section V.6.2] [@Hu Lemma 3.16] that the numbers $h - e_i$ are a permutation of the $e_i$. As was already deduced in [@Ath2 Corollary 1.3], it follows from (\[eq:mcat\]) that $$r_{{\mathcal A}}(m) \, = \, \prod_{i=1}^d \, (mh+e_i+1)$$ and $$b_{{\mathcal A}}(m) \, = \, \prod_{i=1}^d \, (mh+e_i-1)$$ are polynomials in $m$ of degree $d$ (a fact which was the main motivation behind this paper). Setting $N (\Phi, m) =
\frac{1} {|W|} \, r_{{\mathcal A}}(m)$ and $N^+ (\Phi, m) = \frac{1} {|W|}
\, b_{{\mathcal A}}(m)$, as in [@AT; @FR], our Corollary \[cor0\] implies that $$(-1)^d \, N(\Phi, -m) \, = \, N^+ (\Phi, m-1).
\label{eq:N}$$ It was suggested in [@FR Remark 12.5] that this equality, first observed in [@FR (2.12)], may be an instance of Ehrhart reciprocity. This was confirmed in [@AT Section 7] using an approach which is different from the one followed in this paper. Finally we note that Corollary \[cor1\] specializes to the curious identity $$h^d \, = \, \sum_F \, (-1)^{\# F - d} \, {{\rm vol}}_d (P_F)
\label{eq:curious}$$ where in the sum on the right hand-side $F$ runs through all subsets $\{\alpha_1, \alpha_2,\dots,\alpha_n\}$ of $\Phi^+$ spanning ${{\mathbb R}}^d$, $P_F$ is the intersection of the cube $[-1,
1]^n$ with the image of the linear transformation $T_F: {{\mathbb R}}^d {\rightarrow}{{\mathbb R}}^n$ mapping $x \in {{\mathbb R}}^d$ to the column vector in ${{\mathbb R}}^n$ with coordinates $(\alpha_1, x)$, $(\alpha_2, x),\dots,(\alpha_n, x)$ and ${{\rm vol}}_d (P_F)$ is the normalized $d$-dimensional volume of $P_F$. If $\Phi$ has type $A_d$ in the Cartan-Killing classification then (\[eq:curious\]) translates to the equation $$(d+1)^d \, = \, \sum_G \, (-1)^{e(G) - d} \, {{\rm vol}}_d (Q_G)
\label{eq:curiousA}$$ where in the sum on the right hand-side $G$ runs through all connected simple graphs on the vertex set $\{1, 2,\dots,d+1\}$, $e(G)$ is the number of edges of $G$ and $Q_G$ is the $d$-dimensional polytope in ${{\mathbb R}}^d$ defined in the following way. Let $\tau$ be a spanning tree of $G$ with edges labelled in a one-to-one fashion with the variables $x_1, x_2,\dots,x_d$. For any edge $e$ of $G$ which is not an edge of $\tau$ let $R_e$ be the region of ${{\mathbb R}}^d$ defined by the inequalities $-1 \le x_{i_1} +
x_{i_2} + \cdots + x_{i_k} \le 1$, where $x_{i_1},
x_{i_2},\dots,x_{i_k}$ are the labels of the edges (other than $e$) of the fundamental cycle of the graph obtained from $\tau$ by adding the edge $e$. The polytope $Q_G$ is the intersection of the cube $[-1, 1]^d$ and the regions $R_e$. ]{} \[ex3\]
[It is well-known [@Sta3 Corollary 3.5] that the coefficients of the characteristic polynomial of a hyperplane arrangement strictly alternate in sign. As a consequence, in the notation of (\[eq:qm\]), we have $(-1)^{d-i} c_i (m) > 0$ for all $m \in {{\mathbb N}}$ and $0 \le i \le
d$. We do not know of an example of a collection ${{\mathcal A}}$ of forms for which a negative number appears among the coefficients of the quasi-polynomials $(-1)^{d-i} c_i (m)$. ]{} \[rem0\]
[If the matrix defined by the forms in ${{\mathcal A}}$ with respect to some basis of $V$ is integral and totally unimodular, meaning that all its minors are $-1, 0$ or 1, then the polytopes $P_{{\mathcal F}}$ in the proof of Theorem \[thm0\] are integral and, as a consequence, the functions $c_i (m)$ and $r_{{\mathcal A}}(m)$ are polynomials in $m$. This assumption on ${{\mathcal A}}$ is satisfied in the case of graphical arrangements, that is when ${{\mathcal A}}$ consists of the forms $x_i - x_j$ on ${{\mathbb Q}}^r$, where $1 \le i < j \le r$, corresponding to the edges $\{i, j\}$ of a simple graph $G$ on the vertex set $\{1,
2,\dots,r\}$. The degree of the polynomial $r_G (m) := r_{{\mathcal A}}(m)$ is equal to the dimension of the linear span of ${{\mathcal A}}$, in other words to the rank of the cycle matroid of $G$. ]{} \[rem1\]
[Let ${{\mathcal A}}$ and ${{\mathcal H}}$ be finite collections of linear forms on a $d$-dimensional ${{\mathbb Q}}$-vector space $V$ spanning $V^*$. Using the notation of Section \[intro\], let ${{\mathcal H}}_m$ denote the union of ${{\mathcal A}}_{{\mathbb R}}^m$ with the linear arrangement ${{\mathcal H}}_{{\mathbb R}}^0$. It follows from Theorem \[thm0\], the Deletion-Restriction theorem [@OT Theorem 2.56] and induction on the cardinality of ${{\mathcal H}}$ that the function $r ({{\mathcal H}}_m)$ is a quasi-polynomial in $m$ of degree $d$. Given a region $R$ of ${{\mathcal H}}_{{\mathbb R}}^0$, let $r_R (m)$ denote the number of regions of ${{\mathcal H}}_m$ which are contained in $R$, so that $$r ({{\mathcal H}}_m) \, = \, \sum_R \, r_R (m)$$ where $R$ runs through the set of all regions of ${{\mathcal H}}_{{\mathbb R}}^0$. Is the function $r_R (m)$ always a quasi-polynomial in $m$? ]{} \[rem2\]
[99]{} C.A. Athanasiadis, *Deformations of Coxeter hyperplane arrangements and their characteristic polynomials*, in *Arrangements – Tokyo 1998* (M. Falk and H. Terao, eds.), Adv. Stud. Pure Math. [** 27**]{}, Kinokuniya, Tokyo, 2000, pp. 1–26. C.A. Athanasiadis, *Generalized Catalan numbers, Weyl groups and arrangements of hyperplanes*, Bull. London Math. Soc. [** 36**]{} (2004), 294–302. C.A. Athanasiadis and E. Tzanaki, *On the enumeration of positive cells in generalized cluster complexes and Catalan hyperplane arrangements*, J. Algebr. Comb. [** 23**]{} (2006), 355–375; [math.CO/0605685]{}. A. Björner and F. Brenti, Combinatorics of Coxeter groups, Graduate Texts in Mathematics [** 231**]{}, Springer-Verlag, New York, 2005. N. Bourbaki, Lie Groups and Lie Algebras, Chapters 4-6, Springer-Verlag, Berlin, Heidelberg, New York, 2002. S. Fomin and N. Reading, *Generalized cluster complexes and Coxeter combinatorics*, Int. Math. Res. Not. [** 44**]{} (2005), 2709–2757; [math.CO/0505085]{}. J.E. Humphreys, Reflection groups and Coxeter groups, Cambridge Studies in Advanced Mathematics [** 29**]{}, Cambridge University Press, Cambridge, England, 1990. P. Orlik and H. Terao, Arrangements of Hyperplanes, Grundlehren 300, Springer-Verlag, New York, NY, 1992. A. Postnikov and R.P. Stanley, *Deformations of Coxeter hyperplane arrangements*, J. Combin. Theory Series A [** 91**]{} (2000), 544–597; [math.CO/9712213]{}. R.P. Stanley, *Combinatorial reciprocity theorems*, Adv. Math. [** 14**]{} (1974), 194–253. R.P. Stanley, Enumerative Combinatorics, vol. 1, Wadsworth & Brooks/Cole, Pacific Grove, CA, 1986; second printing, Cambridge University Press, Cambridge, 1998. R.P. Stanley, *An Introduction to Hyperplane Arrangements*, in *Geometric Combinatorics*, IAS/Park City Mathematics Series (to appear). T. Zaslavsky, *Facing up to arrangements: face-count formulas for partitions of space by hyperplanes*, Mem. Amer. Math. Soc. vol. 1, no. 154, 1975.
[^1]: 2000 *Mathematics Subject Classification.* Primary 52C35; Secondary 05E99.
---
abstract: 'We present high-resolution observations of two kinds of dynamic behavior in a quiescent prominence using the New Vacuum Solar Telescope, i.e., Kelvin-Helmholtz instabilities (KHIs) and small-scale oscillations. The KHIs were identified as rapidly developing vortex-like structures with counter-clockwise/clockwise rotations in the H$\alpha$ red-wing images at +0.3 [Å]{}, which were produced by the strong shear-flow motions on the surface/interface of prominence plumes. The KHI growth rates are estimated to be $\sim$0.0135$\pm$0.0004 and $\sim$0.0138$\pm$0.0004. Our observational results further suggest that the shear velocities (i.e., supersonic) of the mass flows are fast enough to produce the strong deformation of the boundary and overcome the restraining surface tension force. This flow-driven instability might play a significant role in the process of plasma transfer in solar prominences. The small-scale oscillations perpendicular to the prominence threads are observed in the H$\alpha$ line-center images. The oscillatory periods changed non-monotonically and showed two changing patterns: one first decreased slowly and then turned to increase, while the other grew fast at the beginning and then turned to decrease. Both of these thread oscillations with changing periods were observed to be unstable for an entire cycle, and they were local in nature. All our findings indicate that the small-scale thread oscillations could be magnetohydrodynamic waves in the solar corona.'
author:
- 'Dong Li, Yuandeng Shen, Zongjun Ning, Qingmin Zhang, and Tuanhui Zhou'
title: Two Kinds of Dynamic Behavior in a Quiescent Prominence Observed by the NVST
---
Introduction
============
Solar prominences are one of the most common features in the solar atmosphere. They are suspended in the tenuous hot corona and consist of relatively dense but very cool plasma [@Labrosse10; @Arregui12]. Generally, the plasma in prominences is about 100 times denser and cooler than the surrounding coronal plasma, which raises important issues about their origin, stability, and magnetic structures [see, @Mackay10; @Su12; @Parenti14; @Cheng14; @Hao15]. Solar prominences are thought to be the most enigmatic structures supported by coronal magnetic fields, and they are considered to be the source/driver of large-scale solar eruptions such as coronal mass ejections (CMEs) [@Okamoto07; @Guo10; @shen11a; @shen12; @Schmieder13a; @Zheng17]. In high-resolution observations, prominences are composed of numerous thin threads [@Lin05; @Ning09a; @Yan15] that are of magnetic nature [@Martin08], and mass flows are ubiquitous along these threads. The dynamics of these prominence fine structures might cause plasma instabilities [@Zirker98; @Chae00; @Zhang17a]. Both ground and space observations [@Ning09b; @Shen15; @Zhang17b] showed that the dynamic behaviors of prominences might be related to coronal structures. Therefore, small-scale dynamic behaviors such as Kelvin-Helmholtz instabilities [KHIs, @Berger17; @Li18] and small-scale oscillations [@Okamoto07; @Ning09a] are key to understanding the stability and disruption of solar prominences [@Labrosse10; @Arregui12; @Parenti14].
KHIs are produced by two fluids flowing with different shearing velocities parallel to an interface of discontinuity [@Chandrasekhar61; @Masson18]. They are usually observed at strong shear-flow boundaries which can easily overcome the restraining surface tension force and are often accompanied by a process of plasma transfer [@Johnson14; @Berger17]. Magnetohydrodynamic (MHD) simulations [@Tian16; @Ni17] supported the presence of KHIs in various coronal structures. For example, KHIs have been found in solar twisted flux tubes [@Zhelyazkov12; @Murawski16], along extreme-ultraviolet (EUV) or X-ray jets [@Zaqarashvili15; @Zhao18], in solar spicules [@Ajabshirizadeh15], flares [@Fang16] and prominences [@Antolin15], in coronal loops [@Karpen94; @Howson17], in CMEs [@Gomez16], and in solar winds [@Zaqarashvili14]. In addition, space observations also showed evidence of KHIs in the solar atmosphere, which are characterized by vortex-like features due to supersonic velocities [@Berger10; @Johnson14]. Using the Atmospheric Imaging Assembly (AIA) on board the [*Solar Dynamics Observatory*]{} ([*SDO*]{}), the spatial and temporal evolutions of KHIs were observed on the surface of fast CMEs [@Foullon11; @Foullon13] at a high temperature ($\sim$11 MK), i.e., AIA 131 [Å]{}. They can be found on the interface between an erupting/dimming region and the surrounding corona in AIA EUV passbands [@Ofman11]. The vortex-like structures of KHIs were also detected on the boundary of a filament which was embedded in a CME [@Mostl13] or along the boundary of a twisting solar polar coronal hole jet [@shen11b; @Zhelyazkov18] at lower temperatures, such as AIA 304 [Å]{}. Besides the [*SDO*]{}/AIA observations, [@Feng13] reported a KHI in a coronal streamer which was detected by the [*Solar and Heliospheric Observatory*]{} ([*SOHO*]{}). Using [*Hinode*]{}/Solar Optical Telescope (SOT) observations, KHIs were observed in an active region jet [@Zhelyazkov16] or along the bubble boundaries in a quiescent prominence [@Ryutova10; @Berger17]. All those simulations and observations indicate that the KHI is an MHD instability and could play an important role in the dynamics of the solar atmosphere [e.g., @Foullon11; @Ofman11; @Innes12].
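For reference, the classical onset criterion for the KHI at a plane interface between two incompressible magnetized fluids [@Chandrasekhar61], quoted here in SI units only as a guide to the competition between velocity shear and magnetic tension, reads $$\left[\mathbf{k}\cdot\left(\mathbf{v}_{1}-\mathbf{v}_{2}\right)\right]^{2} \, > \, \frac{\rho_{1}+\rho_{2}}{\mu_{0}\,\rho_{1}\rho_{2}}\left[\left(\mathbf{k}\cdot\mathbf{B}_{1}\right)^{2}+\left(\mathbf{k}\cdot\mathbf{B}_{2}\right)^{2}\right],$$ where $\mathbf{k}$ is the wave vector of the perturbation along the interface and $\mathbf{v}_{i}$, $\rho_{i}$, and $\mathbf{B}_{i}$ are the flow velocities, mass densities, and magnetic fields on the two sides. A sufficiently strong velocity shear, or a perturbation propagating nearly perpendicular to the magnetic field, therefore favors the instability.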
Prominence oscillations are usually classified as large-amplitude ($\geq$20 km s$^{-1}$) and small-amplitude oscillations [$\leq$2$-$3 km s$^{-1}$, @Oliver02]. Large-amplitude oscillations are usually induced by external disturbances such as Moreton and EUV waves, coronal jets, flares, and mini-filament eruptions [e.g., @Eto02; @Chen08; @Asai12; @Liu13; @Luna14; @Xue14; @Zhou17; @Zhang18], and they can affect the entire structure of prominences. Small-amplitude oscillations in prominences are local in nature [@Thompson91; @Ballester06; @Soler07], and often appear as thread perturbations [@Lin07; @Lin09; @Okamoto07; @Okamoto15; @Ning09a]. The observed oscillatory periods range from minutes to hours [@Pouget06; @Ning09b; @Kim14; @Shen14a]. Previous observations of prominence oscillations were usually made in the H$\alpha$ and Ca II H images [e.g., @Ramsey66; @Jing03; @Okamoto07; @Zhang12; @Schmieder13], or in the He I 584.33 [Å]{} line [e.g., @Regnier01; @Pouget06]. Recently, prominence oscillations have also been detected in EUV passbands using [*SDO*]{}/AIA observations, such as 171 [Å]{}, 193 [Å]{}, 211 [Å]{}, and 304 [Å]{} [@Dai12; @Li12; @Bi14; @Shen14b]. These studies could help us to understand their origin and physical properties, which is one of the important issues in prominence seismology.
High-resolution observations from the New Vacuum Solar Telescope [NVST, @Liu14] provide us a unique chance to investigate solar fine structures, such as the small-scale instabilities and oscillations in the prominence threads. In this paper, using the NVST and [*SDO*]{}/AIA [@Lemen12] observations, we investigate the Kelvin-Helmholtz instabilities (KHIs) and the small-scale thread oscillations in an off-limb quiescent prominence, i.e., S40E83. Our observational results provide new clues to diagnose the physical properties of the small-scale dynamic behaviors in solar prominences.
Observations and Measurements
=============================
The NVST is a one-meter ground-based telescope located at Fuxian Solar Observatory, whose main aim is to observe the photospheric and chromospheric fine structures. The telescope is operated by Yunnan Observatories of the Chinese Academy of Sciences. On 2017 September 18, a quiescent prominence located at the southeast limb of the solar disk (S40E83) was observed by the NVST from 03:01:00 to 03:53:00 UT with the H$\alpha$ line-center and its off-bands ($\pm$0.3 [Å]{}). However, the raw images taken by the NVST were randomly and rapidly degraded due to the turbulence of the Earth's atmosphere. Therefore, they must first be reconstructed by using high-resolution imaging algorithms. In this paper, we used the NVST level1 data which were processed by frame selection [or lucky imaging, @Tubbs04]. Briefly, one high-resolution image was reconstructed from about 100 raw short-exposure images [see, @Liu14; @Xu14; @Xiang16]. The reconstructed NVST H$\alpha$ images have a time cadence of $\sim$25 s and a spatial scale of $\sim$0.136$''$/pixel, respectively. In addition, the EUV and H$\alpha$ images observed by the [*SDO*]{}/AIA ($\sim$0.6$''$/pixel) and GONG ($\sim$1.0$''$/pixel) are also used in this paper.
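The frame-selection step can be pictured with the following schematic sketch (this is not the actual NVST reconstruction pipeline; the sharpness metric, alignment method, and array names are illustrative assumptions): the sharpest short-exposure frames are kept, co-aligned, and averaged.

```python
import numpy as np

def lucky_image(frames, keep_fraction=0.1):
    """Schematic lucky imaging: keep the sharpest short-exposure frames, co-align, and average.

    frames : ndarray of shape (n_frames, ny, nx) containing one burst of raw images.
    """
    sharpness = [f.std() / f.mean() for f in frames]       # simple RMS-contrast proxy
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = np.argsort(sharpness)[-n_keep:]                  # indices of the sharpest frames

    reference = frames[best[-1]]
    aligned = []
    for idx in best:
        # integer-pixel alignment from the peak of the circular cross-correlation
        corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frames[idx])))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        aligned.append(np.roll(frames[idx], shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

# usage on a synthetic burst standing in for ~100 raw short-exposure frames
burst = np.random.default_rng(0).normal(1000.0, 50.0, size=(100, 64, 64))
reconstructed = lucky_image(burst, keep_fraction=0.1)
```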
Figure \[image\] shows the location and morphology of the prominence in multi-wavelength images, including NVST H$\alpha$ (b) and its off-bands (a, c), GONG H$\alpha$ (d), AIA 304 [Å]{} (e) and 211 [Å]{} (f). The prominence was ‘quiescent’ and suspended above the solar limb, i.e., S40E83. The main body of this quiescent prominence appears as a typical ‘hedgerow’ structure in the low-temperature channels such as NVST/GONG H$\alpha$ and AIA 304 [Å]{}, which is also regarded as the prominence spine. The spine profile is drawn from the GONG H$\alpha$ image (d), and we also outline it in the NVST H$\alpha$ image at line-center (b), as shown by the yellow contour. Similar to the quiescent prominence reported by [@Ning09a], the threads of the present prominence are perpendicular to its spine axis. During our observing time interval, an intriguing bubble formed in the prominence, which appeared as a dark cavity in the H$\alpha$ images (b, d) but as a bright patch in the AIA 211 [Å]{} image (f) (see the orange arrow in Figure \[image\]). The bubble structure can be identified in both low- (b, d) and high-temperature (f) images, which is similar to previous findings in quiescent prominences [e.g., @Ryutova10; @Shen15]. Based on the NVST high-resolution observations, many more fine structures can be observed in the quiescent prominence, such as small-scale mass flows and thin threads (th1, th2). These mass flows and thin threads are perpendicular to the solar limb (dashed turquoise line), which is different from those observed in [@Okamoto07], where the authors found that the prominence threads were aligned along the spine axis. However, these fine structures are missed by GONG H$\alpha$ and AIA EUV images due to their lower spatial resolutions.
Results
=======
Kelvin-Helmholtz Instabilities
------------------------------
Thanks to the high-resolution observational data taken by the NVST, we are able to investigate the fine instabilities caused by the small-scale mass flows in the quiescent prominence. Figure \[khi1\] shows the NVST images with a small field-of-view (FOV) of around 24 Mm$\times$16 Mm, as outlined by the purple rectangle (KHI) in Figure \[image\] (c). The left panels show the time evolution of the red-wing (+0.3 [Å]{}) H$\alpha$ images at the pronounced KHI time (b) and its nearby times (a, c). The right panels give the H$\alpha$ images at the line-center (d) and blue-wing (f), and also the LOS velocity image (e) between the two extended wings ($\pm$0.3 [Å]{}). Those three images in the right panels are chosen at the pronounced KHI time, i.e., around 03:43 UT. All these NVST images show various plume-structures in the prominence spine, in the H$\alpha$ line-center as well as in the two extended wings. The yellow arrows outline the plume-structures, which consist of a series of thin and short threads and move in different directions. The movie of khi.mp4 further shows that the plume-structures in the red-wing H$\alpha$ images moved at different velocities in the plane-of-sky, which could be considered as small-scale mass flows along the prominence plumes. When the mass flows are fast enough to shear the boundary and overcome the surface tension force, vortex-like structures are formed along their surfaces/interface [@Johnson14]. In our observations, two well-developed vortex-like structures were produced on the interfaces of prominence plumes, as indicated by the turquoise curves in panel (b). The sizes of these two pronounced vortex-like structures are small (say, less than 2 Mm). As indicated by the turquoise arrows and the khi.mp4 movie, both of the two vortex-like structures rotated counter-clockwise. In the khi.mp4 movie, the fronts of these vortex-like features were brighter than the background prominence plasma, appearing as bright vortex-like blobs, as also indicated by the red and blue crosses in the left panels. Both vortex-like blobs evolved rapidly, on a time scale of minutes, as seen in the movie. Such small-scale vortex-like blobs are most likely KHIs caused by the strong shear-flow motions on the interfaces of prominence plumes. However, these well-developed vortex-like blobs are not found in the line-center (d) and blue-wing (f) H$\alpha$ images. The LOS velocity image (e) derived from the two H$\alpha$ extended wings at $\pm$0.3 [Å]{} confirms that the vortex-like blobs are only pronounced in the H$\alpha$ red wing.
The movie of khi.mp4 also shows that these prominence plumes moved quickly along their interfaces. To examine the shear velocities that were strong enough to drive the KHIs, we then plot the time-distance (TD) images in H$\alpha$ +0.3 [Å]{} along the surfaces (purple lines in Figure \[khi1\]) of prominence plumes, as shown in Figure \[khis\]. Panel (a) gives the TD image along the slit A$\rightarrow$B. Various mass flows appear on the surface of the prominence plume, and one pronounced mass flow, outlined by the turquoise arrow, is identified. The mass flow moves from $B$ to $A$ at a speed of $\sim$25 km s$^{-1}$. The appearance time of this mass flow is consistent with the time interval of the vortex-like structure (AB) in Figure \[khi1\], and their positions also coincide. All these observational results suggest that the vortex-like structure was caused by this mass flow. The TD image in the LOS velocity (b) also exhibits the mass flow, which moves from $B$ to $A$ at a speed of $\sim$25 km s$^{-1}$, implying that the mass flow along the prominence plume appears clearly in the H$\alpha$ red wing. Panels (c) and (d) show similar results for the other vortex-like structure: many mass flows appear on the surface of the prominence plume, and one pronounced mass flow, outlined by the turquoise arrow, is identified. It moves from $C$ to $D$ at a speed of $\sim$18 km s$^{-1}$. The appearance time and location of this mass flow are also in agreement with the time interval of the vortex-like structure (CD) in Figure \[khi1\]. The measured mass-flow speeds agree well with previous findings in prominence plumes observed by [*Hinode*]{}/SOT [@Berger10; @Shen15]. All our findings suggest that the two vortex-like structures in the prominence plumes could be regarded as KHIs on the interface of the prominence plumes.
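For reference, a TD image of this kind can be constructed by sampling each frame along the chosen slit and stacking the profiles in time; the flow speed then follows from the slope of a bright ridge in the map. The sketch below illustrates the procedure (in Python; the data cube, slit end points, and plate-scale conversion are placeholders rather than the actual NVST data).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def time_distance_map(cube, p0, p1):
    """Sample each image of cube (n_time, ny, nx) along the slit p0=(x0, y0) -> p1=(x1, y1),
    at roughly one-pixel steps, and stack the profiles in time."""
    n_samples = int(round(np.hypot(p1[0] - p0[0], p1[1] - p0[1]))) + 1
    x = np.linspace(p0[0], p1[0], n_samples)
    y = np.linspace(p0[1], p1[1], n_samples)
    profiles = [map_coordinates(img, [y, x], order=1) for img in cube]
    return np.stack(profiles, axis=1)            # shape (n_samples, n_time): distance vs. time

def ridge_speed(td, km_per_pixel, cadence_s):
    """Estimate the flow speed from the slope of the brightest ridge in the TD map."""
    ridge = np.argmax(td, axis=0).astype(float)  # ridge position (pixels) at each time step
    t = np.arange(td.shape[1]) * cadence_s
    slope = np.polyfit(t, ridge, 1)[0]           # pixels per second
    return slope * km_per_pixel                  # km/s

# placeholder numbers: ~0.136''/pixel and ~725 km per arcsec give ~99 km/pixel; cadence ~25 s
# td = time_distance_map(halpha_cube, (x_A, y_A), (x_B, y_B))
# print(ridge_speed(td, km_per_pixel=99.0, cadence_s=25.0))
```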
For there to be an instability, some quantity must grow, and in the linear phase the growth is exponential. Therefore, the bright-blob positions marked by the blue and red crosses (‘$\times$’) in Figure \[khi1\] were selected to quantify the vortex-like structures, in particular the growth rate ($\gamma$) of the KHI. Figure \[growth\] plots the deformation (d) over time at the two bright vortex-like blobs, i.e., the blue and red crosses. Here the displacement of the bright blobs is taken as a measure of the KHI deformation. Both of these distortions grow in an exponential form. Using Equation \[defo\], the deformation is fitted with an exponential function. The blue and red lines in Figure \[growth\] give the best-fitted curves for the growth of the double KHIs, and the growth rates are estimated to be about 0.0135$\pm$0.0004 and 0.0138$\pm$0.0004, respectively. Our results are of the same order as recent findings about the KHI growth [see, @Li18].
$$d = d_0 e^{\gamma t}.
\label{defo}$$
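In practice, the growth rate is obtained from a least-squares fit of Equation (\[defo\]) to the measured blob displacements. The sketch below illustrates such a fit on synthetic data (the displacement values and time units are placeholders standing in for the measured ones; the returned $\gamma$ is in inverse units of $t$).

```python
import numpy as np
from scipy.optimize import curve_fit

def deformation(t, d0, gamma):
    # Equation (defo): d = d0 * exp(gamma * t)
    return d0 * np.exp(gamma * t)

# synthetic stand-in for the measured blob displacement (arbitrary length units) vs. time
rng = np.random.default_rng(1)
t = np.arange(0.0, 150.0, 25.0)                     # ~25 s cadence
d_obs = 1.0 * np.exp(0.0135 * t) * (1.0 + 0.03 * rng.standard_normal(t.size))

popt, pcov = curve_fit(deformation, t, d_obs, p0=(1.0, 0.01))
print(f"growth rate gamma = {popt[1]:.4f} +/- {np.sqrt(pcov[1, 1]):.4f} (per unit of t)")
```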
Figure \[khi2\] shows the other kind of vortex-like structure on the surface/interface of the prominence plume, which is much larger than those described above. The FOV is the same as that in Figure \[khi1\] but at a different time. Panels (a)-(c) show the time evolution of the H$\alpha$ red-wing images; a large vortex-like structure (turquoise curve) appeared on the surface/interface of the prominence plumes. However, it was much weaker than those shown in Figure \[khi1\], and this vortex-like structure did not show a bright front (see also the movie of khi.mp4). The LOS velocity image (d) at the pronounced time also confirms the vortex-like structure, and the size of this vortex-like structure is larger ($\sim$9 Mm) than those in Figure \[khi1\]. The movie of khi.mp4 and panels (a)-(c) indicate that the vortex-like structure rotated clockwise, as indicated by the turquoise curved arrows. Meanwhile, the TD images at H$\alpha$ +0.3 [Å]{} (e) and LOS velocity (f) along the purple line (E$\rightarrow$F) indicate that many mass flows move on the surface of the prominence plume, and one pronounced mass flow is identified, as outlined by the turquoise arrow. It is estimated that the speed of the mass flow on the interface is $\sim$23 km s$^{-1}$. The evolution time scale is also estimated from the movie to be around 6 minutes, i.e., from $\sim$03:46 UT until the end of the movie. All these observational facts imply the appearance of a KHI on the surface of the quiescent prominence. Notice that we only plot the red-wing +0.3 [Å]{} and its LOS velocity image, because the KHIs in this quiescent prominence were only pronounced in the red-wing H$\alpha$ images.
Small-scale Oscillations
------------------------
Benefiting from the high-resolution H$\alpha$ observations taken by the NVST, the fine thread structures with sub-arcsec widths in the quiescent prominence, such as th1 and th2, can be well observed. Figure \[th1\] shows the evolution of a thin and short prominence thread (th1, case 1) in the H$\alpha$ line-center images. The thread width is estimated to be $\sim$600 km, which is similar to previous findings obtained by [*Hinode*]{}/SOT [@Lin05; @Okamoto07; @Ning09a] or NVST [@Shen15]. The thin thread exhibited a periodic movement, and a complete period is shown in the four H$\alpha$ images in Figure \[th1\]. For example, panels (a) and (c) show that the thin thread was located at $\sim$2.5 Mm, indicating positions close to the peak times, while it sat at $\sim$1.7 Mm in panels (b) and (d), corresponding to positions around the trough times.
To examine the period of the thread oscillations, we plot TD images along the red line perpendicular to the thin thread, as shown in Figure \[os1\]. Panel (a) shows the TD image in the H$\alpha$ line-center, which exhibits a pronounced oscillatory behavior. The brightest pixels in the prominence thread are selected at each given time, as marked by the blue and red pluses (‘+’). The four red pluses mark the times of the images shown in Figure \[th1\]; we note that they are around the peak/trough times. Next, we fit these selected points (including blue and red pluses) with a sinusoidal signal [e.g., @Zhang17a; @Zhang17b; @Su18]. Equation \[yfit\] shows the fitting function, in which the oscillatory period (P) varies with time as a second-order polynomial, indicating a non-monotonically changing period.
$$A(t) = A_0+k_0 t + A_m \sin(\frac{2\pi t}{P_0+k_1 t+k_2t^2} + \phi).
\label{yfit}$$
Here A$_0$ is the initial position, k$_0$ indicates the thread drifting speed, A$_m$ is the amplitude of the thread oscillations, and $\phi$ represents the initial phase shift. The oscillatory period is a function of time: P$_0$ is the initial oscillatory period, and k$_1$ and k$_2$ indicate the changing (decreasing/increasing) rates of the oscillatory period. Finally, an initial oscillatory period of $\sim$20 minutes is derived for this thin thread, with changing rates of $-$0.9 and +0.022 (Table \[tab\]). Notice that ‘$-$’ indicates a decreasing oscillatory period, while ‘+’ implies an increasing one. Given the initial period of $\sim$20 minutes, the changing rates suggest that the oscillatory period first decreased slowly and then began to increase. The oscillatory amplitude is also estimated to be around 800 km. This small-scale oscillatory phenomenon was pronounced in the H$\alpha$ line-center images (panel a), but it was very weak and even disappeared after half a cycle in the two H$\alpha$ wings, i.e., H$\alpha~\pm$0.3 [Å]{}, as shown in panels (b) and (c).
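To make the role of the fitting parameters concrete, the sketch below evaluates Equation \[yfit\] with the Case 1 values of Table \[tab\] (assuming $t$ and the period are in minutes; since k$_0$ = 0 for this thread, the unit of the drift term does not matter here). This illustrates the fitted model itself, not the fitting procedure.

```julia
# Equation [yfit]: thread position as a function of time, with the period varying as a
# second-order polynomial in t. Parameters follow Table [tab], Case 1 (th1); t in minutes.
Afit(t, A0, k0, Am, P0, k1, k2, phi) =
    A0 + k0 * t + Am * sin(2pi * t / (P0 + k1 * t + k2 * t^2) + phi)

# Case 1: A0 = 1900 km, Am = 800 km, P0 = 20 min, phi = 103 deg, k0 = 0, k1 = -0.9, k2 = 0.022
pos = [Afit(t, 1900.0, 0.0, 800.0, 20.0, -0.9, 0.022, deg2rad(103.0)) for t in 0.0:1.0:60.0]
```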
Figure \[th2\] shows another case (case 2) of thread oscillations in the quiescent prominence. The upper panels show another thin thread (th2) at two different times, i.e., around the times of the peak (a) and trough (b) positions, as marked by the red pluses in the bottom panel. The blue and red pluses again outline the brightest pixels of the prominence thread. We apply the same sinusoidal function (Equation \[yfit\]) to fit these points and obtain an initial oscillatory period of $\sim$6 minutes, with changing rates of +0.9 and $-$0.025 (Table \[tab\]), which indicates that the thread oscillatory period grew fast at the beginning and then began to decrease. The oscillatory amplitude is estimated to be about 900 km. This thread showed a drifting motion at a slow speed of $\sim$0.33 km s$^{-1}$ during the oscillation, similar to previous findings reported in [@Ning09a]. These small-scale oscillations were also only observed in the H$\alpha$ line-center images.
Discussion
===========
The small-scale mass flows along the interface/surface of the prominence plumes move at different speeds. The flow speeds are estimated to be about 25 km s$^{-1}$, 18 km s$^{-1}$, and 23 km s$^{-1}$, respectively. Assuming that the temperature of the quiescent prominence is around 7000 K [@Hirayama85], the sound speed in the quiescent prominence can be estimated as $v_s~\sim~147\sqrt{T/MK}~\thickapprox$ 12 km s$^{-1}$ [@Aschwanden05]. The measured mass flow speeds are larger than the local sound speed, indicating that the flows are supersonic [@Berger10]. Thus, the shear flows along the interface/surface of the prominence plumes are strong enough (i.e., supersonic) to deform the boundary and overcome the restraining surface tension force, which suggests that KHIs can occur on the interface/surface [@Chandrasekhar61; @Johnson14; @Masson18]. Our observational facts support this idea very well. Previous studies found that KHIs are important for the dissipation of free energy in shearing flows and for plasma heating [@Karpen94; @Ofman11; @Cavus13]. Therefore, the KHIs could play an important role in the process of energy transfer in prominence plasmas.
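As a quick sanity check of the supersonic claim, the sound-speed formula quoted above can be evaluated directly; this is only a sketch using the coefficient and temperature cited in the text.

```julia
# Sound speed v_s ≈ 147 sqrt(T / 1 MK) km/s, evaluated at the assumed prominence temperature.
sound_speed(T_K) = 147.0 * sqrt(T_K / 1.0e6)       # T in kelvin, result in km/s
vs = sound_speed(7000.0)                           # ≈ 12.3 km/s
mach = [25.0, 18.0, 23.0] ./ vs                    # measured flow speeds relative to v_s (all > 1)
```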
It is very interesting that the KHIs in the quiescent prominence only appeared clearly in the red-wing H$\alpha$ images at +0.3 [Å]{} (Figures \[khi1\] and \[khi2\]). So far, most observations of KHIs in the solar atmosphere are at the surfaces of CMEs [@Foullon11; @Foullon13] or jets [@shen11b; @Zhelyazkov16]. Using observational data taken by [*Hinode*]{}/SOT in the H$\alpha$ line-center or Ca II H lines, [@Berger10] and [@Ryutova10] showed that prominence plumes are often highly turbulent and apt to form vortex structures (i.e., KHIs) on the interface between prominence and corona, such as along the boundaries of prominence bubbles. In the present paper, we detect the vortex-like structures on the interface/surface of prominence plumes in the red-wing H$\alpha$ observations at +0.3 [Å]{} shifted from line-center, which indicates that the observed small-scale mass flows or vortex structures in the prominence were moving away from us.
It is also very interesting that small-scale oscillations with changing periods are detected in the fine threads that compose the quiescent prominence. The amplitudes of the thread oscillations are less than 1000 km, i.e., 800 km and 900 km. Thanks to the high spatial resolution ($\sim$100 km/pixel) of the NVST, such small-amplitude oscillations can be observed. The alignment of NVST images can be as accurate as one pixel; therefore, the detected small-scale oscillations are reliable. As the prominence threads are of magnetic nature [@Martin08], the changing periods might be caused by variations of the inherent physical properties of the thin threads, such as the magnetic field strength and the plasma density. The thin threads exhibited two kinds of oscillation behavior: one (th1) decreased its period at a slow rate, while the other (th2) increased its period at a fast rate. However, the oscillatory amplitudes did not decay in time, which is different from large-scale prominence oscillations that usually show strong damping [e.g., @Ning09b; @Zhang12; @Shen14b; @Zhang17a]. To the best of our knowledge, this is the first report of small-scale individual oscillations in prominence threads whose periods change (increase/decrease) with time [see, @Ning09a].
Both oscillating prominence threads lasted for one entire cycle, which is similar to previous findings about small-scale thread oscillations [@Okamoto07; @Okamoto15; @Lin09; @Ning09a], but different from large-scale prominence oscillations that usually last for several or even a dozen cycles [e.g., @Li13; @Shen14a; @Zhang17a]. The oscillating amplitudes are less than 1000 km, which indicates that the drivers of these thread oscillations were of small scale and might be numerous [see also, @Okamoto07; @Okamoto15; @Ning09a; @Ning09b]. These small-scale thread oscillations are pronounced only in the H$\alpha$ line-center images but are missed in the two extended wings (Figure \[os1\]). This suggests that there are no strong upflows/downflows in these prominence threads.
Finally, the Sun was quiet on 2017 September 18: only one active region (N08W39) appeared on the solar disk, and it was far away from the solar prominence (S40E83). Moreover, we could not find any small-scale eruptions around this quiescent prominence. Therefore, the small-scale oscillations of the prominence threads were not caused by external disturbances such as flares or other kinds of eruptive activities on the Sun, as has been reported in previous studies of large-scale prominence oscillations [e.g., @Jing03; @Isobe06; @Chen08; @Shen14a; @Shen14b; @Zhang17b]. Our findings support the scenario that small-scale thread oscillations, which are perpendicular to the prominence threads, are local in nature, and that the driver of such oscillations could be MHD waves stemming from the photospheric magnetic field [@Joarder97; @Diaz05; @Okamoto07; @Lin09; @Ning09a], such as the kink mode due to transverse displacement of the thin threads in solar prominences [@Edwin83; @Terradas08].
The small-scale KHIs and thread oscillations are simultaneously observed in the same quiescent prominence, but they are independent dynamic behaviors in the fine prominence structures, because they only appeared clearly in the H$\alpha$ red-wing (KHIs) and line-center (oscillations) images, respectively. No correlation between their temporal and spatial evolutions is found either. Therefore, the two kinds of dynamic behavior detected in the prominence were independent of each other, which is different from previous findings that the KHI at the thread boundary is triggered by transverse oscillations [@Okamoto15].
Summary
=======
Two kinds of dynamic behavior of the fine structures in a quiescent prominence are studied in detail based on the high-resolution observational data taken by the NVST. The primary results of this study are summarized as follows:
1. The KHIs in a quiescent prominence are detected on the interface/surface of prominence plumes. They are identified as vortex-like structures with rapid rotational motions in the H$\alpha$ red-wing images, but are missed in the H$\alpha$ line-center and blue-wing observations.
2. The KHIs exhibited two kinds of dynamic behavior in the same prominence. One rotated counter-clockwise and appeared as small-scale bright vortex-like structures ($<$2 Mm). The other rotated clockwise and appeared as a relatively larger but weaker vortex-like structure ($\sim$9 Mm).
3. Small-scale thread oscillations, perpendicular to the prominence threads, are detected in the quiescent prominence. They are only pronounced in the H$\alpha$ line-center images and last for one entire cycle.
4. The thread oscillations exhibited two changing patterns. One showed an initial period of $\sim$20 minutes, which first decreased at a slow rate and then began to increase. The other exhibited an initial period of $\sim$6 minutes, which grew quickly at the beginning and then began to decrease; this thread also exhibited simultaneous drifting and oscillating motions.
We thank the anonymous referee for his/her valuable comments and inspiring suggestions. The data used in this paper were obtained by the NVST. The authors would like to acknowledge Dr. L. H. Deng and Y. Y. Xiang for their help with data reconstruction. This work is supported by NSFC (Nos., 11603077, 11573072, 11773079, 11773068, 11790302, 11790300, 11729301, 11333009), the CRP (KLSA201708), the Youth Fund of Jiangsu (Nos. BK20161095, and BK20171108), the National Natural Science Foundation of China (U1731241), the Strategic Priority Research Program on Space Science, CAS (Nos., XDA15052200 and XDA15320301), and the Youth Innovation Promotion Association of Chinese Academy of Sciences (No., 2014047). D. Li and Y. Shen are supported by the Specialized Research Fund for State Key Laboratories. The Laboratory No. 2010DP173032. Li & Ning acknowledge support by ISSI-BJ to the team of “Pulsations in solar flares: matching observations and models”.
Ajabshirizadeh, A., Ebadi, H., Vekalati, R. E., & Molaverdikhani, K. 2015, , 357, 33 Antolin, P., Okamoto, T. J., De Pontieu, B., et al. 2015, , 809, 72 Arregui, I., Oliver, R., & Ballester, J. L. 2012, Living Reviews in Solar Physics, 9, 2 Asai, A., Ishii, T. T., Isobe, H., et al. 2012, , 745, L18 Aschwanden, M. J. 2005, Physics of the Solar Corona (2nd ed.; Chichester: Praxis Publishing), p317 Ballester, J. L. 2006, , 122, 129 Berger, T. E., Slater, G., Hurlburt, N., et al. 2010, , 716, 1288 Berger, T., Hillier, A., & Liu, W. 2017, , 850, 60 Bi, Y., Jiang, Y., Yang, J., et al. 2014, , 790, 100 Cavus, H., & Kazkapan, D. 2013, , 25, 89 Chae, J., Denker, C., Spirock, T. J., Wang, H., & Goode, P. R. 2000, , 195, 333 Chandrasekhar, S. 1961, Hydrodynamic and hydromagnetic stability, International Series of Monographs on Physics, Oxford: Clarendon Chen, P. F., Innes, D. E., & Solanki, S. K. 2008, , 484, 487 Cheng, X., Ding, M. D., Zhang, J., et al. 2014, , 789, L35 Dai, Y., Ding, M. D., Chen, P. F., & Zhang, J. 2012, , 759, 55 D[í]{}az, A. J., Oliver, R., & Ballester, J. L. 2005, , 440, 1167 Edwin, P. M., & Roberts, B. 1983, , 88, 179 Eto, S., Isobe, H., Narukage, N., et al. 2002, , 54, 481 Fang, X., Yuan, D., Xia, C., Van Doorsselaere, T., & Keppens, R. 2016, , 833, 36 Feng, L., Inhester, B., & Gan, W. Q. 2013, , 774, 141 Foullon, C., Verwichte, E., Nakariakov, V. M., Nykyri, K., & Farrugia, C. J. 2011, , 729, L8 Foullon, C., Verwichte, E., Nykyri, K., Aschwanden, M. J., & Hannah, I. G. 2013, , 767, 170 G[ó]{}mez, D. O., DeLuca, E. E., & Mininni, P. D. 2016, , 818, 126 Guo, Y., Schmieder, B., D[é]{}moulin, P., et al. 2010, , 714, 343 Hao, Q., Fang, C., Cao, W., & Chen, P. F. 2015, , 221, 33 Hirayama, T. 1985, , 100, 415 Howson, T. A., De Moortel, I., & Antolin, P. 2017, , 602, A74 Innes, D. E., Cameron, R. H., Fletcher, L., Inhester, B., & Solanki, S. K. 2012, , 540, L10 Isobe, H., & Tripathi, D. 2006, , 449, L17 Jing, J., Lee, J., Spirock, T. J., et al. 2003, , 584, L103 Joarder, P. S., Nakariakov, V. M., & Roberts, B. 1997, , 173, 81 Johnson, J. R., Wing, S., & Delamere, P. A. 2014, , 184, 1 Karpen, J. T., Dahlburg, R. B., & Davila, J. M. 1994, , 421, 372 Kim, S., Nakariakov, V. M., & Cho, K.-S. 2014, , 797, L22 Labrosse, N., Heinzel, P., Vial, J.-C., et al. 2010, , 151, 243 Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, , 275, 17 Li, T., & Zhang, J. 2012, , 760, L10 Li, L., & Zhang, J. 2013, , 282, 147 Li, X. H., Zhang, J., Yang, S. H., et al. 2018, Scientific Reports, 8, 8136 Lin, Y., Engvold, O., Rouppe van der Voort, L., Wiik, J. E., & Berger, T. E. 2005, , 226, 239 Lin, Y., Engvold, O., Rouppe van der Voort, L. H. M., & van Noort, M. 2007, , 246, 65 Lin, Y., Soler, R., Engvold, O., et al. 2009, , 704, 870 Liu, R., Liu, C., Xu, Y., et al. 2013, , 773, 166 Liu, Z., Xu, J., Gu, B.-Z., et al. 2014, Research in Astronomy and Astrophysics, 14, 705-718 Luna, M., Knizhnik, K., Muglach, K., et al. 2014, , 785, 79 Mackay, D. H., Karpen, J. T., Ballester, J. L., Schmieder, B., & Aulanier, G. 2010, , 151, 333 Martin, S. F., Lin, Y., & Engvold, O. 2008, , 250, 31 Masson, A., & Nykyri, K. 2018, , 214, 71 M[ö]{}stl, U. V., Temmer, M., & Veronig, A. M. 2013, , 766, L12 Murawski, K., Chmielewski, P., Zaqarashvili, T. V., & Khomenko, E. 2016, , 459, 2566 Ni, L., Zhang, Q.-M., Murphy, N. A., & Lin, J. 2017, , 841, 27 Ning, Z., Cao, W., Okamoto, T. J., Ichimoto, K., & Qu, Z. Q. 2009a, , 499, 595 Ning, Z., Cao, W., & Goode, P. R. 2009b, , 707, 1124 Ofman, L., & Thompson, B. J. 
2011, , 734, L11 Okamoto, T. J., Tsuneta, S., Berger, T. E., et al. 2007, Science, 318, 1577 Okamoto, T. J., Antolin, P., De Pontieu, B., et al. 2015, , 809, 71 Oliver, R., & Ballester, J. L. 2002, , 206, 45 Parenti, S. 2014, Living Reviews in Solar Physics, 11, 1 Pouget, G., Bocchialini, K., & Solomon, J. 2006, , 450, 1189 Ramsey, H. E., & Smith, S. F. 1966, , 71, 197 R[é]{}gnier, S., Solomon, J., & Vial, J. C. 2001, , 376, 292 Ryutova, M., Berger, T., Frank, Z., Tarbell, T., & Title, A. 2010, , 267, 75 Schmieder, B., D[é]{}moulin, P., & Aulanier, G. 2013a, Advances in Space Research, 51, 1967 Schmieder, B., Kucera, T. A., Knizhnik, K., et al. 2013b, , 777, 108 Shen, Y. D., Liu, Y., & Liu, R. 2011a, Research in Astronomy and Astrophysics, 11, 594 Shen, Y., Liu, Y., Su, J., & Ibrahim, A. 2011b, , 735, L43 Shen, Y., Liu, Y., & Su, J. 2012, , 750, 12 Shen, Y., Ichimoto, K., Ishii, T. T., et al. 2014a, , 786, 151 Shen, Y., Liu, Y. D., Chen, P. F., & Ichimoto, K. 2014b, , 795, 130 Shen, Y., Liu, Y., Liu, Y. D., et al. 2015, , 814, L17 Su, Y., & van Ballegooijen, A. 2012, , 757, 168 Su, W., Guo, Y., Erd[é]{}lyi, R., et al. 2018, Scientific Reports, 8, 4471 Soler, R., Oliver, R., & Ballester, J. L. 2007, , 471, 1023 Terradas, J., Arregui, I., Oliver, R., & Ballester, J. L. 2008, , 678, L153 Thompson, W. T., & Schmieder, B. 1991, , 243, 501 Tian, C., & Chen, Y. 2016, , 824, 60 Tubbs, R. N. 2004, The Observatory, 124, 159 Xiang, Y. Y., Liu, Z., & Jin, Z. Y. 2016, , 49, 8 Xu, Z., Jin, Z. Y., Xu, F. Y., & Liu, Z. 2014, Nature of Prominences and their Role in Space Weather, 300, 117 Xue, Z. K., Yan, X. L., Qu, Z. Q., & Zhao, L. 2014, Solar Polarization 7, 489, 53 Yan, X. L., Xue, Z. K., Xiang, Y. Y., & Yang, L.-H. 2015, Research in Astronomy and Astrophysics, 15, 1725 Zaqarashvili, T. V., V[ö]{}r[ö]{}s, Z., & Zhelyazkov, I. 2014, , 561, A62 Zaqarashvili, T. V., Zhelyazkov, I., & Ofman, L. 2015, , 813, 123 Zhang, Q. M., Chen, P. F., Xia, C., & Keppens, R. 2012, , 542, A52 Zhang, Q. M., Li, T., Zheng, R. S., Su, Y. N., & Ji, H. S. 2017a, , 842, 27 Zhang, Q. M., Li, D., & Ning, Z. J. 2017b, , 851, 47 Zhang, Q. M., & Ji, H. S. 2018, arXiv:1805.01088 Zhao, T. L., Ni, L., Lin, J., & Ziegler, U. 2018, Research in Astronomy and Astrophysics, 18, 045 Zhelyazkov, I., & Zaqarashvili, T. V. 2012, , 547, A14 Zhelyazkov, I., Chandra, R., & Srivastava, A. K. 2016, , 361, 51 Zhelyazkov, I., Zaqarashvili, T. V., Ofman, L., & Chandra, R. 2018, Advances in Space Research, 61, 628 Zheng, R., Chen, Y., Wang, B., Li, G., & Xiang, Y. 2017, , 840, 3 Zhou, Y. H., Zhang, L.-Y., Ouyang, Y., Chen, P. F., & Fang, C. 2017, , 839, 9 Zirker, J. B., Engvold, O., & Martin, S. F. 1998, , 396, 440
Case A$_0$ (km) A$_m$ (km) P$_0$ (min) $\phi$ k$_0$ (km s$^{-1}$) k$_1$ k$_2$
------ ------------ ------------ ------------- --------------- --------------------- ------- --------
1 1900 800 20 103$^{\circ}$ 0 -0.9 0.022
2 1800 900 6 -23$^{\circ}$ 0.33 0.9 -0.025
\[tab\]
---
author:
- Mikko Kuronen
- Mari Myllymäki
- Adam Loavenbruck
- Aila Särkkä
title: Point process models for sweat gland activation observed with noise
---
Introduction
============
Assessment of sudomotor (sweat) function has long been used in clinical and research settings for detection of neurologic dysfunction [@CoonEtal1941; @LaderEtal1962]. Minor’s starch iodine test was originally described in 1928 [@Minor1928], where tincture of iodine was applied to the skin, air dried, and then powdered with corn starch. Sweating is stimulated by increasing room temperature or by use of pilocarpine. As sweat flows from pores, iodine is diluted and the solution absorbed by the starch powder, turning dark black from yellow. Normally the entire skin surface should be able to sweat in response to sufficient stimuli, and absence of sweating in an area of the body is indicative of loss of neurologic function in that area. Sweating is critical in human evolution in maintaining ability to thermoregulate in a wide range of climates and activity levels. Neurologic control, headquartered in the hypothalamus, is therefore tightly regulated and concordantly disrupted in pathologic states such as peripheral neuropathy.
Peripheral neuropathy is a disease state of peripheral nerves, the segment of the nervous system which extends from the brain and spinal cord to various targets in the body, such as muscles, sensory receptors and autonomically controlled organs. Peripheral neuropathy occurs in etiologically diverse conditions which cause damage or dysgenesis of peripheral nerves. The most common causes include diabetes, toxicity such as in chemotherapy and excessive alcohol consumption, and vitamin deficiencies [@LoavenbruckEtal2017]. The resulting nerve damage causes various combinations of muscle weakness, pain, numbness and autonomic dysfunction.
Autonomic nerves are often the earliest to be affected in peripheral neuropathy [@Sumner2003; @Said2007; @SharmaEtal2015], and penetrate all parts of the body, including digestive tract, liver, kidneys, bladder, genitals, lungs, pupils, heart, and skin. Skin is the largest organ in the body, and contains a vast network of the distal ends of sensory and autonomic nerves over the entire body surface. These distal ends of nerves are especially susceptible to systemic disease. Because sweating is neurally controlled and modulated, and can be measured at the skin surface, it can be used to reflect alterations in the underlying nerves.
Currently there are several tests used in clinical practice to evaluate sudomotor function [@HilzEtal2006; @IlligensEtal2009; @MinotaEtal2019]. Thermoregulatory sweat testing [@FealeyEtal1989] measures percentage of body surface area sweating elicited by heated, humidified sauna. Sweat imprints and silastic molds [@PapanasEtal2005; @HarrisEtal1972], measure the density and distribution of activated sweat glands in an area of skin. Quantitative sudomotor axon reflex testing (QSART) is likely the most widely used autonomic test of sweating [@LowEtal1983; @LowEtal2006], comparing against robust normative databases the total volume of sweat produced by 1 $\text{cm}^2$ areas of skin at standardized sites.
The sensitive sweat test (SST) enhanced Minor’s starch iodine test with closeup time lapse imaging, and software analysis [@ProviteraEtal2010; @KennedyEtal2013; @LoavenbruckEtal2017; @LoavenbruckEtal2019]. The critical feature of the test is a rigid, transparent video screen which limits sweat to an essentially two-dimensional space. As sweat exits ducts, it dilutes the iodine painted on the skin onto starch coated plastic film. The imaged result is a field of sharply demarcated, dark sweat spots on a white background, expanding centripetally around the opening of each duct (Figure \[fig:4frames\]). The area of each spot is therefore a measurement of the volume of sweat produced by each gland. Sub-nanoliter volumes of liquid were measured by pipette and shown to create reproducible sweat spot areas. Similarly, tracking the increase in sweat spots’ areas between timelapse frames measures the rate of sweating from each duct at the nanoliter level. Of note, blackened areas of film do not return to white during the test – sweat spots can only enlarge. The videos therefore provide several measurable physiologic data points, including coordinates and relative locations of all sweat spots, the second by second volumes of sweat (nanoliters) and flow rate of each sweat gland (nL/minute), total number of activated sweat glands, density of activated sweat glands (glands/$\text{cm}^2$), total sweat volume (nL), and total sweat rate (nL/minute).
Using the dynamic sweat test, a significant reduction of sweating was observed in diabetic subjects in the distal leg but not in forearm [@ProviteraEtal2010]. The study included measurements taken from the forearm of 14 diabetic subjects and 14 age- and sex-matched healthy controls and from distal leg of 7 diabetes patients and 7 controls. In a larger study [@LoavenbruckEtal2017], 178 healthy controls and 20 neuropathy subjects were tested, most of them at the hand, thigh, calf, and foot, some only at calf and foot, and it was concluded that neuropathy subjects had lower sweat rates per sweat gland, lower total amount of sweat, and lower sweat gland density. It was also observed visually that the sweat patterns of the diabetic subjects were less regular than the healthy patterns[@ProviteraEtal2010]. This visual inspection indicates that the spatial sweat patterns that the videos provide may reveal some additional abnormalities that may appear when the sweat rate and the total amount of sweat are still within normal ranges. However, up to now, no spatial analysis of the sweat patterns to quantify this observation has been performed.
In this paper, we concentrate on spatial analysis of the sweat patterns, regarding the locations of sweat spots or glands as realizations of spatial point processes. Our main emphasis is to develop suitable methodology for analysing the spatial structure of the sweat gland patterns and activation extracted from video sequences. As the data are non-standard in point process literature, some special treatment is needed.
To extract the coordinates of the individual sweat spots, i.e. the points of the point patterns, from the videos (see Figure \[fig:4frames\]), several image analysis steps are needed. As a non-standard step, we introduce an algorithm based on the detection of a change point in each pixel. This pixel-by-pixel approach suits to the video sequences, where the sweat does not dry once it has appeared, much better than going through the videos frame by frame, because the sweat gland locations are best detected at times where sweat first appears. However, even though we perform careful analysis of the videos, there are some spots that are incorrectly recorded as two spots due to, e.g., wrinkles in the skin. It is not straightforward to remove these errors automatically and doing it manually can be very time consuming. Therefore, they need to be taken into account in the analysis of the point patterns.
![A sequence of snapshots (1 sec (top left), 15 sec (top right), 30 sec (bottom left), and 60 sec (bottom right)) of one control subject with extracted gland locations (+).[]{data-label="fig:4frames"}](4framesp.pdf){width="\textwidth"}
Some studies of point patterns observed with errors or noise can be found in the literature. For example, in the area of minefield detection, one first detects a minefield and then classifies each observed point in the minefield either as mine or as noise. The observation window is typically divided into two parts, the minefield as a region with a higher intensity containing both mines and noise and a low intensity area containing only noise [@ByersRaftery1998]. The points can then be classified in a Bayesian set-up where a posterior probability for each point being a mine is derived [@CressieLawson2000; @WalshRaftery2002]. In a similar Bayesian framework, classification of points of a superposition of a Strauss process and Poisson noise has been considered [@RedenbachEtal2015; @RajalaEtal2016]. A likelihood of an imperfect observation given the true point process, where the imperfect observation can be due to random thinning, displacement, censoring of the displaced points or superposition of additional points[@LundRudemo2000] and Bayesian analysis for similar data [@LundPenttinenRudemo1999] can also be found in the literature. Furthermore, a Bayesian framework for estimating the intensity of a noisy point process, where the noise is either due to points within the sampling window but regarded as being outside and/or points outside the window that were incorrectly regarded as points inside the window, is available [@ChakrabortyGelfand2010] as well as descriptive statistics for noisy spatial point processes, where the noise is perturbation of points[@BarHenEtal2013].
Here, we suggest two different ways to model the activation of sweat glands and to take noise into account in the analysis, by either including an error term in the model or using an estimation procedure that is robust with respect to the errors. We pay special attention to incorrectly recorded close pairs of points since they can cause problems for the analysis of regular point patterns such as our data.
We propose two models for the sweat gland patterns which are different in their nature. In the first model, the activation of individual sweat glands is modelled by a sequential point process, where sweat spots appear conditionally on the pattern so far. The other model is more physiologically motivated, a generative point process model, where the activated sweat gland pattern is modelled as a thinning of the underlying true (unobserved) sweat gland pattern which is modelled first. While the likelihood of the sequential model is tractable, it has been considered computationally costly to evaluate [@PenttinenYlitalo2016]. Here, we propose an efficient way to perform traditional likelihood-based inference for a certain type of sequential models, which makes also likelihood-based Bayesian inference feasible. The likelihood of the generative model is not easily tractable and, therefore, we employ approximate Bayesian computation (ABC) to estimate the model parameters. When some noise points are present, the sequential model is replaced by a mixture model having the sequential point process and an error point process as its components. In the generative model, the summary statistics in the ABC approach are chosen such that they are robust with respect to the errors.
The rest of the paper is organized as follows. We first describe the extraction of the points from the videos and the preliminary analysis of the data in Section \[sec:prelim\]. Then, we present the sequential and generative models together with a description of the inference methods in Sections \[sec:seqmodel\] and \[sec:genmodel\], respectively. Further, the methodology is illustrated by analysing a set of video sequences taken from the right foot of 15 subjects, either controls or subjects with suspected or diagnosed neuropathy. The models are fitted separately to each subject. Section \[sec:discussion\] is left for further discussion. We carry out all computations in Julia [@BezansonEtal2017] while we use R [@R2018] mainly for plotting.
Data and preliminary data analysis {#sec:prelim}
==================================
Description of data
-------------------
The data have been collected by Dr. Kennedy’s group at the University of Minnesota by using the dynamic sweat test they have presented [@ProviteraEtal2010]. First, sweating is stimulated by placing a patch with pilocarpine gels on the test site, foot or calf. Then, the test site is dried and painted with iodine solution. Finally, a camera is placed on the skin and a video is recorded at the rate of 1 frame/sec for 60 seconds. The size of the frame was $2592\times 1944$ pixels corresponding to $17.5\times13 \text{ mm}^2$. Videos were taken from the feet and/or calves from 121 healthy controls without known neuropathy or known risk factors for neuropathy, as well as 72 subjects who had reported having symptoms of neuropathy, 20 of whom had well characterized neuropathy (diagnosis based on neurologic examination and nerve conduction studies). Therefore, the subjects were divided into three groups: controls, subjects with suspected neuropathy (MNA), and subjects with diagnosed neuropathy (MNA Diagnosed).
In this study, we have access to five videos from the right foot from each of the three groups. Based on earlier studies, it was clear that the number of activated sweat glands is an important predictor for the condition, controls having higher density than subjects in the neuropathy groups. To study whether the spatial features of the activated sweat gland patterns could indicate differences between the healthy and neuropathic subjects having similar densities, the five videos were selected based on the point density of the pattern so that different groups have overlapping densities. A sequence of snapshots (1 sec, 15 sec, 30 sec, and 60 sec) of one control subject is shown in Figure \[fig:4frames\]. Here, we study the patterns of activated sweat glands at the end of videos as realisations of spatial point processes. The complete video is needed to extract the gland locations, because these can be obtained most precisely at their first occurrence (see Section \[video-processing\]).
Video processing with change point detection {#video-processing}
--------------------------------------------
Extracting the locations and sizes of the sweat spots required several video processing steps: transforming the video into sweat/not sweat binary video, splitting the sweat part of the video into the sweat produced by individual sweat glands and finally extracting the point pattern of gland locations.
The first step consisted of a background correction, finding change points, and finding and applying a threshold to the change points. The background correction was done by applying kernel smoothing with a Gaussian kernel ($\sigma = 100$ pixels) to the first frame of the video. Since the first frame had only a small amount of sweat, the resulting image mostly mimicked the lighting conditions. For example, the corners of the frame were darker than the middle. Next, the grayscale values $g_t$ of each pixel at frame $t$ were divided by the estimated lighting intensity $l$ of the pixel and the time series of these scaled grayscale values were considered to find the pixels that belong to the wet area. More precisely, a time series was constructed for each pixel as follows: Let $x_{-2} = x_{-1} = x_0 = 1$ and $x_t = g_t/l$ for $t = 1, \dots, T$. The change point of the time series $x_{-2}, x_{-1}, x_0, x_1, \dots, x_T$ was defined as the integer value $t \ge 1$ that minimizes $f(t) = s_{-2,t}^2 + s_{t+1, T}^2$, where $$s_{j,k}^2 = \frac{1}{k-j+1}\sum_{i=j}^k x_i^2 - \left(\frac{1}{k-j+1}\sum_{i=j}^k x_i\right)^2$$ is the sample variance of $x_j, \dots, x_k$. The time series and estimated change points for four pixels are shown in Figure \[fig:jumps\]. Since each pixel, even the ones that do not belong to the wet area, obtained a change point, thresholding on the difference of sample means was used to filter out small changes. A per-video threshold was found by trial and error, evaluating visually the point patterns that resulted from the whole video processing. In Figure \[fig:jumps\], the two largest jumps, 1 and 2, passed the threshold. The resulting binary video frames were post-processed with a morphological closing to fill in some small gaps.
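A minimal sketch of the per-pixel change-point step is given below in Julia (the language we use for all computations); it assumes the grayscale series of one pixel and its lighting intensity are already available, and it returns the change point together with the difference of the segment means used for thresholding.

```julia
# Change point of one pixel: x_{-2} = x_{-1} = x_0 = 1 and x_t = g_t / l, t = 1, ..., T.
# The change point minimizes f(t) = s^2_{-2,t} + s^2_{t+1,T} over t = 1, ..., T-1.
segvar(x, j, k) = begin                       # population variance of x[j:k], as in the text
    m = sum(x[j:k]) / (k - j + 1)
    sum(abs2, x[j:k]) / (k - j + 1) - m^2
end

function change_point(g::Vector{Float64}, l::Real)
    x = vcat(ones(3), g ./ l)                 # padded series; x_t sits at index t + 3
    T = length(g)
    costs = [segvar(x, 1, t + 3) + segvar(x, t + 4, T + 3) for t in 1:T-1]
    t = argmin(costs)                         # estimated change point (frame index)
    jump = sum(x[t+4:T+3]) / (T - t) - sum(x[1:t+3]) / (t + 3)
    return t, jump                            # jump = difference of segment means (thresholded)
end
```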
The sweat area in the first frame was segmented into the sweat produced by individual glands. Starting with the second frame, each new sweat pixel was assigned to the closest spot in the previous frame. The distance was measured as the shortest path through the new sweat area. Several filtering steps were applied in various stages of the process to account for pixels that belonged to spots that were too small to be sweat.
Finally, we extracted a point pattern with coordinates for each gland. To obtain an ordered point pattern we used the frame of the first appearance, and for those spots that arrived at the same frame we used spot size as a surrogate for the time, where larger ones were assumed to have appeared earlier. An example of extracted point patterns in the video can be seen in Figure \[fig:4frames\]. Figure \[fig:ppdata\] shows the final point patterns of all subjects.
![Time series for four pixels with estimated jump locations (frames) marked by dashed lines.[]{data-label="fig:jumps"}](time_series.pdf){width="\textwidth"}
![Point patterns extracted from the videos for control subjects (top) and subjects with suspected (middle) and diagnosed (bottom) neuropathy.[]{data-label="fig:ppdata"}](pointpatterns_data_v3.pdf){width="\textwidth"}
Spatial summary functions
-------------------------
To analyse the spatial structure of the activated sweat gland patterns, we used two different commonly used spatial summary functions, the pair correlation function $g$ and the empty space function $F$. If the underlying point process is stationary and isotropic, these summary functions are functions of distance only.
The pair correlation function $g$ summarizes the second-order property of point patterns[@IllianEtal2008]. Heuristically, $\lambda {\rm d}x g(r)$, where $\lambda$ is the intensity of the point process, is the probability that there is a point in an infinitesimal region with size ${\rm d}x$ at distance $r$ from an arbitrary point of the process. To estimate the pair correlation function we used a traditional kernel smoothing method [@StoyanStoyan1994] with translational edge correction, the recommended bandwidth $0.15/\sqrt{\hat \lambda}$, where $\hat \lambda$ is the intensity estimated from the point pattern, and the Epanechnikov kernel.
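For concreteness, a simplified sketch of the kernel estimator is shown below; it uses the Epanechnikov kernel and the recommended bandwidth, but, unlike the estimator actually used in the analysis, it omits the translational edge correction.

```julia
# Naive kernel estimator of the pair correlation function (no edge correction).
# x, y: point coordinates; area: |W|; r: vector of distances at which g is evaluated.
function pcf_estimate(x, y, area, r)
    n = length(x)
    lam = n / area                                   # estimated intensity
    h = 0.15 / sqrt(lam)                             # recommended bandwidth
    epan(u) = abs(u) < h ? 0.75 * (1 - (u / h)^2) / h : 0.0   # Epanechnikov kernel
    g = zeros(length(r))
    for (k, rk) in enumerate(r)
        for i in 1:n, j in 1:n
            if i != j
                g[k] += epan(rk - hypot(x[i] - x[j], y[i] - y[j]))
            end
        end
    end
    return g ./ (2pi .* r .* lam^2 .* area)
end
```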
The empty space function $F(r)$ measures the probability that an arbitrary location has the nearest point of the process within radius $r$. It was estimated using the Kaplan-Meier method [@BaddeleyGill1997].
Descriptive statistics of the point patterns
--------------------------------------------
We first estimated the pair correlation function for each of the sweat gland patterns shown in Figure \[fig:ppdata\] and thereafter, obtained the groupwise pair correlation functions (see Figure \[fig:pooled\_pcf\]) by pooling the estimated pair correlation functions of all the subjects within the group[@IllianEtal2008]. The individual pair correlation functions were weighted by the squared number of points when pooling. The pair correlation functions show a clear sign of inhibition in all three groups ($g(r)<1$ for small $r$). Further, the first top of the functions appears approximately at the same distance for the control and suspected neuropathy groups. However, the diagnosed neuropathy group has the first top at a slightly longer distance, indicating somewhat larger range of inhibition than in the other two groups.
At very short distances, especially the control subjects seem to have some unexpected close pairs of points. Upon a closer inspection of the point patterns and the videos it was reasonable to assume that some sweat spots had been detected as two nearby spots, instead of having merged into one. An obvious, simple idea to remove such close pairs of spots would be to merge all small glands having a larger spot closer than at some specified distance with the larger spot. However, such erroneous pairs of glands may appear at various (small) distances from each other and thus, applying a global limiting distance is not reasonable. Instead of using an additional image analysis step, we include some of this inaccuracy in the modelling and/or parameter estimation.
![Pooled pair correlation functions for the three groups.[]{data-label="fig:pooled_pcf"}](Pooled_pcf_n2.pdf){width="70.00000%"}
Sequential point process model {#sec:seqmodel}
==============================
Since sweat glands activate at different times, we modelled the activation by using sequential point processes similar to those suggested for modelling eye movements[@PenttinenYlitalo2016]. The points, activated sweat glands in our case, are generated sequentially conditioning on the already existing points. Points are added until the observed number of points in the pattern has been reached and the main focus here is to make inference on the arrival density. Below, we first recall the general sequential model [@PenttinenYlitalo2016] (Section \[sec:seqmodel\_general\]) and specify it in our case without (Section \[sec:softcore-model\]) and with noise (\[sec:mixture-model\]). Further, we discuss efficient inference for the sequential models (Section \[sec:seqmodel\_inf\]) and, finally, fit the sequential model with noise to the sweat gland data (Section \[sec:seqmodel\_data\]).
General sequential point process model {#sec:seqmodel_general}
--------------------------------------
Denote by $W$ the observation window and by $n$ the fixed number of points in the pattern. The first point $x_1$ is assumed to be uniformly distributed in $W$ and the $k$th point, $k=2,\dots,n$, is assumed to follow the density $y \mapsto f(y; \vec{\bf x}_{k-1})$, where $\vec{\bf x}_{k-1} = (x_1, x_2, \dots, x_{k-1})$. The density function for the whole point pattern $(x_1,\dots,x_n)$ is then $$\begin{aligned}
\vec{\bf x}_n \mapsto \frac{1}{|W|} \prod_{k=2}^n f(x_k; \vec{\bf x}_{k-1}),\end{aligned}$$ where $1/|W|$ is the contribution of the first point. A nice feature of the sequential point process models is that they have a tractable likelihood even though it can be costly to compute [@PenttinenYlitalo2016].
Soft-core model {#sec:softcore-model}
---------------
The function $f$ above should be chosen based on the phenomenon we would like to model. The activated sweat gland location patterns are repulsive. Our first attempt was to use a hardcore model, where sweat glands cannot be closer together than some minimum hardcore distance, but such a model turned out not to be flexible enough. Therefore, we suggest to use a soft-core model with the density $$\begin{aligned}
f_{SC}(y; \vec{\bf x}_k, R, \kappa) \propto \exp\left(-\sum_{i=1}^{k}\left(\frac{R}{d(y, x_i)}\right)^{2/\kappa}\right)\end{aligned}$$ for adding the point $y$ in the realisation. Above, $R>0$ is an inhibition range parameter and $0<\kappa<1$ in the exponent describes how “soft-core” the model is. In the limit as $\kappa\rightarrow 0$, we obtain a hard-core process with hard-core distance $R$. Some soft-core Gibbs point process models have been introduced in the literature [@OgataTanemura1981; @OgataTanemura1984], including models with the particular interaction function that we use here [@spatstat2015].
The log likelihood of the model becomes $$\label{seqmodel_loglik}
l(R, \kappa; \vec{\bf x}_n)
= -\log|W| -\sum_{k=2}^n\sum_{i=1}^{k-1}\left(\frac{R}{d(x_k, x_i)}\right)^{2/\kappa} + \sum_{k=2}^n\log Z(R, \kappa, \vec{\bf x}_k),$$ where $$Z(R, \kappa, \vec{\bf x}_k)^{-1} = \int_W \exp\left(-\sum_{i=1}^{k-1}\left(\frac{R}{d(y, x_i)}\right)^{2/\kappa}\right) dy$$ is a normalising constant.
Efficient likelihood inference for the sequential models {#sec:seqmodel_inf}
--------------------------------------------------------
Even though the likelihood of a sequential point process can be costly to compute, the particular sum structure of the log-likelihood above allows faster computations. Using an integration scheme with $J$ integration points $y_1, y_2, \dots, y_J$ with weights $w_1, w_2, \dots, w_J$, the last term of the log-likelihood can be written as $$\begin{aligned}
\sum_{k=2}^n\log Z(R, \kappa, \vec{\bf x}_k)
&= \sum_{k=2}^n\log \left(\int_W \exp\left(-\sum_{i=1}^{k-1}\left(\frac{R}{d(y, x_i)}\right)^{2/\kappa}\right) dy\right)^{-1}\\
&= - \sum_{k=2}^n\log \sum_{j=1}^J w_j\exp\left(-\sum_{i=1}^{k-1}\left(\frac{R}{d(y_j, x_i)}\right)^{2/\kappa}\right).\end{aligned}$$ In total, there are $Jn(n-1)/2$ summands, among which only $Jn$ are distinct. Therefore, the integrals are efficiently calculated by evaluating the terms in the innermost sum only once.
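The sketch below illustrates this computation: the interaction of each point with each integration point is evaluated only once and accumulated, after which both the data term and the normalising integrals follow directly. The data layout (points and grid as vectors of coordinate tuples) is our own choice for illustration.

```julia
# Sketch of the sequential soft-core log-likelihood evaluated on a fixed integration grid.
# xs: ordered points as (x, y) tuples; grid: integration points; w: weights; W_area = |W|.
phi(d, R, kappa) = (R / d)^(2 / kappa)                 # soft-core interaction term

function softcore_loglik(xs, grid, w, R, kappa, W_area)
    n = length(xs)
    ll = -log(W_area)                                  # contribution of the uniform first point
    S = zeros(length(grid))                            # S[j] = sum_{i<k} phi(d(y_j, x_i))
    for k in 2:n
        xk1 = xs[k-1]
        for j in eachindex(grid)                       # add the newest point's contribution once
            S[j] += phi(hypot(grid[j][1] - xk1[1], grid[j][2] - xk1[2]), R, kappa)
        end
        # data term: interaction of x_k with the earlier points
        ll -= sum(phi(hypot(xs[k][1] - xs[i][1], xs[k][2] - xs[i][2]), R, kappa) for i in 1:k-1)
        # normalising integral for the k-th point, approximated on the grid
        ll -= log(sum(w[j] * exp(-S[j]) for j in eachindex(grid)))
    end
    return ll
end
```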
Soft-core model with noise {#sec:mixture-model}
--------------------------
To account for the incorrectly identified close pairs in the extracted point patterns, we used a mixture model where one of the components is a uniformly distributed error component. Such an error component can be added to any point process model and here, we add it in the sequential soft-core model. The arrival density of a point $y$ (after the uniformly distributed first point) is then $$\begin{aligned}
f_M(y; \vec{\bf x}_k, R, \kappa, \theta) &= (1-\theta) f_{SC}(y; \vec{\bf x}_k, R, \kappa) + \frac{\theta}{|W|}\\
&= (1-\theta) Z(R, \kappa, \vec{\bf{x}}_k)\exp\left(-\sum_{i=1}^{k}\left(\frac{R}{d(y, x_i)}\right)^{2/\kappa}\right) + \frac{\theta}{|W|}.\end{aligned}$$ Therefore, the point at $y$ comes from the soft-core process with probability $1-\theta$ (the first term on the right-hand side of the formula) and from the uniformly distributed error process with probability $\theta$. Even though this model allows extra points everywhere, not only near the real activated glands, it can improve estimation of the parameters as shown below. However, the parameter $\theta$ cannot be interpreted directly as the probability of incorrectly identified glands since some of the points without close neighbours regarded as noise could as well be true glands.
The log-likelihood of the soft-core model with uniformly distributed error is given by $$\label{seqmodelnoise_loglik}
l_M(R, \kappa, \theta; \vec{\bf x}_n) = -\log |W| + \sum_{k=2}^n \log f_M(x_k; \vec{\bf x}_{k-1}, R, \kappa, \theta).$$
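The mixture log-likelihood can be evaluated with the same cumulative trick; a sketch reusing `phi` and the data layout of the previous snippet is given below.

```julia
# Sketch of the mixture log-likelihood: each arrival comes from the soft-core density with
# probability 1 - theta and from the uniform error density 1/|W| with probability theta.
function mixture_loglik(xs, grid, w, R, kappa, theta, W_area)
    n = length(xs)
    ll = -log(W_area)
    S = zeros(length(grid))
    for k in 2:n
        xk1 = xs[k-1]
        for j in eachindex(grid)
            S[j] += phi(hypot(grid[j][1] - xk1[1], grid[j][2] - xk1[2]), R, kappa)
        end
        C = sum(w[j] * exp(-S[j]) for j in eachindex(grid))    # normalising integral
        u = exp(-sum(phi(hypot(xs[k][1] - xs[i][1], xs[k][2] - xs[i][2]), R, kappa) for i in 1:k-1))
        ll += log((1 - theta) * u / C + theta / W_area)        # mixture arrival density at x_k
    end
    return ll
end
```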
Application to the sweat gland data {#sec:seqmodel_data}
-----------------------------------
The soft-core model was fitted without and with noise to each sweat gland point pattern independently. First, we compared the maximum likelihood estimates of the soft-core parameters obtained without or with added noise. Then, we fitted the model with noise to the data in a Bayesian framework to be able to better compare the goodness-of-fit of the sequential soft-core model and the generative model presented in Section \[sec:genmodel\]. We used regular grid-based integration with 10800 integration points to evaluate the likelihood in all cases.
### Parameter estimates without and with added noise
The parameter estimates obtained by maximizing the corresponding log-likelihood (without or with the noise component) with respect to the parameters can be seen in Figure \[fig:parestSoftcore\], where the black dots belong to the sequential soft-core model without noise and the yellow dots to the model with noise. The estimates obtained without noise for the range parameter $R$ are on average smaller and the “softness” parameter $\kappa$ larger in the control group than in the neuropathy groups. However, for the model fitted with noise, only the mixture parameter $\theta$, which is estimated larger for the control group than for the neuropathy groups, differs between the groups.
We investigated the goodness-of-fit of the fitted softcore models by using the pair-correlation function. We generated samples from the sequential soft-core models with parameters $R$ and $\kappa$ estimated with and without noise. The uniform noise was not simulated. Figure \[fig:envelopesNoise\] shows the empirical pair-correlation functions for subject 205 for the softcore model estimated with and without noise together with 95% global envelopes [@MyllymakiEtal2017; @MyllymakiMrkvicka2019] calculated from 25000 samples of each model. It can be seen that for this subject, the range parameter is clearly underestimated if estimation is done without accounting for noise. For the other subjects, the goodness-of-fit of the model with noise was also as good or better than the goodness-of-fit of the model without noise. The bad fit of the model at short distances is explained by the incorrectly recorded close pairs of points that are present in the data but not in the simulations.
![Parameter estimates of the softcore model without (Softcore) and with (Mixture) noise fitted separately to each subject of the three groups (subject numbers shown on the left).[]{data-label="fig:parestSoftcore"}](parest_seq.pdf){width="\textwidth"}
![Empirical pair correlation functions (black lines) for subject 205 in the end of the video recording together with 95% global envelopes (grey areas) constructed from 25000 simulations from the soft core model estimated without (left) and with (right) noise.[]{data-label="fig:envelopesNoise"}](envelopes_example_noise.pdf){width="\textwidth"}
### Bayesian inference of the model with noise
We fitted the soft-core model with noise to the sweat gland data also by using standard likelihood-based Bayesian approach with Robust Adaptive Metropolis algorithm [@Vihola2012]. We ran the MCMC for 120000 iterations and discarded the first 20000 iterations as burn-in. As the prior distribution for the range parameter $R$ we used Gamma distribution with shape parameter $3$ and scale parameter $70/3$
and the prior for $\kappa$ and $\theta$ was the uniform distribution on $[0,1]$. The posterior histograms in Figure \[fig:parestSoftcoreBayes\] show some variation within the groups but no clear differences between the groups: The arrival density parameters $R$ and $\kappa$ were estimated to be rather similar in all groups. The $\theta$ parameter related to the errors appears to be somewhat larger in the control group than in the other two groups.
Figure \[fig:envelopesSoftcoreBayes\] shows the empirical pair-correlation functions for each subject together with the global envelopes [@MyllymakiEtal2017; @MrkvickaEtal2018] calculated from 25000 simulations from the posterior predictive distribution of the fitted softcore models with noise. In most cases, the envelopes cover the empirical curves. For some subjects, especially for the controls, the empirical pair-correlation function is not covered by the envelopes at very short distances. This is expected, as mentioned earlier, since according to the model used, this behaviour is caused mainly by noise, which was not simulated. The envelopes are quite wide close to the peak of the curves and do not always capture the shape of the peak particularly for the patients who have smaller number of activated sweat glands. To conclude, we did not find any differences in the arrival density of the sweat glands between the groups based on the fifteen studied subjects.
![Posterior marginals for each subject (rows) and each parameter (columns) for the soft core model estimated with noise.[]{data-label="fig:parestSoftcoreBayes"}](parest_seq_bayes.pdf){width="\textwidth"}
![Empirical pair correlation functions (black lines) for each subject in the end of the video recording together with 95% global envelopes (grey areas) constructed from 25000 simulations from the posterior predictive distribution of the soft core model estimated with noise.[]{data-label="fig:envelopesSoftcoreBayes"}](envelopes_seqbayes_g.pdf){width="\textwidth"}
Generative point process model {#sec:genmodel}
==============================
In our second approach, we first model the underlying unobserved sweat glands and then, model the activated sweat glands as an independent thinning of the underlying gland pattern. Modelling the glands and the activation of them separately allows one to answer questions regarding specifically the activation process. One possible hypothesis is that the underlying gland pattern itself is not different between controls and subjects with neuropathy, but the activation process is different. More specifically, almost all glands should activate on healthy subjects while the glands of the subjects with neuropathy could have a tendency to leave larger holes in the activation process [@ProviteraEtal2010].
Model specification
-------------------
It seems reasonable to assume that the underlying (unobserved) sweat gland pattern is a rather densely packed regular point pattern covering the whole skin. To obtain such a structure, some type of soft-core sequential inhibition process, where points are added as long as it is possible (we do not know the actual number of glands), would be appropriate. However, it is not straightforward to decide when to stop adding points since theoretically, soft-core type of interaction always allows new points. Instead, we start by generating a simple sequential inhibition (SSI) model[@IllianEtal2008], which is then disturbed to obtain a soft-core structure. A sample from the SSI model is generated sequentially by proposing points from the uniform distribution and accepting them if the pattern satisfies the hardcore condition with hardcore distance $R$, i.e. the new proposed point does not lie within distance $R$ from any earlier point. This is continued until there is no space left for new points. The disturbed SSI model is obtained from the “pure” SSI model by displacing the location of each point with an independent zero mean isotropic Gaussian random variable with covariance $\sigma^2 I$.
We assume independent gland activation, i.e. that the final pattern is a result of an independent thinning of the underlying disturbed SSI process. Therefore, the model has three parameters: inhibition range $R$, hardness of inhibition $\sigma$, and probability of activation $p$.
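A sketch of simulating the generative model is given below; the stopping rule for the SSI step (a fixed number of consecutive failed proposals) anticipates the simplification used for the ABC fitting in the next section, and the parameter values in the example are purely illustrative.

```julia
# Sketch of the generative model: simple sequential inhibition (SSI) with hardcore distance R
# inside a w x h window, Gaussian displacement with standard deviation sigma, and independent
# thinning with activation probability p.
function simulate_generative(R, sigma, p, w, h; max_fails = 300)
    pts = Tuple{Float64,Float64}[]
    fails = 0
    while fails < max_fails
        cand = (w * rand(), h * rand())
        if all(hypot(cand[1] - q[1], cand[2] - q[2]) >= R for q in pts)
            push!(pts, cand); fails = 0
        else
            fails += 1
        end
    end
    # disturb every gland location by an isotropic Gaussian displacement
    disturbed = [(q[1] + sigma * randn(), q[2] + sigma * randn()) for q in pts]
    # independent activation (thinning): keep each gland with probability p
    return [q for q in disturbed if rand() < p]
end

# example: one pattern in a 2592 x 1944 pixel window (the frame size of the videos);
# R, sigma and p below are illustrative values only
pattern = simulate_generative(80.0, 3.0, 0.7, 2592.0, 1944.0)
```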
Parameter estimation using approximate Bayesian computation
-----------------------------------------------------------
For the generative model, we cannot write down the likelihood. However, if we make the simplification that the generation of SSI points is continued until, for example, 300 failed attempts are proposed in a row, sampling from the model is easy. Approximate Bayesian computation (ABC) is a method for Bayesian inference in situations where the likelihood of the model is intractable[@MarinPudloRobertRyder2012; @SunnokerEtal2013], but it is possible to simulate the model. It is based on sampling from the (pseudo-) posterior distribution $$\begin{aligned}
\pi_\epsilon(\theta) = \pi(\theta){\mathbf{P}}(\| s(Y_{\theta}) - s(y) \| < \epsilon),\end{aligned}$$ where $Y_{\theta}$ follows the model with parameter vector $\theta$, $y$ is the data, $\pi(\cdot)$ the prior distribution for the parameters, $s$ an appropriately chosen summary statistic, and $\epsilon$ a tolerance level.
### ABC-MCMC
A simple ABC rejection sampler produces each draw $\theta_i$, $i = 1, \dots, N$, by repeating the following steps until a proposal is accepted:

1. Generate a parameter vector $\theta'$ from the prior distribution $\pi$.
2. Generate a realisation $z$ from the model with parameter vector $\theta'$.
3. If $\|s(z) - s(y)\| < \epsilon$, accept the proposal and set $\theta_i \gets \theta'$; otherwise return to step 1.
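A minimal implementation sketch of this rejection step is shown below; `prior_sample` and `summaries` are placeholders for the prior sampler and the summary-statistic computation (described below), and `simulate_generative` refers to the simulation sketch above.

```julia
# One ABC rejection draw: propose from the prior, simulate, accept if the summary
# statistics are within tolerance eps of the observed summaries s_obs.
function abc_rejection_draw(prior_sample, summaries, s_obs, eps, w, h)
    while true
        R, sigma, p = prior_sample()
        z = simulate_generative(R, sigma, p, w, h)
        s = summaries(z)
        if sqrt(sum(abs2, s .- s_obs)) < eps          # Euclidean distance between summaries
            return (R, sigma, p)
        end
    end
end

# N independent draws give a sample from the ABC (pseudo-)posterior
abc_sample(N, args...) = [abc_rejection_draw(args...) for _ in 1:N]
```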
This basic algorithm can be rather inefficient, but fortunately, there are several more efficient algorithms for performing ABC. We used an adaptive ABC-MCMC algorithm [@ViholaFranks2019]. In our data study below, the MCMC was run for $10\,000\,000$ iterations and the $250\,000$ simulated parameter values with the smallest distances $\|s(z) - s(y)\|$ were taken as the posterior sample.
### Summary statistics
The choice of summary statistics is crucial for the ABC method to work. For a regular point process model, it is natural to use summary statistics based on the pair correlation function $g$. Instead of using the full pair correlation function $g$, we tried to find a specific part of it that would be sufficient for our purpose following the rule of thumb[@LiFearnhead2018] that the number of summary statistics in the ABC approach should approximately match with the number of parameters to be estimated. The location of the first peak of the pair correlation function is intuitively connected to the inhibition range $R$. However, the location of the first peak can be difficult to estimate exactly and thus, we used the smallest distance $r_1 > 10$ pixels where $g(r_1) = 0.75$ as the location of the uphill before the first peak. Furthermore, the slope of the uphill provides information on the “softness” parameter $\sigma$ and we chose the smallest distance $r_2 > 10$ pixels where $g(r_2) = 1$ as the second summary statistic. Finally, the smallest distance $r_3$ in the empty space function $F$ where $F(r_3) = 0.5$ was taken as the third summary statistic to represent the activation probability $p$. The empty space function was chosen because it gives information on the number of points but is not greatly affected by erroneous nearby points. Since all the chosen summary statistics, $r_1, r_2$ and $r_3$, have a similar order of magnitude, we did not have to add any weights in the ABC algorithm. The specific values 0.75 and 1 were chosen to be somewhat separated and not too small to account for possible errors caused by splitting of spots into multiple glands that would cause the pair-correlation function not to start from zero. In addition, we only considered distances greater than 10 pixels since at very short distances the kernel estimator of the pair correlation function is not very reliable. These choices worked well for the sweat gland data, as demonstrated below.
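A sketch of computing the three summary statistics from the estimated functions is given below; it assumes $\hat g$ and $\hat F$ have already been evaluated on a common grid of distances (in pixels) and simply returns the first grid points where the stated levels are reached, with no special handling if a level is never reached.

```julia
# First distance on the grid where f reaches the given level, restricted to r > rmin.
function first_crossing(r, f, level; rmin = 0.0)
    for i in eachindex(r)
        if r[i] > rmin && f[i] >= level
            return r[i]
        end
    end
    return NaN                       # level never reached
end

# r1: first r > 10 px with g(r) = 0.75; r2: first r > 10 px with g(r) = 1;
# r3: first r with F(r) = 0.5 (empty space function).
function sweat_summaries(r, ghat, Fhat)
    r1 = first_crossing(r, ghat, 0.75; rmin = 10.0)
    r2 = first_crossing(r, ghat, 1.0;  rmin = 10.0)
    r3 = first_crossing(r, Fhat, 0.5)
    return (r1, r2, r3)
end
```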
Application to the sweat gland data {#application-to-the-sweat-gland-data}
-----------------------------------
The generative model was fitted to the sweat gland data using the ABC approach described above. In addition to the above specifications, we needed to set the priors. For $R$ we used an improper uniform prior on $[40, \infty)$, which prevents unreasonably small values of $R$; in addition to being unrealistic, small $R$ values result in a large number of points in the SSI process, which is computationally challenging. The prior of $p$ was uniform on $[0.1, 1]$, stating that at least 10% of the glands (modelled by the underlying disturbed SSI process) needed to activate and thus be observed. Furthermore, for $\sigma$ we used the gamma distribution with shape parameter equal to $10/3$ and scale parameter equal to $3$. While the priors for $R$ and $p$ can be considered rather non-informative, the prior for $\sigma$ was somewhat informative, suggesting a positive but not too large $\sigma$. Note that if $\sigma$ were very large in comparison to $R$, it would break all the structure of the SSI process, which is unreasonable.
The posterior marginal histograms for the parameters can be seen in Figure \[fig:parestABC\] and 95% global envelopes for the pair correlation function constructed from 25000 simulations from the posterior predictive distribution in Figure \[fig:envelopesSSIABC\]. As can be seen in Figure \[fig:parestABC\], the parameter estimates vary somewhat between the subjects and groups. Differences in the softness of the model, i.e. in the values of the parameter $\sigma$, are small. However, there seems to be a slight tendency for the inhibition range $R$ to be a little smaller in the control group than in the MNA groups, but the difference is not clear based on the limited amount of data we have. The range was always between 60 pixels and 100 pixels. Furthermore, the control subjects tend to have a larger activation probability than the MNA patients, but the within group variation is large. This is in agreement with earlier studies, which indicate that a larger number of sweat glands of controls than of MNA patients activate[@LoavenbruckEtal2017].
According to the visual evaluation of the global envelopes of the pair correlation function (see Figure \[fig:envelopesSSIABC\]) and the empty space function (see Figure \[fig:envelopesSSIABCF\]), the model seems to fit the data quite well. It captures the behavior of the pair correlation function both at small distances and around the first peak. It should also be mentioned that the envelopes for the pair correlation function are rather wide at small distances, covering the observed functions in almost all cases, even though the model did not include any error term. The wide envelopes are due to the relatively wide posterior distribution of $\sigma$. Namely, a large $\sigma$ can lead to some close pairs in the patterns and consequently also to positive values of the pair correlation function at small distances. Another reason for the relatively wide envelopes may be that the summary statistics used in the ABC approach were chosen such that they do not use any information at very short distances.
We explored a few other priors for $\sigma$, namely improper uniform and exponential distributions with means $1, 2$ and $4$. The posterior distributions of the other parameters were not affected by the choice of the prior for $\sigma$, but the posterior of $\sigma$ itself was somewhat sensitive to the choice, and the goodness-of-fit of the model measured by the pair correlation function was affected as well. Namely, the improper uniform prior led to a wider posterior distribution of $\sigma$, and large $\sigma$ caused the variation of the pair correlation function to be even higher at small distances. On the other hand, the stricter exponential priors shrank the posterior distribution towards zero, and very small $\sigma$ caused the peak of the pair correlation function to be too sharp. Thus, the disturbance parameter $\sigma$ needed a somewhat informative prior to lead to a good fit of the model.
We simulated patterns from the posterior predictive distribution and the simulated patterns mimic the data patterns rather well, see Figure \[fig:pointpatternexamples\]. Note, in particular, that the independent thinning seems to produce rather similar empty spots as there are in the data, as also indicated by the empty space function (Figure \[fig:envelopesSSIABCF\]).
![Posterior marginals for each subject (rows) and each parameter (columns) for the generative model. []{data-label="fig:parestABC"}](parest_packing_abc15_GenerativeModel_Gamma3_10_3_25000.pdf){width="\textwidth"}
![Empirical pair correlation functions (black lines) for each subject in the end of the video recording together with 95% global envelopes (grey areas) and means (dashed lines) constructed from 25000 simulations from the posterior predictive distribution of the generative model.[]{data-label="fig:envelopesSSIABC"}](envelopes_thinSSI_pcf_abc15_GenerativeModel_Gamma3_10_3_25000.pdf){width="\textwidth"}
![Empirical empty space functions (black lines) for each subject in the end of the video recording together with 95% global envelopes (grey areas) and means (dashed lines) constructed from 25000 simulations from the posterior predictive distribution of the generative model. []{data-label="fig:envelopesSSIABCF"}](envelopes_thinSSI_F_abc15_GenerativeModel_Gamma3_10_3_25000.pdf){width="\textwidth"}
![The original point patterns (top) and patterns generated from the corresponding posterior predictive distributions of the generative model (bottom) for three subjects.[]{data-label="fig:pointpatternexamples"}](pointpatternexamples_abc15_v3.pdf){width="\textwidth"}
Discussion {#sec:discussion}
==========
We suggested two point process models for the activation of sweat glands: a sequential softcore model describing the appearance of the activated sweat glands, and a thinned disturbed SSI process, which we call a generative model, where we start by modelling the underlying unobserved sweat gland pattern. The data were videos of sweat gland activation recorded from 15 subjects. As one of the image analysis steps needed to extract the locations of the sweat glands, we proposed a change point detection approach to decide whether a pixel belongs to a wet area. For automatic selection of the threshold for detecting a change point, we investigated the possibility of using a simple statistical model (not presented). However, choosing the thresholds manually turned out to give better results. A manual choice of thresholds in fact gave a reasonable way to adapt to some peculiarities in the videos, such as darkening over time due to reasons other than sweat.
Maximizing the log-likelihood function of a sequential point process has been regarded as computationally costly due to the integrals in the normalizing constants[@PenttinenYlitalo2016]. However, for the sequential softcore model these integrals have a particular sum form which allows us to compute them efficiently and to perform Bayesian inference. The same efficient computation scheme is applicable to any sequential point process having an arrival density with a similar sum structure. To estimate the parameters of the generative model, we employed an ABC algorithm since the likelihood function was not easily available.
Even though our proposed image analysis approach worked well, there were some incorrectly identified close pairs of glands in the extracted point patterns. To take into account such errors, we added an error term in the sequential softcore model resulting in a mixture model having a softcore component and a uniform noise component. For the generative model, on the other hand, the summary statistics in the ABC approach were chosen such that they were robust to close pairs of points.
The proposed models were fitted to the data and the parameters estimated from the patterns from healthy subjects and from subjects suffering from neuropathy were compared. The activation probability (in the generative model) was higher in the control group than in the neuropathy groups. Based on the limited amount of data, we were not able to find any further differences between the groups. Our generative model with independent activation fitted the data well. It also seems that the independent activation can result in similar holes in the point patterns as observed earlier in the sweat gland patterns[@ProviteraEtal2010], see e.g. the bottom right plot in Figure \[fig:pointpatternexamples\].
We believe that the models suggested here, especially the generative model, are good starting points for further studies using larger data sets including more subjects and replicates from each subject. It would certainly be interesting to include the sizes of the sweat spots into the analysis and explore whether the independent activation is adequate even when more data and information are available.
Acknowledgements {#acknowledgements .unnumbered}
================
Mikko Kuronen and Mari Myllym[ä]{}ki have been financially supported by the Academy of Finland (Project Numbers 306875 and 295100) and Aila Särkkä by the Swedish Research Council (VR 2013-5212) and by the Swedish Foundation for Strategic Research (SSF AM13-0066). The authors thank Matti Vihola for useful discussions.
---
abstract: |
The relativistically broadened Fe K$\alpha$ line, originating from the accretion disc in the vicinity of a supermassive black hole, is observed in less than 50% of type 1 Active Galactic Nuclei (AGN). In this study we investigate whether this lack of detections could be explained by the effects of certain parameters of the accretion disc and black hole, such as the inclination, the inner and outer radii of the disc and the emissivity index. In order to determine how these parameters affect the Fe K$\alpha$ line shape, we simulated about 60,000 Fe K$\alpha$ line profiles emitted from the relativistic disc.
Based on the simulated line profiles, we conclude that the lack of the Fe K$\alpha$ line detection in type 1 AGN could be caused by the specific parameters of the emitting disc, but also by the limits in the spectral resolution and sensitivity of the X-ray detectors.
address:
- |
Faculty of Sciences and Mathematics, University of Niš,\
Višegradska 33, 18000 Niš, Serbia\
[email protected]
- |
Physics and Astronomy, University of Southampton,\
Southampton, SO17 1BJ, UK\
[email protected]
- |
Astronomical Observatory,\
Volgina 7, 11060 Belgrade, Serbia\
$^*[email protected]\
$^\[email protected]
author:
- 'Milan Milošević'
- 'Miika A. Pursiainen'
- 'Predrag Jovanović$^*$ and Luka Č. Popović$^\dagger$'
title: |
The shape of Fe K$\alpha$ line emitted from\
relativistic accretion disc around AGN black holes
---
Introduction
============
Active galaxies are galaxies that have a small core of emission embedded at the center of an otherwise typical galaxy. This core is typically highly variable and very bright compared to the rest of the galaxy. Active galaxies most likely represent one phase in galaxy evolution. Their cores, Active Galactic Nuclei (AGN), are among the most powerful radiation sources in the universe. The luminosity of a typical AGN is in the range of $10^8 - 10^{14} L_\odot$. The enormous amount of radiation comes from an accretion disc surrounding a Supermassive Black Hole (SMBH) that is assumed to reside in the centre of an AGN.
The structure of all AGN seems to be similar: the central SMBH is surrounded by an optically thick and geometrically thin accretion disc that emits in a wide wavelength range from the X-ray to the optical spectral band, mostly in the continuum. The X-ray and UV radiation of the disc ionizes the gas in the so-called Broad Line Region (BLR) that emits broad emission lines. The BLR is surrounded by cold gas in the form of a torus, which emits in the infrared spectral band and can obscure the BLR (and the accretion disc) emission. Therefore, we observe AGN with broad lines (unobscured by the torus, so-called type 1 AGN), and without broad emission lines (obscured AGN, so-called type 2 AGN) [@Peterson1997].
As we noted above, the accretion disc emits mostly in the continuum, but the inner part of the accretion disc (besides the X-ray continuum) also emits X-ray lines, among them the Fe K$\alpha$ spectral line at 6.4 keV. The line usually has an asymmetric shape with a narrow bright blue peak and a wide faint red peak. Since this line is produced close to the first marginally stable orbit, it is an important indicator of accreting flows around the SMBH, as well as of the spacetime geometry in these regions [@Jovanovic2012; @Jovanovic2012a].
The first results from the ASCA satellite showed that the Fe K$\alpha$ line is very common in the spectra of type 1 AGN and statistical evidence of broadening was found in $\sim 75\% $ of the sample [@Fabian1989; @Nandra1997]. However, more recent studies of the same type of AGN showed that there is relativistic line broadening in only $54\pm 10\% $ of the sample, and only around $30\%$ require the line to originate from the vicinity of the SMBH [@Nandra2007].
In this paper we study the influence of the disc outer radius on the shape of the Fe K$\alpha$ spectral line for different disc parameters.
The paper is organized as follows. In Sec. \[sec:modeling\] we present the method for modeling the Fe K$\alpha$ spectral line profile. In the following Sec. \[sec:simulation\] we present the parameters used in our simulations. In Sec. \[sec:results\] the obtained results are shown and discussed. Finally, in Sec. \[sec:conclusions\] we summarize our results and give conclusions.
The Fe K$\alpha$ line and SMBHs of AGN {#sec:modeling}
======================================
The relativistic component of the Fe K$\alpha$ line was discovered by Tanaka et al. in 1995. They obtained the first convincing proof for the existence of the Fe K$\alpha$ line in AGN spectra after four-day observations of the Seyfert 1 galaxy MCG-6-30-15 [@Tanaka1995]. The Fe K$\alpha$ line in this object has a rather broad profile. If the line originated from an arbitrary radius of a nonrelativistic (Keplerian) accretion disc, it would have a symmetrical profile (due to the Doppler effect) with two peaks: a ”blue” one which is produced by the emitting material on the approaching side of the disc with respect to the observer, and a ”red” one which corresponds to the emitting material on the receding side of the disc (Fig. \[fig:bluered\]). The widest parts of the Fe K$\alpha$ line arise from the innermost regions of the disc, where the rotation of the emitting material is the fastest. It was found that, in the case of 14 Seyfert 1 galaxies, the Full-Widths at Half-Maximum (FWHM) of their Fe K$\alpha$ lines correspond to velocities of $\approx 50,000$ km/s; however, in some special cases (like the Seyfert 1 galaxy MCG-6-30-15) the FWHM corresponds to a velocity of 30% of the speed of light [@Nandra2007]. It means that in the vicinity of the central black hole, the orbital velocities of the emitting material are relativistic, causing the enhancement of the Fe K$\alpha$ line “blue” peak with respect to its “red” peak.
![Schematic figure of the calculated parameters of the profile of the spectral line. The asymmetricity ratio was found by dividing the area colored in blue by the area colored red. Dashed black line represents the Full-Widths at Half-Maximum (FWHM).[]{data-label="fig:bluered"}](fig2.jpg){width="0.5\linewidth"}
In the case of a line that originates from a relativistically rotating accretion disc of an AGN, the resulting profile of the Fe K$\alpha$ line is a combination of three different effects [@Jovanovic2012]:
- Doppler shift due to rotation of emitting material, which is responsible for occurrence of two peaks;
- Special relativistic effect - the relativistic beaming, which is responsible for enhancement of the blue peak with respect to the red one;
- General relativistic effect - the gravitational redshift, which is responsible for smearing of the line profile.
These characteristics of the observed Fe K$\alpha$ line profiles represent a fundamental tool for investigating the plasma conditions and the spacetime geometry in the vicinity of the SMBH of AGN.
![Schematic illustration of the ray-tracing method in the Kerr metric, showing a light ray emitted from some radius of the accretion disc around a rotating BH with angular momentum $a$ and inclination $\theta_{obs}$. The image is visible on the observer’s sky with coordinates (impact parameters) $\alpha$ and $\beta$. (Figure courtesy: Vesna Borka Jovanović [@Jovanovic2009]) []{data-label="fig:raytrace"}](fig53_1.jpg){width="\linewidth"}
Numerical simulations {#sec:simulation}
=====================
The disc emission can be analyzed by numerical simulations taking into account only the photon trajectories reaching the observer’s sky plane. This method is based on the so-called ray-tracing method in the Kerr metric [@Bao1994; @Bromley1997; @Fanton1997; @Cadez1998]. The image of the disc on the observer’s sky is divided into a number of small elements (pixels). The color images of the accretion disc which a distant observer would see with a high resolution telescope can be obtained in the following way: for each pixel of the image, the photon is traced backward from the observer by following the geodesics in the Kerr space-time, until it crosses the plane of the disc. Then, the flux density of the radiation emitted by the disc at that point, as well as the redshift factor of the photon, are calculated. The simulated line profiles can be calculated taking into account the intensities and received photon energies of all pixels of the corresponding disc image.
The method used in the simulations is based on the pseudo-analytical integration of the geodesic equations which describe the photon trajectories in the general case of a rotating BH having some angular momentum $J$, whose gravitational field is therefore described by the Kerr metric [@Cadez1998; @Jovanovic2009]: $$ds^2=-\left(1-{\dfrac{2Mr}{\Sigma}}\right)dt^2
-{\dfrac{4Mar}{\Sigma}}\sin^2{\theta}dt d\phi
+{\dfrac{A}{\Sigma}}\sin^2{\theta}d\phi^2
+{\dfrac{\Sigma}{\Delta}}dr^2+\Sigma d\theta^2,
\label{eq32_1}$$ where $(r,\theta,\phi,t)$ are the usual Boyer-Lindquist coordinates, with $c=G=1$ and $\Sigma=r^2+a^2\cos^2{\theta}$, $\Delta=r^2+a^2-2Mr$, and $A=(r^2+a^2)^2-a^2\Delta\sin^2{\theta}$.
The Kerr metric depends on the angular momentum normalized to the mass $M$ of the black hole: $a=J/Mc$, $0\le a \le M$.
A photon trajectory in the Kerr metric can be described by three constants of motion (the energy at infinity and two constants related to the angular momentum, respectively) which, in natural units $c=G=M=1$, have the following forms [@Cadez1998; @Jovanovic2009]: $$E=-p_t,\quad \Lambda=p_\phi,\quad Q=p^2_\theta-a^2 E^2 cos^2\theta+\Lambda^2 cot^2\theta,$$ where $p$ is the 4-momentum.
Now, two dimensionless parameters $\lambda=\Lambda/E$ and $q=Q^{1/2}/E$ can be introduced to express the trajectory of the photon, because it is independent of the photon energy. Parameters $\lambda$ and $q$ are related to the two impact parameters $\alpha$ and $\beta$ which describe the apparent position on the observer’s celestial sphere: $$\alpha = -{\dfrac{{\lambda}} {{\sin \theta _{obs}}} }, \qquad
\beta = \pm \left( {q^{2} + a^{2}\cos ^{2}\theta _{obs} - \lambda ^{2}\cot ^{2}\theta _{obs}} \right)^{{\frac{{1}}{{2}}}},$$ where the sign of $\beta$ is determined by $\left( {{\dfrac{{dr}}{{d\theta}}
}}\right)_{obs}$.
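As a small illustration, the conversion between the constants of motion $(\lambda, q)$ and the impact parameters $(\alpha, \beta)$ can be coded directly from these formulas; the sketch below (purely illustrative, not the authors' code) also shows the inverse map used when scanning the pixels of the observer's photographic plate.

```python
import numpy as np

def impact_parameters(lam, q, a, theta_obs, sign_beta=+1):
    """(alpha, beta) on the observer's sky from the constants of motion (lambda, q)."""
    alpha = -lam / np.sin(theta_obs)
    beta2 = q**2 + a**2 * np.cos(theta_obs)**2 - lam**2 / np.tan(theta_obs)**2
    return alpha, sign_beta * np.sqrt(beta2)

def motion_constants(alpha, beta, a, theta_obs):
    """Inverse map: (lambda, q) from the impact parameters (alpha, beta)."""
    lam = -alpha * np.sin(theta_obs)
    q2 = beta**2 - a**2 * np.cos(theta_obs)**2 + lam**2 / np.tan(theta_obs)**2
    return lam, np.sqrt(q2)
```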
The solution of integral equation [@Cadez1998]: $$\pm \int\limits_{r_{em}} ^{\infty} {{\dfrac{{dr}}{{\sqrt {R\left({r,\lambda ,q} \right)}}} }} = \pm \int\limits_{\theta _{em}} ^{\theta_{obs}} {{\dfrac{{d\theta}} {{\sqrt {\Theta \left( {\theta ,\lambda ,q} \right)}}} }},
\label{eq53_1}$$ $$\begin{array}{c}
R\left( {r,\lambda ,q} \right) = \left( {r^{2} + a^{2} - a\lambda} \right)^{2} - \Delta {\left[ {\left( {\lambda - a} \right)^{2} + q^{2}} \right]}, \\
\Theta \left( {\theta ,\lambda ,q} \right) = q^{2} + a^{2}\cos ^{2}\theta -\lambda ^{2}\cot ^{2}\theta .
\end{array}
\label{eq53_2}$$ provides the photon trajectories (null geodesics) which originate in the accretion disc at some emission radius $r_{em}$ and reach the observer at infinity. The integral Equation (\[eq53\_1\]) can be solved in terms of Jacobian elliptic functions, and therefore it is a pseudo-analytical integration.
Photons emitted at frequency $\nu_{em}$ will reach infinity at frequency $\nu_{obs}$ because of relativistic effects. Their ratio $g = \dfrac{{\nu _{obs}}}
{{\nu _{em}}}$ determines the shift due to these effects. The total observed flux at the observed energy $E_{obs}$ is given by [@Fanton1997]: $$F_{obs} \left( {E_{obs}} \right) = {\int\limits_{image} {\varepsilon \left({r} \right)}} g^{4}\delta \left( {E_{obs} - gE_{0}} \right)d\Xi ,
\label{eq53_3}$$ where $\varepsilon \left( {r} \right)$ is the disc emissivity, $d\Xi$ is the solid angle subtended by the disc in the observer’s sky and $E_{0}$ is the rest energy.
The image of a simulated accretion disc is obtained in the following way [@Jovanovic2009]:
1. values of the input parameters are specified: inner ($R_{in}$) and outer ($R_{out}$) radii of the disc, angular momentum $a$ of the central BH, disc inclination (observer’s viewing angle) $\theta_{obs}$ (also, denoted by $i$) and parameters defining the disc emissivity
2. constants of motion $\lambda$ and $q$ are calculated for each pair of impact parameters $\alpha$ and $\beta$ (i.e. for each pixel on imaginary observer’s photographic plate)
3. geodesic Equation (\[eq53\_1\]) is integrated for each pair of $\lambda$ and $q$
4. values of shift due to relativistic effects $g$ and observed flux $F_{obs}$ are calculated
5. pixels on imaginary observer’s photographic plate are colored according to the value of shift $g$ and a simulated disc image is obtained.
![The illustration of a simulated accretion disc (left) and the corresponding Fe K$\alpha$ line profile (right). The parameters of the simulation are $q=2.5$, $i=65^{\circ}$, $R_{in}=r_{ms}$, $R_{out}=20$, $a=0.05$, $nres=5000$ and $nbin=80$.[]{data-label="fig:lineprofile"}](i-DD-000295.jpg){width="\linewidth"}
From the corresponding disc images the simulated line profiles can be calculated by binning the observed flux at all pixels over the bins of the shift $g$. In the left panel of Fig. \[fig:lineprofile\] an example of a simulated disc image obtained in this way is presented. The corresponding simulated line profile is presented in the right panel of the same figure.
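A sketch of this binning step is given below: given per-pixel arrays of the shift $g$ and of the corresponding observed flux contributions (both assumed to come from the ray-tracing step, which is not reproduced here), the line profile is a weighted histogram of the flux over the bins of $g$, converted to observed energy via $E_{obs} = g E_0$. The range of $g$ and the number of bins are illustrative parameters.

```python
import numpy as np

E0 = 6.4  # rest energy of the Fe K-alpha line, keV

def line_profile(g, flux, nbin=80, g_range=(0.0, 2.0)):
    """Bin the per-pixel observed flux over the relativistic shift g.

    g, flux : 1D arrays over all disc-image pixels (output of the ray-tracing step).
    Returns bin centres in observed energy [keV] and the binned flux.
    """
    edges = np.linspace(g_range[0], g_range[1], nbin + 1)
    binned, _ = np.histogram(g, bins=edges, weights=flux)
    centres = 0.5 * (edges[:-1] + edges[1:]) * E0     # E_obs = g * E_0
    return centres, binned
```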
Disc parameters
---------------
All simulated line profiles are obtained using the ray-tracing method described in the previous section and proposed by A. Čadež et al. [@Cadez1998]. About 60,000 accretion discs and the corresponding Fe K$\alpha$ lines were simulated for various sets of parameters (Table \[tab:params\]). We varied the values of the emissivity index $q$, the inclination $i$, the outer radius $R_{out}$ of the disc and the spin $a$ of the BH.
The emissivity index $q$ defines the emissivity profile of the disc with radius $R$ according to the law $\epsilon(R) \propto R^{-q}$. The inclination ranges from $5^{\circ}$ to $80^{\circ}$ and the spin of the BH from an almost non-rotating ($a=0.05$) up to a maximally rotating Kerr BH ($a=0.998$). The inner radius $R_{in}$ was determined as the innermost stable orbit around the SMBH, also known as *the marginally stable orbit*, $r_\textrm{ms}$. Its values are $1.24R_\textrm{g}$ for $a=0.998$ and $5.84R_\textrm{g}$ for $a=0.05$.
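The marginally stable orbit can be computed from the standard Bardeen-type expression for prograde orbits; the following snippet is only an illustrative check (not the authors' code) and reproduces the quoted values $r_{\rm ms}\approx 5.84\,R_\textrm{g}$ for $a=0.05$ and $r_{\rm ms}\approx 1.24\,R_\textrm{g}$ for $a=0.998$.

```python
import numpy as np

def r_ms(a):
    """Radius of the marginally stable (prograde) orbit in units of R_g = GM/c^2."""
    z1 = 1 + (1 - a**2) ** (1 / 3) * ((1 + a) ** (1 / 3) + (1 - a) ** (1 / 3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

print(round(r_ms(0.05), 2), round(r_ms(0.998), 2))   # 5.84 1.24
```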
Results {#sec:results}
=======
The effects of the spectral and spatial resolutions of the disc
---------------------------------------------------------------
The obtained results show that during the binning procedure one has to assume an appropriate number of line bins, since it could have a significant effect on the resulting simulated line profiles. Namely, too small a number of line bins will smooth the line profiles and potentially hide some of the line’s important features, such as its red peak (as demonstrated in the top left panel of Fig. \[nbin\] for a number of bins less than $\approx 80$). Even in the case of highly inclined discs, when the red peak is relatively strong (see the second row of Fig. \[nbin\]), its intensity and position could be affected by such smoothing. Besides, this smoothing can artificially increase the asymmetricity ratio of the line profile (see the right panels of Fig. \[nbin\]) and induce inaccuracies in its FWHM estimates (see the middle panels of the same figure), depending also on the spin of the central SMBH (as can be seen by comparing the corresponding panels in the second and third rows of Fig. \[nbin\]). These effects are especially emphasized for higher values of the emissivity index, since in this case it could also have a significant influence on the intensity and position of the blue peak (see the bottom row of Fig. \[nbin\]).
The results obtained by the simulation can be compared with the properties of past, current and future X-ray detectors. It is known that the cameras of XMM-Newton provide a spectral resolving power $E/\Delta E \sim 20-50$ [@Turner2001; @Struder]. The energy resolution of the Suzaku satellite was 10 eV at 6 keV, and it provided a spectral resolving power $E/\Delta E \sim 600$ [@trumper; @mitsuda]. For the future X-ray Integral Field Unit (X-IFU), which will be a part of the Athena X-ray Observatory, the planned energy resolution is $E/\Delta E \sim 2800$ in the 0.2 - 12 keV range [@barret2016].
In our simulations the energy resolution $E/\Delta E$ is taken to be in the range of XMM-Newton. The energy resolution at 6.4 keV used in the simulations is $E/\Delta E =$ 25, 35, 40 and 50, for $nbin = $ 50, 70, 80 and 100, respectively.
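These numbers are consistent with binning the profile over the full range of shifts $g \in [0, 2]$, i.e. over observed energies from 0 to $2E_0 = 12.8$ keV, so that $\Delta E = 12.8\,\mathrm{keV}/nbin$ and $E/\Delta E = nbin/2$ at 6.4 keV. A one-line check (under this assumption about the binning range only):

```python
E0 = 6.4  # keV
for nbin in (50, 70, 80, 100):
    dE = 2 * E0 / nbin          # bin width if the profile covers g in [0, 2]
    print(nbin, E0 / dE)        # -> 25.0, 35.0, 40.0, 50.0
```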
Regarding the number of photons received from the accretion disc ($nres\times nres$), in most cases it is sufficient to take $nres\approx 1000$, i.e. to collect $\propto 10^6$ of them, in order to obtain simulated Fe K$\alpha$ line profiles of reasonable quality, as can be seen in Fig. \[nres\]. Only in the case of a high emissivity index (see the bottom row of Fig. \[nres\]) is it necessary to significantly increase the number of photons (i.e. the ”spatial resolution” of the disc) in order to achieve this goal.
The above results clearly demonstrate that both the spectral and spatial resolutions of the X-ray detectors are of crucial significance for accurate measurements of the FWHM and asymmetricity ratio of the observed Fe K$\alpha$ line profiles, and thus for the potential identification of these line profiles as relativistically broadened. In this paper we assumed a spectral resolution similar to that of XMM-Newton and investigated the influence of the spectral resolution on the detection of the relativistic Fe K$\alpha$ line, in order to explore the ability of current detectors to observe (or not observe) this line. However, the next generation of X-ray observatories (e.g. ATHENA) will provide a higher spectral resolution (around 100 times better than current missions), and this is a task that we are going to investigate in a following paper.
The effects of other disc parameters and SMBH spin
--------------------------------------------------
Additionally, we show that the FWHM and asymmetricity ratio of the observed Fe K$\alpha$ line profiles could be used for investigating the physics and geometry in the vicinity of SMBHs even with the spectral resolution of current X-ray telescopes, and for this purpose we simulated the effects of the disc parameters and SMBH spin on these two quantities. The effects of the SMBH spin $a$ on the simulated profiles of the Fe K$\alpha$ line, its FWHM and asymmetricity ratio are presented in Fig. \[spin\]. As can be seen in Fig. \[spin\], the asymmetricity ratio increases, at first rapidly, with $R_{out}$ for the low disc inclination ($i=20^\circ$). For the high disc inclination ($i=60^\circ$) the asymmetricity ratio decreases with $R_{out}$ (see the right panel in the first and the second row, respectively). In the case of a low emissivity index ($q=2$), the asymmetricity ratio increases with $R_{out}$ for $R_{out}<50$ and is almost constant for higher $R_{out}$, especially for high BH spin ($a>0.8$). For a high emissivity index ($q=4$), the asymmetricity ratio decreases at first ($R_{out}<25$) and after that it is constant as $R_{out}$ increases. In this case the asymmetricity ratio is almost constant for all values of $R_{out}$ for high BH spin ($a\ge 0.9$). For the low disc inclination ($i=20^\circ$), the FWHM decreases with $R_{out}$ for $a\ge 0.4$ and increases for lower spins. In all cases FWHM $\approx 1.2$ for $R_{out}=20$ and starts to decrease for higher $R_{out}$ (see the plot in the middle of the first row). For higher inclinations the FWHM increases with $R_{out}$: it increases for $R_{out}<20$, independently of the emissivity index, and becomes nearly constant for higher $R_{out}$. In the cases of high BH spins ($a\ge 0.9$), the FWHM is nearly constant with $R_{out}$.
Fig. \[incl\] shows the influence of the disc inclination $i$ (i.e. the viewing angle $\theta_{obs}$) on the line profile, FWHM and asymmetricity ratio. The presented results indicate that for lower disc inclinations ($i<40^\circ$) the asymmetricity ratio increases with $R_{out}$ (see the right panels of Fig. \[incl\]), for $i\approx 40^\circ$ it becomes nearly constant (especially for larger outer radii $R_{out}$), while for highly inclined discs ($i>40^\circ$) it decreases with $R_{out}$. This result implies that the asymmetricity ratio of the Fe K$\alpha$ line could be used for determining the outer radius of the line emitting region.
The influence of the power law emissivity index $q$ on the simulated line profiles, their FWHM and asymmetricity ratio is presented in Fig. \[emis\], from which it can be seen that for all disc inclinations the asymmetricity ratio increases, at first rapidly, with $R_{out}$. For high emissivity indexes ($q\ge 3$) the asymmetricity ratio becomes nearly constant for $R_{out}>25$. The asymmetricity ratio is affected by the SMBH spin in such a way that in the case of an almost non-rotating (Schwarzschild-like) SMBH ($a=0.05$) the asymmetricity ratio decreases with $R_{out}$. However, in the case of a rapidly rotating Kerr SMBH ($a=0.998$) the asymmetricity ratio is nearly constant with $R_{out}$ for emissivity indexes $q>2.5$. For the inclination $i=20^\circ$, the FWHM increases rapidly for $R_{out}<20$. In the cases of emissivity indexes $q\le3$, the FWHM reaches its maximum at $R_{out}\approx 20$ and decreases as $R_{out}$ increases; however, for $q=4$ the FWHM becomes almost constant for $R_{out}>20$.
As can be seen from Figs. \[nbin\]-\[emis\], in most cases both the FWHM and the asymmetricity ratio of the Fe K$\alpha$ line strongly depend on the disc outer radius $R_{out}$ and its inclination $i$.
Conclusions {#sec:conclusions}
===========
We developed a model of an accretion disc around a SMBH using numerical simulations based on a ray-tracing method in the Kerr metric.
This model allows us to study the radiation which originates in the vicinity of SMBHs. The shape of the emitted broad Fe K$\alpha$ line is strongly affected by three types of shifts: the classical Doppler shift, causing the double-peaked profile; the special relativistic transverse Doppler shift and relativistic beaming, enhancing the blue peak relative to the red one; and the general relativistic gravitational redshift, smearing the blue emission into the red one.
Comparisons between the modelled and observed Fe K$\alpha$ line profiles allow us to determine the parameters of the line emitting region, as well as to study the plasma physics and spacetime metric in the vicinity of SMBHs. Two of the parameters are of special importance for investigating the strong gravitational field in AGN, namely the mass of the central BH and its angular momentum. The other parameters can give us information about the plasma conditions in the vicinity of the central BH of the AGN.
From our simulations, we find that the numbers of line bins and photons taken in the calculations are of crucial significance for obtaining correct Fe K$\alpha$ line profiles, especially in the case of a higher disc emissivity index. Also, the lack of an observed Fe K$\alpha$ line can be caused by the low resolution (our bin simulation) and sensitivity (our number-of-photons simulation) of the X-ray detectors. In addition, we conclude that in most cases the FWHM and the asymmetricity ratio of the Fe K$\alpha$ line strongly depend on the parameters of the disc, especially the outer radius and inclination.
Acknowledgments {#acknowledgments .unnumbered}
===============
This study is part of projects “Astrophysical Spectroscopy of Extragalactic Objects” (No. 176001), “Gravitation and the large scale structure of the Universe” (No. 176003) and “Visible and Invisible Matter in Nearby Galaxies: Theory and Observations” (No. 176021) supported by the Ministry of Education, Science and Technological development of Serbia. The work is partially supported by ICTP — SEENET-MTP project NT-03 ”Cosmology - Classical and Quantum Challenges".
B. M. Peterson, [*[An introduction to active galactic nuclei]{}*]{} (Cambridge University Press, 1997).
P. Jovanovi[ć]{}, [*New Astronomy Reviews*]{} [**56**]{}, 37 (2012).
P. Jovanovic, [*Serbian Astronomical Journal*]{} [**185**]{}, 1 (2012).
A. C. Fabian, M. J. Rees, L. Stella and N. E. White, [*Monthly Notices of the Royal Astronomical Society*]{} [**238**]{}, 729 (1989).
K. Nandra, I. M. George, R. F. Mushotzky, T. J. Turner and T. Yaqoob, [*The Astrophysical Journal*]{} [**477**]{}, 602 (1997).
K. Nandra, P. M. O’Neill, I. M. George and J. N. Reeves, [*Monthly Notices of the Royal Astronomical Society*]{} [**382**]{}, 194 (2007).
Y. Tanaka, K. Nandra, A. C. Fabian, H. Inoue, C. Otani, T. Dotani, K. Hayashida, K. Iwasawa, T. Kii, H. Kunieda, F. Makino and M. Matsuoka, [ *Nature*]{} [**375**]{}, 659 (1995).
G. Bao, P. Hadrava and E. Ostgaard, [*The Astrophysical Journal*]{} [**435**]{}, 55 (1994).
B. C. Bromley, K. Chen and W. A. Miller, [*The Astrophysical Journal*]{} [ **475**]{}, 57 (1997).
C. Fanton, M. Calvani, F. de Felice and A. [Č]{}ade[ž]{}, [*Publications of the Astronomical Society of Japan*]{} [**49**]{}, 159 (1997).
A. [Č]{}ade[ž]{}, C. Fanton and M. Calvani, [*New Astronomy*]{} [**3**]{}, 647 (1998).
P. Jovanovi[ć]{} and L. [Č]{}. Popovi[ć]{} (2009), [arXiv:0903.0978]{}.
M. J. L. Turner et al, [*Astronomy & Astrophysics*]{} [**365**]{} L27–L35 (2001).
L. Strüder et al, [*Astronomy & Astrophysics*]{} [**365**]{}, L18–L26 (2001).
J. E. Trümper, G. Hasinger (Eds.), The Universe in X-Rays. (Springer-Verlag Berlin Heidelberg, 2008).
K. Mitsuda et al, [*Publications of the Astronomical Society of Japan*]{} [**59**]{}, S1–S7 (2007).
D. Barret et al., [*Space Telescopes and Instrumentation 2016: Ultraviolet to Gamma Ray*]{} [**9905**]{}, 99052F (2016).
---
abstract: 'We propose a new model for pricing Quanto CDS and risky bonds. The model operates with four stochastic factors, namely: hazard rate, foreign exchange rate, domestic interest rate, and foreign interest rate, and also allows for jumps-at-default in the FX and foreign interest rates. Corresponding systems of PDEs are derived similar to how this is done in [@BieleckiPDE2005]. A localized version of the RBF partition of unity method is used to solve these 4D PDEs. The results of our numerical experiments presented in the paper qualitatively explain the discrepancies observed in the market values of CDS spreads traded in domestic and foreign economies.'
address:
- |
Tandon School of Engineering, New York University,\
12 Metro Tech Center, RH 517E, Brooklyn NY 11201, USA
- |
Department of Information Technology, Division of Scientific Computing,\
Box 337, 751 05 Uppsala, Sweden
- 'HSBS, New York, USA'
author:
- 'A. Itkin'
- 'V. Shcherbakov'
- 'A. Veygman'
title: 'Influence of jump-at-default in IR and FX on Quanto CDS prices'
---
Quanto Credit Default Swaps, Reduced Form Models, jump-at-default, stochastic interest rates, Radial Basis Function method.
C51, C63, G12, G13
Introduction
============
Quanto CDS is a credit default swap (CDS) with a special feature that the swap premium payments, and/or the cashflows in the case of default, are done in a different currency to that of the reference asset. A typical example would be a CDS that has its reference as a dollar-denominated bond for which the premium of the swap is payable in euros. And in case of default the payment equals the recovery rate on the dollar bond payable in euros. In other words, this CDS is written on a dollar bond, while its premium is payable in euros. These contracts are widely used to hedge holdings in bonds or bank loans that are denominated in a foreign currency (other than the investor’s home currency).
As mentioned in [@Citi2010], this product enables investors to take views on joint spread and FX moves, with the value being a function of the spread, the FX rate and the FX volatility. Given the increased correlation between FX moves and credit spreads, interest in this product has increased recently, although, like recovery swaps, it is still rather a niche market.
A Quanto CDS is quoted as a spread between the standard CDS and that of a different currency, and such quotes are available for different maturities. For instance, one can observe that CDS on European sovereigns are usually traded in US dollars. That is because in case of default a euro-denominated credit protection would significantly drop in value, reflecting the default of the corresponding economy. So, the term structure of Quanto CDS tells us how financial markets view the likelihood of a foreign default and the associated currency devaluations at different horizons, see, e.g., the discussion in [@ACS2017] and references therein.
In Fig. \[histSpread\] historical time series of some European 5Y sovereign CDS traded in USD are presented for the period from 2006 to 2015. It can be seen that these spreads reach their maximum around 2011, and then drop by factors of 2-5 to their current level. However, since high levels of the spreads were recorded then, later in this paper, when choosing the test parameters of our numerical experiments, we will look at the cases corresponding to the period of elevated spreads around 2011.
![Historical time-series of some European 5Y sovereign CDS traded in USD (Markit).[]{data-label="histSpread"}](histSpreads.png){width="\textwidth"}
As far as the value of the Quanto CDS spread is concerned, there are various data in the literature. For instance, in [@ACS2017] the term structure of spreads, defined as the difference between the USD and EUR denominated CDS spreads, is presented for six Eurozone countries: Germany, Belgium, France, Ireland, Italy, Portugal, and for maturities 3, 5, 7, 10, and 15 years relative to the 1 year Quanto spread. This difference could reach 30 bps at the time horizon of 15 years (France, Ireland). In [@Simon2015] the 5 years Quanto CDS spreads are presented for Germany, Italy and France over the period from 2004 to 2013, which, e.g., for Italy could reach 500 bps in 2012. The results presented in [@Brigo] indicate a significant basis across domestic and foreign CDS quotes. For instance, for Italy, a USD CDS spread quote of 440 bps can translate into a EUR quote of 350 bps in the middle of the Euro-debt crisis in the first week of May 2012. More recently, from June 2013, the basis spreads between the EUR quotes and the USD quotes are in the range of around 40 bps.
Quanto effects drew a lot of attention on a modelling side. Various aspects of the problem were under investigation including the relationship between sovereign credit and currency risks, the pricing of sovereign CDS, the impact of contagion on credit risk, see survey in [@ACS2017] and references therein. But in this paper our particular attention will be directed to pricing Quanto CDS, or, more rigorously, to determining and testing an appropriate framework that provides a reasonable explanation of these effects from a mathematical finance point of view. Our approach is close to that in [@Brigo] where a model of Quanto CDS is built based on the reduced form model for credit risk. Within this setting the default time is modeled as a Cox process with explicit diffusion dynamics for default intensity/hazard rate and exponential jump to default, similar to the approach of [@ES2006; @Mohammadi2006]. But what is more important, [@Brigo] introduce an explicit jump-at-default in the FX dynamics. Then they show that this provides a more effective way to model credit/FX dependency as the results of simulation are able to explain the observed basis spreads during the Euro-debt crisis. In contrast, taking into account the instantaneous correlation between the driving Brownian motions of the default intensity and the FX rate alone is not sufficient for doing so.
However, in [@Brigo] only deterministic domestic and foreign interest rates (IR) were considered, while it could be important to extend this approach by relaxing this assumption and letting the rates be stochastic. It would be even more important to account not just for the jump-at-default in the FX rate, but also for a simultaneous jump-at-default in the interest rate of the defaulted country. Relevant data on the subject could be found, e.g., in [@Catao2015]. This investigation shows that the interest rate premium on past default has been underestimated. This is partly due to narrower credit history indicators and, crucially, to the narrower data coverage of previous studies. Once these problems are corrected for, a sizeable and persistent default premium emerges, one which rises with the duration of the default. This means that the longer a country stays in default, the higher the premium it will pay once it resumes borrowing from private capital markets.
Another example is given in [@Katselas2010]. He provides a plot of the overnight interbank cash rate as quoted by the Reserve Bank of Australia for the period starting on 4 January 2000 and finishing on 31 December 2009. This rate serves as an approximation to the risk-free short rate applicable to borrowing/lending in Australia, and the plot indicates that not only are jumps evident in the short rate, but that a pure jump process may act as a suitable model for short rates. This observation prompted, e.g., [@Borovkov2003], to consider using a marked Poisson point process to model the short rate as a pure jump process.
Therefore, in this paper we extend the framework of [@Brigo] by introducing stochastic interest rates and accounting for jumps-at-default in both the FX and foreign (defaulted) interest rates. Our goal is to compare the contribution of both jumps to the value of the Quanto CDS spread. As this problem has four stochastic drivers, plus time, we show that the corresponding CDS price solves a four-dimensional partial differential equation. It is well-known that at this dimensionality finite-difference methods already suffer immensely from the curse of dimensionality, while using Monte Carlo methods is too computationally expensive. Therefore, here we use another method, namely, a radial basis function (RBF) method, which has already demonstrated its efficiency when solving various problems of intermediate ($10 > d > 3$) dimensionality, including those in mathematical finance, see, e.g., [@YCHon3; @Fasshauer2; @Pettersson], thanks to its high order convergence. The latter allows for obtaining a high resolution scheme using just a few discretization nodes. In particular, in this paper a localized version of the RBF method is used. It is based on the partition of unity method (or RBF-PUM). The partition of unity was originally introduced by [@Melenk] for finite element methods, and later adapted for the RBF methods by several authors, [@Safdari; @Shcherbakov]. This approach enables a significant reduction in the number of non-zero elements that remain in the coefficient matrix, hence lowering the computational effort required for solving the system.
The rest of the paper is organized as follows. In Section \[model\] we describe our model, and derive the main partial differential equation (PDE) for the risky bond price under this model. In Section \[modelJumps\] we extend this framework by adding jumps-at-default into the dynamics of the FX and foreign (defaulted) interest rates. Again, the main PDE is derived for the risky bond (the detailed derivation is given in Appendix). The connection of this price with the prices of the Quanto CDS is established in Section \[bond2cds\]. In Section \[numMethod\] the RBF-PUM method is described in detail. In Section \[experiments\] we present numerical results of our experiments with this model and discussion of the observed effects. Finally, Section \[sec:Conclusion\] concludes the paper.
Model
=====
We begin describing our model by giving some useful definitions which are heavily utilized throughout the rest of the paper.
By the [*domestic currency*]{} or the [*liquid currency*]{} we denote the most liquidly traded currency among all contractual currencies. In what follows this is the US dollar (USD).
The other contractual currency we denote as [*contractual*]{} or [*foreign currency*]{}. In this paper it can be both USD and EUR. The premium and protection leg payments are settled in this currency.
Since in this paper we focus on pricing credit default swap (CDS) contracts, it is assumed that their market quotes are available in both domestic and foreign currencies. Let us denote these prices as $\mathrm{CDS}_d$ and $\mathrm{CDS}_f$ respectively. If so, every price $\mathrm{CDS}_f$ expressed in the foreign currency can be translated into the corresponding price in the domestic currency if the exchange rate $Z_t$ for two currencies is provided by the market. In other words, the theoretical price of the CDS contract in the foreign currency would be $Z_t \mathrm{CDS}_d$. However, it is known that the market demonstrates a spread $\mathrm{CDS}_f - Z_t \mathrm{CDS}_d$ which could reach hundreds of bps, [@Brigo]. Hence, the availability of the market quotes on CDS contracts in both currencies together with the corresponding exchange rates allows one to capture these spreads.
We continue our description by considering a framework where all underlying stochastic processes do not experience a jump-at-default, except the default process itself. So, this is similar to what is presented in [@Brigo] with the exception that the interest rates in our model are stochastic. This will then be generalized with the allowance for jumps-at-default in other processes in Section \[modelJumps\].
Simple jump-at-default framework
--------------------------------
Below we choose the risk-neutral probability measure $\mathbb{Q}$ corresponding to the domestic (liquid) currency money market. Also, by $\mathbb{E}_t[\,\cdot\,]$ we denote the expectation conditioned on the information received by time $t$, i.e. $\mathbb{E}[\,\cdot\, | \mathcal{F}_t]$.
Consider two money markets: $B_t$ associated with the domestic currency (USD), and $\hatB_t$ associated with the foreign currency (EUR), where $t\geq 0$ is the calendar time. We assume that the dynamics of the two money market accounts are given by $$\begin{aligned}
\label{mmDyn}
dB_t & = R_t B_t dt, \quad B_0=1,\\
d\hatB_t & = \hatR_t \hatB_t dt, \quad \hatB_0=1, \nonumber\end{aligned}$$ where the stochastic interest rates $R_t, \hatR_t$ follow the Cox-Ingersoll-Ross (CIR) process, [@cir:85] $$\begin{aligned}
\label{dynR}
dR_t &= a(b-R_t)dt + \sigma_r \sqrt{R_t} dW_t^{(1)}, \quad R_0=r,\\
d\hatR_t &= \hat a(\hat b - \hatR_t) dt + \sigma_{\hat{r}} \sqrt{\hatR_t}dW_t^{(2)}, \quad \hatR_0=\hatr. \nonumber\end{aligned}$$ Here $a, \hat a$ are the mean-reversion rates, $b, \hat b$ are the mean-reversion levels, $\sigma_r, \sigma_{\hat{r}}$ are the volatilities, and $W_t^{(1)}, W_t^{(2)}$ are the Brownian motions. Without loss of generality, further we assume $a, \hat a, b, \hat b, \sigma_r, \sigma_{\hatr}$ to be constant. This assumption can be easily relaxed.
We assume that the exchange rate $Z_t$ of the two currencies is also stochastic, and its dynamics is driven by the following stochastic differential equation (SDE) $$\label{Z}
dZ_t = \mu_z Z_t dt + \sigma_z Z_t dW_t^{(3)}, \quad Z_0=z,$$ where $\mu_z, \sigma_z$ are the corresponding drift and volatility, and $W_t^{(3)}$ is another Brownian motion. From the financial point of view $Z_t$ denotes the amount of domestic currency one has to pay to buy one unit of foreign currency. Loosely speaking, this means that 1 euro could be exchanged for $Z_t$ US dollars.
As the underlying security of a CDS contract is a risky bond, we need a model of a credit risk implied by the bond. For modeling the credit risk we use a reduced form model approach, see e.g., [@jarrow/turnbull:95; @DuffieSingleton99; @Bielecki2004; @jarrow2003robust] and references therein. We define the hazard rate $\lambda_t$ to be a stochastic process given by $$\label{lambda}
\lambda_t = e^{Y_t}, \quad t \ge 0,$$ where $Y_t$ follows the Ornstein-Uhlenbeck process defined by the SDE $$\label{Y}
dY_t = \kappa(\theta-Y_t)dt + \sigma_y dW_t^{(4)}, \quad Y_0=y, \\$$ with $\kappa, \theta, \sigma_y$ to be the corresponding mean-reversion rate, mean-reversion level and volatility, and $W_t^{(4)}$ to be another Brownian motion. Both $Z_t$ and $\lambda_t$ are defined and calibrated in the domestic measure.
We assume all Brownian motions $W_t^{(i)}, \ i \in [1,4]$ to be dependent, and this dependence can be specified through the instantaneous correlation $\rho$ between each pair of the Brownian motions, i.e., $<d W_t^{(i)}, d W_t^{(j)}> = \rho_{ij} dt$. Hence, the whole correlation matrix in our model is $$\cal P =
\begin{bmatrix}
1 & \rho_{r\hatr} & \rho_{rz} & \rho_{ry} \\
\rho_{\hatr r} & 1 & \rho_{\hatr z} & \rho_{\hatr y} \\
\rho_{zr} & \rho_{z \hatr} & 1 & \rho_{z y} \\
\rho_{yr} & \rho_{y \hatr} & \rho_{yz} & 1 \\
\end{bmatrix},$$ where all correlations $|\rho_{ij}| \le 1, \ i,j \in [r,\hatr, z, y]$ are assumed to be constant.
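For intuition only, the pre-default dynamics of the four factors can be simulated with a simple Euler scheme as sketched below; the full-truncation treatment of the CIR square roots, the generic constant FX drift `mu_z` and all parameter values are illustrative assumptions, and this simulation plays no role in the PDE-based pricing developed later.

```python
import numpy as np

def simulate_factors(x0, mu_z, params, corr, T=5.0, n_steps=500, seed=0):
    """Euler simulation of (R, R_hat, Z, Y) before default (no jumps included here)."""
    a, b, s_r, a_h, b_h, s_rh, s_z, kappa, theta, s_y = params
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    L = np.linalg.cholesky(corr)                       # correlation matrix P
    r, rh, z, y = x0
    path = [(r, rh, z, y)]
    for _ in range(n_steps):
        dw = L @ rng.standard_normal(4) * np.sqrt(dt)  # correlated Brownian increments
        r_p, rh_p = max(r, 0.0), max(rh, 0.0)          # full truncation for the CIR square roots
        r = r + a * (b - r) * dt + s_r * np.sqrt(r_p) * dw[0]
        rh = rh + a_h * (b_h - rh) * dt + s_rh * np.sqrt(rh_p) * dw[1]
        z = z + mu_z * z * dt + s_z * z * dw[2]
        y = y + kappa * (theta - y) * dt + s_y * dw[3]
        path.append((r, rh, z, y))
    return np.array(path)
```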
Finally, we define the default process $(D_t, \ t \ge 0)$ as $$\label{defProc}
D_t = {\bf 1}_{\tau \le t},$$ where $\tau$ is the default time of the reference entity. In order to exclude trivial cases, we assume that $\QM(\tau > 0) = 1$, and $\QM(\tau \le T) > 0$.
Jumps-at-default in FX and foreign IR {#modelJumps}
-------------------------------------
In this section we extend the above described framework by assuming the value of the foreign currency as well as the foreign interest rate to experience a jump at the default time.
As shown in [@Brigo] and mentioned in the introduction, including jump-at-default into the FX rate provides a more effective way of modeling the credit/FX dependency than the instantaneous correlations imposed among the driving Brownian motions of default intensity and FX rates. Moreover, the authors claim that it is not possible to explain the observed basis spreads during the Euro-debt crisis by using the latter mechanism alone.
However, looking at historical time-series, the existence of a jump-at-default in the foreign interest rate could also be justified, especially in the case when sovereign obligations are in question. For example, after the default of Russia in 1998, the Russian ruble lost about $75$% of its value within $1.5$ months, which in turn resulted in a jump of the corresponding FX rates. On the other hand, the jump in the interest rate can be even more pronounced, since the default also lowers the creditworthiness and dramatically increases the cost of borrowing. For the above mentioned example of the Russian crisis of 1998, the short interest rate grew from $20$% in April 1998 to $120$% in August 1998.
Therefore, it would be interesting to see the relative contribution of each jump to the value of the Quanto CDS spread.
To add jumps to the dynamics of the FX rate in , we follow [@Brigo; @BieleckiPDE2005] who assume that at the time of default the FX rate experiences a single jump which is proportional to the current rate level, i.e. $$\label{jumpZ}
d Z_t = \gamma_z Z_{t^-} d M_t,$$ where $\gamma_z \in [-1,\infty)$ [^1] is a devaluation/revaluation parameter.
The hazard process $\Gamma_t$ of a random time $\tau$ with respect to a reference filtration is defined through the equality $e^{-\Gamma_t} = 1 - \QM\{\tau \le t|\calF_t\}$. It is well known that if the hazard process $\Gamma_t$ of $\tau$ is absolutely continuous, so $$\label{hazard}
\Gamma_t = \int_0^t (1-D_s) \lambda_s ds,$$ and increasing, then the process $M_t = D_t - \Gamma_t$ is a martingale (which is called as the compensated martingale of the default process $D_t$) under the full filtration $\calF_t \vee {\mathcal H}_t$ with ${\mathcal H}_t$ being the filtration generated by the default process. So, $M_t$ is a martingale under $\QM$, [@BieleckiPDE2005].
It can be shown that under the risk-neutral measure associated with the domestic currency, the drift $\mu_z$ is ([@Brigo]) $$\label{na-drift}
\mu_z = R_t-\hatR_t.$$
Therefore, with the allowance for this jump-at-default term, we obtain $$\label{dzJump}
dZ_t = (R_t - \hatR_t) Z_t dt + \sigma_z Z_t dW_t^{(3)} + \gamma_z Z_t d M_t.$$ Thus, $Z_t$ is a martingale under the $\mathbb{Q}$-measure with respect to $\calF_t \vee {\mathcal H}_t$ as it should be, since it is a tradable asset.
Certainly, we are more interested in the negative values of $\gamma_z$ because a default of the reference entity has to negatively impact the value of its local currency. For instance, we expect the value of EUR expressed in USD to fall if some European country defaults.
Similarly, we add a jump-at-default to the stochastic process for the foreign interest rate $\hatR_t$ as $$d \hatR_t = \gamma_\hatr \hatR_{t^-} d D_t,$$ so that the foreign short-rate dynamics transforms to $$\label{rJump}
d\hatR_t = \hat a(\hat b-\hatR_t )dt + \sigma_{\hatr} \sqrt{\hatR_t}dW_t^{(2)} + \gamma_{\hatr} \hatR_{t^-} d D_t.$$ Here $\gamma_{\hatr} \in [-1,\infty)$ is the parameter that determines the post-default cost of borrowing. We are interested in positive values of $\gamma_{\hatr}$, as the interest rate will most likely grow after a default has occurred. Note that $\hatR_t$ is not tradable, and so it is not a martingale under the $\mathbb{Q}$-measure.
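To illustrate how the two jumps act on a simulated path, the sketch below draws the default time from the hazard rate $\lambda_t = e^{Y_t}$ (the usual Cox-process construction) and applies the proportional jumps to $Z$ and $\hatR$ from that time onward. It reuses the path array from the previous sketch and is, again, purely illustrative; the pricing itself is done via PDEs in the following sections.

```python
import numpy as np

def apply_default_jumps(path, dt, gamma_z, gamma_r_hat, seed=1):
    """Draw tau from lambda_t = exp(Y_t) and apply the jumps-at-default to Z and R_hat.

    path : array of shape (n_steps + 1, 4) with columns (R, R_hat, Z, Y).
    """
    rng = np.random.default_rng(seed)
    lam = np.exp(path[:, 3])
    cum_hazard = np.cumsum(lam[:-1]) * dt        # approximates int_0^t lambda_s ds
    e = rng.exponential(1.0)                     # unit exponential threshold
    hit = np.where(cum_hazard >= e)[0]
    if hit.size == 0:
        return path, None                        # no default before the horizon
    k = hit[0] + 1                               # first index at or after the default time
    jumped = path.copy()
    jumped[k:, 2] *= (1.0 + gamma_z)             # Z -> Z (1 + gamma_z): devaluation if gamma_z < 0
    jumped[k:, 1] *= (1.0 + gamma_r_hat)         # R_hat -> R_hat (1 + gamma_r_hat)
    return jumped, k * dt                        # path with jumps and the default time tau
```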
Pricing zero-coupon bonds {#zcbPrice}
=========================
To price contingent claims where the contractual currency differs from the pricing currency, e.g., Quanto CDS, we first need to determine the price of the underlying defaultable zero-coupon bond settled in foreign currency. The bond price under the foreign money market martingale measure $\hat \QM$ reads $$\hat U_t(T) = \hat{\mathbb{E}}_t\left[ \frac{\hatB_t}{\hatB_T} \hat \Phi(T) \right],$$ where $\hatB_t/\hatB_T = \hat B(t,T)$ is the stochastic discount factor from time $T$ to time $t$ in the foreign economy, and $\Phi(T)$ is the payoff function. However, we are going to find this price under the domestic money market measure $\mathbb{Q}$. Hence, converting the payoff to the domestic currency and discounting by the domestic money market account yields $$U_t(T) = \mathbb{E}_t\left[ B(t,T) Z_t \hat \Phi(T) \right],$$ where without loss of generality it is assumed that the notional amount of the contract is equal to one unit of the foreign currency. This implies the payoff function to be $$\hat \Phi(T) = \m1_{\tau>T}.$$ Further, we assume that if this bond defaults, the recovery rate $\calR$ is paid at the time of default. Therefore, the price of a defaultable zero-coupon bond, which pays out one unit of the foreign currency in the domestic economy reads $$\begin{aligned}
\label{payoff}
U_t(T) &= \mathbb{E}_t\left[ B(t,T) Z_T \m1_{\tau>T}
+ \calR B(t,\tau) Z_\tau \m1_{\tau \le T} \right] \\
&= \mathbb{E}_t\left[ B(t,T) Z_T \m1_{\tau>T} \right]
+ \calR \int_t^T \mathbb{E}_t \left[ B(t,\nu) Z_\nu \m1_{\tau \in (\nu-d\nu,\nu]} \right] = w_t(T) + \calR \int_t^T g_t(\nu)d \nu, \nonumber \\
w_t(T) &:= \mathbb{E}_t \left[ Z_{T} B(t,T) \m1_{\tau > T} \right], \qquad
g_t(\nu) := \mathbb{E}_t \left[ B(t,\nu) Z_\nu \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right]. \nonumber\end{aligned}$$
As the whole dynamics of our underlying processes is Markovian, [@BieleckiPDE2005], to find the price of such an instrument we use a PDE approach, so that the defaultable bond price solves the corresponding PDE. This is more efficient from the computational point of view as compared, e.g., with the Monte Carlo method, even though the resulting PDE becomes four-dimensional. We discuss various approaches to its numerical solution in Section \[numMethod\].
Further, conditioning on $R_t = r, \hatR_t = \hatr, Z_t = z, Y_t = y, D_t = d$, and using the approach of [@BieleckiPDE2005] (see Appendix \[apDeriv\]), we obtain that under the risk-neutral measure $\QM$ the price $U_t(T)$ is $$\label{bondPrice}
U_t(T, r, \hatr, y, z) = \m1_{\tau > t} f(t, T, r,\hatr, y, z, 0) +
\m1_{\tau \le t} f(t, T, r,\hatr, y, z, 1).$$ Here the function $f(t, T, r,\hatr, y, z, 1) \equiv u(t, T, X), \ X = \{r,\hatr, y, z\}$ solves the PDE $$\label{PDE1}
\fp{u(t,T,X)}{t} + {\cal L} u(t,T,X) - r u(t,T,X) = 0,$$ where the diffusion operator $\cal L$ reads $$\begin{aligned}
\label{Ldiff}
\cal L &= \frac{1}{2}\sigma_{r}^2 r\sop{}{r} + \frac{1}{2} \sigma_{\hatr}^2 \hatr \sop{}{\hatr} + \frac{1}{2}\sigma_z^2 z^2 \sop{}{z} + \frac{1}{2}\sigma_y^2\sop{}{y}
+ \rho_{r \hatr} \sigma_r \sigma_{\hatr} \sqrt{r \hatr}\cp{}{r}{\hatr} \\
&+ \rho_{rz}\sigma_r \sigma_z z\sqrt{r} \cp{}{r}{z}
+ \rho_{\hatr z} \sigma_{\hatr} \sigma_z z \sqrt{\hatr} \cp{}{z}{\hatr}
+ \rho_{ry}\sigma_r \sigma_y \sqrt{r} \cp{}{r}{y}
+ \rho_{\hatr y} \sigma_{\hatr} \sigma_y \sqrt{\hatr} \cp{}{y}{\hatr}
\nonumber \\
&+ \rho_{yz} \sigma_y \sigma_z z \cp{}{y}{z}
+ a(b-r)\fp{}{r}
+ \hat a(\hat b - \hatr) \fp{}{\hatr}
+ (r - \hatr) z \fp{}{z}
+ \kappa(\theta - y) \fp{}{y}. \nonumber\end{aligned}$$
The second function $f(t, T, r,\hatr, y, z, 0) \equiv v(t, T, X)$ solves the PDE $$\begin{aligned}
\label{PDE2}
\fp{v(t,T,X)}{t} &+ {\cal L} v(t,T,X) - r v(t,T,X)
- \lambda \gamma_z z \fp{v(t,T,X)}{z} \\
&+ \lambda \left[ u(t, T, X^+) -
v(t, T, X) \right] = 0, \qquad X^+ = \{r, \hatr(1+\gamma_\hatr), y, z(1+\gamma_z)\}. \nonumber\end{aligned}$$ where, according to the definition of the hazard rate, $\lambda = e^y$.
The boundary conditions for this problem should be set at the boundaries of the unbounded domain $(r, \hatr, y, z) \in [0,\infty] \times [0,\infty] \times [-\infty,0] \times[0,\infty]$. However, this can be done in many different ways. As the value of the bond price is usually not known at the boundary, similarly to [@Brigo] we assume the second derivatives to vanish towards the boundaries $$\begin{aligned}
\label{bc}
& \sop{u}{\nu}\Big|_{\nu \downarrow 0} = \sop{u}{\nu}\Big|_{\nu \uparrow \infty} = 0, \quad \nu \in \{r, \hatr\}, \\
&\sop{u}{y}\Big|_{y \uparrow 0} = \sop{u}{y}\Big|_{y \downarrow -\infty} = 0, \qquad
\sop{u}{z}\Big|_{z \downarrow 0} = \sop{u}{z}\Big|_{z \uparrow \infty} = 0. \nonumber\end{aligned}$$
We assume that the default has not yet occurred at the valuation time $t$; therefore, reduces to $$\label{bondPrice1}
U_t(T, r, \hatr, y, z) = v(t, T, X).$$ Accordingly, it can be found by solving , as follows. Since the payoff in is a sum of two terms and our PDE is linear, the problem can be solved independently for each term; the full solution is then the sum of the two.
Solving the PDE for $w_t(T)$ {#wtT}
----------------------------
The function $w_t(T)$ solves exactly the same set of PDEs as in , [^2]. Therefore, it can be found in two steps.
#### Step 1
We begin by solving the PDE in for the function $u$. Since this function corresponds to $d=1$, it describes the evolution of the bond price [*at or after*]{} default. Accordingly, the terminal condition for $u$ becomes $u(T, T, X) = 0$. Indeed, this payoff assumes no recovery paid at default; therefore, the bond expires worthless. A simple analysis then shows that the function $u(t, T, X) \equiv 0$ is the solution at $d=1$, as it solves the equation itself and obeys the terminal and boundary conditions. Therefore, at this step the solution can be found analytically.
#### Step 2
As the solution of the first step vanishes, it follows that $u(t, T, X^+) \equiv 0$ in .
By the definition before , the function $v$ corresponds to the states with no default. Accordingly, from , the payoff function (which serves as the terminal condition for at $t=T$) reads $$\label{tc2}
v(T,T,X) = z.$$ The boundary conditions again are set as in .
The PDE for $v(t,T,X)$ now takes the form $$\begin{aligned}
\fp{v(t,T,X)}{t} &+ {\cal L} v(t,T,X) - (r + \lambda) v(t,T,X)
- \lambda \gamma_z z \fp{v(t,T,X)}{z} = 0,\end{aligned}$$ subject to the terminal condition $v(T,T,X) = z$. Then, obviously $w_t(T) = v(t,T,X)$.
It can be seen that, in the case of no recovery, the defaultable bond price depends on the jump in the FX rate, but does not depend on the jump in the foreign interest rate.
Solving the PDE for $g_t(\nu)$ {#gtT}
------------------------------
As far as the second part of the payoff in is concerned, note that the integral in is a Riemann–Stieltjes integral in $\nu$. Therefore, it can be approximated by a Riemann–Stieltjes sum in which the continuous time interval $[t,T]$ is replaced by a discrete uniform grid with a sufficiently small step $\Delta \nu = h$. So $$\label{eq:Integral24}
\int_t^T g_t(\nu) d\nu \approx h \sum_{i=1}^N g_t(t_i),$$ where $t_i = t + i h, \ i \in [0,N]$, $N = (T-t)/h$. Accordingly, each term in this sum can be computed independently by solving the corresponding pricing problem in , with the maturity $t_i$.
Note that, since the pricing problems in , are formulated via backward PDEs, the computation of $g_t(t_i)$ for every maturity $t_i, \ i \in [1,m]$ requires an independent solution of such a problem. This could be significantly improved if, instead of the backward PDEs, we worked with the forward one for the corresponding density function. In that case all $U_t(t_i), \ i \in [1,m]$ could be computed in one run (by a marching method). However, we leave a detailed discussion of this improvement for future work. Note also that $m$ and $N$ should not be confused: $m$ is the total number of coupon payments, while $N$ is the number of discretisation steps in the integral .
Again, it can be observed that the function $g_t(T)$ solves exactly the same set of PDEs as in , , and, thus, again it can be found in two steps.
#### Step 1
The problem for $u$ should be solved subject to the terminal condition $$\label{tcG}
g_T(T) = z (1+\gamma_z).$$ Indeed, by the definition of $g_t(T)$, we can set $t=T$ and condition on $R_t = r, \hatR_t = \hatr, Z_t = z, Y_t = y, d=1$. Then $$\begin{aligned}
g_T(T)dT &= \mathbb{E}_t \left[ B(t,T) Z_T \m1_{\tau \in (T-dT,T]} \Big| t=T \right]
= z \mathbb{E}_t \left[\lambda_t dt | t=T \right] = z e^y dT,\end{aligned}$$ see [@Schonbucher2003], Section 3.2. However, the dynamics of $Z_t$ in implies that when the default occurs, the value $Z_{\tau^-}$ jumps to $Z_\tau = Z_{\tau^-}(1+\gamma_z)$. Thus, we arrive at .
#### Step 2
Having an explicit representation of the function $u(t, T, X)$ obtained as the solution of the previous step, one can find $u(t, T, X^+)$, since the values of the parameters $\gamma_z, \gamma_\hatr$ are known and the values of $\lambda$ are also given (for instance, on the grid used to numerically solve the PDE problem in Step 1). Then can be solved for $v(t, T, X)$.
By the definition before , the function $v$ corresponds to states with no defaults. Accordingly, the recovery is not paid, and the terminal condition for this step is $v(T,T,X) = 0$. This, however, does not mean that $v=0$ solves the problem. That is because contains the term $\lambda u(t, T, X^+) \ne 0$ (since the terminal condition at the previous step is not zero), and so $v \ne 0$ if $\lambda \ne 0$.
It can be seen from this structure that, in the case of non-zero recovery, the defaultable bond price depends on the jumps in both the FX rate and the foreign interest rate.
From bond prices to CDS prices {#bond2cds}
==============================
As this paper is mostly dedicated to modeling Quanto CDS contracts, we use the setting developed in the previous sections for risky bonds and apply it to CDS contracts. Let us recall that a CDS is a contract in which the protection buyer agrees to pay a periodic coupon to a protection seller in exchange for a potential cashflow in the event of default of the CDS reference name before the maturity of the contract $T$.
We assume that a CDS contract is settled at time $t$ and assures protection to the CDS buyer until time $T$. We consider CDS coupons to be paid periodically with the payment time interval $\Delta t$, so that there are in total $m$ payments over the life of the contract, i.e., $m \Delta t = T-t$. Assuming unit notional, this implies the following expression for the CDS coupon leg $L_c$, [@LiptonSavescu2014; @BrigoMorini2005] $$L_c = \mathbb{E}_t\left[\sum_{i=1}^{m} c B(t,t_i)\Delta t \m1_{\tau > t_i}\right],$$ where $c$ is the CDS coupon, $t_i$ is the payment date of the $i$-th coupon, and $B(t,t_i) = B_t/B_{t_i}$ is the stochastic discount factor.
However, if the default occurs in between the predefined coupon payment dates, an accrued amount is due from the nearest past payment date to the time of the default event $\tau$. The expected discounted accrued amount $L_a$ reads $$L_a = \mathbb{E}_t\left[c B(t,\tau) (\tau - t_{\beta(\tau)}) \m1_{t < t_{\beta(\tau)} \le \tau < T}\right],$$ where $t_{\beta(\tau)}$ is the payment date preceding the default event. In other words, $\beta(\tau)$ is a piecewise constant function of the form $$\beta(\tau) = i, \quad \forall \tau: \ t_i < \tau < t_{i+1}.$$ These cashflows are paid by the contract buyer and received by the contract issuer. The opposite expected protection cashflow $L_p$ is $$L_p = \mathbb{E}_t\left[(1 - \calR)B(t,\tau)\m1_{t < \tau \le T}\right],$$ where the recovery rate $\calR$ is unknown beforehand and is determined at or right after the default, e.g., in court. In modern mathematical finance theory it is customary to consider the recovery rate to be stochastic, see e.g., [@Cohen2017] and references therein; however, throughout this paper we assume the recovery rate to be constant and known in advance.
Further, we define the so-called *premium* ${\cal L}_{pm} = L_c + L_a$ and *protection* ${\cal L}_{pr} = L_p$ legs, and, as usual, define the CDS par spread $s$ as the coupon which equalizes these two legs and makes the CDS contract fair at time $t$. Similar to Section \[zcbPrice\], if we price all instruments under the domestic money market measure $\QM$ we need to convert the payoffs to the domestic currency and discount by the domestic money market account. Then $s$ solves the equation $$\begin{aligned}
\label{eq:CDSequation}
\sum_{i=1}^{m} & \mathbb{E}_t \left[s Z_{t_i} B(t,t_i)\Delta t \m1_{\tau > t_i}\right]
+ \mathbb{E}_t\left[s Z_\tau B(t,\tau)(\tau - t_{\beta(\tau)})\m1_{t<\tau<T}\right] \\
&= \mathbb{E}_t\left[(1 - \calR)Z_\tau B(t,\tau)\m1_{t<\tau\leq T}\right]. \nonumber\end{aligned}$$
In the spirit of [@ES2006] and [@BrigoSlide], we develop a numerical procedure for finding the par spread $s$ from the bond prices. Consider each term in separately.
#### Coupons
For the coupon payment one has $$\begin{aligned}
\label{eqCoupon}
L_c &= \mathbb{E}_t \left[ \sum_{i=1}^{m} s Z_{t_i} B(t,t_i) \Delta t \m1_{\tau > t_i} \right] = s\Delta t \sum_{i=1}^m \mathbb{E}_t \left[ Z_{t_i} B(t,t_i) \m1_{\tau > t_i} \right] = s \Delta t \sum_{i=1}^m w_t(t_i),\end{aligned}$$ where $t_m = T$. Computation of $w_t(T)$ is described in Section \[wtT\].
Note that, as follows from the analysis in the previous section, $w_t(T)$ (and, respectively, the coupon payments) depends on the jump in the FX rate, but does not depend on the jump in the foreign interest rate, which is financially reasonable.
#### Protection leg
A similar approach applies to the protection leg $$\begin{aligned}
\label{eqProtection}
L_p &= \mathbb{E}_t \left[(1-\calR) Z_{\tau} B(t,\tau)\m1_{t < \tau \le T} \right]
= (1-\calR) \int_{t}^{T} \mathbb{E}_t \left[Z_\nu B(t,\nu) \m1_{\tau \in (\nu-d\nu,\nu]} \right] d\nu \\
&= (1-\calR)\int_{t}^{T} g_t(\nu) d \nu, \nonumber\end{aligned}$$ where computation of $g_t(T)$ is described in Section \[gtT\].
#### Accrued payments
For the accrued payment one has $$\begin{aligned}
\label{eqAccrued}
L_a &= \mathbb{E}_t \left[ s Z_\tau B(t,\tau) (\tau - t_{\beta(\tau)}) \m1_{t < \tau < T} \right]
= s \int_t^T \mathbb{E}_t \left[ Z_\nu B(t,\nu) (\nu - t_{\beta(\nu)}) \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right] d\nu \\
&= s \sum_{i=0}^{m-1} \Big\{\int_{t_i}^{t_{i+1}} (\nu - t_i) \mathbb{E}_t \left[ Z_\nu B(t,\nu) \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right] d\nu \Big\}
= s \sum_{i=0}^{m-1} \int_{t_i}^{t_{i+1}} (\nu - t_i)g_t(\nu) d\nu, \nonumber\end{aligned}$$ where $t_0 \equiv t$, and $t_m \equiv T$.
As was mentioned in Section \[gtT\], both final integrals in , are Riemann–Stieltjes integrals in $\nu$. Therefore, each one can be approximated by a Riemann–Stieltjes sum in which the continuous time interval $[t,T]$ is replaced by a discrete uniform grid with a sufficiently small step $\Delta \nu = h$.
Now we have all the necessary components to compute the CDS spread. Introducing the notation $$\begin{aligned}
\label{approx1}
A_i &= \int_{t_i}^{t_{i+1}} w_t(\nu) d\nu \approx h \sum_{k=1}^N w_t(\nu_k), \\
B_i &= \int_{t_i}^{t_{i+1}} g_t(\nu) d\nu \approx h \sum_{k=1}^N g_t(\nu_k), \nonumber \\
C_i &= \int_{t_i}^{t_{i+1}} \nu g_t(\nu) d\nu \approx h \sum_{k=1}^N \nu_k g_t(\nu_k), \nonumber \\
\nu_k &= t_i + k h, \quad k=1,\ldots,N, \quad h = (t_{i+1} - t_i)/N, \nonumber\end{aligned}$$ we re-write , and in the form $$\begin{aligned}
\label{approx2}
L_p &= (1-\calR) \sum_{i=1}^{m} B_i, \qquad
L_c = s \Delta t \sum_{i=1}^m A_i, \qquad
L_a = s \sum_{i=1}^{m} \left[ C_i - t_i B_i \right].\end{aligned}$$ Finally, combining together and we obtain $$\begin{aligned}
\label{eqParSpread}
s = (1-\calR) \dfrac{\sum_{i=1}^{m} B_i}{\sum_{i=1}^{m} \left[ \Delta t A_i + C_i - t_i B_i\right]}.\end{aligned}$$
Radial Basis Function Partition of Unity Method {#numMethod}
===============================================
In order to numerically solve , subject to the corresponding terminal and boundary conditions, we use a radial basis function (RBF) method. Radial basis function methods have become increasingly popular for applications in computational finance, e.g., [@YCHon3; @Fasshauer2; @Pettersson], thanks to their high-order convergence, which allows one to obtain a high-resolution scheme using just a few discretization nodes. This is a crucial property when solving various multi-dimensional problems, e.g., pricing derivatives written on several assets (basket options), or those for models whose settings use several stochastic factors. Indeed, all these models suffer immensely from the curse of dimensionality; in particular, the growing storage (memory) requirement becomes the dominant limiting factor. This, however, can be successfully overcome by using RBF methods. For instance, in [@Shcherbakov] it is shown that standard finite difference methods require about three times as many computational nodes per dimension as RBF methods to obtain the same accuracy, thus significantly reducing the memory consumption.
Nevertheless, it should be emphasized that the original global RBF method is computationally very expensive and rather unstable due to dense and ill-conditioned coefficient matrices[^3]. This is a consequence of the global connections between the basis functions. Therefore, here we depart from the global RBF method in favour of its localised version based on the idea of partition of unity. The partition of unity method was originally introduced by [@Melenk] for finite element methods, and later adapted for RBF methods by several authors, [@Safdari; @Shcherbakov]. This approach (which further on is referred to as RBF-PUM) enables a significant reduction in the number of non-zero elements that remain in the coefficient matrix, hence lowering the computational effort required for solving the system. In addition, this structure is well supported, e.g., in Matlab, by sparse matrix operations. Typically, as applied to our problem of pricing Quanto CDS, only about one percent of all elements remain non-zero.
In order to construct an RBF-PUM approximation we start by defining an open cover $\{\Omega_j\}_{j=1}^{P}$ of our computational domain $\Omega$ such that $$\Omega \subseteq \bigcup_{j=1}^{P} \Omega_j.$$ We select the patches $\Omega_j$ to be of a spherical form. Inside each patch a local RBF approximation of the solution $u$ is defined as $$\label{localRBF}
\tilde u_j(x)= \sum_{i=1}^{n_j}\lambda_i^j \phi(\varepsilon, || x - x_i^j ||),$$ where $n_j$ is the number of computational nodes belonging to the patch $\Omega_j$, $\phi(\varepsilon, ||x - x_i^j ||)$ is the $i$-th basis function centred at $x_i^j$, which is the $i$-th local node in the $j$-th patch $\Omega_j$, $\varepsilon$ is the shape parameter that determines the widths of basis functions, and $\lambda_i^{j}$ are the unknown coefficients. Some popular choices of the basis functions are listed in Table \[TabRBF\], while their behavior as a function of the parameter $\varepsilon$ is presented in Fig. \[BasisFunc\].
  RBF                          $\phi(\varepsilon, r)$
  ---------------------------- -------------------------------
  Gaussian (GA)                $\exp{(-\varepsilon^2r^2)}$
  Multiquadric (MQ)            $\sqrt{1+\varepsilon^2r^2}$
  Inverse Multiquadric (IMQ)   $1/\sqrt{1+\varepsilon^2r^2}$
  Inverse Quadratic (IQ)       $1/(1+\varepsilon^2r^2)$

  : Commonly used radial basis functions.[]{data-label="TabRBF"}
In addition to the patches, we also construct partition of unity weight functions $w_j(x),\, j = 1, \ldots, P$, subordinated to the open cover, such that $$\sum_{j=1}^{P} w_j(x) = 1, \quad \forall x\in \Omega.$$ The functions $w_j(x)$ can be obtained, e.g., by Shepard’s method, [@Shepard], from compactly supported generating functions $\varphi_j(x)$ $$w_j(x) = \frac{\varphi_j(x)}{\sum_{i=1}^{P} \varphi_i(x)}, \quad j=1,\ldots,P, \quad \forall x\in \Omega.$$ The generating functions $\varphi_j(x)$ must fulfil some smoothness requirements. For instance, for the problem considered in this paper they should be at least $C^{2}(\Omega)$. To proceed, as a suitable candidate for $\varphi_j(x)$ we choose the fifth-order Wendland function, [@Wendland] $$\varphi(r) = (5r+1)(1-r)^5_{+}, \quad r \in \mathbb{R},$$ with the support $\varphi(r) \in \mathbb{B}^{4}(0, 1)$, where $\mathbb{B}^4(0,1)$ is a unit four-dimensional ball centred at the origin. In order to map the generating function to the patch $\Omega_j$ with the centre $c_j$ and radius $\rho_j$, it is shifted and scaled as $$\varphi_{j}(x) = \varphi \left( \frac{||x- c_j||}{\rho_j}\right), \quad \forall x \in \Omega.$$ Further, we blend the local RBF approximations with the partition of unity weights and obtain the combined RBF-PUM solution $\tilde u(x)$ as $$\label{RBFPUMapprox}
\tilde u(x) = \sum_{j=1}^{P}w_j(x) \tilde u_j(x).$$ The RBF-PUM approximation in this form allows one to maintain an accuracy similar to that of the global method, while significantly reducing the computational effort (see e.g., [@Shcherbakov], [@Ahlkrona]). Moreover, it was shown in [@vonSydow] that RBF-PUM is the most efficient numerical method for higher-dimensional problems among deterministic methods that rely on a node discretization.
Numerical Experiments {#experiments}
=====================
In this section we perform numerical experiments to find the Quanto-adjusted CDS par spread value $s$ and its sensitivity to market conditions. The par spread is computed as in , while the bond price is obtained from by approximating the PDEs in , using the radial basis function partition of unity method with $1296$ patches. We select Gaussian functions to construct a finite RBF basis on $28561$ nodes. As $[r,\hatr, z] \in [0,\infty)$ and $y \in (-\infty,\infty)$, we truncate each semi-infinite or infinite domain of definition sufficiently far away from the evaluation point, so that the error introduced by this truncation is relatively small. In particular, we use $r_{\min} = \hat r_{\min} = z_{\min} = 0$, $y_{\min} = -6$, $r_{\max} = \hat r_{\max} = z_{\max} = 4$, $y_{\max} = -2$. Accordingly, we move the boundary conditions, defined in , to the boundaries of this truncated domain.
Note that in our numerical method (see Section \[numMethod\]) we substitute into the pricing PDEs , and then derive a corresponding reduced-form discrete (boundary) operator. As this explicitly incorporates the boundary conditions into the pricing scheme, the latter can be implemented uniformly with no extra check that the boundary conditions are satisfied[^4].
For marching in time we use the backward differentiation formula of second order (BDF-2), [@BDFbook]. In order to compute the accrued amount $L_a$ as in we use a time discretisation with two-week intervals. The method is implemented in Matlab 2017a, and the experiments were run on a MacBook Pro with a Core i7 processor and 16 GB RAM. To investigate Quanto effects and their impact on the price of a CDS contract, we consider two similar CDS contracts. The first one is traded in the foreign economy, e.g., in Italy, but is priced under the domestic risk-neutral $\QM$-measure, hence it is denominated in the domestic currency (US dollars). To find the price of this contract, the approach described in the previous sections is utilized. The second CDS is the same contract, but traded in the domestic economy and also priced in the domestic currency. As such, its price can be obtained by solving the same problem as for the first CDS, but with the equations for the foreign interest rate $\hatR_t$ and the FX rate $Z_t$ excluded from consideration. Accordingly, all related correlations which include the indices $z$ and $\hatr$ vanish, and the no-jumps framework is used. However, the terminal conditions remain the same as in Section \[bond2cds\], as they are already expressed in the domestic currency[^5].
Below we denote the CDS spread found by using the first contract as $s$, and that of the second one as $s_d$. Then the impact of Quanto effects can be determined as the difference between these two spreads $$\Delta s = s - s_d,$$ which below is quoted as the “basis” spread.
A default set of parameter values used in our numerical experiments is given in Table \[TabParam\]. It is also assumed that in this default set all correlations are zero. If not stated otherwise, we use these values and assume the absence of jumps in the FX and foreign interest rates. The reference 5Y CDS par spread value $s_d$ under these assumptions is $s_d = 365$ bps.
  $r$      $a$      $b$     $\sigma_r$   $\hat r$   $\hat a$     $\hat b$   $\sigma_{\hat{r}}$
  -------- -------- ------- ------------ ---------- ------------ ---------- --------------------
  0.02     0.08     0.1     0.01         0.03       0.08         0.1        0.08

  $y$      $a_y$    $b_y$   $\sigma_y$   $z$        $\sigma_z$   $T$        $\mathcal{R}$
  -------- -------- ------- ------------ ---------- ------------ ---------- --------------------
  -4.089   0.0001   -210    0.4          1.15       0.1          5          0.45

  : Default parameter values used in the numerical experiments.[]{data-label="TabParam"}
The impact of the jump amplitude on the basis spread is presented in Fig. \[fig:Gammas\] for jumps in the interest rate (left panel) and the exchange rate (right panel). In the absence of jumps ($\gamma_\hatr = \gamma_z = 0$) the domestic and foreign spreads have a basis of about 3 bps. This is close to the normal situation where no currency or interest rate depreciation occurs. In fact, this was the case until recently, when Quanto effects were not yet taken into account. For example, Greek CDS with payments in dollars and in euros were traded with a 1 bp difference in 2006, [@IFR2011].
The results displayed in the left panel of Fig. \[fig:Gammas\] demonstrate that the impact of the jump in $\hat{R}_t$ increases rapidly for $\gamma_{\hatr}\in[0,2]$ and then saturates at some level. We explain this saturation by the investor’s indifference to whether the interest rate increases by $300\%$ or $400\%$, since the interest rate level does not directly affect the protection amount; rather, it influences the investment climate in the foreign economy. In contrast, the FX rate has an immediate impact (right panel) on the protection, since a depreciation of the foreign currency diminishes the amount being paid out when converted to US dollars. Through the well-known approximation of the hazard rate via the spread and bond recovery rate[^6] $$\lambda\approx \frac{s}{1-\mathcal{R}},$$ and using the results in [@Brigo], we identify that $$s \approx (1+\gamma_z)s_d.$$ That is, the CDS spread in the foreign currency is approximately proportional to the reference USD spread with the coefficient $(1+\gamma_z)$. Therefore, in the case of a foreign currency devaluation the coupon payments in the foreign currency should be lower. It can be observed that the results provided by our model align perfectly with this intuition.
We emphasize that, since the effect of the jump-at-default in the FX rate was thoroughly investigated in [@Brigo][^7], in this paper we mainly focus on examining the impact of the jump-at-default in the foreign interest rate. However, the influence of the other model parameters is also investigated and reported.
![Basis spread as a function of the jump amplitude in the foreign exchange and interest rates.[]{data-label="fig:SurfGammas"}](MeshGamZGamR1.pdf){width="70.00000%"}
In Fig. \[fig:SurfGammas\] the joint influence of the jumps in the FX and foreign IR rates on the value of the basis spread is presented. It can be seen that the jump-at-default in $\hatR_t$, which occurs simultaneously with the jump-at-default in $Z_t$, decreases the basis spread magnitude as compared with a similar case where $\hatR_t$ does not jump. This decrease depends slightly on the level of $\gamma_z$ and for our set of parameters is about 10 bps. To better illustrate this point, Fig. \[fig:GamR\_GamZ\] presents some slices of the surface in Fig. \[fig:SurfGammas\]. It can be seen that the smaller $\gamma_z$ is, the bigger the impact of $\gamma_\hatr$, which, however, saturates at $\gamma_\hatr \approx 4$.
![The influence of the jump amplitude $\gamma_\hatr$ at various values of the jump amplitude $\gamma_z$. Note, the lines are shifted to start from the same point.[]{data-label="fig:GamR_GamZ"}](GamR_GamZall.pdf){width="70.00000%"}
In the next series of experiments we look at the influence of the correlations among the stochastic factors on the Quanto-adjusted CDS value. The results presented in Fig. \[fig:Correlations\] indicate that only the correlations between the hazard rate $\lambda_t$ (or $Y_t$) and the stochastic factors that experience a jump-at-default, $\hatR_t, Z_t$, are relevant. The impact of the correlation between the hazard and FX rates, $\rho_{yz}$, can span a range of about 45 bps, while the impact of the correlation $\rho_{y\hatr}$ between the hazard rate and the foreign interest rate does not exceed 3 bps.
Fig. \[fig:GamRcorrs\] shows how the level of correlation between the foreign interest rate $\hatR_t$ and the other three stochastic factors affects the basis spread at various values of $\gamma_\hatr$ with $\gamma_z = 0$. In accordance with what was already mentioned, the results show that the correlations affect the basis spread value only slightly, except for the correlations with the hazard rate, $\rho_{yz}$ and $\rho_{y\hatr}$.
Fig. \[fig:Volatility\] shows the sensitivity of the foreign CDS to the volatilities of the stochastic factors. We notice that the impact of the hazard rate volatility $\sigma_y$ is the strongest and, under the jump-free setup, can make the CDS quotes vary within a range of 17 bps. The effect of the FX rate volatility $\sigma_z$ is slightly weaker, while the effect of the interest rate volatilities $\sigma_r, \sigma_\hatr$ is almost negligible.
To analyze the impact of the two most influential volatilities in the presence of jumps in $\hatR_t$, we test how the volatility level affects the foreign CDS par spread with respect to the jump amplitude. These results are presented in Figs. \[fig:GamRhazard\], \[fig:GamRfx\]. Increasing $\sigma_y$ in combination with a $100\%$ rise in $\hatR$ causes the basis spread to change sign from negative to positive, while the absolute value of the growth in $\Delta s$ is about 15 bps. However, the influence of $\sigma_z$ is just the opposite. Larger values of $\sigma_z$ give rise to a negative basis spread, which, however, can be partly compensated by increasing the amplitude $\gamma_\hatr$ of the jump-at-default in the foreign interest rate $\hatR_t$.
Thus, we observe that the jump-at-default in the FX rate is the most prominent factor that explains the largest portion of the known discrepancies between Quanto CDS quotes in US dollars and the foreign currency. Nevertheless, the potential jump in the foreign interest rate might be responsible for about 20 bps in the basis spread value. However, it is important to notice that the two jumps have opposite effects: the jump in the FX rate decreases the value of the foreign CDS, while the jump in the IR increases the value of the foreign CDS.
Conclusion {#sec:Conclusion}
==========
This paper introduces a new model which can be used, e.g., for pricing Quanto CDS. The model operates with four stochastic factors, namely, the hazard rate, the foreign exchange rate, the domestic interest rate, and the foreign interest rate, and also allows for jumps-at-default in the FX and foreign interest rates. The corresponding systems of PDEs for both the risky bond price and the CDS price are derived in a manner similar to [@BieleckiPDE2005].
In order to solve these equations we develop a localized radial basis function method that is based on the partition of unity approach. The advantage of the method is that in our four-dimensional case it maintains high accuracy while using fewer resources than, for example, corresponding finite difference or Monte Carlo methods. Potentially, the RBF method can also be parallelized, which would further improve the computational efficiency.
The results of our numerical experiments presented in the paper qualitatively explain the discrepancies observed in the market values of CDS spreads traded in the domestic and foreign economies and, accordingly, denominated in the domestic (USD) and foreign (euro, ruble, real, etc.) currencies. The Quanto effect (the difference between the prices of the same CDS contract traded in different economies, but represented in the same currency) can, to a great extent, be explained by the devaluation of the foreign currency, which would yield a much lower protection payout when converted to US dollars. These results are similar to those obtained in [@Brigo]. We underline, however, that in [@Brigo] only constant foreign and domestic interest rates are considered, while in this paper they are stochastic even in the no-jumps framework.
In contrast to [@Brigo], in this paper we also analyze the impact of the jump-at-default in the foreign interest rate which could occur simultaneously with the default in the FX rate. We found that this jump is a significant component of the process and is able to explain about 20 bps of the basis spread value. However, it is worth noticing that the jumps in the FX rate and IR have opposite effects. In other words, devaluation of the foreign currency will decrease the value of the foreign CDS, while the increase of the foreign interest rate will increase the foreign CDS value.
The other important parameters of the model are the correlations between the hazard rate and the factors that incorporate jumps, i.e., $\rho_{yz}$ and $\rho_{y\hatr}$, and the volatilities of the hazard process $\sigma_y$ and the FX rate $\sigma_z$. Therefore, they have to be properly calibrated. Varying the other correlations contributes only slightly to the basis spread value. Large values of the volatilities can in some cases explain up to 15 bps of the basis spread value.
We also have to mention that the pricing problem was formulated via backward PDEs. Therefore, computation of the CDS spread requires solving these PDEs independently for every discrete time point on a temporal grid lying below the contract maturity. This could be significantly improved if, instead of the backward PDE, we worked with the forward one for the corresponding density function. We leave this improvement for future work.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Peter Carr and Damiano Brigo for their useful comments and discussions. Victor Shcherbakov acknowledges the support from H F Sederholms stipendiestiftelse, Rektors resebidrag från Wallenbergstiftelsen, and Anna Maria Lundins stipendiefond. Victor Shcherbakov also thanks the Department of Finance and Risk Engineering at Tandon School, NYU, where he worked on this paper as a visiting scholar. We assume full responsibility for any remaining errors.
References {#references .unnumbered}
==========
Ahlkrona, J. and Shcherbakov, V. (2017). A meshfree approach to non-[N]{}ewtonian free surface ice flow: [A]{}pplication to the [H]{}aut [G]{}lacier d’[A]{}rolla. , 330:633–649.
Augustin, P., Chernov, M., and Song, D. (2017). . Available at <https://sites.google.com/site/mbchernov/ACS_quanto_latest.pdf>.
Babu[š]{}ka, I. and Melenk, J. M. (1997). The partition of unity method. , 40(4):727–758.
Bielecki, T. R., Jeanblanc, M., and Rutkowski, M. (2005). PDE approach to valuation and hedging of credit derivatives. , 5(3):257–270.
Bielecki, T. R. and Rutkowski, M. R. (2004). . Springer.
Borovkov, K., Klebaner, F. C., and Virag, E. (2003). Random step functions model for interest rates. , 7(1):123–143.
Brigo, D. (2011). Arbitrage free credit valuation adjustments. LGS on mathematical finance. Credit and counterparty risk models. Single name credit derivatives. www.damianobrigo.it.
Brigo, D. and Morini, M. (2005). . Technical report, Banca IMI.
Brigo, D., Pede, N., and Petrelli, A. (2015). Multi currency credit default swaps quanto effects and [FX]{} devaluation jumps. arXiv:1512.07256.
Catao, L. A. V. and Mano, R. (2015). How big is the sovereign default interest rate premium? Technical report, World Economic Forum.
Cohen, A. and Costanzino, N. (2017). Bond and [CDS]{} pricing via the stochastic recovery [B]{}lack–[C]{}ox model. , 5(2):26.
Cox, J. C., Ingersoll, J. E., and Ross, S. R. (1985). A theory of the term structure of interest rates. , 53(2):385–408.
Crosby, J. (2013). Introduction to jump and [Lévy]{} processes. <http://www.john-crosby.co.uk/pdfs/JCrosby_OxfordJune2013_Levy.pdf>.
Duffie, D. and Singleton, K. (1999). Modeling term structures of defaultable bonds. , 12(4):687–720.
Ehlers, P. and Sch[ö]{}nbucher, P. (2006). The influence of FX risk on credit spreads. Technical report, ETH.
El-Mohammadi, R. (2009). . .
Süli, E. and Mayers, D. (2003). . Cambridge University Press, ISBN 0-521-00794-1.
Ethier, S. and Kurtz, T. (2009). . Wiley Series in Probability and Statistics. Wiley.
Fasshauer, G. E. (2007). , volume 6 of [ *Interdisciplinary Mathematical Sciences*]{}. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ.
Fasshauer, G. E., Khaliq, A. Q. M., and Voss, D. A. (2004). Using meshfree approximation for multi-asset [A]{}merican option problems. , 27(4):563–571.
Hampden-Turner, M. and Goves, P. (2010). . Technical report, Citibank. Available at <https://finance.broad.msu.edu/files/2013/11/Citi-Credit-Derivatives-Primer.pdf>.
Hon, Y. C. and Mao, X. Z. (1999). A radial basis function method for solving options pricing model. , 8(1):31–49.
Itkin, A. (2017). , volume 12 of [ *Pseudo-Differential Operators. Theory and Applications*]{}. Birkhäuser/Springer, New York.
Jacod, J. and Shiryaev, A. (1987). . Springer-Verlag, Berlin.
Jarrow, R., van Deventer, D. R., and Wang, X. (2003). A robust test of Merton’s structural model for credit risk. , 6:39–58.
Jarrow, R. A. and Turnbull, S. M. (1995). Pricing derivatives on financial securities subject to credit risk. , 50(1):53–86.
Jeanblanc, M., Yor, M., and Chesney, M. (2009). . Springer Finance. Springer-Verlag London, Ltd., London.
Katselas, G. A. (2010). . PhD thesis, The University of Melbourne.
Lipton, A. and Savescu, I. (2014). Pricing credit default swaps with bilateral value adjustments. , 14(1):171–188.
Papapantoleon, A. (2008). An introduction to [Lé]{}vy processes with applications in finance. Available at http://arxiv.org/abs/0804.0482.
Pettersson, U., Larsson, E., Marcusson, G., and Persson, J. (2008). Improved radial basis function methods for multi-dimensional option pricing. , 222(1):82–93.
Safdari-Vaighani, A., Heryudono, A., and Larsson, E. (2015). A radial basis function partition of unity collocation method for convection-diffusion equations. , 64(2):341–367.
Schonbucher, P. J. (2003). . Wiley, Chichester.
Shcherbakov, V. and Larsson, E. (2016). Radial basis function partition of unity methods for pricing vanilla basket options. , 71(1):185–200.
Shepard, D. (1968). A two-dimensional interpolation function for irregularly-spaced data. In [*Proceedings of the 1968 23rd ACM National Conference*]{}, ACM ’68, pages 517–524, New York, NY, USA. ACM.
Simon, Z. (2015). . , 11(074).
Thomson-Reuters (2011). . .
von Sydow, L., Höök, L. J., Lindström, E., Milovanović, S., Persson, J., Shcherbakov, V., Shpolyanskiy, Y., Sirén, S., Toivanen, J., Waldén, J., Wiktorsson, M., Levesley, J., Li, J., Oosterlee, C., Ruijter, M., Toropov, A., and Zhao, Y. (2015). — [T]{}he [BENCH]{}marking project in [O]{}ption [P]{}ricing. , 92(12):2361–2379.
Wendland, H. (1995). Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. , 4(4):389–396.
Wilmott, P. (1998). . Willey, New York.
Derivation of main PDE \[apDeriv\]
==================================
Below we give a sketch of the derivation of the main PDE for the defaultable zero-coupon bond price which follows from our model introduced in Section \[modelJumps\], as the detailed derivation is rather long. Therefore, we utilize some results known in the literature and explain only the main steps of the derivation.
According to our model setting, which is presented in Section \[modelJumps\], all underlying stochastic processes $R_t,\hatR_t, Y_t, Z_t, D_t$ possess a strong Markovian property, see, e.g., [@BieleckiPDE2005]. Denote by $r,\hatr, y, z$, and $d$ the initial values of these processes at time $t$, respectively. For Markovian underlyings it is well known, e.g., [@ethier2009markov], that the evolution of $U_t$, represented as a function of the variables $(t, r, \hatr, y, z, d)$, can be described by a corresponding PDE (or PIDE if jumps are also taken into account). In this section we derive such a PIDE in explicit form.
Let us remind that in the jump-at-default framework the dynamics of $Z_t$ and $\hatR_t$ is given by , $$\begin{aligned}
\label{jumpRZ}
dZ_t &= (R_t - \hatR_t) Z_t dt + \sigma_z Z_t dW_t^{(3)} + \gamma_z Z_t d M_t, \\
d\hatR_t &= \hat a(\hat b-\hatR_t ) dt + \sigma_{\hatr} \sqrt{\hatR_t}dW_t^{(2)} + \gamma_{\hatr} \hatR_{t^-} d D_t. \nonumber\end{aligned}$$ For the sake of convenience the second SDE can be re-written in the form of the first one $$\begin{aligned}
\label{hat_R}
d\hatR_t &= \hat a \left(\hat b-\hatR_t \right) dt + \gamma_{\hatr} \hatR_{t^-} d \Gamma_t + \sigma_{\hatr} \sqrt{\hatR_t}dW_t^{(2)} + \gamma_{\hatr} \hatR_{t^-} d M_t \\
&= \left[ \hat a \left(\hat b - \hatR_t \right) + \gamma_{\hatr} \hatR_t \lambda_t (1-D_t) \right] dt + \sigma_{\hatr} \sqrt{\hatR_t}dW_t^{(2)} + \gamma_{\hatr} \hatR_{t^-} d M_t. \nonumber\end{aligned}$$ So we replaced $D_t$ with the compensated martingale $M_t$ by subtracting the compensator of $D_t$ and, accordingly, adding it to the drift. When doing so, we take into account , which gives $d \Gamma_t = (1 - D_t) \lambda_t dt$.
Below we need the following theorem from [@JacodShiryaev:87] (see also [@ItkinBook] and references therein), which provides a generalization of Itô’s lemma to the class of semimartingales
\[itoLevy\] Let $X = (X_t)_{0\le t \le T}$ be a process which is a real-valued semimartingale with the triplet $(b, c, \nu)$, and $f$ be a function on $\mathbb{R}$, $f \in C^2$. Then, $f(X)$ is a semimartingale, and $\forall t \in [0,T]$ the following representation holds $$\begin{aligned}
\label{itoLevyForm}
f(X_t) &= f(X_0) + \int_0^t f'(X_{s^-}) d X_s + \dfrac{1}{2} \int_0^t f''(X_{s^-}) d\langle X^c\rangle_s \\
&+ \sum_{0 \le s \le t} \left[ f(X_{s}) - f(X_{s^-}) - f'(X_{s^-}) \Delta X_s \right]. \nonumber\end{aligned}$$ Here $X_{s^-} = \lim_{u \nearrow s}$ is the value just before a potential jump, $\Delta X_s = X_s - X_{s^-}$, $X^c$ is the continuous martingale part of $X_t$, i.e. $X^c_t = \sqrt{c}W_t$, and $\langle \cdot \rangle$ determines a quadratic variation.
Alternatively, if the random measure of jumps $\mu^X(ds,dx)$ is used, we have $$\begin{aligned}
\label{JumpInt}
f(X_t) &= f(X_0) + \int_0^t f'(X_{s^-}) d X_{s^-} + \dfrac{1}{2} \int_0^t f''(X_{s^-}) d\langle X^c\rangle_{s^-} \\
&+ \int^t_0 \int_{\mathbb{R}} \left[ f(X_{s^-} + x) - f(X_{s^-}) - x f'(X_{s^-})\right] \mu^X(ds,dx). \nonumber\end{aligned}$$
See Theorem I.4.57 in [@JacodShiryaev:87].
Further let us consider only jumps of a finite variation and finite activity, so $$\sum_{0 \le s \le t} f(X_{s}) < \infty, \qquad
\sum_{0 \le s \le t} f'(X_{s^-}) \Delta X_s < \infty.$$
Our model allows only for a single jump to occur at the default time $\tau$. Therefore, $$\label{measureMu}
\mu^X(ds, dx) = \delta(s-\tau)\, \nu(dx)\, ds = \nu(dx)\, d D_s, \nonumber$$ with $\nu(dx)$ being the measure of jump sizes in $\mathbb{R}$, and where $\delta(x)$ is the Dirac delta function.
Accordingly, in differential form and for the multidimensional case, reads $$\begin{aligned}
\label{JumpIntMult}
d f(\Xs) &= \fp{f(\Xsm)}{\Xsm} * d \Xs + \dfrac{1}{2} \sop{f(\Xsm)}{\Xsm} * d\langle \mathbf{X}^c\rangle_s \\
&+ \int_{\mathbb{R}} \left[ f(\Xsm + \mathbf{x}) - f(\Xsm) - \mathbf{x} * \fp{f(\Xsm)}{\Xsm} \right] \nu(d\mathbf{x}) d D_t, \nonumber\end{aligned}$$ where $\Xs$ is a vector of independent variables, $\mathbf{x}$ is a vector of the corresponding jump values, and $\left<*\right>$ is an inner product.
Also, according to , the size of the jump in both the foreign interest rate and the FX rate is proportional to the value of the corresponding process right before the jump occurs at time $\tau$, with constant proportionality coefficients $\gamma_\hatr$ and $\gamma_z$.
Combining and gives rise to the measure $\nu(d \mathbf{x})$ of this multi-dimensional jump process to be $$\begin{aligned}
\label{levyM}
\nu(d \mathbf{x}) &= \delta(x_z - \gamma_z z)
\delta(x_\hatr - \gamma_\hatr \hatr) d x_z d x_\hatr,\end{aligned}$$ (compare, e.g., with [@Crosby2013]).
Therefore, the last line in changes to $$\begin{aligned}
\label{jump2d}
J &= \left[f(t, \Xs) - f(t, \Xsm) - \Delta \Xsm*\fp{ f(t, \Xsm)} {\Xsm }\right] d D_t, \qquad \Xs = \Xsm + \Delta \Xsm, \\
f(t, \Xs) &= f(t, r, \hatr(1+\gamma_\hatr), y, z(1+\gamma_z), d=1), \nonumber \\
f(t, \Xsm) &= f(t, r, \hatr, y, z, d=0). \nonumber\end{aligned}$$ Having all these results, the PDE for the discounted defaultable bond price can be derived by using a standard technique for jump-diffusion processes, see, e.g., [@PAPAPANTOLEON2008]. However, for the sake of brevity, we utilize the approach of [@BieleckiPDE2005], where a similar problem is considered, and refer to the corresponding theorems proved in that paper.
Note, that $$\begin{aligned}
\label{dDer}
\mathbb{E}_t[d D_t | D_t = d, Y_t = y] &= d\, \mathbb{E}_t[D_t | D_t = d, Y_t = y] = \lambda_t \m1_{t \le \tau} \Big|_{(D_t = d, Y_t = y)} dt = (1 - d) e^y dt,\end{aligned}$$ where the next-to-last equality follows from Lemma 7.4.1.3 in [@Jeanblanc2009].
Using , it can be seen that after the default occurs, $D_t = 1_{\tau \le t} = 1$, and thus the jump term $J$ disappears. However, before the default at time $t < \tau$ the jump term is $$\label{Jfinal}
J = f(t, \Xs) - f(t, \Xsm) - \Delta \Xsm * \fp{ f(t, \Xsm)} {\Xsm }.$$
So, conditional on the value of $D_t$, the solution can be represented in the form $$f(t, \Xs) = \m1_{t < \tau} f(t, \Xsm) + \m1_{\tau \le t} f(t, \Xs).$$ The remaining derivation of the PDE can then be carried out based on the following Proposition:
\[Prop\] Let the price processes $Y^i, \ i=1,2,3$ satisfy $$d Y^i_t = Y^i_{t^-} \left[ \mu_i dt + \sigma_i d W^i_t + k_i d M_t\right]$$ with $k_i > -1$ for $i = 1,2,3$, where $\mu_i, \sigma_i$ are the corresponding drifts and volatilities. Then the arbitrage price of a contingent claim $Y$ with the terminal payoff $G(T, Y^1_T, Y^2_T, Y^3_T, D_T)$ equals $$\pi_t(Y) = \m1_{t < \tau} C(t, Y^1_t, Y^2_t, Y^3_t, 0) + \m1_{t \ge \tau} C(t, Y^1_t, Y^2_t, Y^3_t, 1)$$ for some function $C: [0,T] \times \mathbb{R}^3_+ \times \{0,1\} \to \mathbb{R}$. Assume that for $d = 0$ and $d=1$ the auxiliary function $C(\cdot,d): [0,T] \times \mathbb{R}^3_+ \to \mathbb{R}$ belongs to the class $C^{1,2}([0,T] \times \mathbb{R}^3_+)$. Then the functions $C(\cdot,0)$ and $C(\cdot,1)$ solve the following PDEs: $$\begin{aligned}
{\partial}_t C(\cdot,0) &+ \sum_{i=1}^3 (\alpha - \lambda k_i)y_i {\partial}_i C(\cdot,0) +
\frac{1}{2} \sum_{i,j=1}^3 \rho_{ij} \sigma_i \sigma_j y_i y_j {\partial}_{ij} C(\cdot,0) \\
&+ \lambda \left[ C(t, y_1(1+k_1), y_2(1+k_2), y_3(1+k_3), 1) - C(t, y_1, y_2, y_3, 0)\right] - \alpha C(\cdot,0) = 0, \\
{\partial}_t C(\cdot,1) &+ \sum_{i=1}^3 \alpha y_i {\partial}_i C(\cdot,1) +
\frac{1}{2} \sum_{i,j=1}^3 \rho_{ij} \sigma_i \sigma_j y_i y_j {\partial}_{ij} C(\cdot,1)
- \alpha C(\cdot,1) = 0.\end{aligned}$$ subject to the terminal conditions $$\begin{aligned}
C(T, y_1, y_2, y_3, 0) &= G(T, y_1, y_2, y_3, 0), \\
C(T, y_1, y_2, y_3, 1) &= G(T, y_1, y_2, y_3, 1). \\\end{aligned}$$
See [@BieleckiPDE2005].
Two important notes should be made in order to apply this proposition to our problem.
#### Tradable assets
In [@BieleckiPDE2005] all underlying assets are assumed to be tradable. Therefore, they have to be martingales under some unique martingale measure (in , a particular choice is made where $Y^1$ serves as the numeraire). To achieve this, additional conditions on the drifts, volatilities and the jump rates $k_i$ should be imposed. In particular, this requires the coefficient $\alpha$ in Proposition \[Prop\] to be $$\alpha = \mu_i + \sigma_i \frac{c}{a},$$ where the determinants $c,a$ are explicit functions of $\mu_i, \sigma_i, k_i, \ i=1,2,3$, given in [@BieleckiPDE2005]. Moreover, it is shown there that the right-hand side of this formula does not depend on $i$.
However, for our problem, among all the underlying processes the only tradable one is the FX rate. This allows one to fully dispense with these conditions on $\mu_i, \sigma_i, k_i, \ i=1,2,3$. As a consequence, e.g., the term $$\sum_{i=1}^3 (\alpha - \lambda k_i)y_i {\partial}_i C(\cdot,0)$$ in Proposition \[Prop\] is now replaced with $$\sum_{i=1}^3 (\mu_i - \lambda k_i)y_i {\partial}_i C(\cdot,0).$$
#### Risk-neutrality
Proposition \[Prop\] derives an arbitrage price (under real measure) of the contingent claim written on the given underlyings. To get this price under a risk-neutral measure $\QM$, one needs to construct a replication ([*self-financing*]{}) strategy of a generic claim. In particular, to hedge out the risk of $\hat R_t$ and $R_t$, corresponding non-defaultable zero-coupon bonds (perhaps, of a longer maturity) should be used as a hedge, [@Bielecki2004; @WIlmott1998].
This problem is solved by Proposition 3.3 of [@BieleckiPDE2005]. Accordingly, the previously derived PDEs remain the same, with the only change being in the killing term, where the coefficient $\alpha$ is replaced with the interest rate $r$ corresponding to the measure $\QM$ (as expected from the general theory of asset pricing).
We proceed by combining these results together and applying them to our model. First, we revert the notation back to that used in this paper. Then, taking into account an explicit form of the stochastic differential equations describing the dynamics of our underlying processes, and conditioning on $R_t = r, \hatR_t = \hatr, Z_t = z, Y_t = y, D_t = d$, we obtain that under the risk-neutral measure $\QM$ the price $U_t(T)$ is $$\label{bondPriceA}
U_t(T, r, \hatr, y, z) = \m1_{t < \tau} f(t, T, r,\hatr, y, z, 0) +
\m1_{t \ge \tau} f(t, T, r,\hatr, y, z, 1).$$
Here the function $f(t, T, r,\hatr, y, z, 1) \equiv u(t, T, X), \ X = \{r,\hatr, y, z\}$ solves the PDE $$\label{PDE1A}
\fp{u(t,T,X)}{t} + {\cal L} u(t,T,X) - r u(t,T,X) = 0,$$ where the diffusion operator $\cal L$ reads $$\begin{aligned}
\label{LdiffA}
\cal L &= \frac{1}{2}\sigma_{r}^2 r\sop{}{r} + \frac{1}{2} \sigma_{\hatr}^2 \hatr \sop{}{\hatr} + \frac{1}{2}\sigma_z^2 z^2 \sop{}{z} + \frac{1}{2}\sigma_y^2\sop{}{y}
+ \rho_{r \hatr} \sigma_r \sigma_{\hatr} \sqrt{r \hatr}\cp{}{r}{\hatr} \\
&+ \rho_{rz}\sigma_r \sigma_z z\sqrt{r} \cp{}{r}{z}
+ \rho_{\hatr z} \sigma_{\hatr} \sigma_z z \sqrt{\hatr} \cp{}{z}{\hatr}
+ \rho_{ry}\sigma_r \sigma_y \sqrt{r} \cp{}{r}{y}
+ \rho_{\hatr y} \sigma_{\hatr} \sigma_y \sqrt{\hatr} \cp{}{y}{\hatr}
\nonumber \\
&+ \rho_{yz} \sigma_y \sigma_z z \cp{}{y}{z}
+ a(b-r)\fp{}{r}
+ \hat a(\hat b - \hatr) \fp{}{\hatr}
+ (r - \hatr) z \fp{}{z}
+ \kappa(\theta - y) \fp{}{y}. \nonumber\end{aligned}$$
The second function $f(t, T, r,\hatr, y, z, 0) \equiv v(t, T, X)$ solves the PDE $$\begin{aligned}
\label{PDE2A}
\fp{v(t,T,X)}{t} &+ {\cal L} v(t,T,X) - r v(t,T,X)
- \lambda \gamma_z z \fp{v(t,T,X)}{z} \\
&+ \lambda \left[ u(t, T, X^+) -
v(t, T, X) \right] = 0, \qquad X^+ = \{r, \hatr(1+\gamma_\hatr), y, z(1+\gamma_z)\}, \nonumber\end{aligned}$$ where, according to , $\lambda = e^y$. Note that the term $\lambda \gamma_\hatr \hatr\, v_\hatr(t,T,X)$ in the drift of cancels out with the corresponding compensator in , as it should, since the process $\hatR_t$ is not a martingale.
[^1]: This is to prevent $Z_t$ from becoming negative, [@BieleckiPDE2005].
[^2]: The PDEs remain unchanged since the model is the same; only the contingent claim $G(t,T,r,\hatr, y,z,d)$, which is a function of the same underlying processes, changes.
[^3]: More details could be found, e.g., in [@Fasshauer].
[^4]: Our experience shows that this approach works better and provides a more stable RBF approximation.
[^5]: Alternatively, the whole four-dimensional framework could be used if one sets $z=1, \hatr = r, \gamma_z = \hat a = \sigma_\hatr = \gamma_\hatr = 0$, and $\rho_{\cdot,z} = \rho_{\cdot,\hatr} = \rho_{z,\hatr} = 0$, where $\cdot \in \{r, z, y\}$.
[^6]: Which is correct if the hazard rate $\lambda_t$ is constant.
[^7]: In [@Brigo], however, only constant foreign and domestic interest rates are considered, while in this paper they are stochastic even in the no-jumps framework.
---
abstract: |
We present an on-shell scheme to renormalize the Cabibbo-Kobayashi-Maskawa (CKM) matrix. It is based on a novel procedure to separate the external-leg mixing corrections into gauge-independent self-mass and gauge-dependent wave-function renormalization contributions, and to implement the on-shell renormalization of the former with non-diagonal mass counterterm matrices. Diagonalization of the complete mass matrix leads to an explicit CKM counterterm matrix, which automatically satisfies all the following important properties: it is gauge independent, preserves unitarity, and leads to renormalized amplitudes that are non-singular in the limit in which any two fermions become mass degenerate.
PACS: 11.10.Gh, 12.15.Ff, 12.15.Lk, 13.38.Be
author:
- |
Bernd A. Kniehl[^1] and Alberto Sirlin[^2]\
\
[*Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),*]{}\
[*Föhringer Ring 6, 80805 Munich, Germany*]{}
title: |
  DESY 06-141, ISSN 0418-9833
  MPP-2006-108
  NYU-TH/06/08/29
  hep-ph/0608306
  August 2006

  **Simple Approach to Renormalize the Cabibbo-Kobayashi-Maskawa Matrix**
---
The Cabibbo-Kobayashi-Maskawa (CKM) [@cab] flavor mixing matrix, which rules the charged-current interactions of the quark mass eigenstates and describes how the heavier ones decay to the lighter ones, is one of the fundamental cornerstones of the Standard Model of elementary particle physics and, in particular, it is the key to our understanding why the weak interactions are not invariant under simultaneous charge-conjugation and parity transformations. In fact, the detailed determination of this matrix is one of the major aims of recent experiments carried out at the $B$ factories [@pdg], as well as the objective of a wide range of theoretical studies [@pdg; @Czarnecki:2004cw]. An important theoretical problem associated with the CKM matrix is its renormalization. An early discussion, in the two-generation framework, was given in Ref. [@Marciano:1975cn], focusing mostly on the cancellation of ultraviolet divergences. More recently, there have been a number of interesting papers that address the renormalization of both the divergent and finite contributions at various levels of generality and complexity [@Denner:1990yz].
![\[fig:one\]Fermion mixing self-energy diagrams. $H$ and $\phi^\pm$ denote Higgs and charged Goldstone bosons, respectively. Diagram (b) is included to cancel the gauge dependence in the diagonal contribution of diagrams (a).](fig1ab.ps){width="49.00000%"}
In this Letter we propose an explicit on-shell framework to renormalize the CKM matrix at the one-loop level, based on a novel procedure to separate the external-leg mixing corrections into gauge-independent “self-mass” (sm) and gauge-dependent “wave-function renormalization” (wfr) contributions, and to implement the on-shell renormalization of the former with non-diagonal mass counterterm matrices. This procedure may be regarded as a simple generalization of Feynman’s approach in Quantum Electrodynamics (QED) [@Feynman:1949zx]. We recall that, in QED, the self-energy contribution to an outgoing fermion is given by $$\begin{aligned}
\Delta{\cal M}^{\rm leg}&=&\overline{u}(p)\Sigma({{\slashed{p}}})\frac{1}{{{\slashed{p}}}-m},
\label{eq:dm}\\
\Sigma({{\slashed{p}}})&=&A+B({{\slashed{p}}}-m)+\Sigma_{\rm fin}({{\slashed{p}}}),
\label{eq:sig}\end{aligned}$$ where $\Sigma({{\slashed{p}}})$ is the self-energy, $A$ and $B$ are divergent constants, and $\Sigma_{\rm fin}({{\slashed{p}}})$ is a finite part which is proportional to $({{\slashed{p}}}-m)^2$ in the vicinity of ${{\slashed{p}}}=m$ and, therefore, vanishes when inserted in Eq. (\[eq:dm\]). The contribution of $A$ to Eq. (\[eq:dm\]) exhibits a pole at ${{\slashed{p}}}=m$ and is gauge independent, while that of $B$ is regular at this point, but gauge dependent. They are referred to as sm and wfr contributions, respectively. $A$ is canceled by the mass counterterm. On the other hand, since the factor $({{\slashed{p}}}-m)$ cancels the propagator’s singularity, in Feynman’s approach $B$ is combined with the proper vertex diagrams leading to a gauge-independent result.
In the case of the CKM matrix, one encounters not only diagonal terms as in Eq. (\[eq:dm\]), but also off-diagonal external-leg contributions generated by the Feynman diagrams of Fig. \[fig:one\](a). As a consequence, the self-energy corrections to an external leg are of the form $$\Delta{\cal M}_{ii^\prime}^{\rm leg}=\overline{u}_i(p)\Sigma_{ii^\prime}({{\slashed{p}}})
\frac{1}{{{\slashed{p}}}-m_{i^\prime}},
\label{eq:dmii}$$ where $i$ denotes the external quark of momentum $p$ and mass $m_i$, and $i^\prime$ the virtual quark of mass $m_{i^\prime}$.
We evaluate the contributions of Fig. \[fig:one\] in $R_\xi$ gauge, treating the $i$ and $i^\prime$ quarks on an equal footing. (A detailed account of our analytical work will be presented in a later, longer manuscript [@long].) For example, we write $$\begin{aligned}
2{{\slashed{p}}}a_-&=&{{\slashed{p}}}a_-+a_+{{\slashed{p}}}\\
&=&({{\slashed{p}}}-m_i)a_-+a_+({{\slashed{p}}}-m_{i^\prime})+m_ia_-+m_{i^\prime}a_+,
\nonumber\end{aligned}$$ where $a_\pm=(1\pm\gamma_5)/2$ are the chiral projectors. Using this approach, we find that the contributions of Fig. \[fig:one\] can be classified in four classes: (i) terms with a left factor $({{\slashed{p}}}-m_i)$; (ii) terms with a right factor $({{\slashed{p}}}-m_{i^\prime})$; (iii) terms with a left factor $({{\slashed{p}}}-m_i)$ and a right factor $({{\slashed{p}}}-m_{i^\prime})$; and (iv) constant terms not involving ${{\slashed{p}}}$. When inserted in Eq. (\[eq:dmii\]), the terms of class (iii) obviously vanish, in analogy with $\Sigma_{\rm fin}({{\slashed{p}}})$ in Eqs. (\[eq:dm\]) and (\[eq:sig\]). The terms of classes (i) and (ii) contain gauge-dependent parts but, when inserted in Eq. (\[eq:dmii\]), they combine to cancel the propagator $({{\slashed{p}}}-m_{i^\prime})^{-1}$ in both the diagonal ($i=i^\prime$) and off-diagonal ($i\ne i^\prime$) contributions. Thus, they lead to expressions suitable for combination with the proper vertex diagrams. In analogy with $B$ in Eqs. (\[eq:dm\]) and (\[eq:sig\]), such expressions are identified as wfr contributions. They satisfy the following important property: all the gauge-dependent and all the divergent wfr contributions to the basic $W\to q_i+\overline{q}_j$ amplitude are independent of $i^\prime$. Using the unitarity relation $V_{il}V_{li^\prime}^\dagger V_{i^\prime j}=V_{il}\delta_{lj}$ (since the cofactor of this expression depends on $m_l$, the summation over $l$ is performed later), one then finds that the gauge-dependent and the divergent wfr contributions to the $W\to q_i+\overline{q}_j$ amplitude are independent of CKM matrix elements, except for an overall factor $V_{ij}$, and depend only on the external-quark masses $m_i$ and $m_j$. Since the one-loop proper vertex diagrams also only depend on $m_i$, $m_j$, and an overall factor $V_{ij}$, this observation implies that the proof of gauge independence and finiteness of the remaining one-loop corrections to the $W\to q_i+\overline{q}_j$ amplitude is the same as in the unmixed, single-generation case!
In contrast to the contributions of classes (i) and (ii) to Eq. (\[eq:dmii\]), those of class (iv) lead to a multiple of $({{\slashed{p}}}-m_{i^\prime})^{-1}$ with a cofactor that involves $a_\pm$, but is independent of ${{\slashed{p}}}$. Thus, they are unsuitable to be combined with the proper vertex diagrams and are expected to be separately gauge independent, as we indeed find. In analogy with $A$ in Eqs. (\[eq:dm\]) and (\[eq:sig\]), they are identified with sm contributions. Specifically, in the case of an outgoing up-type quark, the sm contributions from Fig. \[fig:one\] are given by the gauge-independent expression $$\begin{aligned}
\Delta{\cal M}_{ii^\prime}^{\rm sm}&=&
\frac{g^2}{32\pi^2}V_{il}V_{li^\prime}^\dagger
\overline{u}_i(p)\left\{m_i\left(1+\frac{m_i^2}{2m_W^2}\Delta\right)
\right.
\nonumber\\
&&{}+\left[m_ia_-+m_{i^\prime}a_+
+\frac{m_im_{i^\prime}}{2m_W^2}(m_ia_++m_{i^\prime}a_-)\right]
\nonumber\\
&&{}\times
\left[I\left(m_i^2,m_l\right)-J\left(m_i^2,m_l\right)\right]
\nonumber\\
&&{}-\frac{m_l^2}{2m_W^2}(m_ia_-+m_{i^\prime}a_+)
\left[3\Delta+I\left(m_i^2,m_l\right)
\right.
\nonumber\\
&&{}+\left.\left.J\left(m_i^2,m_l\right)\right]
\vphantom{\frac{m_i^2}{2m_W^2}}
\right\}
\frac{1}{{{\slashed{p}}}-m_{i^\prime}},
\label{eq:legsm}\end{aligned}$$ where $g$ is the SU(2) gauge coupling, $\Delta=1/(n-4)+[\gamma_E-\ln(4\pi)]/2+\ln(m_W/\mu)$, $n$ is the space-time dimension, $\mu$ is the ’t Hooft mass, $\gamma_E$ is Euler’s constant, $$\begin{aligned}
\lefteqn{\{I(p^2,m_l);J(p^2,m_l)\}
=\int_0^1dx\,\{1;x\}}
\nonumber\\
&&{}\times
\ln\frac{m_l^2x+m_W^2(1-x)-p^2x(1-x)-i\varepsilon}{m_W^2},\end{aligned}$$ and $m_l$ are the masses of the virtual down-type quarks in Fig. \[fig:one\](a). Terms independent of $m_l$ within the curly brackets of Eq. (\[eq:legsm\]) lead to diagonal contributions on account of $V_{il}V_{li^\prime}^\dagger=\delta_{ii^\prime}$. There are other sm contributions involving virtual $Z^0$, $\phi^0$, $\gamma$, and $H$ bosons, as well as additional tadpole diagrams, but these are again diagonal expressions of the usual kind.
In order to generate mass counterterms, we proceed as follows. In the weak-eigenstate basis, the bare mass terms are of the form $-\overline{\psi}_R^{\prime Q}m_0^{\prime Q}\psi_L^{\prime Q}+\mbox{h.c.}$, where $\psi_L^{\prime Q}$ and $\psi_R^{\prime Q}$ are left- and right-handed column spinors involving the three up-type ($Q=U$) and down-type ($Q=D$) quarks, and $m_0^{\prime Q}$ are non-diagonal matrices. Writing $m_0^{\prime Q}=m^{\prime Q}-\delta m^{\prime Q}$, where $m^{\prime Q}$ and $\delta m^{\prime Q}$ are the renormalized and counterterm mass matrices, we consider a biunitary transformation of the quark fields that diagonalizes $m^{\prime Q}$ leading to diagonal and real renormalized mass matrices $m^Q$ and to new non-diagonal mass counterterm matrices $\delta m^Q$. In the new framework, the mass term is given by $$\begin{aligned}
\lefteqn{-\overline{\psi}\left(m-\delta m^{(-)}a_--\delta m^{(+)}a_+\right)
\psi}
\nonumber\\
&=&-\overline{\psi}_R\left(m-\delta m^{(-)}\right)\psi_L
-\overline{\psi}_L\left(m-\delta m^{(+)}\right)\psi_R,\quad
\label{eq:mass}\end{aligned}$$ where $m$ is real, diagonal, and positive, and $\delta m^{(-)}$ and $\delta m^{(+)}$ are arbitrary non-diagonal matrices subject to the hermiticity constraint $$\delta m^{(+)}=\delta m^{(-)\dagger}.
\label{eq:her}$$ Here we have not exhibited the superscript $Q$, but it is understood that $m$ and $\delta m^{(\pm)}$ stand for two different sets of matrices involving the up- and down-type quarks. As usual, the mass counterterms are included in the interaction Lagrangian. Their contribution to the external-leg corrections is given by $-\overline{u}_i(p)\left(\delta m_{ii^\prime}^{(-)}a_-
+\delta m_{ii^\prime}^{(+)}a_+\right)/$$({{\slashed{p}}}-m_{i^\prime})$. Next we adjust $\delta m_{ii^\prime}^{(\pm)}$ to cancel, as much as possible, the sm contributions given in Eq. (\[eq:legsm\]). The cancellation of the divergent parts is achieved by choosing $$\begin{aligned}
\left(\delta m_{\rm div}^{(-)}\right)_{ii^\prime}&=&
\frac{g^2m_i}{64\pi^2m_W^2}\Delta
\left(\delta_{ii^\prime}m_i^2-3V_{il}V_{li^\prime}^\dagger m_l^2\right),
\nonumber\\
\left(\delta m_{\rm div}^{(+)}\right)_{ii^\prime}&=&
\frac{g^2m_{i^\prime}}{64\pi^2m_W^2}\Delta
\left(\delta_{ii^\prime}m_i^2-3V_{il}V_{li^\prime}^\dagger m_l^2\right),\quad
\label{eq:div}\end{aligned}$$ which satisfies the hermiticity constraint of Eq. (\[eq:her\]). Because the functions $I(p^2,m_l)$ and $J(p^2,m_l)$ are evaluated at $p^2=m_i^2$ in the $ii^\prime$ channel (where $i$ and $i^\prime$ are the external and virtual quarks, respectively) and at $p^2=m_{i^\prime}^2$ in the $i^\prime i$ channel (where $i^\prime$ and $i$ are the external and virtual quarks, respectively), it is easy to see that it is not possible to cancel all the finite pieces of Eq. (\[eq:legsm\]) in all channels without contradicting Eq. (\[eq:her\]). In particular, we note that once the $\delta m_{ii^\prime}^{(\pm)}$ are chosen, the $\delta m_{i^\prime i}^{(\pm)}$ are fixed by Eq. (\[eq:her\]). For this reason, we employ the following renormalization prescription: the mass counterterms are chosen to exactly cancel all the contributions to Eq. (\[eq:legsm\]) in the $i^\prime=i$, $uc$, $ut$, and $ct$ channels, and all the sm contributions in the $j^\prime=j$, $sd$, $bd$, and $bs$ channels in the corresponding down-type-quark expression. (Here $j$ and $j^\prime$ are the incoming and virtual down-type quarks, respectively.) This implies that, after mass renormalization, there are residual sm contributions in the $cu$, $tu$, $tc$, $ds$, $db$, and $sb$ channels. However, these residual contributions are finite, gauge independent, and numerically very small. In fact, the fractional corrections they induce in the real parts of $V_{ij}$ reach a maximum value of ${\cal O}(4\times10^{-6})$ for $V_{ts}$, and they are much smaller in the case of several other CKM matrix elements. Since they are regular in the limits $m_{i^\prime}\to m_i$ or $m_{j^\prime}\to m_j$, they may be regarded as additional finite and gauge-independent contributions to wave-function renormalization that happen to be very small.
We emphasize that with this renormalization prescription the sm corrections are fully canceled in all channels in which the external particle is a $u$, $\overline{u}$, $d$, or $\overline{d}$ quark. This is of particular interest since $V_{ud}$, the parameter associated with $W\to u+\overline{d}$, is by far the most precisely determined CKM matrix element [@Czarnecki:2004cw].
It is also interesting to note that, since Eq. (\[eq:div\]) satisfies Eq. (\[eq:her\]), the modified minimal-subtraction ($\overline{\mathrm{MS}}$) renormalization, in which only the $1/(n-4)+[\gamma_E-\ln(4\pi)]/2$ terms are subtracted, can be implemented in all non-diagonal channels. More generally, one can consider a renormalization prescription that satisfies the hermiticity condition in all channels by choosing the mass counterterms to cancel the off-diagonal terms in Eq. (\[eq:legsm\]) and the corresponding down-type-quark expression with the functions $I(p^2,m_l)$ and $J(p^2,m_l)$ evaluated at the same fixed $p^2$ value for all flavors. Since Eq. (\[eq:legsm\]) is explicitly gauge independent, in our formulation there is no restriction in the choice of $p^2$ other than that it should not generate imaginary parts in the integrals $I(p^2,m_l)$ and $J(p^2,m_l)$. In particular, $p^2$ can have any value $p^2\le m_W^2$. Of course, since it is desirable to cancel the sm contributions as much as possible, it is convenient to choose $0\le p^2\ll m_W^2$. It should be pointed out, however, that the $\overline{\mathrm{MS}}$ and fixed-$p^2$ subtraction prescriptions of mass renormalization are not on-shell schemes and lead to residual sm contributions in all off-diagonal channels, which diverge in the limits $m_{i^\prime}\to m_i$ or $m_{j^\prime}\to m_j$.
An alternative formulation, equivalent to the one discussed so far, is obtained by diagonalizing the complete mass matrix $m-\delta m^{(-)}a_--\delta m^{(+)}a_+$ in Eq. (\[eq:mass\]). This is achieved by a biunitary transformation $$\psi_L=U_L\hat\psi_L,\qquad\psi_R=U_R\hat\psi_R.$$ At the one-loop level, it is sufficient to approximate $$U_L=1+ih_L,\qquad U_R=1+ih_R,$$ where $h_L$ and $h_R$ are hermitian matrices of ${\cal O}(g^2)$. The diagonalization is implemented by choosing $$i(h_L)_{ii^\prime}=\frac{m_i\delta m_{ii^\prime}^{(-)}
+\delta m_{ii^\prime}^{(+)}m_{i^\prime}}{m_i^2-m_{i^\prime}^2}
\qquad (i\ne i^\prime),
\label{eq:hlii}$$ while $i(h_R)_{ii^\prime}$ is obtained by exchanging $\delta m^{(-)}\leftrightarrow\delta m^{(+)}$ in Eq. (\[eq:hlii\]). Since the only effect of the diagonal terms of $h_L$ and $h_R$ on the $Wq_i\overline{q}_j$ interaction is to introduce phases that can be absorbed in a redefinition of the quark fields, it is convenient to set $(h_L)_{ii}=(h_R)_{ii}=0$. This analysis is carried out separately to diagonalize the mass matrices of the up- and down-type quarks. Thus, we obtain two pairs of matrices: $h_L^U$ and $h_R^U$ for the up-type quarks and $h_L^D$ and $h_R^D$ for the down-type quarks. Next we consider the effect of this biunitary transformation on the $Wq_i\overline{q}_j$ interaction $${\cal L}_{Wq_i\overline{q}_j}=-\frac{g_0}{\sqrt2}
\overline{\psi}_L^UV\gamma^\lambda\psi_L^DW_\lambda+\mbox{h.c.}.$$ We readily find that $${\cal L}_{Wq_i\overline{q}_j}=-\frac{g_0}{\sqrt2}\overline{\hat\psi}_L^U
(V-\delta V)\gamma^\lambda\hat\psi_L^DW_\lambda+\mbox{h.c.},
\label{eq:hc}$$ where $$\delta V=i\left(h_L^UV-Vh_L^D\right).
\label{eq:dv}$$ It is important to note that $V-\delta V$ satisfies the unitarity condition through ${\cal O}(g^2)$: $$(V-\delta V)^\dagger(V-\delta V)=1+{\cal O}(g^4).$$ In the $(\hat\psi_L,\hat\psi_R)$ basis, in which the complete quark mass matrices are diagonal, $\delta V$ and $V_0=V-\delta V$ represent the counterterm and bare CKM matrices, respectively. One readily verifies that the term $ih_L^UV$ in $\delta V$ leads to the same off-diagonal contribution to the $W\to q_i+\overline{q}_j$ amplitude as $\delta m^{U(-)}$ and $\delta m^{U(+)}$ in our previous discussion in the $(\psi_L,\psi_R)$ basis. Similarly, the term $-iVh_L^D$ leads to the same contributions as $\delta m^{D(-)}$ and $\delta m^{D(+)}$. It is important to emphasize that this formulation is consistent with the unitarity and gauge independence of both the renormalized and bare CKM matrices, $V$ and $V_0$, respectively.
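This property is easy to check numerically. The following sketch takes a random unitary matrix in place of $V$ and random hermitian matrices of order $\varepsilon$ (standing in for the ${\cal O}(g^2)$ matrices $h_L^U$ and $h_L^D$), builds $\delta V$ according to Eq. (\[eq:dv\]), and verifies that the deviation of $(V-\delta V)^\dagger(V-\delta V)$ from the identity scales like $\varepsilon^2$; the matrices used are illustrative, not physical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n=3):
    # QR factorization of a complex Gaussian matrix gives a random unitary
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def random_hermitian(n=3):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

V = random_unitary()
hU, hD = random_hermitian(), random_hermitian()

for eps in (1e-2, 1e-3, 1e-4):                   # eps plays the role of O(g^2)
    dV = 1j * (eps * hU @ V - V @ (eps * hD))    # counterterm of Eq. (dv)
    dev = np.linalg.norm((V - dV).conj().T @ (V - dV) - np.eye(3))
    print(f"eps = {eps:.0e}:  deviation from unitarity = {dev:.2e}")
# the deviation decreases like eps^2, i.e. unitarity holds through O(g^2)
```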
For completeness, we exhibit the CKM counterterm matrix in component form: $$\begin{aligned}
\delta V_{ij}&=&i\left[\left(h_L^U\right)_{ii^\prime}V_{i^\prime j}
-V_{ij^\prime}\left(h_L^D\right)_{j^\prime j}\right]
\nonumber\\
&=&\frac{m_i^U\delta m_{ii^\prime}^{U(-)}
+\delta m_{ii^\prime}^{U(+)}m_{i^\prime}^U}{\left(m_i^U\right)^2
-\left(m_{i^\prime}^U\right)^2}V_{i^\prime j}
\nonumber\\
&&{}-V_{ij^\prime}\frac{m_{j^\prime}^D\delta m_{j^\prime j}^{D(-)}
+\delta m_{j^\prime j}^{D(+)}m_j^D}{\left(m_{j^\prime}^D\right)^2
-\left(m_j^D\right)^2},
\label{eq:dvii}\end{aligned}$$ where it is understood that $i^\prime\ne i$ in the first term on the r.h.s. and $j^\prime\ne j$ in the second, and $\delta m_{ii^\prime}^{U(\pm)}$ and $\delta m_{j^\prime j}^{D(\pm)}$ are the off-diagonal mass counterterms determined by the on-shell renormalization prescriptions proposed in our first formulation. The coefficient of $1/(n-4)$ in Eq. (\[eq:dvii\]) is, of course, common to all renormalization prescriptions for the CKM matrix [@Denner:1990yz] and also appears in its renormalization group equation [@Babu:1987im].
In summary, after introducing a novel procedure to separate the external-leg mixing corrections into gauge-independent sm and gauge-dependent wfr contributions, in analogy with Feynman’s treatment in QED, we have implemented their renormalization in two equivalent frameworks. The first one is carried out in a basis in which the renormalized quark matrices are diagonal and the non-diagonal mass counterterm matrices are employed to cancel all the divergent sm contributions, and also their finite parts up to hermiticity constraints. In particular, the sm corrections are fully canceled in the $W\to u+\overline{d}$ amplitude, associated with $V_{ud}$, the most accurately measured CKM parameter. Residual finite contributions in other channels are very small. We have also pointed out that the proof of gauge independence and finiteness of the remaining one-loop corrections to the $W\to q_i+\overline{q}_j$ amplitude reduces to that in the unmixed, single-generation case. Alternative renormalization prescriptions that are “democratic,” in the sense that they do not single out particular off-diagonal channels, were briefly outlined. However, strictly speaking, they are not on-shell schemes and lead to residual sm contributions in all off-diagonal channels, which diverge in the limits $m_{i^\prime}\to m_i$ or $m_{j^\prime}\to m_j$.
The second formulation was obtained by diagonalizing the complete mass matrices, namely the renormalized plus counterterm mass matrices derived in the first approach. In the second framework a CKM counterterm matrix $\delta V$ was generated which again cancels the divergent and, to the extent allowed by the hermiticity constraints, also the finite parts of the off-diagonal sm contributions. As usual, the diagonal sm contributions are canceled by the mass counterterms, which in this approach are also diagonal. An important feature is that this formulation is consistent with the unitarity and gauge independence of both the renormalized and bare CKM matrices, $V$ and $V_0=V-\delta V$, respectively.
As is well known, an enduring difficulty, thirty years old, in a satisfactory treatment of the one-loop electroweak corrections to all charged-current processes involving fermions is due to the external off-diagonal self-energy effects depicted in Fig. \[fig:one\](a). Since the mass renormalization of the usual, diagonal effects must necessarily involve a complete subtraction of the sm contributions to avoid the propagator’s singularity \[see Eq. (\[eq:dm\])\], it is natural to follow the same strategy in the off-diagonal contributions. Thus, an on-shell renormalization procedure to treat all these effects is highly desirable and strongly motivated. Such an objective has been achieved for the first time in this Letter in a way that the following important properties are manifestly satisfied: the CKM counterterm matrix is gauge independent, preserves unitarity, and leads to renormalized amplitudes that are non-singular in the limit in which any two fermions become mass degenerate. Because of the close analogy with QED and the fact that our decomposition procedure is algebraic in nature, it is likely that this approach can be naturally generalized to higher orders.
We are grateful to the Max Planck Institute for Physics in Munich for the hospitality during a visit when this manuscript was prepared. The work of B.A.K. was supported in part by the German Research Foundation through the Collaborative Research Center No. 676 [*Particles, Strings and the Early Universe—the Structure of Matter and Space-Time*]{}. The work of A.S. was supported in part by the Alexander von Humboldt Foundation through the Humboldt Research Award No. IV USA 1051120 USS and by the National Science Foundation Grant No. PHY-0245068.
[99]{}
N. Cabibbo, Phys. Rev. Lett. [**10**]{}, 531 (1963); M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{}, 652 (1973).
W.-M. Yao [*et al.*]{} (Particle Data Group), J. Phys. G [**33**]{}, 1 (2006), and references cited therein.
A. Czarnecki, W.J. Marciano, and A. Sirlin, Phys. Rev. D [**70**]{}, 093006 (2004), and references cited therein; W.J. Marciano and A. Sirlin, Phys. Rev. Lett. [**96**]{}, 032002 (2006).
W.J. Marciano and A. Sirlin, Nucl. Phys. [**B93**]{}, 303 (1975).
A. Denner and T. Sack, Nucl. Phys. [**B347**]{}, 203 (1990); B.A. Kniehl and A. Pilaftsis, [*ibid.*]{} [**B474**]{}, 286 (1996); P. Gambino, P.A. Grassi, and F. Madricardo, Phys. Lett. B [**454**]{}, 98 (1999); B.A. Kniehl, F. Madricardo, and M. Steinhauser, Phys. Rev. D [**62**]{}, 073010 (2000); A. Barroso, L. Brücher, and R. Santos, [*ibid.*]{} [**62**]{}, 096003 (2000); Y. Yamada, [*ibid.*]{} [**64**]{}, 036008 (2001); K.-P.O. Diener and B.A. Kniehl, Nucl. Phys. [**B617**]{}, 291 (2001); A. Pilaftsis, Phys. Rev. D [**65**]{}, 115013 (2002); D. Espriu, J. Manzano, and P. Talavera, [*ibid.*]{} [**66**]{}, 076002 (2002); Y. Zhou, Phys. Lett. B [**577**]{}, 67 (2003); J. Phys. G [**30**]{}, 491 (2004); Y. Liao, Phys. Rev. D [**69**]{}, 016001 (2004); A. Denner, E. Kraus, and M. Roth, [*ibid.*]{} [**70**]{}, 033002 (2004).
R.P. Feynman, Phys. Rev. [**76**]{}, 769 (1949) (see especially Section 6); Quantum Electrodynamics: A Lecture Note and Reprint Volume, (W.A. Benjamin, Inc., New York, 1962), p. 145 [*et seqq.*]{}.
B.A. Kniehl and A. Sirlin (in preparation).
K.S. Babu, Z. Phys. C [**35**]{}, 69 (1987).
[^1]: Electronic address: [[email protected]]{}; permanent address: II. Institut für Theoretische Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany.
[^2]: Electronic address: [[email protected]]{}; permanent address: Department of Physics, New York University, 4 Washington Place, New York, New York 10003, USA.
---
author:
-
title: '**A FUSE View of the Stellar Winds of Planetary Nebula Central Stars**'
---
Introduction
============
Fast stellar winds driven by radiation pressure are characteristic of the central stars of planetary nebulae (CSPNe). These stellar winds, with terminal velocities ($v_\infty$) up to 4,000 km s$^{-1}$, carry large amounts of energy and momentum and interact with the slow, 5-30 km s$^{-1}$ [@ELT88], dense wind of the Asymptotic Giant Branch (AGB) phase. This interaction plays an important role in the shaping and evolution of PNe, as recognized by the canonical Interacting Stellar Wind model of PN formation [@KPF78; @B87].
The fast stellar winds in CSPNe can be detected through the P Cygni profiles of UV lines of high-excitation ions. The International Ultraviolet Explorer (*IUE*) satellite obtained useful UV spectra in the 1,150-3,350 Å range for $\sim$160 CSPNe (Patriarchi & Perinotto 1991, and references therein). A significant fraction of these CSPNe presented P Cygni profiles in the N [v]{} $\lambda\lambda$1239,1243 Å, C [iv]{} $\lambda\lambda$1548,1551 Å, and O [v]{} $\lambda$1371 Å lines, among others. These P Cygni profiles implied fast stellar winds with edge velocities ranging from 600 to 3,500 km s$^{-1}$ [@CSP85].
Launched in June 1999, the Far Ultraviolet Spectroscopic Explorer (*FUSE*) opened a new window in the far-UV range of the spectrum from 905 Å to 1,195 Å. This spectral range includes information on a variety of resonance lines of high excitation species (O [vi]{}, P [v]{}, Si [iv]{}, C [iii]{}, ...) that can be present in the spectra of CSPNe. The occurrence of P Cygni profiles of these lines and their properties ($v_\infty$, variability, main ionization stage, ...) is a valuable tool to assess the importance of stellar winds in the formation of PNe. We have therefore started a program aimed at using the high-resolution spectra of CSPNe in the final archive of the *FUSE* mission to investigate stellar winds in CSPNe. Here, we present preliminary results of this ongoing project.
Results
=======
The final *FUSE* archive includes high-resolution spectra for $\sim$90 CSPNe. The inspection of these spectra has revealed P Cygni profiles indicative of stellar winds in 40 PNe. For a dozen of them, this is the first time that fast stellar winds have been reported. The CSPNe with useful *FUSE* observations that do not show evidence of P Cygni profiles superimposed on their stellar continuum are: A7, A31, A35, A39, DeHt2, HDW4, Hen2-86, Hen2-138, Hen3-1357, K1-26, K2-2, NGC1360, NGC3132, NGC3587, NGC7293, Ps1, PuWe1, and Sh2-174. These are either (a) CSPNe of high $T_{\rm eff}$ and $g$ at the center of old PNe (e.g., NGC7293), i.e., these CSPNe are subdwarfs that have already initiated their evolution towards the white dwarf phase, or (b) post-AGB stars at the center of young PNe (e.g., Hen3-1357) that have not yet developed a stable wind or whose $T_{\rm eff}$ is not high enough to excite these emission lines in the stellar wind.
We list in Table 1 the CSPNe exhibiting P Cygni profiles and their edge velocities. Previous information obtained by *IUE* has been incorporated into this table to allow a straightforward comparison with the new *FUSE* measurements. The comparison between *IUE* and *FUSE* data shows general agreement, but there are a few CSPNe where this is not the case. The poorer spectral resolution of the *IUE* data (e.g., NGC2392) or the difficulties in the determination of the edge velocity in CSPNe severely affected by H$_2$ and atomic absorptions (e.g., NGC6826) may be blamed for these differences. There are, however, CSPNe for which the different edge velocities between *IUE* and *FUSE* data seem real (e.g., IC418).
We note that the terminal velocity of a stellar wind is usually determined from the blue edge velocity (i.e., the maximum velocity at which the P Cygni profile joins back to the stellar continuum). Consequently, we have provided in this work the edge velocity to allow a fair comparison with works in the literature that used *IUE* data (e.g., Patriarchi & Perinotto 1991). Some authors, however, argue that the so-called black velocity better describes $v_\infty$. A detailed modeling using a SEI (Sobolev with Exact Integration) code results in a more accurate determination of $v_\infty$. This method is illustrated in Figure \[fig1\] for the CSPN of PB8. The terminal velocity of the fit, $\sim$1,200 km s$^{-1}$, is very similar in this case to the edge velocity of 1,250 km s$^{-1}$ given in Table 1.
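In practice, the edge velocity follows from the shortest (bluest) wavelength $\lambda_{\rm edge}$ at which the absorption trough of the P Cygni profile rejoins the stellar continuum, via the non-relativistic Doppler formula $v_{\rm edge}=c\,(\lambda_{0}-\lambda_{\rm edge})/\lambda_{0}$, where $\lambda_{0}$ is the rest wavelength of the line. A minimal sketch of this conversion (the numbers are illustrative, not actual *FUSE* measurements):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def edge_velocity(lambda_rest, lambda_edge):
    """Edge velocity (km/s) from the blue edge of a P Cygni absorption trough,
    using the non-relativistic Doppler formula."""
    return C_KM_S * (lambda_rest - lambda_edge) / lambda_rest

# illustrative example for the O VI 1031.9 A line with a blue edge near 1025 A
print(edge_velocity(1031.93, 1025.0))   # about 2,000 km/s
```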
![ PB8 P Cygni profile of the P [v]{} $\lambda\lambda$1118,1128Å line. The edge and black velocities are marked. The red curve corresponds to a SEI fit of the line profiles. []{data-label="fig1"}](poster1.eps)
As shown in Table 1, many CSPNe have P Cygni profiles of a variety of resonance lines of species of different excitation levels. A close examination of the P Cygni profiles of the different lines for every single CSPN reveals cases when the line profiles are dramatically different. Moreover, as in the case of the comparison between the edge velocities derived from *IUE* and *FUSE* data, there are notable cases of CSPNe for which different edge velocities are associated with different lines in the *FUSE* spectral range.
![ Cn3-1 P Cygni profiles of the C [iii]{} $\lambda$1175Å (red) and S [iv]{} $\lambda\lambda$1073.0,1073.5Å (black) lines. Note the interstellar/circumstellar absorptions in the blue edge of the S [iv]{} P Cygni profile. []{data-label="fig2"}](poster2.eps)
First, we shall note that the shape of the P Cygni profile depends both on the different components and levels of the line, as well as on the dominant physical processes involved in its formation. The shape of different lines can vary owing to these factors, but their terminal velocities can be the same. This is the case for Cn 3-1 (Figure \[fig2\]), for which the profiles of the C [iii]{} $\lambda$1175Å and S [iv]{} $\lambda\lambda$1073.0,1073.5Å lines are very different, but the black and edge velocities are similar. The different profile shapes can be explained as a result of the different components that form these two lines: the C [iii]{} $\lambda$1175Å line is a triplet which has 5 separate, closely spaced, levels, while the S [iv]{} $\lambda\lambda$1073.0,1073.5Å line is one resonance doublet.
There are more extreme cases in which both the black and edge velocities and the profile shapes are notably different. This situation is illustrated by the P Cygni profiles of the C [iii]{} $\lambda$1175Å and Si [iv]{} $\lambda$1122Å lines of Hen2-131 shown in Figure \[fig3\]. The C [iii]{} $\lambda$1175Å line is a triplet, which can act much like a resonance line in dense winds, scattering radiation wherever C$^{++}$ is present. On the other hand, the Si [iv]{} $\lambda$1122Å line arises from a radiatively excited state. As the lower level of an excited-state line is the upper level of a resonance-line transition, its population depends strongly on the local radiation field and decreases rapidly with distance from the star [@O81]. Therefore, the distinct physical processes that dominate these lines determine not only their shapes, but also their terminal velocities.
![ Hen2-131 P Cygni profiles of the C [iii]{} $\lambda$1175Å (red) and Si [iv]{} $\lambda$1122Å (black) lines. As in Figure 1, the dotted lines correspond to SEI fits of the line profiles. []{data-label="fig3"}](poster3.eps)
A statistical comparison of $v_\infty$ with the stellar properties (spectral type, effective temperature, $T_{\rm eff}$, and gravity, $g$) is in progress. As could be expected, stars of high gravity ($\log g > 5$) show the largest $v_\infty$ ($\sim$ 4,000 km s$^{-1}$). Two of these stars (NGC246 and Lo4) are of the PG1159 type. In contrast, stars of low gravity and effective temperature show low edge velocities. Among these CSPNe, we should mention the low edge velocities of several lines of Cn3-1, Hen2-131, and NGC2392, in the range 200-400 km s$^{-1}$, whose measurement has been possible because of the high spectral resolution of the *FUSE* data.
Summary and Future Work
=======================
Using *FUSE* data, we have found evidence of fast stellar winds in 40 CSPNe. For a dozen of them, this is the first time that fast stellar winds have been reported. We have determined the edge velocities of these lines, finding notable cases for which different edge velocities are associated with different lines. A more detailed modeling using a SEI code, incorporating into the models the absorptions produced by circumstellar and/or interstellar H [i]{}, H$_2$, and atomic lines, is ongoing to determine $v_\infty$ more accurately.
A statistical comparison of $v_\infty$ with stellar properties is also underway. The edge velocity is clearly correlated with the surface gravity and effective temperature, with the most evolved CSPNe having the fastest stellar winds. Young post-AGB stars as well as excessively evolved CSPNe do not show evidence of stellar winds.
------------------- --------------------------------------------- -------------------- ----------------------------------------------------------- --------------------
CSPN *FUSE* Lines Edge velocity *IUE* Lines Edge velocity
\[km s$^{-1}$\] \[km s$^{-1}$\]
A30 O [vi]{}, C [iii]{} 4,200 N [v]{}, O [v]{}, C [iv]{} 3,400
A43 O [vi]{}, C [iii]{} 3,900 $\dots$
A78 O [vi]{}, C [iii]{} 4,000 N [v]{}, O [v]{}, C [iv]{} 3,500
BD+30$^\circ$3639 S [iv]{}, P [v]{}, Si [iv]{}, C [iii]{} 850 N [v]{}, O [v]{}, Si [iv]{}, C [iv]{}, N [iv]{} 1,000
Cn3-1 S [iv]{}, C [iii]{} 530 $\dots$
Si [iv]{} 360 : $\dots$
Hb7 S [vi]{} 1,000 $\dots$
O [vi]{} 1,500 $\dots$
Hen2-99 P [v]{}, C [iii]{} 1,200 $\dots$
S [iv]{}, Si [iv]{} 900 $\dots$
Hen2-131 P [v]{}, C [iii]{} 500 N [v]{}, O [v]{}, Si [iv]{}, C [iv]{}, N [iv]{} 850
S [iv]{}, Si [iv]{} 300 $\dots$
Hen2-274 S [iv]{}, C [iii]{} 600 $\dots$
Hen2-341 S [vi]{}, O [vi]{} 1,950 $\dots$
IC418 S [iv]{}, P [v]{}, C [iii]{} $\lambda$1175Å 500 Si [iv]{}, C [iv]{}, N [iv]{} 1,050
O [vi]{}, C [iii]{} $\lambda$977Å 850 $\dots$
IC2149 S [vi]{}, O [vi]{}, C [iii]{} 1,050 N [v]{}, Si [iv]{}, C [iv]{} 1,300
IC2448 O [vi]{} 2,550 $\dots$
IC2501 S [vi]{}, O [vi]{}, P [v]{} 1,400 N [v]{}, C [iv]{} 1,280
IC2553 O [vi]{} 2,750 $\dots$
IC3568 O [vi]{} $>$1,600 N [v]{}, O [v]{}, C [iv]{} 1,850
IC4593 P [v]{}, C [iii]{} 700 N [v]{}, O [iv]{}, Si [iv]{}, C [iv]{}, N [iv]{} 1,100
O [vi]{} 1,400 $\dots$
IC4776 S [vi]{}, O [vi]{} 2,050 $\dots$
IC5217 O [vi]{} 2,600 $\dots$
K1-16 O [vi]{}, C [iii]{} 3,700 $\dots$
Lo4 O [vi]{}, C [iii]{} 3,800 $\dots$
LSS1362 O [vi]{} 2,630 $\dots$
NGC40 S [iv]{}, P [v]{}, C [iii]{} 1,350 N [v]{}, O [iv]{}, O [v]{}, Si [iv]{}, C [iv]{} 1,600
O [vi]{} 1,000 : $\dots$
NGC246 C [iii]{} 4,300 C [iv]{} $>$3,300
O [vi]{} 3,700 $\dots$
NGC1535 O [vi]{}, S [vi]{} 2,100 N [v]{}, O [v]{} 2,150
NGC2371 O [vi]{}, C [iii]{} 4,000 C [iv]{} $<$3,750
NGC2392 O [vi]{}, C [iii]{} 200 N [v]{}, N [iv]{} 600
NGC2867 O [vi]{}, C [iii]{} 2,600 $\dots$
NGC5882 O [vi]{}, S [vi]{} 1,950 N [v]{}, O [v]{}, C [iv]{} 1,525
NGC6058 O [vi]{} 2,750 N [v]{} $\dots$
NGC6543 S [vi]{}, O [vi]{} 1,900 N [v]{}, O [iv]{}, O [v]{}, Si [iv]{}, C [iv]{}, N [iv]{} 1,900
P [v]{} 1,650 $\dots$
NGC6826 S [vi]{}, O [vi]{}, P [v]{}, C [iii]{} 1,350 N [v]{}, O [iv]{}, O [v]{}, Si [iv]{}, C [iv]{}, N [iv]{} 1,600
NGC6891 S [vi]{}, O [vi]{}, P [v]{} 1,400 N [v]{}, O [iv]{}, O [v]{}, C [iv]{}, N [iv]{} 1,950
NGC7009 O [vi]{} 3,000 N [v]{}, O [v]{} 2,750
NGC7094 O [vi]{}, C [iii]{} 3,750 C [iv]{} 3,600
NGC7662 O [vi]{} 2,550 $\dots$
PB6 O [vi]{} 3,500 $\dots$
PB8 S [vi]{}, C [iii]{} 1,000 N [v]{}, O [iv]{}, O [v]{}, Si [iv]{}, C [iv]{}, N [iv]{} 1,060
O [vi]{}, P [v]{} 1,250 $\dots$
SwSt1 O [vi]{} 1,120 N [v]{}, O [iv]{}, O [v]{}, Si [iv]{}, C [iv]{}, N [iv]{} 1,580
P [v]{} 800 $\dots$
S [iv]{}, Si [iv]{} 700 $\dots$
C [iii]{} $\lambda$1175 Å 1,400 $\dots$
Vy2-3 S [vi]{}, O [vi]{} 1,800 $\dots$
------------------- --------------------------------------------- -------------------- ----------------------------------------------------------- --------------------
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors acknowledge support from Ministerio de Educación y Ciencia (MEC), and Ministerio de Ciencia e Innovación (MICINN) through grants AYA2005-01495 and AYA2008-01934.
Balick, B. 1987, AJ, 94, 671 Cerruti-Sola, M., & Perinotto, M. 1985, ApJ, 291, 237 Eder, J., Lewis, B. M., & Terzian, Y. 1988, ApJS, 66, 183 Kwok, S., Purton, C. R., & Fitzgerald, P. M. 1978, ApJ, 219, L125 Olson, G. L. 1981, ApJ, 245, 1054 Patriarchi, P., & Perinotto, M. 1991, A&AS, 91, 325
---
bibliography:
- 'bibliography.bib'
title: 'Machine Learning for Pricing American Options in High-Dimensional Markovian and non-Markovian models'
---
------------------------------------------------------------------------
**Abstract**
In this paper we propose two efficient techniques which allow one to compute the price of American basket options. In particular, we consider a basket of assets that follow a multi-dimensional Black-Scholes dynamics. The proposed techniques, called GPR Tree (GPR-Tree) and GPR Exact Integration (GPR-EI), are both based on Machine Learning, exploited together with binomial trees or with a closed formula for integration. Moreover, these two methods solve the backward dynamic programming problem considering a Bermudan approximation of the American option. On the exercise dates, the value of the option is first computed as the maximum between the exercise value and the continuation value and then approximated by means of Gaussian Process Regression. The two methods mainly differ in the approach used to compute the continuation value: a single step of a binomial tree or integration according to the probability density of the process. Numerical results show that these two methods are accurate and reliable in handling American options on very large baskets of assets. Moreover, we also consider the rough Bergomi model, which provides stochastic volatility with memory. Although this model is only two-dimensional, the whole history of the process affects the price, and handling all this information is far from trivial. To this aim, we show how to adapt the GPR-Tree and GPR-EI methods and we focus on pricing American options in this non-Markovian framework.
*Keywords*: Machine Learning, American Options, Multi-dimensional Black-Scholes Model, Rough Bergomi Model, Binomial Tree Method, Exact Integration.
------------------------------------------------------------------------
Introduction
============
Pricing American options is a crucial question in finance but also a challenging one, since computing the optimal exercise strategy is not a straightforward task. This issue is even more demanding when the underlying of the option is a multi-dimensional process, such as a basket of $d$ assets, since in this case the direct application of standard numerical schemes, such as finite difference or tree methods, is not possible because of the exponential growth of the computational time and the required working memory.
Common approaches in this field can be divided in four groups: techniques which rely on recombinant trees to discretize the underlyings (see [@bally2003first], [@broadie1997pricing] and [@jain2012pricing]), techniques which employ regression on a truncated basis of $L^{2}$ in order to compute the conditional expectations (see [@longstaff2001valuing] and [@tsitsiklis1999optimal]), techniques which exploit Malliavin calculus to obtain representation formulas for the conditional expectation (see [@abbas2012american], [@bally2005pricing], [@bouchard2004discrete], and [@lions2001calcul]) and techniques which make use of duality-based approaches for Bermudan option pricing (see [@haugh2004pricing], [@lelong2018dual] and [@rogers2002monte]).
Recently, Machine Learning algorithms (Rasmussen and Williams [@williams2006gaussian]) and Deep Learning techniques (Nielsen [@nielsen2015neural]) have found great application in this sector of option pricing.
Neural networks are used by Kohler et al. [@kohler2010pricing] to price American options based on several underlyings. Deep Learning techniques are nowadays widely used in solving large differential equations, which is intimately related to option pricing. In particular, Han et al. [@han2018] introduce a Deep Learning-based approach that can handle general high-dimensional parabolic PDEs. E et al. [@weinan2017] propose an algorithm for solving parabolic partial differential equations and backward stochastic differential equations in high dimension. Beck et al. [@beck2017] introduce a method for solving high-dimensional fully nonlinear second-order PDEs. As far as American options in high dimension are concerned, Becker et al. [@becker2018] develop a Deep Learning method for optimal stopping problems which directly learns the optimal stopping rule from Monte Carlo samples.
Machine Learning techniques have also made their contribution. For example, Dixon and Crépey present a multi-Gaussian process regression for estimating portfolio risk, and in particular the associated CVA. De Spiegeleer et al. [@de2018machine] propose to apply Gaussian Process Regression (GPR) to predict the price of the derivatives from a training set made of observed prices for particular combinations of model parameters. Ludkovski [@Ludkovski2018] proposes to use GPR meta-models for fitting the continuation values of Bermudan options. Similarly, Goudenège et al. [@goudenege2019machine] propose the GPR-MC, which is a backward induction algorithm that employs Monte Carlo simulations and GPR to compute the price of American options in very high dimension (up to 100). In the insurance context, Gan [@gan2013] studies the pricing of a large portfolio of Variable Annuities in the Black-Scholes model by using clustering and GPR. Moreover, Gan and Lin [@gan2015] propose a novel approach that combines clustering techniques and GPR to efficiently evaluate policies considering nested simulations.
In this paper we present two numerical techniques which upgrade the GPR-MC approach by replacing the Monte Carlo based computation of the continuation value with a tree step and with an exact integration step, respectively. In particular, the algorithms we propose proceed backward over time and compute the price function only on a set of predetermined points. At each time step, a binomial tree step or a closed formula for integration is used together with GPR to approximate the continuation value at these points. The option price is then obtained as the maximum between the continuation value and the intrinsic value of the option, and the algorithms proceed backward in time. For the sake of simplicity, we name these new approaches Gaussian Process Regression - Tree (GPR-Tree) and Gaussian Process Regression - Exact Integration (GPR-EI). We observe that the use of the GPR method to extrapolate the option value is particularly efficient in terms of computing time with respect to other techniques such as Neural Networks, especially because a small dataset is considered here. Moreover, Le Gratiet and Garnier [@gratiet2012regularity] have developed convergence results for GPR, extending the results of Rasmussen and Williams [@williams2006gaussian] and establishing the convergence rate when different kernels are employed.
In order to demonstrate the wide applicability of the GPR methods, we also consider the rough Bergomi model, which is a non-Markovian model with stochastic volatility. Such a model, introduced by Bayer et al. [@bayer2016pricing], stood out for explaining implied volatility smiles and other phenomena in the pricing of European options. The non-Markovian property of the model makes it difficult to implement a methodologically correct approach to the valuation of American options, and the literature in this framework is rather limited. Horvath et al. [@horvath2017functional] propose an approach based on Donsker's approximation of fractional Brownian motion and on a tree with exponential complexity. More recently, Bayer et al. [@bayer2018pricing] introduce a method based on Monte Carlo simulation and exercise rate optimization.
Numerical results show that both the GPR-Tree and the GPR-EI methods are accurate and reliable in the multi-dimensional Black-Scholes model. Moreover, the computational times are improved with respect to the GPR-MC method. The GPR-Tree and the GPR-EI methods also prove their accuracy when applied to the rough Bergomi model.
The remainder of the paper is organized as follows. Section 2 presents American options in the multi-dimensional Black-Scholes model. Sections 3 and 4 introduce the GPR-Tree and the GPR-EI methods for the multi-dimensional Black-Scholes model, respectively. Section 5 presents American options in the rough Bergomi model. Sections 6 and 7 introduce the GPR-Tree and the GPR-EI methods for the rough Bergomi model. Section 8 reports some numerical results. Section 9 draws some conclusions.
American options in the multi-dimensional Black-Scholes model
=============================================================
An American option with maturity $T$ is a derivative instrument whose holder can exercise the intrinsic optionality at any moment before maturity. Let $\mathbf{S}=(\mathbf{S}_{t})_{t\in[0,T]}$ denote the $d$-dimensional underlying process, which is supposed to randomly evolve according to the multi-dimensional Black-Scholes model: under the risk neutral probability, such a model is given by the following equation $$dS_{t}^{i}=r\,S_{t}^{i}\,dt+\sigma_{i}\,S_{t}^{i}\,dW_{t}^{i},\quad\ i=1,\ldots,d,\label{sde}$$ with $\mathbf{S}_{0}=\left(s_{0}^{1},\dots,s_{0}^{d}\right)\in\mathbb{R}_{+}^{d}$ the spot price, $r$ the (constant) interest rate, $\mathbf{\sigma}=(\sigma_{1},\dots,\sigma_{d})$ the vector of volatilities, $\mathbf{W}$ a $d$-dimensional correlated Brownian motion and $\rho_{ij}$ the instantaneous correlation coefficient between $W_{t}^{i}$ and $W_{t}^{j}.$ Moreover, let $\Psi(\mathbf{S}_{T})$ denote the cash-flow associated with the option at maturity $T$. Thus, the price at time $t$ of an American option having maturity $T$ and payoff function $\Psi\,:\,\R_{+}^{d}\to\R$ is then $$v(t,\mathbf{x})=\sup_{\tau\in\mathcal{T}_{t,T}}\mathbb{E}_{t,\mathbf{x}}\left[e^{-r(\tau-t)}\Psi(\mathbf{S}_{\tau})\right],\label{price}$$ where $\cl T_{t,T}$ stands for the set of all the stopping times taking values on $[t,T]$ and $\mathbb{E}_{t,\mathbf{x}}\left[\cdot\right]$ represents the expectation given all the information at time $t$ and in particular assuming $\mathbf{S}_{t}=\mathbf{x}$.
For simulation purposes, the $d-$dimensional Black-Scholes model can be written alternatively using the Cholesky decomposition. Specifically, for $i=1,\dots,d$ we can write $$dS_{t}^{i}=S_{t}^{i}(rdt+\sigma_{i}\Sigma_{i}d\mathbf{B}_{t}),\label{sde_cho}$$ where $\mathbf{B}$ is a $d$-dimensional uncorrelated Brownian motion and $\Sigma_{i}$ is the $i$-th row of the matrix $\Sigma$ defined as a square root of the correlation matrix $\Gamma$, given by $$\Gamma=\begin{pmatrix}1 & \rho_{12} & \hdots & \rho_{1d}\\
\rho_{21} & 1 & \ddots & \vdots\\
\vdots & \ddots & \ddots & \vdots\\
\rho_{d1} & \hdots & \hdots & 1
\end{pmatrix}$$
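For illustration, one exact log-normal step of Eq. (\[sde\_cho\]) can be simulated as follows; this is only a sketch, in which a Cholesky factor of $\Gamma$ (assumed positive definite) plays the role of $\Sigma$ and all parameter values are arbitrary.

```python
import numpy as np

def simulate_bs_step(s, r, sigma, corr, dt, n_paths, rng):
    """One exact log-normal step of the d-dimensional Black-Scholes model.
    s, sigma: arrays of length d; corr: d x d correlation matrix Gamma."""
    L = np.linalg.cholesky(corr)                  # plays the role of Sigma
    G = rng.standard_normal((n_paths, len(s)))    # independent N(0,1) draws
    drift = (r - 0.5 * sigma**2) * dt
    diffusion = np.sqrt(dt) * (G @ L.T) * sigma   # sigma_i * Sigma_i . B
    return s * np.exp(drift + diffusion)

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.2], [0.2, 1.0]])
print(simulate_bs_step(np.array([100.0, 100.0]), 0.05,
                       np.array([0.2, 0.3]), corr, dt=1.0,
                       n_paths=4, rng=rng))
```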
The GPR-Tree method in the multi-dimensional Black-Scholes model
================================================================
The GPR-Tree method is similar to the GPR-MC method, but the diffusion of the underlyings is performed through a step of a binomial tree. In particular, the algorithm proceeds backward over time, approximating the price of the American option with the price of a Bermudan option on the same basket. At each time step, the price function is evaluated only on a set of predetermined points, using a binomial tree step together with GPR to approximate the continuation value. Finally, the early-exercise feature is taken into account by computing the option value as the maximum between the continuation value and the exercise value.
Let $N$ denote the number of time steps, $\Delta t=T/N$ be the time increment and $t_{n}=n\,\Delta t$ represent the discrete exercise dates for $n=0,1,\ldots,N$. At any exercise date $t_{n}$, the value of the option is determined by the vector of the underlying prices $\mathbf{S}_{t_{n}}$ as follows: $$v\left(t_{n},\mathbf{S}_{t_{n}}\right)=\max\left(\Psi\left(\mathbf{S}_{t_{n}}\right),C\left(t_{n},\mathbf{S}_{t_{n}}\right)\right),\label{eq:update}$$ where $C$ denotes the continuation value of the option and it is given by the following relation: $$C\left(t_{n},\mathbf{S}_{t_{n}}\right)=\mathbb{E}_{t_{n},\mathbf{S}_{t_{n}}}\left[e^{-r\Delta t}v\left(t_{n+1},\mathbf{S}_{t_{n+1}}\right)\right].\label{eq:CV}$$
We observe that, if the function $v\left(t_{n+1},\cdot\right)$ is known, then it is possible to compute $v\left(t_{n},\cdot\right)$ by approximating the expectation in (\[eq:CV\]). In order to obtain such an approximation, we consider a set $X$ of $P$ points whose elements represent certain possible values for the underlyings $\mathbf{S}$: $$X=\left\{ \mathbf{x}^{p}=\left(x_{1}^{p},\dots,x_{d}^{p}\right),p=1,\dots,P\right\} \subset\mathbb{R}_{+}^{d},\label{eq:X}$$ where $\mathbb{R}_{+}^{d}=\left]0,+\infty\right[^{d}$. Such a set is determined as done by Goudenège et al. [@goudenege2019machine], that is the elements of $X$ are obtained through a quasi-random simulation of $\mathbf{S}_{T}$ based on the Halton sequence (see [@goudenege2019machine] for more details).
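A possible construction of the set $X$ along these lines is sketched below: Halton points in $(0,1)^{d}$ are mapped to quasi-Gaussian draws and pushed through the terminal log-normal law of $\mathbf{S}_{T}$. The precise recipe is the one of [@goudenege2019machine]; the implementation details below (scipy's Halton generator, clipping of the uniforms, use of a Cholesky factor for $\Sigma$) are our own choices.

```python
import numpy as np
from scipy.stats import norm, qmc

def halton_grid(s0, r, sigma, corr, T, P):
    """Quasi-random grid X of P possible values of S_T (sketch)."""
    d = len(s0)
    U = qmc.Halton(d=d, scramble=False).random(P + 1)[1:]   # drop the origin
    U = np.clip(U, 1e-12, 1 - 1e-12)
    G = norm.ppf(U)                                # quasi-Gaussian increments
    L = np.linalg.cholesky(corr)
    drift = (r - 0.5 * sigma**2) * T
    return s0 * np.exp(drift + np.sqrt(T) * (G @ L.T) * sigma)
```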
The GPR-Tree method assesses $v\left(t_{n},\mathbf{x}^{p}\right)$ for each $\mathbf{x}^{p}\in X$ through one step of the binomial tree proposed by Ekvall [@ekvall1996lattice]. In particular, for each $\mathbf{x}^{p}\in X$, we consider a set $\tilde{X}^{p}$ of $2^{d}$ possible values for $\mathbf{S}_{t_{n+1}}$: $$\tilde{X}^{p}=\left\{ \mathbf{\tilde{x}}^{p,k}=\left(\tilde{x}_{1}^{p,k},\dots,\tilde{x}_{d}^{p,k}\right),k=1,\dots,2^{d}\right\} \subset\mathbb{R}_{+}^{d}$$ which are computed as follows: $$\mathbf{\tilde{x}}_{i}^{p,k}=\mathbf{x}_{i}^{p}\exp\left(\left(r-\frac{\sigma_{i}^{2}}{2}\right)\Delta t+\sigma_{i}\sqrt{\Delta t}\Sigma_{i}\mathbf{G}_{k}\right),\ k=1,\dots,2^{d}$$ where $\mathbf{G}_{k}$ is the $k$-th point of the space $\left\{ -1,+1\right\} ^{d}$. In particular, if $\mathbf{Y}_{k}\in\left\{ 0,1\right\} ^{d}$ is the vector whose components are the digits of the binary representation of $k-1$, then $\mathbf{G}_{k}=2\mathbf{Y}_{k}-1$. It is worth noticing that, as pointed out in [@ekvall1996lattice], the elements of $\tilde{X}^{p}$ are equally likely and this simplifies the evaluation of the expected value to the computation of the arithmetic mean of the future values. Using the tree step, the price function may be approximated by $$v_{n}^{Tree}\left(\mathbf{x}^{p}\right)=\max\left(\Psi\left(\mathbf{x}^{p}\right),\frac{e^{-r\Delta t}}{2^{d}}\sum_{k=1}^{2^{d}}v\left(t_{n+1},\mathbf{\tilde{x}}^{p,k}\right)\right).\label{eq:update2}$$ The computation in (\[eq:update2\]) can be performed only if the quantities $v\left(t_{n+1},\mathbf{\tilde{x}}^{p,k}\right)$ are known for all the future points $\mathbf{\tilde{x}}^{p,k}$. If we proceed backward, the function $v\left(t,\cdot\right)$ is known at maturity since it is given by the payoff function $\Psi\left(\cdot\right)$ and so (\[eq:update2\]) can be computed at $t_{N-1}$ and for all the points of $X$. In order to compute $v\left(t_{N-2},\mathbf{x}^{p}\right)$ for all $\mathbf{x}^{p}\in X$, and thus going on up to $t=0$, we have to evaluate the function $v\left(t_{N-1},\cdot\right)$ for all the points in $\tilde{X}=\bigcup_{p=1}^{P}\tilde{X}^{p}$, but we only know $v_{N-1}^{Tree}\left(\cdot\right)$ at $X$. To overcome this issue, we employ the GPR method to approximate the function $v_{N-1}^{Tree}\left(\cdot\right)$ at any point of $\mathbb{R}^{d}$ and in particular at the elements of $\tilde{X}$. Specifically, let $v_{N-1}^{GPR}\left(\cdot\right)$ denote the GPR prediction of $v_{N-1}^{Tree}\left(\cdot\right)$, obtained by considering the predictor set $X$ and the response $\mathbf{y}\in\mathbb{R}^{P}$ given by $$y^{p}=v_{N-1}^{Tree}\left(\mathbf{x}^{p}\right),\ p\in\left\{ 1,\dots,P\right\} .$$ The GPR-Tree approximation $v_{N-2}^{GPR-Tree}\left(\cdot\right)$ of the value function $v\left(t_{N-2},\cdot\right)$ at time $t_{N-2}$ can be computed as follows:
$$v_{N-2}^{GPR-Tree}\left(\mathbf{x}^{p}\right)=\max\left(\Psi\left(\mathbf{x}^{p}\right),\frac{e^{-r\Delta t}}{2^{d}}\sum_{k=1}^{2^{d}}v_{N-1}^{GPR}\left(\mathbf{\tilde{x}}^{p,k}\right)\right),\ p\in\left\{ 1,\dots,P\right\} .$$
Following the same steps, the dynamic programming problem can be solved. Specifically, let $n\in\left\{ 0,\dots,N-3\right\} $ and let $v_{n+1}^{GPR}\left(\cdot\right)$ denote the GPR prediction of $v_{n+1}^{GPR-Tree}\left(\cdot\right)$ obtained from predictor set $X$ and the response $\mathbf{y}\in\mathbb{R}^{P}$ given by $$y^{p}=v_{n+1}^{GPR-Tree}\left(\mathbf{x}^{p}\right).$$ Then, the function $v_{n}^{GPR-Tree}$ can be obtained as
$$v_{n}^{GPR-Tree}\left(\mathbf{x}^{p}\right)=\max\left(\Psi\left(\mathbf{x}^{p}\right),\frac{e^{-r\Delta t}}{2^{d}}\sum_{k=1}^{2^{d}}v_{n+1}^{GPR}\left(\mathbf{\tilde{x}}^{p,k}\right)\right).$$
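A single backward step of the procedure just described can be sketched as follows. We use scikit-learn's Gaussian process regressor with an RBF kernel as a stand-in for the GPR fit; this is an illustrative choice, not necessarily the implementation used for the numerical results reported later.

```python
import numpy as np
from itertools import product
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def gpr_tree_step(X, y_next, payoff, r, sigma, Sigma, dt):
    """One backward GPR-Tree step: fit a GPR to the time t_{n+1} values y_next
    on the grid X, move every grid point through its 2^d equally likely tree
    successors, and take the max of exercise and continuation values."""
    P, d = X.shape
    gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                   normalize_y=True)
    gpr.fit(X, y_next)

    G = np.array(list(product([-1.0, 1.0], repeat=d)))      # the 2^d vectors G_k
    drift = (r - 0.5 * sigma**2) * dt
    y_now = np.empty(P)
    for p in range(P):
        succ = X[p] * np.exp(drift + np.sqrt(dt) * sigma * (G @ Sigma.T))
        cont = np.exp(-r * dt) * gpr.predict(succ).mean()   # equally likely nodes
        y_now[p] = max(payoff(X[p]), cont)
    return y_now
```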
The GPR-EI method in the multi-dimensional Black-Scholes model
==============================================================
The GPR-EI method differs from both the GPR-MC and GPR-Tree methods in two respects. First, the predictors employed in the GPR step are related to the logarithms of the predictors used in the GPR-Tree method. Second, the continuation value at these points is computed through a closed formula which comes from an exact integration.
Let $X=\left\{ \mathbf{x}^{p},p=1,\dots,P\right\} \subset\mathbb{R}_{+}^{d}$ be the same set as in (\[eq:X\]) and define $\log\left(\mathbf{x}^{p}\right)$ as the vector obtained by applying the natural logarithm to all the components of $\mathbf{x}^{p}$, that is $\log\left(\mathbf{x}^{p}\right)=\left(\log\left(x_{1}^{p}\right),\dots,\log\left(x_{d}^{p}\right)\right)^{\top}$. Moreover, let us define the set $$Z=\left\{ \mathbf{z}^{p}=\log\left(\mathbf{x}^{p}\right)-\left(r-\frac{1}{2}\boldsymbol{\sigma}^{2}\right)T,p=1,\dots,P\right\} .\label{eq:Z}$$ In this case, we do not work directly with the function $v$, but we rather consider the function $u:\left[0,T\right]\times Z\rightarrow\mathbb{R}$ defined as $$u\left(t,\mathbf{z}\right):=v\left(t,\exp\left(\mathbf{z}+\left(r-\frac{1}{2}\boldsymbol{\sigma}^{2}\right)t\right)\right).\label{eq:u_def}$$ In a nutshell, the main idea is to approximate the function $u$ at $t_{N},t_{N-1},\dots,t_{1}$ by using the GPR method on the fixed grid $Z$. In particular, we employ the Squared Exponential Kernel $k_{SE}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}$, which is given by $$k_{SE}\left(\mathbf{a},\mathbf{b}\right)=\sigma_{f}^{2}\exp\left(-\frac{\left(\mathbf{a}-\mathbf{b}\right)^{\top}I_{d}\left(\mathbf{a}-\mathbf{b}\right)}{2\sigma_{l}^{2}}\right),\ \mathbf{a},\mathbf{b}\in\mathbb{R}^{d},\label{eq:A11-1}$$ where $I_{d}$ the $d\times d$ identity matrix, $\sigma_{l}\in\mathbb{R}$ is the characteristic length scale and $\sigma_{f}\in\mathbb{R}$ is the signal standard deviation. These two parameters are obtained by means of a maximum likelihood estimation. The GPR approach allows one to approximate the function $u\left(t_{n},\cdot\right)$ at time $t_{n}$ by
$$u_{n}^{GPR}\left(\mathbf{z}\right)=\sum_{q=1}^{P}k_{SE}\left(\mathbf{z}^{q},\mathbf{z}\right)\mathbf{\omega}_{q},$$
where $\omega_{1},\dots,\omega_{P}$ are weights that are computed by solving a linear system. The continuation value can be computed by integrating the function $u^{GPR}$ against a $d$-dimensional probability density. This calculation can be done easily by means of a closed formula.
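In standard GPR the weights $\omega$ solve a linear system involving the kernel matrix $K_{pq}=k_{SE}\left(\mathbf{z}^{p},\mathbf{z}^{q}\right)$ and the observed responses; the small diagonal jitter added below for numerical stability is our own regularisation choice. A minimal sketch:

```python
import numpy as np

def fit_gpr_weights(Z, y, sigma_f, sigma_l, jitter=1e-8):
    """Weights omega of the GPR predictor u(z) = sum_q k_SE(z^q, z) * omega_q,
    obtained by solving (K + jitter*I) omega = y on the grid Z."""
    sq_dists = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = sigma_f**2 * np.exp(-0.5 * sq_dists / sigma_l**2)
    return np.linalg.solve(K + jitter * np.eye(len(Z)), y)
```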
Specifically, the GPR-EI method relies on the following Proposition.
\[prop:0\]Let $n\in\left\{ 0,\dots,N-1\right\} $ and suppose the function $u\left(t_{n+1},\cdot\right)$ at time $t_{n+1}$ to be known at $Z$. The GPR-EI approximation of the option value $u\left(t_{n},\cdot\right)$ at time $t_{n}$ at $\mathbf{z}^{p}$ is given by
$$u_{n}^{GPR-EI}\left(\mathbf{z}^{p}\right)=\max\left(\Psi\left(\exp\left(\mathbf{z}^{p}+\left(r-\frac{1}{2}\boldsymbol{\sigma}^{2}\right)t_{n}\right)\right),e^{-r\Delta t}\sum_{q=1}^{P}\omega_{q}\sigma_{f}^{2}\sigma_{l}^{d}\frac{e^{-\frac{1}{2}\left(\mathbf{z}^{q}-\mathbf{z}^{p}\right)^{\top}\left(\Pi+\sigma_{l}^{2}I_{d}\right)^{-1}\left(\mathbf{z}^{q}-\mathbf{z}^{p}\right)}}{\sqrt{\det\left(\Pi+\sigma_{l}^{2}I_{d}\right)}}\right)\label{eq:GPR-EI_0}$$
where $\sigma_{f}$, $\sigma_{l}$, and $\omega_{1},\dots,\omega_{P}$ are constants determined by the GPR approximation of the function $\mathbf{z}\mapsto u\left(t_{n+1},\mathbf{z}\right)$, considering $Z$ as the predictor set, and $\Pi=\left(\Pi_{i,j}\right)$ is the $d\times d$ covariance matrix of the log-increments, defined by $\Pi_{i,j}=\rho_{i,j}\sigma_{i}\sigma_{j}\Delta t$.
The proof of Proposition \[prop:0\] is reported in the Appendix \[ApA0\]. Equation (\[eq:GPR-EI\_0\]) allows one to compute the option price at time $t=0$ by proceeding backward. In fact, the function $u\left(t_{N},\cdot\right)$ is known at time $t_{N}=T$ through (\[eq:u\_def\]) since the price function $v\left(t_{N},\cdot\right)$ is equal to the payoff function $\Psi\left(\cdot\right)$. Moreover, if an approximation of $u\left(t_{n+1},\cdot\right)$ is available, then one can approximate $u\left(t_{n},\cdot\right)$ at $Z$ by means of relation (\[eq:GPR-EI\_0\]). Finally, the option price at time $t=0$ is approximated by $u_{0}^{GPR-EI}\left(\log\left(\mathbf{\mathbf{S}_{0}}\right)\right)$.
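A direct transcription of Eq. (\[eq:GPR-EI\_0\]) into code is sketched below; the weights and kernel hyperparameters are assumed to come from the GPR fit of $u\left(t_{n+1},\cdot\right)$ on $Z$, and `exercise[p]` denotes the exercise value $\Psi\left(\exp\left(\mathbf{z}^{p}+\left(r-\frac{1}{2}\boldsymbol{\sigma}^{2}\right)t_{n}\right)\right)$.

```python
import numpy as np

def gpr_ei_step(Z, omega, sigma_f, sigma_l, Pi, r, dt, exercise):
    """One backward GPR-EI step: closed-form continuation value at every z^p,
    then max with the exercise value."""
    P, d = Z.shape
    M = Pi + sigma_l**2 * np.eye(d)
    Minv = np.linalg.inv(M)
    norm_const = sigma_f**2 * sigma_l**d / np.sqrt(np.linalg.det(M))

    values = np.empty(P)
    for p in range(P):
        diff = Z - Z[p]                                   # rows are z^q - z^p
        quad = np.einsum('qi,ij,qj->q', diff, Minv, diff)
        cont = np.exp(-r * dt) * norm_const * (omega * np.exp(-0.5 * quad)).sum()
        values[p] = max(exercise[p], cont)
    return values
```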
American options in the rough Bergomi model
===========================================
The rough Bergomi model, introduced by Bayer et al. [@bayer2016pricing], shapes the underlying process $S_{t}$ and its volatility $V_{t}$ through the following relations:
$$\begin{aligned}
dS_{t} & =rS_{t}dt+\sqrt{V_{t}}S_{t}dW_{t}^{1}\\
V_{t} & =\xi_{0}\left(t\right)\exp\left(\eta\widetilde{W}_{t}^{H}-\frac{1}{2}\eta^{2}t^{2H}\right),\end{aligned}$$
with $r$ the (constant) interest rate, $\eta$ a positive parameter and $H\in\left(0,1\right)$ the Hurst parameter. The deterministic function $\xi_{0}\left(t\right)$ represents the forward variance curve and following Bayer et al. [@bayer2016pricing] we consider it as constant. The process $W_{t}^{1}$ is a Brownian motion, whereas $\widetilde{W}_{t}^{H}$ is a Riemann-Liouville fractional Brownian motion that can be expressed as a stochastic integral: $$\widetilde{W}_{t}^{H}=\sqrt{2H}\int_{0}^{t}\left(t-s\right)^{H-\frac{1}{2}}dW_{t}^{2},$$ with $W_{t}^{2}$ a Brownian motion and $\rho$ the instantaneous correlation coefficient between $W_{t}^{1}$ and $W_{t}^{2}$.
The rough Bergomi model stood out for its ability to explain implied volatility and other phenomena related to European options. Moreover, it is particularly interesting from a computational point of view as it is a non-Markovian model and therefore it is not possible to apply standard techniques for American options.
In this framework, the price at time $t$ of an American option having maturity $T$ and payoff function $\Psi\,:\,\R_{+}\to\R$ is then $$v(t,\mathcal{F}_{t})=\sup_{\tau\in\mathcal{T}_{t,T}}\mathbb{E}\left[e^{-r(\tau-t)}\Psi(S_{\tau})|\mathcal{F}_{t}\right],\label{r-price}$$ where $\mathcal{F}_{t}$ is the natural filtration generated by the couple $\left(W_{s}^{1},\widetilde{W}_{s}^{H}\right)$ for $s\in\left[0,t\right]$. We point out that, as opposed to the multi-dimensional Black-Scholes case, here the stopping time $\tau$ does not depend only on the current values of $S$ and $V$ but, since these are non-Markovian processes, it depends on the whole filtration, that is, on the whole observed history of the processes.
The GPR-Tree method in the rough Bergomi model
==============================================
The GPR-Tree method can be adapted to price American options in the rough Bergomi model. Although the dimension of the model is only two, it is a non-Markovian model, which obliges one to take the past history into account when evaluating the price of an option. Thus, the price of an option at a certain moment depends on the whole filtration at that moment. Clearly, evaluating an option by considering the whole history of the process (a continuous process) is not possible. To overcome such an issue, we simulate the process on a finite number of dates and we consider the sub-filtration induced by these observations. First of all, we consider a finite number $N$ of time steps that determines the time increment $\Delta t=\frac{T}{N}$, and we employ the scheme presented in Bayer et al. [@bayer2016pricing] to generate a set of $P$ simulations of the couple $\left(S_{t},V_{t}\right)$ at $t_{n}=n\,\Delta t$ for $n=1,\ldots,N$. In particular, if we set $\Delta W_{n}^{1}=W_{t_{n}}^{1}-W_{t_{n-1}}^{1}$, then the $2N$-dimensional random vector $\mathbf{R}$, given by $$\mathbf{R}=\left(\Delta W_{1}^{1},\widetilde{W}_{t_{1}}^{H},\dots,\Delta W_{N}^{1},\widetilde{W}_{t_{N}}^{H}\right)^{\top},\label{eq:vector_R}$$ follows a zero-mean Gaussian distribution. Moreover, using the relations stated in Appendix \[ApA\], one can calculate the covariance matrix $\Upsilon$ of $\mathbf{R}$ and its lower triangular square root $\Lambda$ by using the Cholesky factorization. The vector $\mathbf{R}$ can be simulated by computing $\Lambda\mathbf{G}$, where $\mathbf{G}=\left(G_{1},\dots,G_{2N}\right)^{\top}$ is a vector of independent standard Gaussian random variables. Finally, a simulation for $\left(S_{t_{n}},V_{t_{n}}\right)_{n=0,\dots,N}$ can be obtained from $\mathbf{R}$ by considering the initial values $$S_{t_{0}}=S_{0},\ V_{t_{0}}=\xi_{0},\label{eq:62}$$ and the Euler-Maruyama scheme given by $$\begin{aligned}
S_{t_{n+1}} & =S_{t_{n}}\exp\left(\left(r-\frac{1}{2}V_{t_{n}}\right)\Delta t+\sqrt{V_{t_{n}}}\Delta W_{n+1}^{1}\right),\label{eq:63}\\
V_{t_{n+1}} & =\xi_{0}\exp\left(-\frac{1}{2}\eta^{2}\left(t_{n+1}\right)^{2H}+\eta\widetilde{W}_{t_{n+1}}^{H}\right).\label{eq:64}\end{aligned}$$
First of all, the GPR-Tree method simulates $P$ different samples for the vector $\mathbf{G}$, namely $\mathbf{G}^{p}$ for $p=1,\dots,P$, and it computes the corresponding paths $\left(S_{t_{1}}^{p},V_{t_{1}}^{p},\dots,S_{t_{N}}^{p},V_{t_{N}}^{p}\right)$ according to (\[eq:62\]), (\[eq:63\]) and (\[eq:64\]). To summarize the values assumed by $S$ and $V$, let us define the vector $$\mathbf{SV}_{i:j}^{p}=\left(S_{t_{i}}^{p},V_{t_{i}}^{p},S_{t_{i+1}}^{p},V_{t_{i+1}}^{p},\dots,S_{t_{j}}^{p},V_{t_{j}}^{p}\right)^{\top}$$ for $i,j\in\left\{ 0,\dots,N\right\} $ and $i<j$. Moreover, we also define $$\log\left(\mathbf{SV}_{i:j}^{p}\right)=\left(\log\left(S_{t_{i}}^{p}\right),\log\left(V_{t_{i}}^{p}\right),\log\left(S_{t_{i+1}}^{p}\right),\log\left(V_{t_{i+1}}^{p}\right),\dots,\log\left(S_{t_{j}}^{p}\right),\log\left(V_{t_{j}}^{p}\right)\right)^{\top},$$ where $\log$ stands for the natural logarithm.
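A minimal sketch of this simulation step is reported below. It assumes that the lower triangular factor $\Lambda$ of the covariance matrix $\Upsilon$ has already been assembled (as described in the Appendix) and simply rolls Eqs. (\[eq:62\])-(\[eq:64\]) forward.

```python
import numpy as np

def rough_bergomi_paths(Lam, S0, xi0, eta, r, H, dt, n_paths, rng):
    """P sample paths of (S, V) on the grid t_1, ..., t_N from Eqs. (62)-(64),
    given the lower triangular factor Lam of the covariance matrix of R."""
    N = Lam.shape[0] // 2
    G = rng.standard_normal((n_paths, 2 * N))
    R = G @ Lam.T                         # each row is one sample of R
    dW1 = R[:, 0::2]                      # increments of W^1
    WH = R[:, 1::2]                       # fractional Brownian motion at t_1..t_N

    t = dt * np.arange(1, N + 1)
    V = np.empty((n_paths, N + 1)); V[:, 0] = xi0
    V[:, 1:] = xi0 * np.exp(-0.5 * eta**2 * t**(2 * H) + eta * WH)

    S = np.empty((n_paths, N + 1)); S[:, 0] = S0
    for n in range(N):
        S[:, n + 1] = S[:, n] * np.exp((r - 0.5 * V[:, n]) * dt
                                       + np.sqrt(V[:, n]) * dW1[:, n])
    return S, V
```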
Then, the GPR-Tree method computes the option value for each of these $P$ trajectories, proceeding backward in time and considering the past history encoded in the filtration. Since we consider only a finite number of steps, we approximate the filtration $\mathcal{F}_{t_{n}}$ with the natural filtration $\hat{\mathcal{F}_{t_{n}}}$ generated by the $2n$ variables $W_{t_{1}}^{1},\widetilde{W}_{t_{1}}^{H},\dots,W_{t_{n}}^{1},\widetilde{W}_{t_{n}}^{H}$. Moreover, $\hat{\mathcal{F}}_{t_{n}}$ is equal to the filtration generated by $S_{t_{1}},V_{t_{1}},\dots,S_{t_{n}},V_{t_{n}}$ because there exists a deterministic bijective function that allows one to obtain $W_{t_{1}}^{1},\widetilde{W}_{t_{1}}^{H},\dots,W_{t_{n}}^{1},\widetilde{W}_{t_{n}}^{H}$ from $S_{t_{1}},V_{t_{1}},\dots,S_{t_{n}},V_{t_{n}}$ and vice versa. Therefore, when we calculate the option value conditioned on the filtration $\hat{\mathcal{F}}_{t_{n}}$, it is enough to condition on the knowledge of the variables $S_{t_{1}},V_{t_{1}},\dots,S_{t_{n}},V_{t_{n}}$.
The GPR-Tree method proceeds backward in time, using a tree method and the GPR to calculate the option price with respect to the initially simulated trajectories. As opposed to the multi-dimensional Black-Scholes model, here we perform more than one single tree step, so as to reduce the number of GPR regressions and thus increase the computational efficiency. In particular, we consider $N=N^{Tree}\cdot m$, with $N^{Tree}$ and $m$ natural numbers that represent how many times the tree method is used and the number of time steps employed in each tree, respectively.
After simulating the $P$ random paths $\left\{ \mathbf{SV}_{1:N}^{p},\ p=1,\dots,P\right\} $, we compute the tree approximation of the option value $v\left(t_{N-m},\mathbf{SV}_{1:\left(N-m\right)}^{p}\right)$ at time $t_{N-m}$ for each path as follows: $$v_{N-m}^{Tree}\left(\mathbf{SV}_{1:\left(N-m\right)}^{p}\right)=\max\left(\Psi\left(S_{t_{N-m}}^{p}\right),C_{N-m}^{Tree}\left(\mathbf{SV}_{1:\left(N-m\right)}^{p}\right)\right),$$ where $C_{N-m}^{Tree}$ stands for the approximation of the continuation value function at time $t_{N-m}$ obtained by means of a tree approach, which discretizes each component of the Gaussian vector $\mathbf{G}_{\left[2\left(N-m\right)+1\right]:2N}$ that generates the process. As opposed to the multi-dimensional Black-Scholes model, the approximation of the independent Gaussian components of $\mathbf{G}$ through the equiprobable couple $\left\{ -1,+1\right\} $ is not suitable since the convergence to the right price is too slow. So, we propose to use the same discrete approximation employed by Alfonsi in [@alfonsi2010high], which is stated in the following Lemma.
\[lem:alfonsi\]The discrete variable $A$ defined by $\mathbb{P}\left(A=\sqrt{3+\sqrt{6}}\right)=\mathbb{P}\left(A=-\sqrt{3+\sqrt{6}}\right)=\frac{\sqrt{6}-2}{4\sqrt{6}}$ and $\mathbb{P}\left(A=\sqrt{3-\sqrt{6}}\right)=\mathbb{P}\left(A=-\sqrt{3-\sqrt{6}}\right)=\frac{1}{2}-\frac{\sqrt{6}-2}{4\sqrt{6}}$ fits the first seven moments of a standard Gaussian random variable.
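As a quick check, the following snippet (Python/NumPy; a minimal illustrative sketch, not part of the original MATLAB implementation) samples the variable $A$ and verifies numerically that its moments match those of a standard Gaussian up to order seven.

```python
import numpy as np

# Four-point discretization of a standard Gaussian (Alfonsi):
# values +/- sqrt(3 +/- sqrt(6)) with the probabilities of Lemma [lem:alfonsi].
values = np.array([np.sqrt(3 + np.sqrt(6)), -np.sqrt(3 + np.sqrt(6)),
                   np.sqrt(3 - np.sqrt(6)), -np.sqrt(3 - np.sqrt(6))])
p_out = (np.sqrt(6) - 2) / (4 * np.sqrt(6))      # probability of each outer value
p_in = 0.5 - p_out                               # probability of each inner value
probs = np.array([p_out, p_out, p_in, p_in])

# The moments of A match the standard Gaussian moments 0, 1, 0, 3, 0, 15, 0
# up to order seven.
for k in range(1, 8):
    print(k, np.sum(probs * values ** k))

# Sampling, e.g. to fill the tail components of the vector G-hat:
rng = np.random.default_rng(0)
samples = rng.choice(values, size=10_000, p=probs)
```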
So, for each path $p$, we consider a quadrinomial tree with $m$ time steps, and we use it to compute the continuation value. In particular, we consider the discrete time process $\left(\hat{S}_{k}^{p},\hat{V}_{k}^{p}\right)_{k\in\left\{ N-m,\dots,N\right\} }$defined through $$\hat{S}_{N-m}^{p}=S_{t_{N-m}}^{p},\hat{V}_{N-m}^{p}=V_{t_{N-m}}^{p}$$ $$\begin{aligned}
\hat{S}_{k+1}^{p} & =\hat{S}_{k}^{p}\exp\left(\left(r-\frac{1}{2}\hat{V}_{k}^{p}\right)\Delta t+\sqrt{\hat{V}_{k}^{p}}\Lambda_{2k+1}\hat{\mathbf{G}}^{p}\right),\label{eq:63-1}\\
\hat{V}_{k+1}^{p} & =\xi_{0}\exp\left(-\frac{1}{2}\eta^{2}\left(t_{k+1}\right)^{2H}+\eta\Lambda_{2k+2}\hat{\mathbf{G}}^{p}\right),\label{eq:64-1}\end{aligned}$$ where $\Lambda_{2k+1}$ is the $\left(2k+1\right)$-th row of the matrix $\Lambda$ and $\Lambda_{2k+2}$ the $\left(2k+2\right)$-th row. Moreover, $\hat{G}_{j}^{p}=G_{j}^{p}$ for $j=1,\dots,2\left(N-m\right)$ and the other components, that is $\hat{G}_{j}^{p}$ for $j=2\left(N-m\right)+1,\dots,2N$, are sampled by using the random variable $A$ of Lemma \[lem:alfonsi\].
An option value is assigned to each node of the tree: at maturity, that is for $k=N,$ it is equal to the payoff $\Psi\left(\hat{S}_{N}^{p}\right)$, and for $k=N-m,\dots,N-1$ it can be obtained as the maximum between the exercise value and the discounted mean value at the future nodes, weighted according to the transition probabilities determined by the probability distribution of $A$.
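The backward recursion just described can be sketched as follows (Python; `step`, `payoff` and the node representation are placeholder names of ours, to be filled in with the dynamics (\[eq:63-1\]) and (\[eq:64-1\]) and the distribution of $A$). In the GPR-Tree method the leaf values are given by the payoff only in the last block of time steps, which ends at maturity; in the earlier blocks they are given by the previously fitted GPR function.

```python
def tree_value(state, k, m, step, payoff, disc):
    """Backward value of a non-recombining tree with m time steps.

    state  : current node, e.g. a pair (S, V) plus whatever the dynamics needs
    k      : current step index, 0 <= k <= m
    step   : function (state, k) -> list of (probability, child_state)
    payoff : exercise value of a node, e.g. an American Put payoff on S
    disc   : one-period discount factor exp(-r * dt)
    """
    if k == m:                       # leaf: terminal value only
        return payoff(state)
    # continuation value: discounted probability-weighted value of the children
    cont = disc * sum(prob * tree_value(child, k + 1, m, step, payoff, disc)
                      for prob, child in step(state, k))
    return max(payoff(state), cont)  # American feature: compare with exercise
```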
This approach allows us to compute the values $v_{N-m}^{Tree}\left(\mathbf{SV}_{1:\left(N-m\right)}^{p}\right)$ for $p=1,\dots,P$. We point out that, since the quadrinomial tree is not recombining, the number of nodes grows exponentially with the number of time steps $m$. Therefore, $m$ must be small. A similar problem arises with the tree approach proposed by Horvath et al. [@horvath2017functional]. In order to overcome this issue, we apply the GPR method to approximate the function $u_{N-m}$ defined by $u_{N-m}\left(\log\left(\mathbf{SV}_{1:\left(N-m\right)}^{p}\right)\right)=v_{N-m}^{Tree}\left(\mathbf{SV}_{1:\left(N-m\right)}^{p}\right)$. Specifically, consider a natural number $J$ and define $d_{n}=2\min\left(n,J+1\right)$. We train the GPR method considering the predictor set given by $$X=\left\{ \mathbf{x}^{p}=\log\left(\mathbf{SV}_{\max\left\{ 1,N-m-J\right\} :N-m}^{p}\right),p=1,\dots,P\right\} \subset\mathbb{R}^{d_{N-m}}$$ and the response $\mathbf{y}\in\mathbb{R}^{P}$ given by $$y^{p}=v_{N-m}^{Tree}\left(\mathbf{SV}_{1:\left(N-m\right)}^{p}\right).$$
We term $u_{N-m}^{GPR}$ the function obtained by the aforementioned regression, which depends on $\log\left(\mathbf{SV}_{\max\left\{ 1,N-m-J\right\} :N-m}^{p}\right)$. We stress that if we consider $J=N-m-1$ (or greater), then the function $u_{N-m}^{GPR}$ considers all the observed values of $S$ and $V$ as predictors. However, numerical tests show that it is enough to consider smaller values of $J$, which reduces the dimension $d_{N-m}$ of the regression and thus improves the numerical efficiency. A similar approach is taken by Bayer et al. [@bayer2018pricing].
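To make the regression step concrete, the following sketch (Python/NumPy, our own notation) builds the windowed predictors $\log\left(\mathbf{SV}_{\max\left\{ 1,n-J\right\} :n}^{p}\right)$ from the simulated paths and computes the GPR weights by solving the standard regularized kernel system. In the actual method the kernel hyperparameters are estimated by maximum likelihood and the kernel choice may differ; here a plain squared exponential kernel with fixed hyperparameters is used for brevity.

```python
import numpy as np

def se_kernel(A, B, sigma_f=1.0, sigma_l=1.0):
    """Plain squared exponential kernel between the rows of A and B."""
    D2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return sigma_f ** 2 * np.exp(-D2 / (2.0 * sigma_l ** 2))

def window_predictors(logSV, n, J):
    """Predictors at time t_n: the last min(n, J+1) pairs (log S, log V).

    logSV has shape (P, 2N), with columns ordered as
    log S_{t_1}, log V_{t_1}, ..., log S_{t_N}, log V_{t_N}.
    """
    first = max(1, n - J)                      # earliest time index kept
    return logSV[:, 2 * (first - 1): 2 * n]    # shape (P, d_n), d_n = 2*min(n, J+1)

def gpr_fit(X, y, kernel=se_kernel, noise=1e-8):
    """Weights omega such that the GPR posterior mean is kernel(., X) @ omega."""
    K = kernel(X, X)
    return np.linalg.solve(K + noise * np.eye(len(y)), y)

def gpr_predict(Xnew, X, omega, kernel=se_kernel):
    return kernel(Xnew, X) @ omega
```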
Once we have obtained $u_{N-m}^{GPR}$, we can approximate the option value $v\left(t_{N-2m},\mathbf{SV}_{1:\left(N-2m\right)}^{p}\right)$ at time $t_{N-2m}$ by means of the tree approach again. The only difference in this case is that the value attributed to the terminal nodes is not determined by the payoff function, but by the function $u_{N-m}^{GPR}$. We term $v_{N-2m}^{Tree}$ the function obtained after this backward tree step. If we train the GPR method considering the predictor set given by $$X=\left\{ \mathbf{x}^{p}=\log\left(\mathbf{SV}_{\max\left\{ 1,N-2m-J\right\} :N-2m}^{p}\right),p=1,\dots,P\right\} \subset\mathbb{R}^{d_{N-2m}}$$ and the response $\mathbf{y}\in\mathbb{R}^{P}$ given by $$y^{p}=v_{N-2m}^{Tree}\left(\mathbf{SV}_{1:\left(N-2m\right)}^{p}\right),$$ then we obtain the function $u_{N-2m}^{GPR}$, which can be employed to repeat the tree step and the GPR step, proceeding backward in time until the option price at the initial date is obtained.
The GPR-EI method in the rough Bergomi model
============================================
The GPR-EI method can be adapted to price American options in the rough Bergomi model. Just like the GPR-Tree approach, the GPR-EI method starts by simulating $P$ different paths $\left(S_{t_{1}}^{p},V_{t_{1}}^{p},\dots,S_{t_{N}}^{p},V_{t_{N}}^{p}\right)$ for the processes $S$ and $V$, and it proceeds by solving a backward induction problem, through the use of the GPR method and a closed-form integration formula.
As opposed to the multi-dimensional Black-Scholes model, in the rough Bergomi case the use of the squared exponential kernel is not suitable because it is an isotropic kernel, while the predictors employed are of a different nature (prices and volatilities at different times), so that changes in each predictor affect the price differently. Therefore, we employ the Automatic Relevance Determination (ARD) Squared Exponential Kernel, which has a separate length scale for each predictor and is given by $$k_{ASE}\left(\mathbf{a},\mathbf{b}\right)=\sigma_{f}^{2}\exp\left(-\sum_{i=1}^{d}\frac{\left(a_{i}-b_{i}\right)^{2}}{2\sigma_{i}^{2}}\right),\ \mathbf{a},\mathbf{b}\in\mathbb{R}^{d},$$ with $d$ the number of predictors considered. Specifically, the GPR-EI method relies on the following propositions.
\[lem:L1\]The GPR-EI approximation of the option value at time $t_{N-1}$ at $\mathbf{SV}_{\max\left\{ 1,N-1-J\right\} :\left(N-1\right)}^{p}$ is given by: $$v_{N-1}^{GPR-EI}\left(\mathbf{SV}_{\max\left\{ 1,N-1-J\right\} :\left(N-1\right)}^{p}\right)=\max\left(\Psi\left(S_{t_{N-1}}^{p}\right),\sum_{q=1}^{P}\frac{\mathbf{\omega}_{q}e^{-r\Delta t}\sigma_{f}^{2}\sigma_{l}}{\sqrt{\sigma_{N,p}^{2}+\sigma_{l}^{2}}}\exp\left(-\frac{\left(\log\left(S_{t_{N}}^{q}\right)-\mu_{N,p}\right)^{2}}{2\sigma_{N,p}^{2}+2\sigma_{l}^{2}}\right)\right)\label{eq:v_Nm1}$$ where $\sigma_{f}$, $\sigma_{l}$, and $\omega_{1},\dots,\omega_{P}$ are certain constants determined by the GPR approximation of the function $\log\left(S_{T}\right)\mapsto\Psi\left(S_{T}\right)$. Moreover, $$\mu_{N,p}=\log\left(S_{t_{N-1}}^{p}\right)+\left(r-\frac{1}{2}V_{t_{N-1}}^{p}\right)\Delta t$$ and $$\sigma_{N,p}^{2}=V_{t_{N-1}}^{p}\Delta t.$$
The proof of Proposition \[lem:L1\] is reported in the Appendix \[ApA2\]. Therefore, we can compute the value of the option at time $t_{N-1}$ for each simulated path by using (\[eq:v\_Nm1\]).
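Formula (\[eq:v\_Nm1\]) is immediate to vectorize over the paths; a possible NumPy transcription (our own variable names; the weights `omega` and the hyperparameters `sigma_f`, `sigma_l` are assumed to come from the GPR fit of the payoff) is the following.

```python
import numpy as np

def value_N_minus_1(S_prev, V_prev, logS_N, omega, sigma_f, sigma_l, r, dt, payoff):
    """GPR-EI value at t_{N-1} for all paths, see formula (eq:v_Nm1).

    S_prev, V_prev : arrays (P,) with S_{t_{N-1}}^p and V_{t_{N-1}}^p
    logS_N         : array (P,) with the predictors log S_{t_N}^q
    omega          : GPR weights, array (P,)
    """
    mu = np.log(S_prev) + (r - 0.5 * V_prev) * dt           # mu_{N,p}
    var = V_prev * dt + sigma_l ** 2                         # sigma_{N,p}^2 + sigma_l^2
    kern = np.exp(-(logS_N[None, :] - mu[:, None]) ** 2 / (2.0 * var[:, None]))
    cont = np.exp(-r * dt) * sigma_f ** 2 * sigma_l * (kern / np.sqrt(var)[:, None]) @ omega
    return np.maximum(payoff(S_prev), cont)

# e.g. payoff = lambda s: np.maximum(K - s, 0.0) for the American Put
```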
\[lem:L2\]Let $n\in\left\{ 0,\dots,N-2\right\} $ and suppose the option price function $v\left(t_{n+1},\cdot\right)$ at time $t_{n+1}$ to be known for all the simulated paths $\left\{ \mathbf{SV}_{1:N}^{p},p=1,\dots,P\right\} $. Define
$$\boldsymbol{\mu}_{n+1,p}=\left(\log\left(S_{t_{n}}^{p}\right)+\left(r-\frac{1}{2}V_{t_{n}}^{p}\right)\Delta t,\log\left(\xi_{0}\right)+\eta\Lambda_{2n+2}\underline{\mathbf{G}}^{p}-\frac{1}{2}\eta^{2}t_{n+1}^{2H}\right)^{\top},$$
where $\Lambda_{2n+2}$ is the $2n+2$-th row of the matrix $\Lambda$ and $\underline{\mathbf{G}}^{p}=\left(G_{1}^{p},\dots,G_{2n}^{p},0\dots,0\right)^{\top}$, and $$\Sigma_{n+1,p}=\left(\begin{array}{cc}
\Delta tV_{t_{n}}^{p} & \eta\sqrt{\Delta tV_{t_{n}}^{p}}\Lambda_{2n+2,2n+1}\\
\eta\sqrt{\Delta tV_{t_{n}}^{p}}\Lambda_{2n+2,2n+1} & \eta^{2}\left(\Lambda_{2n+2,2n+2}^{2}+\Lambda_{2n+2,2n+1}^{2}\right)
\end{array}\right),$$ where $\Lambda_{i,j}$ stands for the element of $\Lambda$ in position $i,j$. Moreover, consider a natural number $J\in\mathbb{N}$ and set $d_{n+1}=2\min\left\{ n+1,J+1\right\} .$ Then, the GPR-EI approximation of the option value at time $t_{n}$ at $\mathbf{SV}_{\max\left\{ 1,n-J\right\} :n}^{p}$ is given by $$v_{n}^{GPR-EI}\left(\mathbf{SV}_{\max\left\{ 1,n-J\right\} :n}^{p}\right)=\max\left(\Psi\left(S_{t_{n}}^{p}\right),e^{-r\Delta t}\sigma_{f}^{2}\sigma_{d_{n+1}-1}\sigma_{d_{n+1}}\sum_{q=1}^{P}\mathbf{\omega}_{q}h_{q}^{p}f_{q}^{p}\right)\label{eq:V_n}$$ where $\sigma_{d_{n+1}-1}$, $\sigma_{d_{n+1}}$, $\sigma_{f}$ and $\omega_{1},\dots,\omega_{P}$ are certain constants determined by the GPR approximation of the function $\log\left(\mathbf{SV}_{1:n+1}\right)\mapsto v\left(t_{n+1},\mathbf{SV}_{1:n+1}\right)$ considering $\left\{ \mathbf{SV}_{\max\left\{ 1,n+1-J\right\} :n+1}^{p},p=1,\dots,P\right\} $ as the predictor set. Moreover, $h_{q}^{p}$ and $f_{q}^{p}$ are two factors given by $$h_{q}^{p}=\begin{cases}
\exp\left(-\sum_{i=1}^{d_{n+1}-2}\frac{\left(z_{i}^{p}-z_{i}^{q}\right)^{2}}{2\sigma_{i}^{2}}\right) & \text{if }n>0\\
1 & \text{if }n=0
\end{cases}$$ and $$f_{q}^{p}=\frac{\exp\left(-\frac{1}{2}\left(\left(\begin{array}{c}
z_{d_{n+1}-1}^{q}\\
z_{d_{n+1}}^{q}
\end{array}\right)-\boldsymbol{\mu}_{n+1,p}\right)^{\top}\left(\Sigma_{n+1,p}+\left(\begin{array}{cc}
\sigma_{d_{n+1}-1}^{2} & 0\\
0 & \sigma_{d_{n+1}}^{2}
\end{array}\right)\right)^{-1}\left(\left(\begin{array}{c}
z_{d_{n+1}-1}^{q}\\
z_{d_{n+1}}^{q}
\end{array}\right)-\boldsymbol{\mu}_{n+1,p}\right)\right)}{\sqrt{\text{\ensuremath{\det}}\left(\Sigma_{n+1,p}+\left(\begin{array}{cc}
\sigma_{d_{n+1}-1}^{2} & 0\\
0 & \sigma_{d_{n+1}}^{2}
\end{array}\right)\right)}},$$ where $z_{i}^{p}=\log\left(S_{n+1-\left(i-1\right)/2}^{p}\right)$ if $i$ is even and $z_{i}^{p}=\log\left(V_{n+1-i/2}^{p}\right)$ if $i$ is odd, for $i=1,\dots,d_{n+1}$.
The proof of Proposition \[lem:L2\] is reported in the Appendix \[ApA3\]. Relations (\[eq:v\_Nm1\]) and (\[eq:V\_n\]) can be used to compute the option price at time $t=0$ by backward induction.
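The bivariate factor $f_{q}^{p}$ only requires $2\times2$ linear algebra for each path; a NumPy sketch (our own names; `mu` and `Sigma` stand for $\boldsymbol{\mu}_{n+1,p}$ and $\Sigma_{n+1,p}$, while `sig1`, `sig2` are the ARD length scales of the last two predictors) could read as follows. The factor $h_{q}^{p}$, on the other hand, is simply the ARD kernel restricted to the first $d_{n+1}-2$ predictors.

```python
import numpy as np

def f_factors(z_last, mu, Sigma, sig1, sig2):
    """Factors f_q^p of formula (eq:V_n) for one path p and all q = 1, ..., P.

    z_last : array (P, 2) with the pairs (z_{d_{n+1}-1}^q, z_{d_{n+1}}^q)
    mu     : array (2,)   with mu_{n+1,p}
    Sigma  : array (2, 2) with Sigma_{n+1,p}
    """
    C = Sigma + np.diag([sig1 ** 2, sig2 ** 2])
    Cinv = np.linalg.inv(C)
    diff = z_last - mu                                    # shape (P, 2)
    quad = np.einsum('qi,ij,qj->q', diff, Cinv, diff)     # quadratic forms
    return np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(C))
```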
Numerical results
=================
In this Section we present some numerical results about the effectiveness of the proposed algorithms. The first subsection is devoted to the numerical tests for the multi-dimensional Black-Scholes model, while the second is devoted to the rough Bergomi model. The algorithms have been implemented in MATLAB and computations have been performed on a server which employs a $2.40$ GHz Intel$^{{\scriptsize\textregistered}}$ Xeon$^{{\scriptsize\textregistered}}$ processor (Gold 6148, Skylake) and 20 GB of RAM.
Multi-dimensional Black-Scholes model
-------------------------------------
Following Goudenège et al. [@goudenege2019machine], we consider an Arithmetic basket Put, a Geometric basket Put and a Call on the Maximum of $d$ assets.
In particular, we use the following parameters: $T=1$, $S_{0}^{i}=100$, $K=100$, $r=0.05$, constant volatilities $\sigma_{i}=0.2$, constant correlations $\rho_{ij}=0.2$ and $N=10$ exercise dates. Moreover, we consider $P=250,\ 500$ or $1000$ points. Unlike the other input parameters, the dimension $d$ varies: we consider $d=2,\,5,\,10,\,20,\,40$ and $100$.
We present now the numerical results obtained with the GPR-Tree and the GPR-EI methods for the three payoff examples.
### Geometric basket Put option
The Geometric basket Put is a particularly interesting option since the problem of pricing it in the $d$-dimensional model can be reduced to pricing a one-dimensional American Put option in the Black-Scholes model, which can be done straightforwardly, for example using the CRR algorithm with $1000$ steps (see Cox et al. [@cox1979option]). Therefore, in this case, we have a reliable benchmark to test the proposed methods. Moreover, when $d$ is smaller than $10$ we can also compute the price by means of a multi-dimensional binomial tree (see Ekvall [@ekvall1996lattice]). In particular, the number of steps employed for the multi-dimensional binomial tree is equal to $200$ when $d=2$ and to $50$ when $d=5$. For values of $d$ larger than $5$, prices cannot be approximated via such a tree, because the memory required for the calculations would be too large. Furthermore, we also report the prices obtained with the GPR-MC method, employing $P=1000$ points and $M=10^{5}$ Monte Carlo simulations, for comparison purposes. As far as the GPR-Tree is concerned, we compute the prices only for values of $d$ smaller than $40$, since for higher values of $d$ the tree step becomes too time demanding. In fact, the computational cost of the continuation value in the tree step grows exponentially with the dimension $d$: for $d=40$ it would require the evaluation of the GPR approximation at $2^{40}\approx10^{12}$ points for every time step and for every point of $X$.
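For the reader's convenience, this benchmark is easy to reproduce: under the Black-Scholes dynamics the geometric average of the assets is itself lognormal, so the basket option can be priced as a one-dimensional American Put on an asset with an effective volatility and an effective dividend yield. The sketch below (Python; a standard construction of ours, not the authors' code) implements this reduction together with a CRR tree.

```python
import numpy as np

def crr_american_put(S0, K, r, q, sigma, T, steps):
    """CRR binomial tree for an American Put on an asset paying a dividend yield q."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)
    disc = np.exp(-r * dt)
    S = S0 * u ** np.arange(steps, -steps - 1, -2)   # terminal asset prices
    V = np.maximum(K - S, 0.0)                       # terminal payoffs
    for _ in range(steps):
        S = S[:-1] * d                               # prices one step earlier
        V = np.maximum(np.maximum(K - S, 0.0),       # exercise vs. continuation
                       disc * (p * V[:-1] + (1.0 - p) * V[1:]))
    return V[0]

def geometric_basket_put_benchmark(d, S0=100.0, K=100.0, r=0.05,
                                   sigma=0.2, rho=0.2, T=1.0, steps=1000):
    """Reduce the d-dimensional Geometric basket Put to a 1-d American Put."""
    sigma_hat2 = sigma ** 2 * (1.0 + (d - 1) * rho) / d   # variance of the geometric average
    q = 0.5 * (sigma ** 2 - sigma_hat2)                   # effective dividend yield
    return crr_american_put(S0, K, r, q, np.sqrt(sigma_hat2), T, steps)

# For instance, geometric_basket_put_benchmark(2) should be close to the
# benchmark value reported in Table [tab:GEO] for d = 2.
```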
Results are reported in Table \[tab:GEO\]. We observe that the two proposed methods provide accurate and stable results and the computational time is generally very small, except for the GPR-Tree method at $d=20$. Moreover, the computational time of the GPR-EI method increases only slightly with the size of the problem, which makes the method particularly effective when the dimension of the problem is high. This is because the computation of the expected value and the training of the GPR model are minimally affected by the dimension of the problem.
Figure \[fig:Comparison\] investigates the convergence of the GPR methods as the dimension $d$ changes. As we can see, the relative error is small for all the considered methods, but the computational time required by the GPR-Tree method and the GPR-EI method is generally smaller than that required by the GPR-MC method.
----- ----- ------------------------------------------- ------------------------------------------- ------------------------------------------- -- --------------------------------------- ---------------------------------------- ---------------------------------------- ----------- ------------ -- ------------ ------------
                  GPR-Tree                                                                                                                      GPR-EI                                                                                                 GPR-MC      Ekvall       Benchmark
$d$   $P$         $\phantom{1}250$                          $\phantom{1}500$                          $1000$                                    $\phantom{1}250$                       $\phantom{1}500$                       $1000$
2 [$\underset{\left(4\right)}{4.61}$ ]{} [$\underset{\left(7\right)}{4.61}$ ]{} [$\underset{\left(22\right)}{4.61}$ ]{} [$\underset{\left(4\right)}{4.58}$]{} [$\underset{\left(9\right)}{4.58}$]{} [$\underset{\left(26\right)}{4.57}$]{} [$4.57$]{} [$4.62$]{} [$4.62$]{}
5 [$\underset{\left(9\right)}{3.44}$ ]{} [$\underset{\left(15\right)}{3.43}$ ]{} [$\underset{\left(23\right)}{3.44}$ ]{} [$\underset{\left(4\right)}{3.40}$]{} [$\underset{\left(14\right)}{3.43}$]{} [$\underset{\left(27\right)}{3.41}$]{} [$3.41$]{} [$3.44$]{} [$3.45$]{}
10 [$\underset{\left(10\right)}{3.00}$ ]{} [$\underset{\left(33\right)}{2.96}$]{} [$\underset{\left(60\right)}{2.93}$ ]{} [$\underset{\left(4\right)}{2.85}$]{} [$\underset{\left(9\right)}{2.88}$]{} [$\underset{\left(30\right)}{2.93}$]{} [$2.90$]{} [$2.97$]{}
20 [$\underset{\left(4220\right)}{2.80}$ ]{} [$\underset{\left(14304\right)}{2.72}$]{} [$\underset{\left(49609\right)}{2.72}$]{} [$\underset{\left(4\right)}{2.63}$]{} [$\underset{\left(9\right)}{2.73}$]{} [$\underset{\left(29\right)}{2.63}$]{} [$2.70$]{} [$2.70$]{}
40 [$\underset{\left(4\right)}{2.45}$]{} [$\underset{\left(10\right)}{2.52}$]{} [$\underset{\left(38\right)}{2.53}$]{} [$2.57$]{} [$2.56$]{}
100 [$\underset{\left(5\right)}{2.27}$]{} [$\underset{\left(15\right)}{2.32}$]{} [$\underset{\left(45\right)}{2.39}$]{} [$2.40$]{} [$2.47$]{}
----- ----- ------------------------------------------- ------------------------------------------- ------------------------------------------- -- --------------------------------------- ---------------------------------------- ---------------------------------------- ----------- ------------ -- ------------ ------------
: \[tab:GEO\][Results for a Geometric basket Put option using the GPR-Tree method and the GPR-EI method. In the last columns, the prices obtained by using the GPR-MC method, the Ekvall multi-dimensional tree and the exact benchmark ($d$ is the dimension and $P$ is the number of points). Values in brackets are the computational times (in seconds).]{}
![\[fig:Comparison\][Comparison among the GPR methods as the dimension $d$ changes, doubling the number of points from $P=250$ to $P=8000$. As far as the GPR-MC method is concerned, $M=10^{4}$ Monte Carlo simulations are employed.]{}](GPR_m){width="100.00000%"}
### Arithmetic basket Put option
As opposed to the Geometric basket Put option, in this case we have no method to obtain a fully reliable benchmark. Therefore we mainly rely on the prices obtained by means of the GPR-MC method, employed with $P=1000$ points and $M=10^{5}$ Monte Carlo simulations. Moreover, for small values of $d$, a benchmark can be obtained by means of a multi-dimensional tree method (see Boyle et al. [@boyle1989numerical]), just as shown for the Geometric case. Results are reported in Table \[tab:ARI\]. Similarly to the Geometric basket Put, the prices obtained are reliable and they do not change much with respect to the number $P$ of points. As opposed to the GPR-Tree method, which cannot be applied for high values of $d$, the GPR-EI method requires a small computational time for all the considered values of $d$.
----- ----- ------------------------------------------ ------------------------------------------- ------------------------------------------- -- --------------------------------------- ---------------------------------------- ---------------------------------------- ----------------------- ------------ -- ------------ --
                  GPR-Tree                                                                                                                      GPR-EI                                                                                                 GPR-MC      Ekvall      $\phantom{Benchmark}$
$d$   $P$         $\phantom{1}250$                          $\phantom{1}500$                          $1000$                                    $\phantom{1}250$                       $\phantom{1}500$                       $1000$
2 [$\underset{\left(5\right)}{4.42}$]{} [$\underset{\left(9\right)}{4.42}$]{} [$\underset{\left(25\right)}{4.42}$]{} [$\underset{\left(4\right)}{4.38}$]{} [$\underset{\left(9\right)}{4.38}$]{} [$\underset{\left(28\right)}{4.37}$]{} [$4.37$]{} [$4.42$]{}
5 [$\underset{\left(5\right)}{3.15}$]{} [$\underset{\left(9\right)}{3.12}$]{} [$\underset{\left(24\right)}{3.13}$]{} [$\underset{\left(6\right)}{3.09}$]{} [$\underset{\left(9\right)}{3.12}$]{} [$\underset{\left(44\right)}{3.10}$]{} [$3.09$]{} [$3.15$]{}
10 [$\underset{\left(10\right)}{2.71}$]{} [$\underset{\left(21\right)}{2.64}$]{} [$\underset{\left(70\right)}{2.62}$]{} [$\underset{\left(5\right)}{2.49}$]{} [$\underset{\left(9\right)}{2.56}$]{} [$\underset{\left(38\right)}{2.60}$]{} [$2.58$]{}
20 [$\underset{\left(4259\right)}{2.37}$]{} [$\underset{\left(16343\right)}{2.35}$]{} [$\underset{\left(57399\right)}{2.40}$]{} [$\underset{\left(6\right)}{2.26}$]{} [$\underset{\left(14\right)}{2.31}$]{} [$\underset{\left(42\right)}{2.28}$]{} [$2.38$]{}
40 [$\underset{\left(4\right)}{2.18}$]{} [$\underset{\left(10\right)}{2.18}$]{} [$\underset{\left(31\right)}{2.16}$]{} [$2.17$]{}
100 [$\underset{\left(7\right)}{2.35}$]{} [$\underset{\left(13\right)}{2.01}$]{} [$\underset{\left(42\right)}{2.06}$]{} [$1.92$]{}
----- ----- ------------------------------------------ ------------------------------------------- ------------------------------------------- -- --------------------------------------- ---------------------------------------- ---------------------------------------- ----------------------- ------------ -- ------------ --
: \[tab:ARI\][Results for an Arithmetic basket Put option using the GPR-Tree method and the GPR-EI method. In the last columns, the prices obtained by using the GPR-MC method and the Ekvall multi-dimensional tree ($d$ is the dimension and $P$ is the number of points). Values in brackets are the computational times (in seconds).]{}
### Call on the Maximum
As for the Arithmetic basket Put, in this case we have no numerical method to obtain a fully reliable benchmark. However, for small values of $d$, we can approximate the price by means of a multi-dimensional tree method. Moreover, we also consider the price obtained with the GPR-MC method. Results, which are shown in Table \[tab:MAX\], have an accuracy comparable to the one obtained for the Arithmetic basket Put option.
----- ----- ------------------------------------------- -------------------------------------------- -------------------------------------------- -- ---------------------------------------- ----------------------------------------- ----------------------------------------- ----------------------- ------------- -- ------------- --
                  GPR-Tree                                                                                                                        GPR-EI                                                                                                 GPR-MC      Ekvall      $\phantom{Benchmark}$
$d$   $P$         $\phantom{1}250$                          $\phantom{1}500$                          $1000$                                      $\phantom{1}250$                       $\phantom{1}500$                       $1000$
2 [$\underset{\left(5\right)}{16.94}$ ]{} [$\underset{\left(8\right)}{16.94}$ ]{} [$\underset{\left(20\right)}{16.93}$ ]{} [$\underset{\left(4\right)}{16.75}$]{} [$\underset{\left(10\right)}{16.81}$]{} [$\underset{\left(28\right)}{16.82}$]{} [$16.86$]{} [$16.86$]{}
5 [$\underset{\left(5\right)}{27.14}$ ]{} [$\underset{\left(10\right)}{27.17}$]{} [$\underset{\left(26\right)}{27.19}$ ]{} [$\underset{\left(4\right)}{26.92}$]{} [$\underset{\left(9\right)}{27.15}$]{} [$\underset{\left(27\right)}{26.95}$]{} [$27.20$]{} [$27.20$]{}
10 [$\underset{\left(11\right)}{35.27}$ ]{} [$\underset{\left(21\right)}{34.97}$ ]{} [$\underset{\left(106\right)}{35.08}$ ]{} [$\underset{\left(4\right)}{35.66}$]{} [$\underset{\left(10\right)}{34.98}$]{} [$\underset{\left(29\right)}{34.84}$]{} [$35.17$]{}
20 [$\underset{\left(4126\right)}{43.26}$]{} [$\underset{\left(15025\right)}{43.21}$]{} [$\underset{\left(51090\right)}{43.00}$]{} [$\underset{\left(4\right)}{45.05}$]{} [$\underset{\left(11\right)}{42.74}$]{} [$\underset{\left(35\right)}{42.62}$]{} [$42.76$]{}
40 [$\underset{\left(5\right)}{51.79}$]{} [$\underset{\left(10\right)}{50.36}$]{} [$\underset{\left(41\right)}{49.53}$]{} [$50.70$]{}
100 [$\underset{\left(5\right)}{59.03}$]{} [$\underset{\left(13\right)}{60.72}$]{} [$\underset{\left(42\right)}{60.96}$]{} [$59.69$]{}
----- ----- ------------------------------------------- -------------------------------------------- -------------------------------------------- -- ---------------------------------------- ----------------------------------------- ----------------------------------------- ----------------------- ------------- -- ------------- --
: \[tab:MAX\][Results for a Call on the Maximum option using the GPR-Tree method and the GPR-EI method. In the last columns, the prices obtained by using the GPR-MC method and the Ekvall multi-dimensional tree ($d$ is the dimension and $P$ is the number of points). Values in brackets are the computational times (in seconds).]{}
Rough Bergomi model
-------------------
Following Bayer et al. [@bayer2018pricing], we consider an American Put option and we use the same parameters: $T=1$, $H=0.07$, $\rho=-0.90$, $\xi_{0}=0.09$, $\eta=1.9$, $S_{0}=100$, $r=0.05$ and strike $K=70,80,\dots,120,130$ or $140$. As far as the GPR-Tree is concerned, we employ $N=50$ or $N=100$ time steps with $m=2$, $P=500,1000,2000$ or $4000$ random paths, and $J=0,1,3,7$ or $15$ past values. As far as the GPR-EI is concerned, we employ $N=50$ or $N=100$ time steps, $P=1000,2000,4000$ or $8000$ random paths, and $J=0,1,3,7$ or $15$ past values. Similarly to what is observed by Bayer et al. [@bayer2018pricing], changing the value of $J$ does not significantly impact the price, which indicates that taking the non-Markovian nature of the processes into account in the formulation of the exercise strategies is not particularly relevant. Conversely, using a large number of predictors significantly increases the computational time. Numerical results are reported in Tables \[tab:RB-Tree\] and \[tab:RB-EI\], together with the results reported by Bayer et al. in [@bayer2018pricing]. Prices are very close to the benchmark, except for the case $K=120$: in this case both GPR methods yield a price close to $20.20$, while Bayer et al. obtain $20.00$. However, it is worth noting that the relative gap between these two results is less than $1\%$.
Bayer et al.
----------- -- ---------- --------- ------------------------------------------ ------------------------------------------- ------------------------------------------- ------------------------------------------- ----------- ------------------------------------------ ------------------------------------------- ------------------------------------------- -------------------------------------------- -- --
[$N$]{}
[$J$]{} [$P$]{} [$\phantom{1}500$]{} [$1000$]{} [$2000$]{} [$4000$]{} [$\phantom{1}500$]{} [$1000$]{} [$2000$]{} [$4000$]{}
[$70$]{} [$0$]{} [$\underset{\left(28\right)}{1.87}$]{} [$\underset{\left(97\right)}{1.88}$]{} [$\underset{\left(391\right)}{1.88}$]{} [$\underset{\left(876\right)}{1.86}$]{} [$\underset{\left(71\right)}{1.87}$]{} [$\underset{\left(236\right)}{1.86}$]{} [$\underset{\left(646\right)}{1.86}$]{} [$\underset{\left(1337\right)}{1.86}$]{}
[$1$]{} [$\underset{\left(44\right)}{1.86}$]{} [$\underset{\left(183\right)}{1.87}$]{} [$\underset{\left(607\right)}{1.88}$]{} [$\underset{\left(1672\right)}{1.87}$]{} [$\underset{\left(95\right)}{1.86}$]{} [$\underset{\left(310\right)}{1.87}$]{} [$\underset{\left(1222\right)}{1.87}$]{} [$\underset{\left(2265\right)}{1.87}$]{}
[$3$]{} [$\underset{\left(71\right)}{1.87}$]{} [$\underset{\left(296\right)}{1.86}$]{} [$\underset{\left(1084\right)}{1.87}$]{} [$\underset{\left(3742\right)}{1.87}$]{} [$\underset{\left(163\right)}{1.86}$]{} [$\underset{\left(594\right)}{1.87}$]{} [$\underset{\left(1962\right)}{1.88}$]{} [$\underset{\left(4222\right)}{1.88}$]{}
[$7$]{} [$\underset{\left(168\right)}{1.91}$]{} [$\underset{\left(563\right)}{1.87}$]{} [$\underset{\left(1930\right)}{1.86}$]{} [$\underset{\left(3501\right)}{1.87}$]{} [$\underset{\left(275\right)}{1.85}$]{} [$\underset{\left(1141\right)}{1.87}$]{} [$\underset{\left(4579\right)}{1.88}$]{} [$\underset{\left(7997\right)}{1.88}$]{}
[$15$]{} [$\underset{\left(248\right)}{1.94}$]{} [$\underset{\left(986\right)}{1.88}$]{} [$\underset{\left(4841\right)}{1.87}$]{} [$\underset{\left(7806\right)}{1.87}$]{} [$\underset{\left(541\right)}{1.87}$]{} [$\underset{\left(2171\right)}{1.88}$]{} [$\underset{\left(10169\right)}{1.87}$]{} [$\underset{\left(16466\right)}{1.86}$]{}
[$80$]{} [$0$]{} [$\underset{\left(31\right)}{3.18}$]{} [$\underset{\left(117\right)}{3.19}$]{} [$\underset{\left(376\right)}{3.20}$]{} [$\underset{\left(823\right)}{3.20}$]{} [$\underset{\left(85\right)}{3.17}$]{} [$\underset{\left(216\right)}{3.18}$]{} [$\underset{\left(603\right)}{3.19}$]{} [$\underset{\left(1368\right)}{3.19}$]{}
[$1$]{} [$\underset{\left(47\right)}{3.19}$]{} [$\underset{\left(152\right)}{3.19}$]{} [$\underset{\left(569\right)}{3.20}$]{} [$\underset{\left(1166\right)}{3.20}$]{} [$\underset{\left(119\right)}{3.18}$]{} [$\underset{\left(322\right)}{3.20}$]{} [$\underset{\left(1107\right)}{3.20}$]{} [$\underset{\left(2396\right)}{3.20}$]{}
[$3$]{} [$\underset{\left(93\right)}{3.19}$]{} [$\underset{\left(287\right)}{3.19}$]{} [$\underset{\left(1070\right)}{3.20}$]{} [$\underset{\left(2095\right)}{3.20}$]{} [$\underset{\left(167\right)}{3.17}$]{} [$\underset{\left(617\right)}{3.21}$]{} [$\underset{\left(1966\right)}{3.21}$]{} [$\underset{\left(4043\right)}{3.22}$]{}
[$7$]{} [$\underset{\left(136\right)}{3.21}$]{} [$\underset{\left(624\right)}{3.20}$]{} [$\underset{\left(2653\right)}{3.20}$]{} [$\underset{\left(3721\right)}{3.21}$]{} [$\underset{\left(301\right)}{3.19}$]{} [$\underset{\left(1134\right)}{3.21}$]{} [$\underset{\left(4179\right)}{3.21}$]{} [$\underset{\left(8031\right)}{3.23}$]{}
[$15$]{} [$\underset{\left(322\right)}{3.24}$]{} [$\underset{\left(1186\right)}{3.20}$]{} [$\underset{\left(7011\right)}{3.20}$]{} [$\underset{\left(7392\right)}{3.21}$]{} [$\underset{\left(633\right)}{3.18}$]{} [$\underset{\left(1940\right)}{3.20}$]{} [$\underset{\left(10317\right)}{3.20}$]{} [$\underset{\left(20584\right)}{3.23}$]{}
[$90$]{} [$0$]{} [$\underset{\left(28\right)}{5.24}$]{} [$\underset{\left(102\right)}{5.24}$]{} [$\underset{\left(359\right)}{5.25}$]{} [$\underset{\left(702\right)}{5.26}$]{} [$\underset{\left(82\right)}{5.25}$]{} [$\underset{\left(223\right)}{5.28}$]{} [$\underset{\left(707\right)}{5.26}$]{} [$\underset{\left(1504\right)}{5.28}$]{}
[$1$]{} [$\underset{\left(44\right)}{5.25}$]{} [$\underset{\left(163\right)}{5.25}$]{} [$\underset{\left(512\right)}{5.26}$]{} [$\underset{\left(1185\right)}{5.27}$]{} [$\underset{\left(109\right)}{5.27}$]{} [$\underset{\left(283\right)}{5.29}$]{} [$\underset{\left(1144\right)}{5.28}$]{} [$\underset{\left(3954\right)}{5.30}$]{}
[$3$]{} [$\underset{\left(94\right)}{5.27}$]{} [$\underset{\left(330\right)}{5.26}$]{} [$\underset{\left(1058\right)}{5.28}$]{} [$\underset{\left(1756\right)}{5.28}$]{} [$\underset{\left(177\right)}{5.30}$]{} [$\underset{\left(555\right)}{5.31}$]{} [$\underset{\left(1833\right)}{5.31}$]{} [$\underset{\left(4226\right)}{5.32}$]{}
[$7$]{} [$\underset{\left(150\right)}{5.28}$]{} [$\underset{\left(561\right)}{5.30}$]{} [$\underset{\left(2253\right)}{5.29}$]{} [$\underset{\left(3595\right)}{5.29}$]{} [$\underset{\left(315\right)}{5.30}$]{} [$\underset{\left(1089\right)}{5.33}$]{} [$\underset{\left(4319\right)}{5.33}$]{} [$\underset{\left(7073\right)}{5.33}$]{}
[$15$]{} [$\underset{\left(269\right)}{5.28}$]{} [$\underset{\left(1000\right)}{5.27}$]{} [$\underset{\left(4348\right)}{5.28}$]{} [$\underset{\left(7411\right)}{5.29}$]{} [$\underset{\left(533\right)}{5.28}$]{} [$\underset{\left(2127\right)}{5.33}$]{} [$\underset{\left(17098\right)}{5.34}$]{} [$\underset{\left(16804\right)}{5.34}$]{}
[$100$]{} [$0$]{} [$\underset{\left(29\right)}{8.36}$]{} [$\underset{\left(103\right)}{8.37}$]{} [$\underset{\left(329\right)}{8.37}$]{} [$\underset{\left(748\right)}{8.39}$]{} [$\underset{\left(70\right)}{8.42}$]{} [$\underset{\left(190\right)}{8.45}$]{} [$\underset{\left(584\right)}{8.42}$]{} [$\underset{\left(1313\right)}{8.46}$]{}
[$1$]{} [$\underset{\left(47\right)}{8.39}$]{} [$\underset{\left(177\right)}{8.40}$]{} [$\underset{\left(510\right)}{8.39}$]{} [$\underset{\left(1145\right)}{8.42}$]{} [$\underset{\left(89\right)}{8.43}$]{} [$\underset{\left(325\right)}{8.46}$]{} [$\underset{\left(969\right)}{8.44}$]{} [$\underset{\left(2058\right)}{8.48}$]{}
[$3$]{} [$\underset{\left(89\right)}{8.42}$]{} [$\underset{\left(302\right)}{8.42}$]{} [$\underset{\left(986\right)}{8.43}$]{} [$\underset{\left(1844\right)}{8.45}$]{} [$\underset{\left(167\right)}{8.47}$]{} [$\underset{\left(551\right)}{8.50}$]{} [$\underset{\left(2322\right)}{8.49}$]{} [$\underset{\left(4439\right)}{8.51}$]{}
[$7$]{} [$\underset{\left(173\right)}{8.43}$]{} [$\underset{\left(552\right)}{8.43}$]{} [$\underset{\left(2083\right)}{8.44}$]{} [$\underset{\left(3926\right)}{8.45}$]{} [$\underset{\left(322\right)}{8.47}$]{} [$\underset{\left(1117\right)}{8.51}$]{} [$\underset{\left(4120\right)}{8.49}$]{} [$\underset{\left(8324\right)}{8.53}$]{}
[$15$]{} [$\underset{\left(340\right)}{8.44}$]{} [$\underset{\left(1134\right)}{8.44}$]{} [$\underset{\left(4637\right)}{8.45}$]{} [$\underset{\left(7013\right)}{8.46}$]{} [$\underset{\left(684\right)}{8.51}$]{} [$\underset{\left(2229\right)}{8.53}$]{} [$\underset{\left(8403\right)}{8.48}$]{} [$\underset{\left(14183\right)}{8.53}$]{}
[$110$]{} [$0$]{} [$\underset{\left(32\right)}{13.04}$]{} [$\underset{\left(90\right)}{13.06}$]{} [$\underset{\left(334\right)}{13.08}$]{} [$\underset{\left(695\right)}{13.12}$]{} [$\underset{\left(77\right)}{13.15}$]{} [$\underset{\left(237\right)}{13.18}$]{} [$\underset{\left(572\right)}{13.16}$]{} [$\underset{\left(1364\right)}{13.20}$]{}
[$1$]{} [$\underset{\left(67\right)}{13.09}$]{} [$\underset{\left(180\right)}{13.09}$]{} [$\underset{\left(544\right)}{13.12}$]{} [$\underset{\left(1135\right)}{13.15}$]{} [$\underset{\left(95\right)}{13.17}$]{} [$\underset{\left(296\right)}{13.20}$]{} [$\underset{\left(1192\right)}{13.19}$]{} [$\underset{\left(2207\right)}{13.22}$]{}
[$3$]{} [$\underset{\left(78\right)}{13.11}$]{} [$\underset{\left(282\right)}{13.14}$]{} [$\underset{\left(1119\right)}{13.17}$]{} [$\underset{\left(1896\right)}{13.18}$]{} [$\underset{\left(158\right)}{13.18}$]{} [$\underset{\left(575\right)}{13.23}$]{} [$\underset{\left(1917\right)}{13.23}$]{} [$\underset{\left(4028\right)}{13.26}$]{}
[$7$]{} [$\underset{\left(157\right)}{13.11}$]{} [$\underset{\left(520\right)}{13.14}$]{} [$\underset{\left(2083\right)}{13.19}$]{} [$\underset{\left(3659\right)}{13.19}$]{} [$\underset{\left(318\right)}{13.20}$]{} [$\underset{\left(1058\right)}{13.22}$]{} [$\underset{\left(4508\right)}{13.24}$]{} [$\underset{\left(7440\right)}{13.29}$]{}
[$15$]{} [$\underset{\left(254\right)}{13.09}$]{} [$\underset{\left(1007\right)}{13.15}$]{} [$\underset{\left(4449\right)}{13.17}$]{} [$\underset{\left(7668\right)}{13.20}$]{} [$\underset{\left(625\right)}{13.22}$]{} [$\underset{\left(2582\right)}{13.26}$]{} [$\underset{\left(9055\right)}{13.27}$]{} [$\underset{\left(13191\right)}{13.24}$]{}
[$120$]{} [$0$]{} [$\underset{\left(37\right)}{20.19}$]{} [$\underset{\left(121\right)}{20.19}$]{} [$\underset{\left(304\right)}{20.20}$]{} [$\underset{\left(692\right)}{20.22}$]{} [$\underset{\left(79\right)}{20.21}$]{} [$\underset{\left(206\right)}{20.24}$]{} [$\underset{\left(662\right)}{20.22}$]{} [$\underset{\left(1484\right)}{20.23}$]{}
[$1$]{} [$\underset{\left(49\right)}{20.20}$]{} [$\underset{\left(180\right)}{20.21}$]{} [$\underset{\left(494\right)}{20.21}$]{} [$\underset{\left(1047\right)}{20.25}$]{} [$\underset{\left(95\right)}{20.21}$]{} [$\underset{\left(283\right)}{20.24}$]{} [$\underset{\left(959\right)}{20.24}$]{} [$\underset{\left(2029\right)}{20.26}$]{}
[$3$]{} [$\underset{\left(98\right)}{20.19}$]{} [$\underset{\left(268\right)}{20.19}$]{} [$\underset{\left(1120\right)}{20.25}$]{} [$\underset{\left(2077\right)}{20.26}$]{} [$\underset{\left(156\right)}{20.23}$]{} [$\underset{\left(588\right)}{20.26}$]{} [$\underset{\left(2075\right)}{20.26}$]{} [$\underset{\left(3705\right)}{20.24}$]{}
[$7$]{} [$\underset{\left(152\right)}{20.20}$]{} [$\underset{\left(511\right)}{20.18}$]{} [$\underset{\left(1935\right)}{20.17}$]{} [$\underset{\left(3592\right)}{20.26}$]{} [$\underset{\left(363\right)}{20.25}$]{} [$\underset{\left(1139\right)}{20.25}$]{} [$\underset{\left(4395\right)}{20.23}$]{} [$\underset{\left(6293\right)}{20.28}$]{}
[$15$]{} [$\underset{\left(278\right)}{20.19}$]{} [$\underset{\left(1036\right)}{20.17}$]{} [$\underset{\left(4161\right)}{20.22}$]{} [$\underset{\left(7844\right)}{20.24}$]{} [$\underset{\left(624\right)}{20.18}$]{} [$\underset{\left(1951\right)}{20.22}$]{} [$\underset{\left(8057\right)}{20.19}$]{} [$\underset{\left(15643\right)}{20.28}$]{}
[$130$]{} [30.00]{}
[$140$]{} [40.00]{}
: \[tab:RB-Tree\][Results for an American Put option in the rough Bergomi model using the GPR-Tree method. $N$ represents the number of time steps (one group of four columns for each considered value of $N$), $P$ the number of simulated paths and $J$ the number of past values employed in the regression. Values in brackets are the computational times (in seconds).]{}
Bayer et al.
----------- ---------- --------- ------------------------------------------- ------------------------------------------- ------------------------------------------- ------------------------------------------- ----------- ------------------------------------------- ------------------------------------------- -------------------------------------------- ------------------------------------------- -- --
[$N$]{}
[$J$]{} [$P$]{} [$1000$]{} [$2000$]{} [$4000$]{} [$8000$]{} [$1000$]{} [$2000$]{} [$4000$]{} [$8000$]{}
[$70$]{} [$0$]{} [$\underset{\left(101\right)}{1.82}$]{} [$\underset{\left(253\right)}{1.84}$]{} [$\underset{\left(351\right)}{1.85}$]{} [$\underset{\left(533\right)}{1.85}$]{} [$\underset{\left(162\right)}{1.86}$]{} [$\underset{\left(579\right)}{1.88}$]{} [$\underset{\left(689\right)}{1.87}$]{} [$\underset{\left(1011\right)}{1.88}$]{}
[$1$]{} [$\underset{\left(96\right)}{1.82}$]{} [$\underset{\left(525\right)}{1.85}$]{} [$\underset{\left(636\right)}{1.85}$]{} [$\underset{\left(884\right)}{1.85}$]{} [$\underset{\left(184\right)}{1.86}$]{} [$\underset{\left(816\right)}{1.88}$]{} [$\underset{\left(913\right)}{1.87}$]{} [$\underset{\left(1551\right)}{1.88}$]{}
[$3$]{} [$\underset{\left(263\right)}{1.83}$]{} [$\underset{\left(1305\right)}{1.85}$]{} [$\underset{\left(1118\right)}{1.83}$]{} [$\underset{\left(1630\right)}{1.84}$]{} [$\underset{\left(369\right)}{1.86}$]{} [$\underset{\left(2389\right)}{1.88}$]{} [$\underset{\left(2831\right)}{1.88}$]{} [$\underset{\left(2994\right)}{1.89}$]{}
[$7$]{} [$\underset{\left(497\right)}{1.81}$]{} [$\underset{\left(2706\right)}{1.85}$]{} [$\underset{\left(3014\right)}{1.85}$]{} [$\underset{\left(3447\right)}{1.85}$]{} [$\underset{\left(657\right)}{1.80}$]{} [$\underset{\left(4848\right)}{1.87}$]{} [$\underset{\left(5576\right)}{1.88}$]{} [$\underset{\left(4132\right)}{1.86}$]{}
[$15$]{} [$\underset{\left(820\right)}{1.78}$]{} [$\underset{\left(4939\right)}{1.84}$]{} [$\underset{\left(5802\right)}{1.83}$]{} [$\underset{\left(6006\right)}{1.83}$]{} [$\underset{\left(1932\right)}{1.79}$]{} [$\underset{\left(11876\right)}{1.83}$]{} [$\underset{\left(14703\right)}{1.85}$]{} [$\underset{\left(5870\right)}{1.88}$]{}
[$80$]{} [$0$]{} [$\underset{\left(86\right)}{3.14}$]{} [$\underset{\left(271\right)}{3.16}$]{} [$\underset{\left(348\right)}{3.18}$]{} [$\underset{\left(558\right)}{3.17}$]{} [$\underset{\left(162\right)}{3.22}$]{} [$\underset{\left(549\right)}{3.24}$]{} [$\underset{\left(602\right)}{3.21}$]{} [$\underset{\left(1065\right)}{3.22}$]{}
[$1$]{} [$\underset{\left(127\right)}{3.14}$]{} [$\underset{\left(409\right)}{3.16}$]{} [$\underset{\left(601\right)}{3.19}$]{} [$\underset{\left(865\right)}{3.18}$]{} [$\underset{\left(212\right)}{3.23}$]{} [$\underset{\left(984\right)}{3.24}$]{} [$\underset{\left(847\right)}{3.21}$]{} [$\underset{\left(1285\right)}{3.22}$]{}
[$3$]{} [$\underset{\left(160\right)}{3.14}$]{} [$\underset{\left(1334\right)}{3.18}$]{} [$\underset{\left(1190\right)}{3.19}$]{} [$\underset{\left(1476\right)}{3.19}$]{} [$\underset{\left(357\right)}{3.22}$]{} [$\underset{\left(1411\right)}{3.24}$]{} [$\underset{\left(2739\right)}{3.23}$]{} [$\underset{\left(2387\right)}{3.21}$]{}
[$7$]{} [$\underset{\left(453\right)}{3.15}$]{} [$\underset{\left(3263\right)}{3.18}$]{} [$\underset{\left(3197\right)}{3.19}$]{} [$\underset{\left(3252\right)}{3.18}$]{} [$\underset{\left(631\right)}{3.22}$]{} [$\underset{\left(5813\right)}{3.24}$]{} [$\underset{\left(5327\right)}{3.23}$]{} [$\underset{\left(5035\right)}{3.25}$]{}
[$15$]{} [$\underset{\left(947\right)}{3.12}$]{} [$\underset{\left(7107\right)}{3.16}$]{} [$\underset{\left(5650\right)}{3.19}$]{} [$\underset{\left(7575\right)}{3.16}$]{} [$\underset{\left(2103\right)}{3.17}$]{} [$\underset{\left(17466\right)}{3.12}$]{} [$\underset{\left(15258\right)}{3.23}$]{} [$\underset{\left(5974\right)}{3.22}$]{}
[$90$]{} [$0$]{} [$\underset{\left(77\right)}{5.19}$]{} [$\underset{\left(271\right)}{5.22}$]{} [$\underset{\left(353\right)}{5.24}$]{} [$\underset{\left(517\right)}{5.24}$]{} [$\underset{\left(166\right)}{5.29}$]{} [$\underset{\left(470\right)}{5.30}$]{} [$\underset{\left(608\right)}{5.28}$]{} [$\underset{\left(993\right)}{5.29}$]{}
[$1$]{} [$\underset{\left(89\right)}{5.19}$]{} [$\underset{\left(416\right)}{5.22}$]{} [$\underset{\left(455\right)}{5.24}$]{} [$\underset{\left(748\right)}{5.25}$]{} [$\underset{\left(223\right)}{5.31}$]{} [$\underset{\left(887\right)}{5.32}$]{} [$\underset{\left(1146\right)}{5.30}$]{} [$\underset{\left(1266\right)}{5.29}$]{}
[$3$]{} [$\underset{\left(239\right)}{5.22}$]{} [$\underset{\left(1036\right)}{5.26}$]{} [$\underset{\left(1259\right)}{5.27}$]{} [$\underset{\left(1230\right)}{5.24}$]{} [$\underset{\left(493\right)}{5.33}$]{} [$\underset{\left(2624\right)}{5.34}$]{} [$\underset{\left(1427\right)}{5.28}$]{} [$\underset{\left(2387\right)}{5.33}$]{}
[$7$]{} [$\underset{\left(307\right)}{5.19}$]{} [$\underset{\left(2490\right)}{5.23}$]{} [$\underset{\left(2348\right)}{5.26}$]{} [$\underset{\left(2534\right)}{5.25}$]{} [$\underset{\left(1584\right)}{5.32}$]{} [$\underset{\left(3909\right)}{5.30}$]{} [$\underset{\left(4560\right)}{5.30}$]{} [$\underset{\left(5803\right)}{5.34}$]{}
[$15$]{} [$\underset{\left(1189\right)}{5.23}$]{} [$\underset{\left(5729\right)}{5.25}$]{} [$\underset{\left(6236\right)}{5.26}$]{} [$\underset{\left(6503\right)}{5.27}$]{} [$\underset{\left(2120\right)}{5.28}$]{} [$\underset{\left(9220\right)}{5.28}$]{} [$\underset{\left(9943\right)}{5.28}$]{} [$\underset{\left(6216\right)}{5.29}$]{}
[$100$]{} [$0$]{} [$\underset{\left(81\right)}{8.30}$]{} [$\underset{\left(260\right)}{8.33}$]{} [$\underset{\left(472\right)}{8.36}$]{} [$\underset{\left(566\right)}{8.38}$]{} [$\underset{\left(189\right)}{8.44}$]{} [$\underset{\left(466\right)}{8.46}$]{} [$\underset{\left(625\right)}{8.45}$]{} [$\underset{\left(1099\right)}{8.45}$]{}
[$1$]{} [$\underset{\left(93\right)}{8.30}$]{} [$\underset{\left(402\right)}{8.33}$]{} [$\underset{\left(413\right)}{8.36}$]{} [$\underset{\left(732\right)}{8.38}$]{} [$\underset{\left(191\right)}{8.44}$]{} [$\underset{\left(742\right)}{8.46}$]{} [$\underset{\left(1189\right)}{8.48}$]{} [$\underset{\left(1362\right)}{8.46}$]{}
[$3$]{} [$\underset{\left(250\right)}{8.37}$]{} [$\underset{\left(851\right)}{8.35}$]{} [$\underset{\left(1412\right)}{8.43}$]{} [$\underset{\left(1028\right)}{8.38}$]{} [$\underset{\left(362\right)}{8.44}$]{} [$\underset{\left(1256\right)}{8.46}$]{} [$\underset{\left(2344\right)}{8.51}$]{} [$\underset{\left(1886\right)}{8.45}$]{}
[$7$]{} [$\underset{\left(476\right)}{8.39}$]{} [$\underset{\left(2957\right)}{8.39}$]{} [$\underset{\left(3366\right)}{8.44}$]{} [$\underset{\left(3556\right)}{8.42}$]{} [$\underset{\left(670\right)}{8.44}$]{} [$\underset{\left(3808\right)}{8.47}$]{} [$\underset{\left(4867\right)}{8.53}$]{} [$\underset{\left(5165\right)}{8.52}$]{}
[$15$]{} [$\underset{\left(573\right)}{8.30}$]{} [$\underset{\left(3808\right)}{8.34}$]{} [$\underset{\left(6466\right)}{8.42}$]{} [$\underset{\left(10222\right)}{8.45}$]{} [$\underset{\left(1361\right)}{8.44}$]{} [$\underset{\left(9213\right)}{8.46}$]{} [$\underset{\left(12488\right)}{8.49}$]{} [$\underset{\left(11531\right)}{8.51}$]{}
[$110$]{} [$0$]{} [$\underset{\left(84\right)}{13.05}$]{} [$\underset{\left(229\right)}{13.07}$]{} [$\underset{\left(325\right)}{13.10}$]{} [$\underset{\left(519\right)}{13.10}$]{} [$\underset{\left(216\right)}{13.20}$]{} [$\underset{\left(486\right)}{13.18}$]{} [$\underset{\left(646\right)}{13.17}$]{} [$\underset{\left(1048\right)}{13.17}$]{}
[$1$]{} [$\underset{\left(190\right)}{13.08}$]{} [$\underset{\left(444\right)}{13.09}$]{} [$\underset{\left(476\right)}{13.12}$]{} [$\underset{\left(796\right)}{13.14}$]{} [$\underset{\left(182\right)}{13.20}$]{} [$\underset{\left(737\right)}{13.18}$]{} [$\underset{\left(857\right)}{13.17}$]{} [$\underset{\left(1770\right)}{13.17}$]{}
[$3$]{} [$\underset{\left(180\right)}{13.06}$]{} [$\underset{\left(728\right)}{13.08}$]{} [$\underset{\left(1162\right)}{13.17}$]{} [$\underset{\left(1111\right)}{13.10}$]{} [$\underset{\left(454\right)}{13.24}$]{} [$\underset{\left(1635\right)}{13.20}$]{} [$\underset{\left(2035\right)}{13.21}$]{} [$\underset{\left(2751\right)}{13.27}$]{}
[$7$]{} [$\underset{\left(360\right)}{13.05}$]{} [$\underset{\left(4208\right)}{13.16}$]{} [$\underset{\left(2252\right)}{13.13}$]{} [$\underset{\left(2111\right)}{13.13}$]{} [$\underset{\left(772\right)}{13.20}$]{} [$\underset{\left(4496\right)}{13.25}$]{} [$\underset{\left(4532\right)}{13.24}$]{} [$\underset{\left(4336\right)}{13.21}$]{}
[$15$]{} [$\underset{\left(812\right)}{13.05}$]{} [$\underset{\left(5221\right)}{13.09}$]{} [$\underset{\left(6290\right)}{13.19}$]{} [$\underset{\left(5257\right)}{13.12}$]{} [$\underset{\left(2118\right)}{13.27}$]{} [$\underset{\left(9895\right)}{13.21}$]{} [$\underset{\left(13941\right)}{13.28}$]{} [$\underset{\left(6948\right)}{13.22}$]{}
[$120$]{} [$0$]{} [$\underset{\left(86\right)}{20.19}$]{} [$\underset{\left(281\right)}{20.20}$]{} [$\underset{\left(307\right)}{20.21}$]{} [$\underset{\left(704\right)}{20.21}$]{} [$\underset{\left(174\right)}{20.24}$]{} [$\underset{\left(535\right)}{20.21}$]{} [$\underset{\left(620\right)}{20.21}$]{} [$\underset{\left(1087\right)}{20.21}$]{}
[$1$]{} [$\underset{\left(93\right)}{20.19}$]{} [$\underset{\left(372\right)}{20.20}$]{} [$\underset{\left(454\right)}{20.21}$]{} [$\underset{\left(736\right)}{20.21}$]{} [$\underset{\left(200\right)}{20.24}$]{} [$\underset{\left(776\right)}{20.21}$]{} [$\underset{\left(1025\right)}{20.22}$]{} [$\underset{\left(1311\right)}{20.21}$]{}
[$3$]{} [$\underset{\left(180\right)}{20.19}$]{} [$\underset{\left(675\right)}{20.20}$]{} [$\underset{\left(1002\right)}{20.20}$]{} [$\underset{\left(1286\right)}{20.19}$]{} [$\underset{\left(468\right)}{20.25}$]{} [$\underset{\left(1825\right)}{20.22}$]{} [$\underset{\left(1418\right)}{20.21}$]{} [$\underset{\left(1971\right)}{20.21}$]{}
[$7$]{} [$\underset{\left(323\right)}{20.19}$]{} [$\underset{\left(2411\right)}{20.22}$]{} [$\underset{\left(2307\right)}{20.21}$]{} [$\underset{\left(3043\right)}{20.22}$]{} [$\underset{\left(696\right)}{20.24}$]{} [$\underset{\left(5008\right)}{20.20}$]{} [$\underset{\left(2715\right)}{20.21}$]{} [$\underset{\left(4496\right)}{20.19}$]{}
[$15$]{} [$\underset{\left(1227\right)}{20.16}$]{} [$\underset{\left(5759\right)}{20.20}$]{} [$\underset{\left(3662\right)}{20.19}$]{} [$\underset{\left(7173\right)}{20.21}$]{} [$\underset{\left(1300\right)}{20.24}$]{} [$\underset{\left(9185\right)}{20.22}$]{} [$\underset{\left(5834\right)}{20.20}$]{} [$\underset{\left(7580\right)}{20.21}$]{}
[$130$]{} [30.00]{}
[$140$]{} [40.00]{}
: \[tab:RB-EI\][Results for an American Put option in the rough Bergomi model using the GPR-EI method. $N$ represents the number of time steps (one group of four columns for each considered value of $N$), $P$ the number of simulated paths and $J$ the number of past values employed in the regression. Values in brackets are the computational times (in seconds).]{}
Conclusions
===========
In this paper we have presented two numerical methods to compute the price of American options on a basket of underlyings following the Black-Scholes dynamics. These two methods are based on the GPR-Monte Carlo method and improve on its results in terms of accuracy and computational time. The GPR-Tree method can be applied for dimensions up to $d=20$ and it proves to be very efficient when $d\leq10$. The GPR-Exact Integration method proves to be particularly flexible and stands out for its small computational cost, which allows one to obtain excellent estimates in a very short time. The two methods also turn out to be an effective tool to address non-Markovian problems such as the pricing of American options in the rough Bergomi model. These two methods are thus a step forward in overcoming the curse of dimensionality.
\[ApA0\]Proof of Proposition \[prop:0\]
=======================================
Let $n\in\left\{ 0,\dots,N-1\right\} $ and suppose the function $u\left(t_{n+1},\cdot\right)$ at time $t_{n+1}$ to be known at $Z$. Let us define the quantity $$\hat{\mathbf{x}}^{p}=\exp\left(\mathbf{z}^{p}+\left(r-\frac{1}{2}\boldsymbol{\sigma}^{2}\right)t_{n}\right)$$ for $p=1,\dots,P$. The function $u\left(t_{n},\cdot\right)$ at time $t_{n}$ at $\mathbf{z}^{p}$ follows $$\begin{aligned}
u\left(t_{n},\mathbf{z}^{p}\right) & =v\left(t_{n},\hat{\mathbf{x}}^{p}\right)\\
& =\max\left(\Psi\left(\hat{\mathbf{x}}^{p}\right),C\left(t_{n},\hat{\mathbf{x}}^{p}\right)\right),\end{aligned}$$ where $$C\left(t_{n},\hat{\mathbf{x}}^{p}\right)=\mathbb{E}_{t_{n},\hat{\mathbf{x}}^{p}}\left[e^{-r\Delta t}v\left(t_{n+1},\mathbf{S}_{t_{n+1}}\right)\right]$$ We can also write $$\begin{aligned}
C\left(t_{n},\hat{\mathbf{x}}^{p}\right) & =\mathbb{E}_{t_{n},\hat{\mathbf{x}}^{p}}\left[e^{-r\Delta t}u\left(t_{n+1},\log\left(\mathbf{S}_{t_{n+1}}\right)-\left(r-\frac{1}{2}\boldsymbol{\sigma}^{2}\right)t_{n+1}\right)\right]\\
& =\mathbb{E}_{t_{n},\hat{\mathbf{x}}^{p}}\left[e^{-r\Delta t}u\left(t_{n+1},\mathbf{Z}_{t_{n+1}}\right)\right]\label{eq:A6a}\end{aligned}$$ where $\mathbf{Z}_{t_{n+1}}$ is the random variable defined as $$\mathbf{Z}_{t_{n+1}}=\log\left(\mathbf{S}_{t_{n+1}}\right)-\left(r-\frac{1}{2}\boldsymbol{\sigma}^{2}\right)t_{n+1}.$$ Let us define $\Pi=\left(\Pi_{i,j}\right)$ as the $d\times d$ covariance matrix of the log-increments, that is $\Pi_{i,j}=\rho_{i,j}\sigma_{i}\sigma_{j}\Delta t$. Moreover, let $\Lambda$ be a square root of $\Pi$ and $\mathbf{G}$ a vector that follows a standard Gaussian law. Then, we observe that $\mathbf{Z}_{t_{n+1}}$ has the following conditional law $$\mathbf{Z}_{t_{n+1}}\left|\mathbf{S}_{t_{n}}=\hat{\mathbf{x}}^{p}\right.\sim\mathcal{N}\left(\mathbf{z}^{p},\Pi\right).\label{eq:A6}$$ In fact, simple algebra leads to $$\mathbf{Z}_{t_{n+1}}=\mathbf{z}^{p}+\Lambda\mathbf{G}.$$ Moreover, relation (\[eq:A6\]) can also be stated as $$\mathbf{Z}_{t_{n+1}}\left|\left(\log\left(\mathbf{S}_{t_{n}}\right)-\left(r-\frac{1}{2}\boldsymbol{\sigma}^{2}\right)t_{n}=\mathbf{z}^{p}\right)\right.\sim\mathcal{N}\left(\mathbf{z}^{p},\Pi\right).$$ Let $f_{\mathbf{z}^{p}}\left(\mathbf{z}\right)$ denote the density function of $\mathbf{Z}_{t_{n+1}}$ given $\log\left(\mathbf{S}_{t_{n}}\right)-\left(r-\frac{1}{2}\boldsymbol{\sigma}^{2}\right)t_{n}=\mathbf{z}^{p}$. Specifically, $$f_{\mathbf{z}^{p}}\left(\mathbf{z}\right)=\frac{1}{\left(2\pi\right)^{\frac{d}{2}}\sqrt{\det\left(\Pi\right)}}\exp\left(-\frac{1}{2}\left(\mathbf{z}-\mathbf{z}^{p}\right)^{\top}\Pi^{-1}\left(\mathbf{z}-\mathbf{z}^{p}\right)\right).$$ Then, according to (\[eq:A6a\]), we can write $$C\left(t_{n},\hat{\mathbf{x}}^{p}\right)=e^{-r\Delta t}\int_{\mathbb{R}^{d}}f_{\mathbf{z}^{p}}\left(\mathbf{z}\right)u\left(t_{n+1},\mathbf{z}\right)d\mathbf{z}.$$
Now, let us consider GPR approximation of the function $u\left(t_{n+1},\cdot\right)$, obtained by assuming $Z$ as the predictor set and by employing the Squared Exponential Kernel $k_{SE}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}$, which is given by $$k_{SE}\left(\mathbf{a},\mathbf{b}\right)=\sigma_{f}^{2}\exp\left(-\frac{\left(\mathbf{a}-\mathbf{b}\right)^{\top}I_{d}\left(\mathbf{a}-\mathbf{b}\right)}{2\sigma_{l}^{2}}\right),\ \mathbf{a},\mathbf{b}\in\mathbb{R}^{d}.\label{eq:A11}$$ In particular, with reference to (\[eq:A11\]), the additional parameters $\sigma_{l}$ and $\sigma_{f}$ are called hyperparameters and are obtained by means of a maximum likelihood estimation. So let $$u_{n+1}^{GPR}\left(\mathbf{z}\right)=\sum_{q=1}^{P}\mathbf{\omega}_{q}k_{SE}\left(\mathbf{z}^{q},\mathbf{z}\right),\label{eq:A12}$$ be the GPR approximation of the function $u\left(t_{n+1},\mathbf{z}\right)$, where $\boldsymbol{\omega}=\left(\omega_{1},\dots,\omega_{q},\dots\omega_{P}\right)^{\top}$ in (\[eq:A12\]) is a vector of weights that can be computed by solving a linear system (see Rasmussen and Williams [@williams2006gaussian]). The GPR-EI approximation $C_{n}^{GPR-EI}$ of the continuation value is then given by $$\begin{aligned}
C_{n}^{GPR-EI}\left(\hat{\mathbf{x}}^{p}\right) & =e^{-r\Delta t}\int_{\mathbb{R}^{d}}f_{\mathbf{z}^{p}}\left(\mathbf{z}\right)u_{n+1}^{GPR}\left(\mathbf{z}\right)d\mathbf{z}\\
& =e^{-r\Delta t}\sum_{q=1}^{P}\omega_{q}\int_{\mathbb{R}^{d}}f_{\mathbf{z}^{p}}\left(\mathbf{z}\right)k_{SE}\left(\mathbf{z}^{q},\mathbf{z}\right)d\mathbf{z}.\label{eq:A14}\end{aligned}$$ To compute each integral in (\[eq:A14\]), we observe that $$\begin{gathered}
\int_{\mathbb{R}^{d}}f_{\mathbf{z}^{p}}\left(\mathbf{z}\right)k_{SE}\left(\mathbf{z}^{q},\mathbf{z}\right)d\mathbf{z}=\\
=\left(2\pi\right)^{\frac{d}{2}}\sigma_{f}^{2}\sigma_{l}^{d}\int_{\mathbb{R}^{d}}\frac{1}{\left(2\pi\right)^{\frac{d}{2}}\sqrt{\det\left(\Pi\right)}}e^{-\frac{1}{2}\left(\mathbf{z}-\mathbf{z}^{p}\right)^{\top}\Pi^{-1}\left(\mathbf{\mathbf{z}}-\mathbf{z}^{p}\right)}\frac{1}{\left(2\pi\right)^{\frac{d}{2}}\sqrt{\sigma_{l}^{2d}}}e^{-\frac{1}{2}\left(\mathbf{z}-\mathbf{z}^{q}\right)^{\top}\left(\sigma_{l}^{2}I_{d}\right)^{-1}\left(\mathbf{z}-\mathbf{z}^{q}\right)}d\mathbf{z}\\
=\left(2\pi\right)^{\frac{d}{2}}\sigma_{f}^{2}\sigma_{l}^{d}\int_{\mathbb{R}^{d}}\frac{1}{\left(2\pi\right)^{\frac{d}{2}}\sqrt{\det\left(\Pi\right)}}e^{-\frac{1}{2}\left(\mathbf{z}-\mathbf{z}^{p}\right)^{\top}\Pi^{-1}\left(\mathbf{\mathbf{z}}-\mathbf{z}^{p}\right)}\frac{1}{\left(2\pi\right)^{\frac{d}{2}}\sqrt{\sigma_{l}^{2d}}}e^{-\frac{1}{2}\left(\left(\mathbf{0}-z\right)-\left(-\mathbf{z}^{q}\right)\right)^{\top}\left(\sigma_{l}^{2}I_{d}\right)^{-1}\left(\left(\mathbf{0}-\mathbf{z}\right)-\left(-\mathbf{z}^{q}\right)\right)}d\mathbf{z}\\
=\left(2\pi\right)^{\frac{d}{2}}\sigma_{f}^{2}\sigma_{l}^{d}f_{\mathbf{z}^{p}}\ast g_{\mathbf{-z}^{q}}\left(0\right)\end{gathered}$$ where $\ast$ is the convolution product and $g_{\mathbf{-z}^{q}}$ is the density function of a Gaussian random vector which has law given by $\mathcal{N}\left(-\mathbf{z}^{q},\sigma_{l}^{2}I_{d}\right)$. Moreover, the convolution product of the densities of two independent random variables is equal to the density of their sum (see Hogg et al. [@hogg2005introduction]) and we can obtain the following relation which allows one to exactly compute the integrals in (\[eq:A14\]): $$f_{\mathbf{z}^{p}}\ast g_{\mathbf{-z}^{q}}\left(0\right)=\frac{1}{\left(2\pi\right)^{\frac{d}{2}}\sqrt{\det\left(\Pi+\sigma_{l}^{2}I_{d}\right)}}e^{-\frac{1}{2}\left(\mathbf{z}^{q}-\mathbf{z}^{p}\right)^{\top}\left(\Pi+\sigma_{l}^{2}I_{d}\right)^{-1}\left(\mathbf{z}^{q}-\mathbf{z}^{p}\right)}.$$ Therefore, the GPR-EI approximation $C_{n}^{GPR-EI}$ at $\hat{\mathbf{x}}^{p}$ reads $$C_{n}^{GPR-EI}\left(\hat{\mathbf{x}}^{p}\right)=e^{-r\Delta t}\sum_{q=1}^{P}\omega_{q}\sigma_{f}^{2}\sigma_{l}^{d}\frac{e^{-\frac{1}{2}\left(\mathbf{z}^{q}-\mathbf{z}^{p}\right)^{\top}\left(\Pi+\sigma_{l}^{2}I_{d}\right)^{-1}\left(\mathbf{z}^{q}-\mathbf{z}^{p}\right)}}{\sqrt{\det\left(\Pi+\sigma_{l}^{2}I_{d}\right)}},$$ and the GPR-EI approximation $u_{n}^{GPR-EI}$ of the option value $u\left(t_{n},\cdot\right)$ at time $t_{n}$ and at $\mathbf{z}^{p}$ is given by
$$u_{n}^{GPR-EI}\left(\mathbf{z}^{p}\right)=\max\left(\Psi\left(\hat{\mathbf{x}}^{p}\right),e^{-r\Delta t}\sum_{q=1}^{P}\omega_{q}\sigma_{f}^{2}\sigma_{l}^{d}\frac{e^{-\frac{1}{2}\left(\mathbf{z}^{q}-\mathbf{z}^{p}\right)^{\top}\left(\Pi+\sigma_{l}^{2}I_{d}\right)^{-1}\left(\mathbf{z}^{q}-\mathbf{z}^{p}\right)}}{\sqrt{\det\left(\Pi+\sigma_{l}^{2}I_{d}\right)}}\right).\label{eq:v_GPR-EI}$$
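As an illustration, (\[eq:v\_GPR-EI\]) can be evaluated for all paths at once with a few lines of linear algebra (NumPy sketch, our own notation):

```python
import numpy as np

def continuation_gpr_ei(Z, omega, Pi, sigma_f, sigma_l, r, dt):
    """Continuation values C_n^{GPR-EI}(x_hat^p) for all p, see (eq:v_GPR-EI).

    Z     : array (P, d) with the points z^p
    omega : GPR weights, array (P,)
    Pi    : (d, d) covariance matrix of the log-increments over one time step
    """
    P, d = Z.shape
    C = Pi + sigma_l ** 2 * np.eye(d)
    Cinv = np.linalg.inv(C)
    diff = Z[:, None, :] - Z[None, :, :]                  # z^p - z^q
    quad = np.einsum('pqi,ij,pqj->pq', diff, Cinv, diff)
    kern = np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(C))
    return np.exp(-r * dt) * sigma_f ** 2 * sigma_l ** d * (kern @ omega)
```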
\[ApA\]Covariance of the vector $R$ in (\[eq:vector\_R\])
=========================================================
Let us report the formulas for the covariance of the components of the vector $R$ in (\[eq:vector\_R\]). For all $n=1,\dots,N$, and $m=1,\dots,n-1$, the following relations hold: $$Cov\left(\Delta W_{n}^{1},\Delta W_{n}^{1}\right)=\Delta t,$$
$$Cov\left(\Delta W_{n}^{1},\widetilde{W}_{t_{n}}^{H}\right)=\frac{2\rho\sqrt{2H}}{2H+1}\left(\Delta t\right)^{H+\frac{1}{2}},$$
$$Cov\left(\widetilde{W}_{t_{n}}^{H},\widetilde{W}_{t_{n}}^{H}\right)=\left(t_{n}\right)^{2H}$$
$$Cov\left(\Delta W_{m}^{1},\Delta W_{n}^{1}\right)=0,$$
$$Cov\left(\Delta W_{n}^{1},\widetilde{W}_{t_{m}}^{H}\right)=0,$$
$$Cov\left(\Delta W_{m}^{1},\widetilde{W}_{t_{n}}^{H}\right)=\frac{2\rho\sqrt{2H}}{2H+1}\left(\left(t_{n}-t_{m-1}\right)^{H+\frac{1}{2}}-\left(t_{n}-t_{m}\right)^{H+\frac{1}{2}}\right),$$
$$Cov\left(\widetilde{W}_{t_{m}}^{H},\widetilde{W}_{t_{n}}^{H}\right)=2H\left(t_{m}\right)^{2H}\cdot\int_{0}^{1}\frac{ds}{\left(1-s\right)^{\frac{1}{2}-H}\left(\frac{t_{n}}{t_{m}}-s\right)^{\frac{1}{2}-H}}.$$
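These formulas can be assembled into the covariance matrix of $R$ and then factorized to obtain $\Lambda$. The sketch below (Python/NumPy/SciPy) assumes, consistently with the indexing of the rows of $\Lambda$ used in the previous sections, that $R$ stacks the pairs $\left(\Delta W_{n}^{1},\widetilde{W}_{t_{n}}^{H}\right)$ for $n=1,\dots,N$ on an equally spaced time grid; this ordering is our assumption, since (\[eq:vector\_R\]) is defined earlier in the paper.

```python
import numpy as np
from scipy.integrate import quad

def covariance_R(N, T, H, rho):
    """Covariance matrix of R = (Delta W_1, W~_{t_1}, ..., Delta W_N, W~_{t_N})."""
    dt = T / N
    t = dt * np.arange(1, N + 1)
    c = 2.0 * rho * np.sqrt(2.0 * H) / (2.0 * H + 1.0)
    Sigma = np.zeros((2 * N, 2 * N))
    for n in range(1, N + 1):
        Sigma[2*n-2, 2*n-2] = dt                                   # Var(Delta W_n)
        Sigma[2*n-2, 2*n-1] = Sigma[2*n-1, 2*n-2] = c * dt**(H + 0.5)
        Sigma[2*n-1, 2*n-1] = t[n-1]**(2.0 * H)                    # Var(W~_{t_n})
        for m in range(1, n):
            # Cov(Delta W_m, W~_{t_n}); Cov(Delta W_m, Delta W_n) and
            # Cov(Delta W_n, W~_{t_m}) are zero and stay as initialized.
            cov_dw = c * ((t[n-1] - t[m-1] + dt)**(H + 0.5)
                          - (t[n-1] - t[m-1])**(H + 0.5))
            Sigma[2*m-2, 2*n-1] = Sigma[2*n-1, 2*m-2] = cov_dw
            # Cov(W~_{t_m}, W~_{t_n}): the (1-s)^{H-1/2} endpoint singularity is
            # handled through the algebraic weight option of quad.
            x = t[n-1] / t[m-1]
            integral, _ = quad(lambda s: (x - s)**(H - 0.5), 0.0, 1.0,
                               weight='alg', wvar=(0.0, H - 0.5))
            Sigma[2*m-1, 2*n-1] = Sigma[2*n-1, 2*m-1] = 2.0 * H * t[m-1]**(2.0 * H) * integral
    return Sigma

# Lambda can then be taken as a (lower triangular) square root of Sigma, e.g.
# Lambda = np.linalg.cholesky(covariance_R(N=50, T=1.0, H=0.07, rho=-0.9))
```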
\[ApA2\]Proof of Proposition \[lem:L1\]
=======================================
Let us denote by $\mathbf{SV}_{i:j}$ the random vector $\left(S_{t_{i}},V_{t_{i}},S_{t_{i+1}},V_{t_{i+1}},\dots,S_{t_{j}},V_{t_{j}}\right)^{\top}$ for $i,j\in\left\{ 0,\dots,N\right\} $ and $i<j$. We observe that the option value $v\left(t_{N},\cdot\right)$ at time $t_{N}$ is given by the payoff function $\Psi$, which only depends on the final value of the underlying. The option value $v\left(t_{N-1},\cdot\right)$ at time $t_{N-1}$ along the $p$-th path is given by $$v\left(t_{N-1},\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)=\max\left(\Psi\left(S_{t_{N-1}}^{p}\right),C\left(t_{N-1},\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)\right)\label{eq:65}$$ where $C$ stands for the (discounted) continuation value and is equal to $$C\left(t_{N-1},\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)=E\left[e^{-r\Delta t}\Psi\left(S_{t_{N}}\right)\left|\left(\mathbf{SV}_{1:\left(N-1\right)}=\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)\right.\right].\label{eq:CVR}$$ We approximate the continuation value in (\[eq:CVR\]) by means of the GPR approximation of $\Psi$. In particular, let $\Psi^{GPR}\left(z\right)$ be the approximation of the function $z\mapsto\Psi\left(\exp\left(z\right)\right)$ obtained by using the GPR method, employing the Squared Exponential Kernel and considering the log-underlying values at maturity as predictors. Specifically, the predictor set is $$Z=\left\{ z^{p}=\log\left(S_{t_{N}}^{p}\right),p=1,\dots,P\right\} \subset\mathbb{R}$$ and the response $\mathbf{y}\in\mathbb{R}^{P}$ is given by $$y^{p}=\Psi\left(S_{t_{N}}^{p}\right).$$ Then, we can write $$\begin{aligned}
\Psi^{GPR}\left(z\right) & =\sum_{q=1}^{P}k_{SE}\left(\log\left(S_{t_{N}}^{q}\right),z\right)\mathbf{\omega}_{q}=\sigma_{f}^{2}\sum_{q=1}^{P}\exp\left(-\frac{\left(\log\left(S_{t_{N}}^{q}\right)-z\right)^{2}}{2\sigma_{l}^{2}}\right)\mathbf{\omega}_{q}\end{aligned}$$ where $k_{SE}$ is the Squared Exponential kernel, $\sigma_{l}$ is the characteristic length scale, $\sigma_{f}$ is the signal standard deviation and $\omega_{1},\dots,\omega_{P}$ are weights.
So we approximate the continuation value $C\left(t_{N-1},\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)$ with the expression: $$E\left[e^{-r\Delta t}\Psi^{GPR}\left(\ln\left(S_{t_{N}}\right)\right)\left|\left(\mathbf{SV}_{1:\left(N-1\right)}=\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)\right.\right].$$ We observe that the law of $\log\left(S_{t_{N}}\right)$ given $S_{t_{1}}^{p},V_{t_{1}}^{p},\dots,S_{t_{N-1}}^{p},V_{t_{N-1}}^{p}$ is normal $$\log\left(S_{t_{N}}\right)\left|\left(\mathbf{SV}_{1:\left(N-1\right)}=\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)\right.\sim\mathcal{N}\left(\mu_{N,p},\sigma_{N,p}^{2}\right),$$ where $$\mu_{N,p}=\log\left(S_{t_{N-1}}^{p}\right)+\left(r-\frac{1}{2}V_{t_{N-1}}^{p}\right)\Delta t$$ and $$\sigma_{N,p}^{2}=V_{t_{N-1}}^{p}\Delta t.$$ Therefore, the GPR-EI approximation for the continuation value at time $t_{N-1}$ is as follows: $$\begin{gathered}
C_{N-1}^{GPR-EI}\left(\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)=e^{-r\Delta t}\int_{\mathbb{R}}\frac{\exp\left(-\frac{\left(z-\mu_{N,p}\right)^{2}}{2\sigma_{N,p}^{2}}\right)}{\sqrt{2\pi\sigma_{N,p}^{2}}}\Psi^{GPR}\left(z\right)dz\\
=e^{-r\Delta t}\sigma_{f}^{2}\sqrt{2\pi\sigma_{l}^{2}}\sum_{q=1}^{P}\int_{\mathbb{R}}\frac{\exp\left(-\frac{\left(z-\mu_{N,p}\right)^{2}}{2\sigma_{N,p}^{2}}\right)}{\sqrt{2\pi\sigma_{N,p}^{2}}}\frac{\exp\left(-\frac{\left(\log\left(S_{t_{N}}^{q}\right)-z\right)^{2}}{2\sigma_{l}^{2}}\right)}{\sqrt{2\pi\sigma_{l}^{2}}}\mathbf{\omega}_{q}dz.\end{gathered}$$ Taking advantage of the properties of the convolution between density functions, we obtain $$C_{N-1}^{GPR-EI}\left(\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)=e^{-r\Delta t}\sum_{q=1}^{P}\frac{\mathbf{\omega}_{q}\sigma_{f}^{2}\sigma_{l}}{\sqrt{\sigma_{N,p}^{2}+\sigma_{l}^{2}}}\exp\left(-\frac{\left(\log\left(S_{t_{N}}^{q}\right)-\mu_{N,p}\right)^{2}}{2\sigma_{N,p}^{2}+2\sigma_{l}^{2}}\right).\label{eq:CVNm1}$$
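The Gaussian-convolution identity invoked in the last step can be sanity-checked numerically; the following snippet (with arbitrary test values) is only a verification sketch:

```python
import numpy as np
from scipy.integrate import quad

# Check: int N(z; mu, s2) * exp(-(c - z)^2 / (2*sl^2)) dz
#        = sl / sqrt(s2 + sl^2) * exp(-(c - mu)^2 / (2*(s2 + sl^2)))
mu, s2, sl, c = 0.3, 0.04, 0.2, 0.7
sl2 = sl**2
lhs = quad(lambda z: np.exp(-(z - mu)**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
           * np.exp(-(c - z)**2 / (2 * sl2)), -np.inf, np.inf)[0]
rhs = sl / np.sqrt(s2 + sl2) * np.exp(-(c - mu)**2 / (2 * (s2 + sl2)))
assert abs(lhs - rhs) < 1e-7
```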
\[ApA3\]Proof of Proposition \[lem:L2\]
=======================================
In order to proceed backward from $t_{N-2}$ down to $t_{1}$, we consider a positive integer value $J$ and train the GPR method considering the last $J+1$ observed values of the couple $\left(\log\left(S_{t_{n}}^{p}\right),\log\left(V_{t_{n}}^{p}\right)\right)$ as predictors, and the option price as response. Specifically, the predictor set is $$Z=\left\{ \mathbf{z}^{p}=\log\left(\mathbf{SV}_{\max\left\{ 1,N-1-J\right\} :\left(N-1\right)}^{p}\right),p=1,\dots,P\right\} \subset\mathbb{R}^{d_{N-1}}$$ where $d_{N-1}=2\min\left\{ N-1,J+1\right\} $ and the response $\mathbf{y}\in\mathbb{R}^{P}$ is given by $$y^{p}=v\left(t_{N-1},\mathbf{SV}_{1:\left(N-1\right)}^{p}\right).$$ We denote the resulting function by $u_{N-1}^{GPR}$. In particular, $u_{N-1}^{GPR}:\mathbb{R}^{d_{N-1}}\rightarrow\mathbb{R}$ and $$u_{N-1}^{GPR}\left(\log\left(\mathbf{SV}_{\max\left\{ 1,N-1-J\right\} :\left(N-1\right)}^{p}\right)\right)$$ approximates $v\left(t_{N-1},\mathbf{SV}_{1:\left(N-1\right)}^{p}\right)$.
Since the predictors have different natures (log-prices and log-volatilities at different times), we use the Automatic Relevance Determination (ARD) Squared Exponential kernel $k_{ASE}$ to perform the GPR regression. In particular, if $d$ is the dimension of the space containing the predictors, it holds $$k_{ASE}\left(\mathbf{a},\mathbf{b}\right)=\sigma_{f}^{2}\exp\left(-\sum_{i=1}^{d}\frac{\left(a_{i}-b_{i}\right)^{2}}{2\sigma_{i}^{2}}\right),\qquad\mathbf{a},\mathbf{b}\in\mathbb{R}^{d}.$$ As opposed to the Squared Exponential kernel, the ARD Squared Exponential kernel considers a different length scale $\sigma_{i}$ for each predictor, which allows the regression to better learn the impact of each predictor on the response.
We present now how to perform the backward induction. So, let us consider $n\in\left\{ 0,\dots,N-2\right\} $ and suppose the GPR approximation $u_{n+1}^{GPR}:\mathbb{R}^{d_{n+1}}\rightarrow\mathbb{R}$ to be known. In particular, $d_{n+1}=2\min\left\{ n+1,J+1\right\} $ and for each $\mathbf{z}=\left(z_{1},\dots,z_{d_{n+1}}\right)\in\mathbb{R}^{d_{n+1}},$ it holds $$u_{n+1}^{GPR}\left(\mathbf{z}\right)=\sigma_{f}^{2}\sum_{q=1}^{P}\mathbf{\omega}_{q}\exp\left(-\sum_{i=1}^{d_{n}}\frac{\left(z_{i}^{q}-z_{i}\right)^{2}}{2\sigma_{i}^{2}}\right),$$ where $z_{i}^{q}=\log\left(S_{n+1-\left(i-1\right)/2}^{q}\right)$ if $i$ is even and $z_{i}^{q}=\log\left(V_{n+1-i/2}^{q}\right)$ if $i$ is odd, for $i=1,\dots,d_{n+1}$. This means that $z_{i}^{q}$ is the observed log-price at time $t_{n+1-\left(i-1\right)/2}$ of the $q$-th path if $i$ is even, and it is the observed log-volatility at time $t_{n+1-\left(i-1\right)/2}$ of the $q$-th path if $i$ is odd.
We explain now how to compute the GPR approximation $v_{n}^{GPR-EI}:\mathbb{R}^{d_{n}}\rightarrow\mathbb{R}$ of the price function at time $t_{n}$. First of all, we observe that the vector $\left(\log\left(S_{t_{n+1}}^{p}\right),\log\left(V_{t_{n+1}}^{p}\right)\right)^{\top}$ is not $\hat{\mathcal{F}}_{t_{n}}$-measurable whereas $\log\left(\mathbf{SV}_{\max\left\{ 1,n+1-J\right\} :n}^{p}\right)$ is $\hat{\mathcal{F}}_{t_{n}}$-measurable. The law of $\left(\log\left(S_{t_{n+1}}\right),\log\left(V_{t_{n+1}}\right)\right)^{\top}$ given $S_{t_{n}}^{p},V_{t_{n}}^{p},\dots,S_{t_{1}}^{p},V_{t_{1}}^{p}$ is normal: $$\left(\log\left(S_{t_{n+1}}\right),\log\left(V_{t_{n+1}}\right)\right)^{\top}\left|\left(\mathbf{SV}_{1:n}=\mathbf{SV}_{1:n}^{p}\right)\right.\sim\mathcal{N}\left(\mu_{n+1,p},\Sigma_{n+1,p}\right),$$ In particular $$\mu_{n+1,p}=\left(\log\left(S_{t_{n}}^{p}\right)+\left(r-\frac{1}{2}V_{t_{n}}^{p}\right)\Delta t,\log\left(\xi_{0}\right)+\eta\Lambda_{2n+2}\underline{\mathbf{G}}^{p}-\frac{1}{2}\eta^{2}t_{n+1}^{2H}\right)^{\top},$$ where $\Lambda_{2n+2}$ is the $2n+2$-th row of the matrix $\Lambda$ and $\underline{\mathbf{G}}^{p}=\left(G_{1}^{p},\dots,G_{2n}^{p},0\dots,0\right)^{\top}$. Moreover, the covariance matrix is given by $$\Sigma_{n+1,p}=\left(\begin{array}{cc}
\Delta tV_{t_{n}}^{p} & \eta\sqrt{\Delta tV_{t_{n}}^{p}}\Lambda_{2n+2,2n+1}\\
\eta\sqrt{\Delta tV_{t_{n}}^{p}}\Lambda_{2n+2,2n+1} & \eta^{2}\left(\Lambda_{2n+2,2n+2}^{2}+\Lambda_{2n+2,2n+1}^{2}\right)
\end{array}\right),$$ where $\Lambda_{i,j}$ stands for the element of $\Lambda$ in position $i,j$. Using similar reasoning as for the continuation value at time $t_{N-1}$, one can obtain the following GPR-EI approximation for the continuation value at time $t_{n}$:
$$C_{n}^{GPR-EI}\left(\mathbf{SV}_{\max\left\{ 1,n-J\right\} :n}^{p}\right)=e^{-r\Delta t}\sigma_{f}^{2}\sigma_{d_{n+1}-1}\sigma_{d_{n+1}}\sum_{q=1}^{P}\mathbf{\omega}_{q}h_{q}^{p}f_{q}^{p},\label{eq:622}$$
where $h_{q}^{p}$ and $f_{q}^{p}$ are two factors given by $$h_{q}^{p}=\exp\left(-\sum_{i=1}^{d_{n+1}-2}\frac{\left(z_{i}^{p}-z_{i}^{q}\right)^{2}}{2\sigma_{i}^{2}}\right)$$ and
$$f_{q}^{p}=\frac{\exp\left(-\frac{1}{2}\left(\left(\begin{array}{c}
z_{d_{n+1}-1}^{q}\\
z_{d_{n+1}}^{q}
\end{array}\right)-\mu_{n+1,p}\right)^{\top}\left(\Sigma_{n+1,p}+\left(\begin{array}{cc}
\sigma_{d_{n+1}-1}^{2} & 0\\
0 & \sigma_{d_{n+1}}^{2}
\end{array}\right)\right)^{-1}\left(\left(\begin{array}{c}
z_{d_{n+1}-1}^{q}\\
z_{d_{n+1}}^{q}
\end{array}\right)-\mu_{n+1,p}\right)\right)}{\sqrt{\text{\ensuremath{\det}}\left(\Sigma_{n+1,p}+\left(\begin{array}{cc}
\sigma_{d_{n+1}-1}^{2} & 0\\
0 & \sigma_{d_{n+1}}^{2}
\end{array}\right)\right)}}.$$
In particular, $h_{q}^{p}$ measures the impact of the past observed values on the price, whereas $f_{q}^{p}$ integrates the changes due to the diffusion of the underlying and its volatility.
Therefore, we obtain $$v_{n}^{GPR-EI}\left(\mathbf{SV}_{\max\left\{ 1,n-J\right\} :n}^{p}\right)=\max\left(\Psi\left(S_{t_{n}}^{p}\right),e^{-r\Delta t}\sigma_{f}^{2}\sigma_{d_{n+1}-1}\sigma_{d_{n+1}}\sum_{q=1}^{P}\mathbf{\omega}_{q}h_{q}^{p}f_{q}^{p}\right).$$ Finally, we observe that, in order to compute $u_{n}^{GPR}$, we train the GPR method considering the predictor set given by $$Z=\left\{ \mathbf{z}^{p}=\log\left(\mathbf{SV}_{\max\left\{ 1,n-J\right\} :n}^{p}\right),p=1,\dots,P\right\} \subset\mathbb{R}^{d_{n}}$$ and the response $\mathbf{y}\in\mathbb{R}^{P}$ is given by $$y^{p}=v_{n}^{GPR-EI}\left(\mathbf{SV}_{\max\left\{ 1,n-J\right\} :n}^{p}\right).$$
By backward induction we can compute the option value for $n=N-2,\dots,0$.
To conclude, we observe that the continuation value at time $t=0$ can be computed by using (\[eq:622\]) and considering $h_{q}^{p}=1$ for $q=1,\dots,P$ since in this case, there are no past values to consider.
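For illustration, the core of the backward step described in this proof can be sketched as follows; the helper names are hypothetical and the Gaussian-process fit is written out by hand rather than taken from any particular library:

```python
import numpy as np

def ard_kernel(A, B, sigma_f, ls):
    """ARD squared-exponential kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :])**2 / ls**2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2)

def fit_gpr(Z, y, sigma_f, ls, noise=1e-8):
    """GPR weights omega solving (K + noise*I) omega = y."""
    K = ard_kernel(Z, Z, sigma_f, ls) + noise * np.eye(len(y))
    return np.linalg.solve(K, y)

def continuation_gpr_ei(z_past, Z, omega, sigma_f, ls, mu, Sigma, r, dt):
    """Sketch of eq. (622) at one path.

    z_past : past log-states (the measurable part of the predictor)
    Z      : (P, d) training predictors; by assumption the last two columns
             are the next-period (log S, log V) that get integrated out
    mu, Sigma : conditional mean (2,) and covariance (2, 2) of those columns"""
    h = np.exp(-0.5 * ((Z[:, :-2] - z_past)**2 / ls[:-2]**2).sum(-1))
    A = Sigma + np.diag(ls[-2:]**2)
    diffs = Z[:, -2:] - mu
    f = np.exp(-0.5 * np.einsum("qi,ij,qj->q", diffs, np.linalg.inv(A), diffs)) \
        / np.sqrt(np.linalg.det(A))
    return np.exp(-r * dt) * sigma_f**2 * ls[-2] * ls[-1] * (omega * h * f).sum()
```

In the full backward induction one would loop over $n=N-2,\dots,0$, take the maximum of this continuation value and the payoff, and refit the weights at each time step.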
---
address: |
Department of Physics, Kyoto University, Kyoto 606-8502, JAPAN\
E-mail: [email protected]
author:
- Toshitaka Tatsumi
title: |
\
Magnetic instability of quark matter
---
Introduction
============
Pulsars are rotating neutron stars emitting radio waves, X-rays or gamma rays. Ordinary radio pulsars have a magnetic field of $O(10^{12 - 13})$G, which causes various types of radiation. The origin of such a strong magnetic field is still an open problem. Recently a new type of neutron star, called a magnetar, has been proposed to explain the observational data on pulsars, which should have an extraordinary magnetic field of $O(10^{15})$ G [@ko]. Several magnetar candidates have been reported so far among anomalous X-ray pulsars (AXP) and pulsars associated with soft-gamma-ray repeaters (SGR).
There has been a naive working hypothesis to understand the magnetic field in neutron stars; if the magnetic flux of a main sequence star is conserved during its evolution, the decrease in radius leads to an increase in the magnetic field. For example, the sun, a typical main sequence star, has a magnetic field of $O(10^3)$G with the radius $R\sim 10^{10-11}$cm. By squeezing the radius to $10^6$cm for neutron stars we have $O(10^{11-13})$G, which is consistent with observations of radio pulsars. However, if this argument is extrapolated to explain the intensity of the magnetic field of magnetars, their radius should be $O(10^4)$cm, which is much less than the Schwarzschild radius of neutron stars with the canonical mass $M=1.4M_\odot$, $R_{Sch}=2GM/c^2=4\times 10^5$cm.
These observations seem to force us to reconsider the origin of the magnetic field in neutron stars. Since there is bulk hadronic matter beyond the nuclear density ($n_B\sim 0.16$fm$^{-3}$) inside neutron stars, it should be interesting to consider a hadronic origin of the magnetic field; ferromagnetism or spin-polarization of hadronic matter may give rise to such a magnetic field. Unfortunately there have been few suggestions about the possibility of spontaneous magnetization of hadronic matter. We consider here the possibility of ferromagnetism of a quark liquid interacting through the one-gluon-exchange (OGE) interaction [@ta].
It is believed that the deconfinement transition and chiral symmetry restoration occur at several times the nuclear density, while their critical densities have not been fixed yet. One interesting suggestion is that three-flavor symmetric quark matter (strange quark matter) around or above the nuclear density may be the true ground state of matter [@ch]. If this is the case, strange quark stars, where quarks occupy almost the whole inner region of the star, can exist in a branch different from the neutron-star branch in the mass-radius plane. Otherwise quark matter may exist in the small core region of neutron stars. We shall see that our results should give an origin of the strong magnetic field in the context of the strange quark-star scenario.
Ferromagnetism of quark liquid
==============================
The quark liquid should be totally color singlet (neutral), which means that only the exchange interaction between quarks is relevant there. This may remind us of the electron system with the Coulomb interaction in a neutralizing positive-charge background. In 1929 Bloch first suggested the possibility of ferromagnetism of the electron system [@bl]. He showed that there is a trade-off between the kinetic and the exchange energies as a function of density, the latter of which favors spin alignment due to the Pauli principle. This was the beginning of the concept of itinerant magnetism. In the following we discuss the possibility of ferromagnetism of the quark liquid by analogy with the electron gas.
It is to be noted that there is one big difference between them; quarks should be treated in a relativistic way. The concept of the direction of spins is not well defined in relativistic theories, while each quark has two polarization degrees of freedom. Here we define the spin-up and -down states in the rest frame of each quark. Then the projector onto states of definite polarization is given by $P(a)=\frac{1}{2}(1+\gamma_{5}\,a\!\!\!/)$ with the 4-pseudovector $a$, $$\mathbf{a}={\mbox{\boldmath$\zeta$}}+\frac{\mathbf{k}({\mbox{\boldmath$\zeta$}}\cdot\mathbf{k})}{m_{q}(E_{k}+m_{q})},\qquad a^{0}=\frac{\mathbf{k}\cdot{\mbox{\boldmath$\zeta$}}}{m_{q}}, \label{aa}$$ for a quark moving with the momentum $k=(E_k,{{\bf k}})$ [@la]. The 4-pseudovector $a$ is reduced into the axial vector ${\mbox{\boldmath$\zeta$}}$ ($|{\mbox{\boldmath$\zeta$}}|=1$) in the rest frame, which is twice the mean spin vector in the rest frame. Actually if we choose ${\mbox{\boldmath$\zeta$}}$ along the $z$ axis, ${\mbox{\boldmath$\zeta$}}=(0,0,\pm 1)$, we can see each value corresponds to the spin-up or -down state. The mean value of the spin is given by $$\bar{\mathbf{s}}=\frac{1}{2}\left(\frac{m_{q}}{E_{k}}{\mbox{\boldmath$\zeta$}}+\frac{\mathbf{k}(\mathbf{k}\cdot{\mbox{\boldmath$\zeta$}})}{E_{k}(E_{k}+m_{q})}\right). \label{ab}$$ Finally the projection operator $P(a)$ gives the polarization density matrix $\rho$, $$\rho(k,\zeta)=\frac{1}{2}(k\!\!\!/+m_{q})P(a),\qquad P(a)=\frac{1}{2}(1+\gamma_{5}\,a\!\!\!/). \label{ac}$$
The exchange interaction between two quarks with momenta ${\bf k}$ and ${\bf q}$ is given by $$f_{{\bf k}\zeta,{\bf q}\zeta'}=\frac{m_{q}^{2}}{E_{k}E_{q}}\,{\cal M}_{{\bf k}\zeta,{\bf q}\zeta'}. \label{ad}$$ ${\cal M}_{{\bf k}\zeta,{\bf q}\zeta'}$ is the usual Lorentz invariant matrix element, and is evaluated with the help of the polarization density matrix (\[ac\]), $${\cal M}_{{\bf k}\zeta,{\bf q}\zeta'}=g^{2}\left[2m_{q}^{2}-k\cdot q-m_{q}^{2}\,a\cdot b\right], \label{ae}$$ where the 4-pseudovector $b$ is given by the same form as in Eq. (\[aa\]) for the momentum ${\bf q}$.
Although the vector ${\mbox{\boldmath$\zeta$}}$ of each quark may point in a different direction on the two-dimensional sphere $S^2$, we assume here that it points along the same direction, say the $z$ axis. The exchange energy for the quark liquid is then given by the integration of the interaction (\[ad\]) over the two Fermi seas with the spin-up and -down states; eventually, it consists of two contributions, $$\epsilon_{ex}=\epsilon_{ex}^{\rm non-flip}+\epsilon_{ex}^{\rm flip}.$$ The first one arises from the interaction between quarks with the same polarization, while the second one from quarks with opposite polarization. The non-flip contribution is similar to the one in the electron gas, while the flip contribution is a genuine relativistic effect and never appears in the electron gas. We shall see that this relativistic effect leads to a novel mechanism of ferromagnetism of the quark liquid.
Examples
========
We show some results for the total energy of the quark liquid, ${\epsilon}_{tot}={\epsilon}_{kin}+{\epsilon}_{ex}$, obtained by adding the kinetic term ${\epsilon}_{kin}$. Since gluons do not carry flavor quantum numbers, we can consider one-flavor quark matter without loss of generality. Then the quark number density directly corresponds to the baryon number density, if we assume three-flavor symmetric quark matter as mentioned in §1.
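For orientation, the kinetic part of $\epsilon_{tot}$ for a polarized one-flavor quark gas can be evaluated with a few lines of code. The sketch below assumes a color degeneracy of 3 and one spin state per Fermi sea; it only illustrates that the kinetic term alone favors the unpolarized state, the exchange term being omitted:

```python
import numpy as np
from scipy.integrate import quad

def kinetic_energy_density(n_q, p, m_q, n_colors=3):
    """Kinetic energy density (natural units, MeV^4) of a one-flavor
    relativistic quark gas at density n_q with polarization p; the
    spin-up/down seas hold (1 +/- p)/2 of the quarks."""
    eps = 0.0
    for sign in (+1, -1):
        n_s = 0.5 * (1.0 + sign * p) * n_q
        kF = (6.0 * np.pi**2 * n_s / n_colors)**(1.0 / 3.0)
        eps += n_colors / (2.0 * np.pi**2) * quad(
            lambda k: k**2 * np.sqrt(k**2 + m_q**2), 0.0, kF)[0]
    return eps

# e.g. n_q = 0.3 fm^-3 ~ 0.3 * 197.3^3 MeV^3 and m_q = 300 MeV:
n_q = 0.3 * 197.3**3
print([kinetic_energy_density(n_q, p, 300.0) for p in (0.0, 0.5, 1.0)])
```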
There are two parameters in our theory: the quark mass $m_q$ and the quark-gluon coupling constant $\alpha_c$. These values are not well determined so far. In particular, the value of quark mass involves subtle issues; it depends on the current or constituent quark picture and may be also related to the existence of chiral phase transition. Here we allow some range for these parameters and take, for example, a fiducial set, $m_q=300$MeV for strange quark and $\alpha_c=2.2$, given by the MIT bag model [@de]. In Fig.1 two results are presented as functions of the polarization parameter $p$ defined by the difference of the number of the spin-up and -down quarks, $n_q^+-n_q^-\equiv pn_q$. The results clearly show that the ground state should be ferromagnetic for lower density, while it is in the paramagnetic phase for higher density. The phase transition is of first order and its critical density is around $n_q^c\simeq 0.16$fm$^{-3}$ in this case, which corresponds to the nuclear density for flavor symmetric quark matter. Note that there is a metastable ferromagnetic state (the local minimum) even above the critical density. This ferromagnetic phase is a spontaneously symmetry broken state with respect to the rotational symmetry: the order parameter is the mean value of ${\mbox{\boldmath$\zeta$}}$, $\langle{\mbox{\boldmath$\zeta$}}\rangle$, and symmetry is broken from $G=O(3)$ to $H=O(2)$ once $\langle{\mbox{\boldmath$\zeta$}}\rangle$ takes a special direction on $S^2$.
Magnetic properties of the quark liquid are characterized by three quantities, $\delta {\epsilon}, \chi$ and $\eta$; $\delta {\epsilon}\equiv {\epsilon}_{tot}(p=1)-{\epsilon}_{tot}(p=0)$, which is the measure for ferromagnetism to appear in the ground state. For small $p\ll 1$, $${\epsilon}_{tot}-{\epsilon}_{tot}(p=0)=\chi^{-1} p^2+O(p^4). \label{ha}$$ $\chi$ is proportional to the magnetic susceptibility and plays an important role if the phase transition is of second order. In our case it is less relevant since the phase transition is of first order. Finally, $\eta\equiv\partial {\epsilon}_{tot}/\partial p~|_{p=1}$, which is the measure for metastability to exist. In Fig.2 the density dependence of the three quantities is given for a fiducial set of parameters. We can see that the ferromagnetic phase is the ground state below $n_q^c$, while the metastable state is possible up to rather high densities.
Finally we present a phase diagram in the $m_q - \alpha_c$ plane for $n_q=0.3$fm$^{-3}$, which corresponds to about twice the nuclear density for flavor symmetric quark matter. The region above the solid line shows the ferromagnetic phase and that above the dashed line indicates the existence of the metastable state. For quarks with large mass, which may correspond to the current $s$ quarks or the constituent quarks before chiral symmetry restoration, the ferromagnetic state is favored for a small coupling constant due to the same mechanism as in the electron gas. The ferromagnetic state is favored again for light quarks with small mass, which may correspond to the current $u, d$ quarks, while the nonrelativistic calculation does not show such a tendency. Hence this is due to a genuine relativistic effect, where the spin-flip interaction plays an essential role.
Summary and Concluding remarks
==============================
We have seen that the ferromagnetic phase is realized at low densities and the metastable state is plausible up to rather high densities for a reasonable range of the QCD parameters. If a ferromagnetic quark liquid exists stably or metastably around or above nuclear density, it has some implications for the properties of strange quark stars and strange quark nuggets. They should be magnetized on a macroscopic scale. For quark stars with a quark core of radius $r_q$, simply assuming a dipolar magnetic field, we can estimate its strength at the surface $R$, $$B_{max}=\frac{8\pi}{3}\left(\frac{r_q}{R}\right)^{3}\mu_{q}n_{q}, \label{gc}$$ which amounts to $O(10^{15-17})$G for $r_q\sim O(R)$ and $n_q=O(0.1)$fm$^{-3}$, which should be large enough for magnetars, using the quark magnetic moment $\mu_q\sim\mu_N$ ($\mu_N$: nuclear magneton $\sim 5\times 10^{-24}{\rm erg}\cdot{\rm gauss}^{-1}$) for massive quarks and $10^2\mu_N$ for light quarks. Hence it might be interesting to model SGR or AXP using our idea.
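A back-of-the-envelope check of this estimate (in Gaussian units; the $8\pi/3$ dipole prefactor is our reconstruction of the formula above) reproduces the quoted orders of magnitude:

```python
import math

mu_N = 5.0e-24        # nuclear magneton, erg/G
n_q  = 0.1e39         # 0.1 fm^-3 expressed in cm^-3
for mu_q, label in [(mu_N, "massive quarks"), (1.0e2 * mu_N, "light quarks")]:
    B = 8.0 * math.pi / 3.0 * 1.0**3 * mu_q * n_q   # r_q ~ R
    print(f"{label}: B ~ {B:.1e} G")
# -> roughly 4e15 G and 4e17 G, i.e. O(10^15-17) G as quoted in the text.
```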
We have found that ferromagnetic instability is feasible not only in the massive quark system but also in the light quark system: the spin-nonflip contribution is dominant in the nonrelativistic case as in electron gas, while a novel mechanism appears as a result of the large spin-flip contribution in the relativistic case.
Our calculation is basically a perturbative one and the Fermi sea remains in a spherical shape. However, if we get more insight about the ferromagnetic phase, we must solve the Hartree-Fock equation and thereby derive a self-consistent mean-field for quark liquid. Moreover, we need to examine the long range correlation among quarks by looking into the ring diagrams, which has been known to be important in the calculation of the susceptibility of electron gas.
References {#references .unnumbered}
==========
[99]{} C. Kouveliotou et al., [*Nature*]{} [**393**]{}, 235(1998).\
K. Hurley et al., [*Astrophys. J.*]{} [**510**]{}, L111(1999).
T. Tatsumi, [*hep-ph/9910470 (KUNS 1611)*]{}
S.A. Chin and A.K. Kerman, [*Phys. Rev. Lett.*]{} [**43**]{}, 1292 (1979).\
E. Witten, [*Phys. Rev. D*]{} [**30**]{}, 272 (1984).\
E. Farhi and R.L. Jaffe, [*Phys. Rev. D*]{} [**30**]{}, 2379 (1984).
F. Bloch, [*Z. Phys.*]{} [**57**]{},545 (1929).
V.B. Berestetskii, E.M. Lifshitz and L.P. Pitaevskii,\
[*Relativistic Quantum Theory*]{}(Pergamon Press, 1971).
T. DeGrand et al., [*Phys. Rev. D*]{} [**12**]{}, 2060 (1975).
---
abstract: |
We apply the Darboux integrability method to determine first integrals and Hamiltonian formulations of three dimensional polynomial systems; namely the reduced three-wave interaction problem, the Rabinovich system, the Hindmarsh-Rose model, and the oregonator model. Additionally, we investigate their Hamiltonian, Nambu-Poisson and metriplectic characters.
**Key words:** Darboux integrability method, Prelle-Singer method, the reduced three-wave interaction problem, The Rabinovich system, the Hindmarsh-Rose model, Oregonator Model, Metriplectic Structure, Nambu-Poisson Brackets. MSC2010: 37K10, 70G45
author:
- |
Oğul Esen[^1]\
Department of Mathematics, Gebze Technical University\
Gebze-Kocaeli 41400, Turkey.\
\
- |
Anindya Ghose Choudhury[^2]\
Department of Physics, Surendranath College,\
24/2 Mahatma Gandhi Road, Calcutta-700009, India.\
\
- |
Partha Guha[^3]\
S.N. Bose National Centre for Basic Sciences\
JD Block, Sector III, Salt Lake\
Kolkata - 700098, India\
title: 'On Integrals, Hamiltonian and Metriplectic Formulations of 3D Polynomial Systems'
---
Introduction
============
The problem of solving nonlinear ordinary differential equations is a challenging area in nonlinear dynamics. For a two-dimensional system the existence of a first integral completely determines its phase portrait. In these cases chaos cannot arise because of the Poincaré-Bendixson theorem [@HS], which says that any limit set of a $2$D system of differential equations is either a fixed point or a cycle. In three dimensions this is no longer true. In the case of non-planar systems the problem of determining first integrals is a non-trivial task in general, and various methods have been introduced for studying the existence of first integrals. However, except for some special cases [@Hi], there are few known satisfactory methods to solve it in general.
Non-planar systems are often non-Hamiltonian in character and describe the time evolution of physical processes which are usually dissipative in nature. In general, a Pfaff differential form in $n$ dimensions $$F_{1}(x_{1},\cdots,x_{n})dx_{1}+\cdots+F_{n}(x_{1},\cdots,x_{n})dx_{n}$$ is not exact and therefore an integrating factor may not exist. Earlier, a direct method [@GRZ] was used to search for a first integral of three-dimensional dynamical systems. This method consists in proposing an ansatz for the invariant which is a polynomial of a given degree in one of the coordinates of the phase space of the system. The reader can see immediately that this is a tedious method applied to a very special class of systems. In fact, Grammaticos *et al* [@Gr] proposed another method, based on the Frobenius integrability theorem, for finding integrals of three-dimensional ordinary differential equations. None of these methods is extremely successful. In a similar programme, Dorizzi *et al* [@DGH] investigated three-dimensional Hamiltonian systems with quartic potentials that are even in $x$, $y$, and $z$. They applied a reduction method to obtain two new integrable systems and their constants of motion.
One might ask why we need first integrals. An integral defines an invariant manifold for the flow, which can be used to eliminate one degree of freedom. When the system admits an integral of motion, the analysis of its dynamical behaviour, especially in the $t\rightarrow\infty$ limit, is greatly simplified. As elucidated by Giacomini and Neukrich [@GN; @GN2], first integrals can be used in the non-integrable regimes to build generalized Lyapunov functions, obtain bounds on the chaotic attractors of three-dimensional vector fields and prove the absence of homoclinic orbits. Therefore computing a first integral is an important problem, but unfortunately the problem of finding a first integral is mathematically as hard as solving the original system. Indeed, exact first integrals are known only in special cases.
In this paper, we are interested in the integrability of the polynomial differential systems of $3$ dimensions. A polynomial system is said to be Darboux integrable if it possesses a first integral or an integrating factor given by Darboux polynomial [@Da]. In particular, Darboux showed (see for example [@CG]) that a polynomial system of degree $n$, with at least $n(n+1)/2+1$ invariant algebraic curves, has a first integral which can be expressed by means of these algebraic curves. Note that, the knowledge of algebraic curves can be used to study the topological properties of the system.
The goal of this paper is to obtain the first integrals of some polynomial three-dimensional ODE systems, namely the reduced three-wave interaction problem, the Rabinovich system, the Hindmarsh-Rose model and the Oregonator model, using Darboux polynomials. After deriving the first integrals, we shall further investigate the possible Hamiltonian formulations, bi-Hamiltonian representations and/or metriplectic realizations of these systems. We shall derive Poisson tensors and metric tensors for each system explicitly.
In order to achieve these goals, the paper is divided into two main sections. The following section is reserved for the theoretical background on the notions of integrability, Hamiltonian, Nambu-Poisson and metriplectic formulations in three-dimensional models. Theorem (\[1\]) in the first subsection plays the prominent role when determining first integrals using Darboux polynomials. After finding an integral of a system $\dot{\mathbf{x}}=X$, one starts to wonder whether the system is Hamiltonian. In three dimensions, a system admitting a first integral is Hamiltonian, bi-Hamiltonian and Nambu-Poisson if it is possible to find a Jacobi's last multiplier $M$ which makes $MX$ divergence free (cf. theorem (\[3\])). A dissipative system is not Hamiltonian, but it can be written in a metriplectic formulation, which is a combination of a Poisson system and a gradient system. The third section is devoted to the application of the techniques presented in Section $2$ to the particular models. For several subcases of the reduced three-wave interaction problem, for the Rabinovich system, and for subcases of the Hindmarsh-Rose model, first integrals will be constructed. A bi-Hamiltonian/Nambu metriplectic formulation of these systems will be exhibited. First integrals of the Oregonator model are established and the model is written as a Hamiltonian system.
Some Theory on 3D Polynomial Systems
====================================
Darboux’ Polynomials {#dp}
--------------------
A three dimensional polynomial ODE system is given by the set of equations $$\dot{x}=P(\mathbf{x}),\qquad\dot{y}=Q(\mathbf{x}),\qquad\dot{z}=R(\mathbf{x}),
\label{e1}$$ where $P,Q,R$ are real valued polynomials with real coefficients. Here, the boldface $\mathbf{x}$ stands for the three tuple $\left( x,y,z\right) $. The degree $m$ of a system is the maximum of degrees of the coefficient polynomials. The system (\[e1\]) defines a polynomial vector field $X=X\left( \mathbf{x}\right) $ by the identity $\mathbf{\dot{x}}=X\left(
\mathbf{x}\right) $.
A function $I=I(t,x,y,z)$ is the first integral if it is constant on any integral curve of the system, that is if the total derivative of $I$ with respect to $t$ vanishes on the solution curves. A second integral $g$ of a system $\mathbf{\dot{x}}=X\left( \mathbf{x}\right) $ is a function satisfying$$X(g)=\lambda g $$ for some function $\lambda$ called the cofactor. Polynomial second integrals for the polynomial vector fields are called the Darboux polynomials. The Darboux polynomials simplify the determination of possible first integrals [@Da]. For example, if there exist two relatively prime Darboux polynomials, say $P_{1}$ and $P_{2}$, having a common cofactor then their fraction $P_{1}/P_{2}$ is a rational first integral of the polynomial vector field $X$. The inverse of this statement is also true that is, if we have a rational first integral $P_{1}/P_{2}$ of a vector field $X$, then $P_{1}$ and $P_{2}$ are Darboux polynomials for $X$.
For the case of planar polynomial vector fields, there are stronger tools for the determination of first integrals. In [@PrSi83; @Si92], a semi-algorithm, called the Prelle-Singer method, is presented for the determination of elementary first integrals of planar systems. If we have a certain number of relatively prime irreducible Darboux polynomials, not necessarily having a common cofactor, it is possible to write first integrals using the Darboux polynomials [@Da; @DuLlAr06; @Ju79; @Si92]. Unfortunately, this algorithm is not applicable to non-planar systems. However, Darboux polynomials are still useful, though at times the use of a specific ansatz or a polynomial in one variable (of particular degree) with coefficients depending on the remaining variables remains the only option. One may at times use a variant of the Prelle-Singer/Darboux method to derive what are called quasi-rational first integrals [@Ma94]. Now, we state the following observation which enables one to arrive at a time-dependent first integral of a given system when it possesses autonomous Darboux polynomials.
\[1\] If the $g_{\alpha}$'s are Darboux polynomials, with cofactors $\lambda_{\alpha}$, for an autonomous system $\mathbf{\dot{x}}=X$ and there exist constants $n_{\alpha}$, not all zero, satisfying the equality $$\sum_{\alpha=1}^{k}n_{\alpha}\lambda_{\alpha}=r, \label{clue}$$ for some real number $r\in\mathbb{R}$, then the function $$I=e^{-rt}\prod_{\alpha=1}^{k}g_{\alpha}^{n_{\alpha}} \label{Int}$$ is a time-dependent first integral of the system $\mathbf{\dot{x}}=X$.
To prove this assertion, we compute the total derivative of the function $I$ given in (\[Int\]) as follows $$\begin{aligned}
\tilde{X}\left( I\right) & =\frac{\partial}{\partial t}\left(
e^{-rt}\prod_{\alpha=1}^{k}g_{\alpha}^{n_{\alpha}}\right) +e^{-rt}X\left(
\prod_{\alpha=1}^{k}g_{\alpha}^{n_{\alpha}}\right) \\
& =-re^{-rt}\prod_{\alpha=1}^{k}g_{\alpha}^{n_{\alpha}}+e^{-rt}\left(
\prod_{\alpha=1}^{k}g_{\alpha}^{n_{\alpha}-1}\right) \left( \sum_{\beta
=1}^{k}(n_{\beta}g_{1}...X(g_{\beta})...g_{k})\right) \\
& =-re^{-rt}\prod_{\alpha}g_{\alpha}^{n_{\alpha}}+e^{-rt}\left(
\prod_{\alpha=1}^{k}g_{\alpha}^{n_{\alpha}}\right) \left( \sum_{\beta=1}^{k}n_{\beta}\lambda_{\beta} \right) \\
& =-re^{-rt}\prod_{\alpha}g_{\alpha}^{n_{\alpha}}+re^{-rt}\prod_{\alpha
=1}^{k}g_{\alpha}^{n_{\alpha}}=0\end{aligned}$$ where in the first line we assumed that $g_{\alpha}$ is not explicitly time dependent, in the second line we applied the product rule, in the third line we used the fact that the $g_{\alpha}$'s are Darboux polynomials satisfying $X(g_{\alpha})=\lambda_{\alpha}g_{\alpha}$, and finally, in the last line, we applied the equality (\[clue\]).
To the best of our knowledge, in the literature, the case where $\sum_{\alpha
}n_{\alpha}\lambda_{\alpha}\neq r $ is still open.
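In practice, checking whether a given polynomial is a Darboux polynomial, and extracting its cofactor, is a short symbolic computation. The following sketch (names are ours) does this for the polynomial $g_{1}=y$ of the reduced three-wave field with $\delta=0$ treated in the examples below:

```python
import sympy as sp

x, y, z, gamma = sp.symbols("x y z gamma")

def darboux_cofactor(X, g, gens=(x, y, z)):
    """Return the cofactor lam with X(g) = lam*g if g is a Darboux
    polynomial of the polynomial vector field X, and None otherwise."""
    Xg = sp.expand(sum(Xi * sp.diff(g, v) for Xi, v in zip(X, gens)))
    q, rem = sp.div(Xg, g, *gens)
    return sp.simplify(q) if sp.simplify(rem) == 0 else None

# reduced three-wave field with delta = 0
X = (-2*y**2 + gamma*x + z, 2*x*y + gamma*y, -2*x*z - 2*z)
print(darboux_cofactor(X, y))       # -> 2*x + gamma
print(darboux_cofactor(X, x + y))   # -> None (not a Darboux polynomial)
```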
Poisson Systems in $3D$
-----------------------
Poisson bracket on an $n$-dimensional space is a binary operation $\{\bullet,\bullet\}$ on the space of real-valued smooth functions satisfying the Leibnitz and the Jacobi identities [@LaPi12; @LiMa12; @OLV; @wei83]. We define a Poisson bracket of two functions $F$ and $H$ by $$\left\{ F,H\right\} =\nabla F\cdot N\nabla H, \label{PB}$$ where $N$ is skew-symmetric Poisson matrix, $\nabla F$ and $\nabla H$ are gradients of $F$ and $H$, respectively. A Casimir function $C$ on a Poisson space is the one that commutes with all the other functions. In order to have a non-trivial Casimir function, the Poisson matrix $N$ must be degenerate. A system of ODEs is Hamiltonian if it can be written in the form of Hamilton’s equation$$\mathbf{\dot{x}}=\left\{ \mathbf{x},H\right\} =N\nabla H \label{HamEqn}$$ for $H$ being a real-valued function, called Hamiltonian function, $\{\bullet,\bullet\}$ being a Poisson bracket and $N$ being the Poisson matrix. A dynamical system is bi-Hamiltonian if it admits two different Hamiltonian structures $$\mathbf{\dot{x}}=N_{1}\nabla H_{2}=N_{2}\nabla H_{1}, \label{biHam}$$ with the requirement that the Poisson matrices $N_{1}$ and $N_{2}$ be compatible [@MaMo84; @OLV].
Space of three dimensional vectors and space of three by three skew-symmetric matrices are isomorphic. Existence of this isomorphism enables us to identify a three by three Poisson matrix $N$ with a three dimensional Poisson vector field $\mathbf{J}$ [@EsGhGu16; @Gum1]. In this case, the Hamilton’s equation takes the particular form$$\mathbf{\dot{x}}=\mathbf{J}\times\nabla H, \label{HamEq3}$$ whereas a bi-Hamiltonian system is in form $$\mathbf{\dot{x}}=\mathbf{J}_{1}\times\nabla H_{2}=\mathbf{J}_{2}\times\nabla
H_{1}. \label{bi-Ham}$$ and the Jacobi identity turns out to be$$\mathbf{J}\cdot(\nabla\times\mathbf{J})=0. \label{jcbv}$$ The following theorem establishes the form of the general solution of the Jacobi identity. For the proof of this theorem we refer to [@AGZ; @HB1; @HB2; @HB3].
General solution of the Jacobi identity (\[jcbv\]) is $$\mathbf{J}=\frac{1}{M}\nabla H_{1} \label{Nsoln}$$ for arbitrary functions $M$ called the Jacobi’s last multiplier, and $H_{1}$ called as the Casimir.
Existence of the scalar multiple $1/M$ in the solution is a manifestation of the conformal invariance of Jacobi identity. In the literature, $M$ is called Jacobi’s last multiplier [@Go01; @Jac1; @Jac2; @Wh88]. The potential function $H_{1}$ in Eq.(\[Nsoln\]) is a Casimir function of the Poisson vector field $\mathbf{J}$. Any other Casimir of $\mathbf{J}$ has to be linearly dependent to the potential function$\ H_{1}$ since the kernel is one dimensional. Substitution of the general solution (\[Nsoln\]) of $\mathbf{J}$ into the Hamilton’s equations (\[HamEq3\]) results with $$\dot{\mathbf{x}}=\frac{1}{M}\nabla H_{1}\times\nabla H_{2}. \label{x1}$$
While writing a non-autonomous system in the form of Hamilton's equations, inevitably, one of the two, the Poisson vector or the Hamiltonian function, must depend explicitly on the time variable $t$. The calculation $$\frac{d}{dt}H(\mathbf{x},t)=\nabla H(\mathbf{x},t)\cdot\dot{x}+\frac{\partial}{\partial t}H(\mathbf{x},t)=\nabla H\cdot(\mathbf{J}\times\nabla H)+\frac{\partial}{\partial t}H(\mathbf{x},t)=\frac{\partial}{\partial t}H(\mathbf{x},t),$$ shows that if the time parameter appears only in the Poisson vector, then the Hamiltonian is a constant of the motion, whereas if the time parameter appears in the Hamiltonian, then the Hamiltonian fails to be an integral invariant of the system.
Nambu-Poisson Systems in $3D$
-----------------------------
In [@Nambu], a ternary operation $\{\bullet,\bullet,\bullet\}$, called Nambu-Poisson bracket, is defined on the space of smooth functions satisfying the generalized Leibnitz identity $$\left\{ F_{1},F_{2},FH\right\} =\left\{ F_{1},F_{2},F\right\} H+F\left\{
F_{1},F_{2},H\right\} \label{GLI}$$ and the fundamental (or Takhtajan) identity $$\left\{ F_{1},F_{2},\{H_{1},H_{2},H_{3}\}\right\} =\sum_{k=1}^{3}\{H_{1},...,H_{k-1},\{F_{1},F_{2},H_{k}\},H_{k+1},...,H_{3}\}, \label{FI}$$ for arbitrary functions $F,F_{1},F_{2},H,H_{1},H_{2},H_{3}$, see [@Ta]. A dynamical system is called Nambu-Hamiltonian with Hamiltonian functions $H_{1}$ and $H_{2}$ if it can be recast as$$\mathbf{\dot{x}}=\left\{ \mathbf{x},H_{1},H_{2}\right\} . \label{NHamEqn}$$ By fixing the Hamiltonian functions $H_{1}$ and $H_{2}$, we can write the Nambu-Hamiltonian system (\[NHamEqn\]) in the bi-Hamiltonian form $$\mathbf{\dot{x}}=\left\{
\mathbf{x},H_{2}\right\} ^{H_{1}} \label{NH-2Ham}$$ where the Poisson brackets $\{\bullet,\bullet\}^{H_{2}}$ and $\{\bullet
,\bullet\}^{H_{1}}$ are defined by $$\left\{ F,H\right\} ^{{H}_{2}}=\left\{ F,H,H_{2}\right\} \qquad\left\{
F,H\right\} ^{{H}_{1}}=\left\{ F,H_{1},H\right\} , \label{Pois}$$ respectively [@Guha06].
In $3D$, we define a Nambu-Poisson bracket of three functions $F$, $H_{1}$ and $H_{2}$ as the triple product $$\left\{ F,H_{1},H_{2}\right\} =\frac{1}{M}\nabla F\cdot\nabla H_{1}\times\nabla H_{2} \label{NambuPois}$$ of their gradient vectors. Note that, the Hamilton’s equation (\[x1\]) is Nambu-Hamiltonian (\[NHamEqn\]) with the bracket (\[NambuPois\]) having the Hamiltonian functions $H_{1}$ and $H_{2}$ [@Guha06; @TeVe04]. If the function $F$ in (\[NambuPois\]) is taken as the coordinate functions, then it becomes the Lie-Poisson bracket on $\mathbb{R}^{3}$ of two functions $H_{1}$ and $H_{2}$ identified with $\left( \mathbb{R}^{3}\right) ^{\ast}$ using the dot product [@BlMoRa13], that is $$\left\{ H_{1},H_{2}\right\} _{LP}=\frac{1}{M}\mathbf{x}\cdot\nabla
H_{1}\times\nabla H_{2}.$$ The following theorem establishes the link between the existence of the Hamiltonian structure of a dynamical system and the existence of the Jacobi’s last multiplier. For the proof of the assertion we cite [@EsGhGu16; @Gao].
\[3\] A three-dimensional dynamical system $\dot{\mathbf{x}}=\mathbf{X}$ having a time-independent first integral is Hamiltonian, bi-Hamiltonian and hence Nambu-Hamiltonian if and only if there exists a Jacobi's last multiplier $M$ which makes $M\mathbf{X}$ divergence free.
Metriplectic Systems in $3D$
----------------------------
Let $G$ be a positive semi-definite symmetric matrix on a Euclidean space, and consider the symmetric bracket of two functions $$\left( F,S\right) =\nabla F\cdot G\nabla S.$$ In terms of this symmetric bracket, we define a metric or gradient system by$$\mathbf{\dot{x}}=\left( \mathbf{x},S\right) =G\nabla S.$$ The generating function, usually called the entropy, is not a conserved quantity for the system; instead we have $\dot{S}=\left( S,S\right) \geq0$, see [@Fi05].
The representation of a dynamical system as a metriplectic system requires two geometrical structures, namely a Poisson structure $N$ and a metric structure $G$. The metriplectic bracket is the sum of the two brackets $$\{\{F,E\}\}=\{F,E\}+\lambda\left( F,E\right) =\nabla F\cdot N\nabla E+\lambda\nabla F\cdot G\nabla E,$$ for any scalar $\lambda$. There are extensive studies on metriplectic systems; see, for example, [@BiBoPuTu07; @BlMoRa13; @Br88; @Fi05; @Gu07; @Mo84; @Mo86; @Ka84]. Metriplectic structures are also known under the name GENERIC [@GrOt97]. The metriplectic structure satisfies the Leibnitz identity for each entry, hence it is an example of a Leibnitz bracket [@OrPl04]. We refer to [@Mo09] for a brief history of metriplectic structures and more.
There are two types of metriplectic systems in the literature. One of them is governed by a so-called generalized free energy $F$, which is the difference of a Hamiltonian function $H$ and an entropy function $S$. In this case, we require that $\nabla S$ lives in the kernel of $N$ and $\nabla H$ lives in the kernel of $G$, that is $$N\nabla S=0,\text{ \ \ }G\nabla H=0. \label{mc1}$$ The equation of motion is given by $$\mathbf{\dot{x}}=\{\{\mathbf{x},F\}\}=\{\mathbf{x},F\}+\left( \mathbf{x},F\right) =\{\mathbf{x},H\}-\left( \mathbf{x},S\right) .$$ Note that, for the dynamics governed by the metriplectic bracket, we have the conservation law $\dot{H}=\{\{H,F\}\}=0$ and the dissipation $\dot{S}=\{\{S,F\}\}\leq0$. We note that a weaker version of the condition (\[mc1\]) can be given by $$N\nabla S+G\nabla H=0\text{.} \label{mc2}$$ The second type of metriplectic system is generated by a single function, say $H$, and written as $$\mathbf{\dot{x}}=\{\{\mathbf{x},H\}\}=\{\mathbf{x},H\}+\lambda\left(
\mathbf{x},H\right) \label{mp2nd}$$ without any restriction on $H$ as given in (\[mc1\]) or (\[mc2\]).
If the Hamiltonian (reversible) part of the dynamics can be written in terms of the Nambu-Poisson bracket, we may rewrite the system as $$\label{metri1}\mathbf{\dot{x}}=\{\mathbf{x},H_{1},H_{2}\}+\left(\mathbf{x},S\right) =\frac{1}{M}\nabla H_{1}\times\nabla H_{2}-G\nabla S,$$ where $M$ is the Jacobi's last multiplier [@Bi08]. In this case, one may take $S$ equal to $H_{1}$ or $H_{2}$.
Examples
========
Reduced three-wave interaction problem
--------------------------------------
The reduced three-wave interaction model [@Go01; @PR] is given by the system of ODEs $$\label{Rabi1}\begin{cases}
\dot{x}=-2y^{2}+\gamma x+z+\delta y\\
\dot{y}=2xy+\gamma y-\delta x\\
\dot{z}=-2xz-2z.
\end{cases}$$ where three quasisynchronous waves interact in a plasma with quadratic nonlinearities. In [@BR], this model was studied by means of the Painlevé method. In [@GRZ], the existence of first integrals for this and other systems was investigated by proposing an ansatz for the first integral which explicitly involves a pre-set dependence on a particular phase space coordinate. We show how their results can be obtained in a simpler manner using Darboux polynomials. Additionally, we present bi-Hamiltonian and metriplectic realizations of the model.
The three dimensional reduced three-wave interaction problem (\[Rabi1\]) has the following first integrals.
1. If $\delta=\hbox{ arbitrary}$, $\gamma=0$, then $I=e^{2t}z\big(y-\delta
/2\big).$
2. If $\delta=\hbox{ arbitrary}$, $\gamma=-1$, then $I=e^{2t}(x^{2}+y^{2}+z).$
3. If $\delta=\hbox{ arbitrary}$, $\gamma=-2$, then $I=e^{4t}(x^{2}+y^{2}+2/\delta\,yz).$
4. If $\delta=0$, $\gamma=\hbox{ arbitrary}$, then $I=e^{2-\gamma}yz.$
5. If $\delta=0$, $\gamma=-1$, then $I_{1}=e^{2t}(x^{2}+y^{2}+z),$ $I_{2}=e^{3t}yz.$
In order to prove this assertion, we recall the eigenvalue problem (\[dp\]) associated with the system (\[Rabi1\]) where $g$ is a second degree polynomial of the form $$g=Ax^{2}+By^{2}+Cz^{2}+Exy+Fxz+Gyz+Jx+Ky+Lz.$$ Equating coefficients then leads to the following set of equations $$\begin{aligned}
& A=B,\qquad E=F=C=0\label{c3}\\
&
\begin{cases}
2A\gamma-E\delta=\lambda A,\qquad2B\gamma+E\delta-2J=\lambda B,\\
F-4C=\lambda C,\qquad2A\delta+2E\gamma-2B\delta+2K=\lambda E,\\
2A+(\gamma-2)F-G\delta-2L=\lambda F,\qquad E+F\delta+(\gamma-2)G=\lambda G
\end{cases}
\label{c2}\\
& J\gamma-K\delta=\lambda J,\qquad J\delta+K\gamma=\lambda K,\qquad
J-2L=\lambda L. \label{c1}$$ for the third-order, the second-order, and the linear terms, respectively. We distinguish a number of cases following from the solutions of the system (\[c3\])-(\[c1\]) for specific parameter values. These cases will determine the integrals of the reduced systems by following the theorem (\[1\]). In the first three cases, $\delta$ is arbitrary, and we study three different values of $\gamma$, namely $0,-1$ and $-2$. For the remaining cases, wherein $\delta=0$, one can identify explicitly Darboux functions of the associated vector field, with associated eigenpolynomials which are not of degree zero.
### Case 1: $\delta$ is arbitrary and $\gamma=0$
The choices $\delta$ arbitrary and $\gamma=0$ reduce the system of equations (\[c3\])-(\[c1\]) to the following list $$A=B=C=E=F=K=J=0,\qquad L=-{\frac{\delta}{2}}G,\qquad\lambda=-2$$ where $G$ is an arbitrary constant. Additionally, by choosing $G=1$, we obtain the eigenfunction $$g=zy-{\frac{\delta}{2}z}.$$ The condition (\[clue\]) translates to the requirement $-r+n\lambda=0$. For $r=-1$, we have $n=1/2$, so that an integral of the motion equals $e^{t}(zy-{\frac{\delta}{2}z})^{\frac{1}{2}}$. As any function of a first integral is again a first integral, we write the integral as$$I=e^{2t}\left( zy-{\frac{\delta}{2}z}\right) . \label{G1}$$
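This computation can be verified symbolically; the following sketch checks that $g$ has cofactor $-2$ and that (\[G1\]) is indeed a first integral of (\[Rabi1\]) with $\gamma=0$:

```python
import sympy as sp

t, x, y, z, delta = sp.symbols("t x y z delta")
# the field (Rabi1) with gamma = 0
X = (-2*y**2 + z + delta*y, 2*x*y - delta*x, -2*x*z - 2*z)
g = z*y - delta*z/2
Xg = sum(Xi * sp.diff(g, v) for Xi, v in zip(X, (x, y, z)))
print(sp.simplify(Xg / g))          # -> -2, the cofactor lambda
I = sp.exp(2*t) * g                 # the first integral (G1)
dI = sp.diff(I, t) + sum(Xi * sp.diff(I, v) for Xi, v in zip(X, (x, y, z)))
print(sp.simplify(dI))              # -> 0
```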
We change the dependent variable $z$ by $w$ according to $w=e^{2t}z$. In this case, the system (\[Rabi1\]) turns out to be a non-autonomous system $$\label{case1Ham}\begin{cases}
\dot{x}=-2y^{2}+we^{-2t}+\delta y\\
\dot{y}=2xy-\delta x\\
\dot{w}=-2xw
\end{cases}$$ whereas the integral $I$ in (\[G1\]) becomes a time-independent Hamiltonian of the system given by $$H_{1}=wy-{\frac{\delta}{2}}w.$$ This reduced system is divergence free, hence, according to the theorem (\[3\]), it is bi-Hamiltonian (\[biHam\]) and Nambu-Poisson (\[NHamEqn\]). To exhibit these realizations, we need to introduce a second, time-dependent Hamiltonian function $$H_{2}=x^{2}+y^{2}+e^{-2t}w$$ of the system (\[case1Ham\]). Note that, since the system is divergence free, the Jacobi's last multiplier for the system is a constant function, say $M=1$. Thus the system takes the form of a cross product of two gradients$$\left( \dot{x},\dot{y},\dot{w}\right) ^{T}=\nabla H_{1}\times\nabla
H_{2}=\mathbf{J}_{1}\times\nabla H_{2}=\mathbf{J}_{2}\times\nabla H_{1}$$ in the form of bi-Hamiltonian and Nambu-Poisson forms (\[x1\]) with Poisson vector fields $\mathbf{J}_{1}=\nabla H_{1}$ and $\mathbf{J}_{2}=-\nabla H_{2}$, respectively. Since the first Hamiltonian is autonomous, the second one has to, evidently, be time dependent. Note that, this second time dependent Hamiltonian $H_{2}$ can not be observed as a consequence of the theorem (\[1\]), because it is not an integral invariant of the system.
At this point, we pause the case analysis and discuss the metriplectic structure of the system (\[Rabi1\]), inspired by the bi-Hamiltonian/Nambu formulation of its particular case (\[case1Ham\]). The proof of the following assertion is a matter of direct calculation.
\[rtwmp\] The reduced three-wave interaction problem (\[Rabi1\]) is in the bi-Hamiltonian/Nambu metriplectic formulation (\[metri1\]) given by $$\left( \dot{x},\dot{y},\dot{z}\right) ^{T}=\nabla H_{1}\times\nabla
H_{2}-G\nabla H_{2}. \label{Metriplectic1}$$ where the Hamiltonian functions are $H_{1}=zy-{\frac{\delta}{2}}z$, and $H_{2}=x^{2}+y^{2}+e^{-2t}z$, and the metric tensor is $$G=\begin{pmatrix}
-\gamma/2 & 0 & 0\\
0 & -\gamma/2 & 0\\
0 & 0 & 2ze^{2t}\end{pmatrix}
.$$
In (\[Metriplectic1\]), the metriplectic structure is of the second kind. Note that, by exchanging the roles of $H_{1}$ and $H_{2}$ in (\[Metriplectic1\]), up to some modifications in the definition of the metric, we may also generate the system (\[Rabi1\]) by the Hamiltonian $H_{1}$. This case will be presented in Case $5$.
### Case 2: $\delta$ is arbitrary and $\gamma=-1$
In the case $\delta$ is arbitrary and $\gamma=-1$, the system of equations (\[c3\])-(\[c1\]) becomes $$C=E=F=G=K=J=0,\qquad A=B=L,\qquad\lambda=-2$$ so that the eigenfunction becomes $A(x^{2}+y^{2}+z)$. Hence, the condition for $I$ to be a first integral, namely $-r+n\lambda=0$ implies $n={\frac{1}{2}}$ and $r=-1$. The corresponding first integral is then given by $$I=e^{2t}(x^{2}+y^{2}+z). \label{G2}$$
We make the change of dependent variables $$u=xe^{t},\text{ \ \ }v=ye^{t},\text{ \ \ }w=ze^{2t}$$ and rescale the time variable by $\bar{t}=e^{t}$, then we arrive the non-autonomous system $$\left\{
\begin{array}
[c]{c}\acute{u}=-2v^{2}+w+\delta v\bar{t}\\
\acute{v}=2uv-\delta u\bar{t}\\
\acute{w}=-2uw
\end{array}
\right. \label{Rtwip2b}$$ where prime denotes the derivative with respect to the new time variable $\bar{t}=e^{t}$. In this coordinates, the integral (\[G2\]) is autonomous $$H_{1}=u^{2}+v^{2}+w.$$ Note that, the system (\[Rtwip2b\]) is divergence free, hence we can take the Jacobi’s last multiplier $M$ as the unity. Hence, we argue that, there exist a second Hamiltonian which enables us to write the system (\[Rtwip2b\]) in bi-Hamiltonian/Nambu formulation. After a straight forward calculation, we arrive a non-autonomous Hamiltonian $$H_{2}=vw+\delta\frac{v^{2}}{2}\bar{t}-\delta\frac{u^{2}}{2}\bar{t}$$ which enables us to write the system (\[Rtwip2b\]) as a bi-Hamiltonian (\[biHam\]) and Nambu-Poisson (\[NHamEqn\]) system $$\left( \acute{u},\acute{v},\acute{w}\right) ^{T}=\nabla H_{1}\times\nabla
H_{2}=\mathbf{J}_{1}\times\nabla H_{2}=\mathbf{J}_{2}\times\nabla H_{1}$$ where the Poisson vectors are $\mathbf{J}_{1}=\nabla H_{1}$ and $\mathbf{J}_{2}=-\nabla H_{2}$, respectively.
### Case 3: $\delta$ is arbitrary and $\gamma=-2$
For the above choice of parameters $\delta$ is arbitrary and $\gamma=-2$, it may be verified that, the system of equations (\[c3\])-(\[c1\]) turn out to be $$C=E=F=J=K=L=0,\qquad A=B,\qquad G={\frac{2}{\delta}}A,\qquad\lambda=-4.$$ This leads to the eigenfunction $A(x^{2}+y^{2}+{\frac{2}{\delta}y}z)$, so that choosing $A=1$ we get the following first integral $$I=e^{4t}(x^{2}+y^{2}+{\frac{2}{\delta}y}z). \label{G3}$$ To arrive the Hamiltonian form of this system, we first make the substitutions $u=xe^{2t},$ $v=ye^{2t},$ $w=ze^{2t}$ which results with the non-autonomous divergence free system$$\left\{
\begin{array}
[c]{c}\dot{u}=-2v^{2}e^{-2t}+w+\delta v\\
\dot{v}=2uve^{-2t}-\delta u\\
\dot{z}=-2uwe^{-2t}\end{array}
\right. . \label{case3}$$ Actually, the system (\[case3\]) is a bi-Hamiltonian (\[biHam\]) and Nambu-Poisson (\[NHamEqn\]) system with the introductions of Hamiltonian functions $$H_{1}=\frac{\delta}{2}\left( u^{2}e^{-2t}+v^{2}e^{-2t}+w\right) \text{,
\ \ }H_{2}=u^{2}+v^{2}+{\frac{2}{\delta}vw,}$$ where the second Hamiltonian is the integral (\[G3\]).
### Case 4: $\delta=0$ and $\gamma$ is arbitrary
It is a straightforward matter to verify that the following functions $g_{\alpha}\;(\alpha=1,2)$ are Darboux polynomials whose associated eigenpolynomials $\lambda_{\alpha}$ are $$\label{gs}g_{1}=y,\;\lambda_{1}=2x+\gamma\text{, \ \ and \ \ }g_{2}=z,\;\lambda_{2}=-2x-2,$$ if $\delta=0$ and $\gamma$ is arbitrary. The condition (\[clue\]) now leads to $$0=-r+\sum_{\alpha}n_{\alpha}\lambda_{\alpha}\Rightarrow-r+n_{1}(2x+\gamma
)+n_{2}(-2x-2)=0.$$ Setting $r=-1$ we obtain the following equations: $$n_{1}-n_{2}=0,\;\;\gamma n_{1}-2n_{2}+1=0$$ leading to $n_{1}=n_{2}={\frac{1}{2-\gamma}}$. The corresponding first integral is $$I=e^{(2-\gamma)t}yz. \label{G4}$$ In order to exhibit the Hamiltonian formulation of the system, we define $u=xe^{-\gamma t},$ $v=ye^{-\gamma t},$ $w=ze^{2t}$ then we have a non-autonomous divergence free system $$\begin{cases}
\dot{u}=-2v^{2}e^{\gamma t}+we^{-(2+\gamma)t}\\
\dot{v}=2uve^{\gamma t}\\
\dot{z}=-2uwe^{\gamma t}\end{cases}$$ with the Hamiltonian $H_{2}=vw$. The bi-Hamiltonian (\[biHam\]) and Nambu-Poisson (\[NHamEqn\]) structure of the system can be realized after the introduction of the second (time dependent) Hamiltonian $$H_{1}=u^{2}e^{\gamma t}+v^{2}e^{\gamma t}+e^{-\left( 2+\gamma\right) t}w.$$
### Case 5: $\delta=0$ and $\gamma=-1$
For this case, in addition to $g_{1},g_{2}$ given in (\[gs\]), we have another Darboux polynomial $$g_{3}=x^{2}+y^{2}+z,\;\;\lambda_{3}=-2.$$ The condition (\[clue\]) becomes $$2(n_{1}-n_{2})-(n_{1}+2n_{2}+2n_{3})=r.$$ We make the standardization $r=-1$ and obtain the following set of equations $$n_{1}=n_{2}\;\mbox{and}\;n_{1}+2n_{2}+2n_{3}=1$$ or, in other words, $3n_{1}+2n_{3}=1$ which leads to the following subcases: (a) $n_{3}=0\;$and$\;n_{1}=n_{2}={\frac{1}{3}}$, and (b) $n_{1}=n_{2}=0\;$and$\;n_{3}={\frac{1}{2}}$. So that, we have two time dependent integrals of the motion $$I_{1}(x,y,z)=e^{t}(yz)^{{\frac{1}{3}}}\text{ and \ }I_{2}(x,y,z)=e^{t}(x^{2}+y^{2}+z)^{{\frac{1}{2}}}. \label{Int12}$$ We make the change of variables $u=xe^{t}$, $v=ye^{t}$, and $w=ze^{2t}$ and rescale the time variable by $\bar{t}=e^{t}$, then arrive the autonomous system $$\label{systemp}\begin{cases}
\acute{u}=-2v^{2}+w\\
\acute{v}=2uv\\
\acute{w}=-2uw
\end{cases}$$ where prime denotes the derivative with respect to the new time variable $\bar{t}=e^{t}$. Note that, this system is divergence free, hence we can take the Jacobi’s last multiplier as the unity. In the new coordinate system, the integrals of the system (\[Int12\]) become the Hamiltonian functions of the system given by $$H_{1}=vw\text{, \ \ }H_{2}=u^{2}+v^{2}+w. \label{Ham5}$$ This enables us to write the system (\[systemp\]) in bi-Hamiltonian (\[biHam\]) and Nambu-Poisson (\[NHamEqn\]) form.
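A quick symbolic check (sketch only) confirms that (\[systemp\]) is divergence free and coincides with the cross product $\nabla H_{1}\times\nabla H_{2}$ of the Hamiltonians (\[Ham5\]):

```python
import sympy as sp

u, v, w = sp.symbols("u v w")
H1, H2 = v*w, u**2 + v**2 + w
grad = lambda F: sp.Matrix([sp.diff(F, s) for s in (u, v, w)])
rhs = grad(H1).cross(grad(H2))
print(sp.simplify(rhs.T))        # -> [w - 2*v**2, 2*u*v, -2*u*w], i.e. (systemp)
X = sp.Matrix([-2*v**2 + w, 2*u*v, -2*u*w])
print(sum(sp.diff(X[i], s) for i, s in enumerate((u, v, w))))  # divergence -> 0
```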
Note that, as a particular case of the proposition (\[rtwmp\]), we show how the reduced three-wave interaction model (\[Rabi1\]) with $\delta
=0$ and $\gamma=-1$ given by $$\begin{pmatrix}
\dot{x}\\
\dot{y}\\
\dot{z}\end{pmatrix}
=\begin{pmatrix}
-2y^{2}+z\\
2xy\\
-2xz
\end{pmatrix}
+\begin{pmatrix}
-x\\
-y\\
-2z
\end{pmatrix}
\label{rtwipmetri}$$ can be put in a metriplectic realization of the second kind (\[mp2nd\]). Note that, the first term at the right hand side is the conservative part of the system with two Hamiltonian functions $H_{1}=yz$ and $H_{2}=x^{2}+y^{2}+z
$ inspired from the ones in (\[Ham5\]). This enables us to write the system (\[rtwipmetri\]) in two different ways. In the first one, we take $H_{2}$ as the Casimir function of the system and $H_{1}$ as the Hamiltonian system with Poisson vector $\mathbf{J}_{2}=-\nabla H_{2}$. Hence, the second term on the right hand side can be described by a dissipative term by taking the metric two-form as $$G=\begin{pmatrix}
0 & \frac{x}{z} & 0\\
\frac{x}{z} & 0 & 1\\
0 & 1 & \frac{z}{y}\end{pmatrix}$$ where $\lambda=-1$. In this case the reduced three wave interaction model (\[rtwipmetri\]) can be written as $$\mathbf{\dot{x}}=\mathbf{J}_{2}\times\mathbf{\nabla}H_{1}-G\mathbf{\nabla
}H_{1}.$$
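This metriplectic splitting can be verified directly; the following sketch reproduces the right-hand side of (\[rtwipmetri\]) from $\mathbf{J}_{2}$, $H_{1}$ and the metric $G$ above:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
H1, H2 = y*z, x**2 + y**2 + z
grad = lambda F: sp.Matrix([sp.diff(F, s) for s in (x, y, z)])
J2 = -grad(H2)
G = sp.Matrix([[0, x/z, 0], [x/z, 0, 1], [0, 1, z/y]])
rhs = J2.cross(grad(H1)) - G*grad(H1)
print(sp.simplify(rhs.T))
# -> [-2*y**2 + z - x, 2*x*y - y, -2*x*z - 2*z], the system (rtwipmetri)
```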
Rabinovich system
-----------------
This is described by the following system of equations: $$\label{Rabi2}\begin{cases}
\dot{x}=hy-\nu_{1}x+yz\\
\dot{y}=hx-\nu_{2}y-xz\\
\dot{z}=-\nu_{3}z+xy,
\end{cases}$$ where $h$ and $\nu_{i}$ are real constants. We shall very briefly illustrate how the results of [@GRZ] for this system may be derived by the Darboux integrability method. In addition, we will show that the Rabinovich system can be written as a bi-Hamiltonian/Nambu metriplectic form.
Consider the vector field $X$ generating the Rabinovich system (\[Rabi2\]). We note that application of $X$ to the function $g_{1}=y^{2}+z^{2}$ yields $$X\left( g_{1}\right) =2hxy-2(\nu_{2}y^{2}+\nu_{3}z^{2}).$$ Consequently $g_{1}$ becomes a Darboux polynomial when $h=0,\nu_{2}=\nu_{3} $. In this case, the eigenpolynomial being of degree zero *viz* $\lambda=-2\nu_{3}$. We are lead to the first integral$$\label{G6}I_{1}=e^{2\nu_{3}t}(y^{2}+z^{2})$$ of the system (\[Rabi2\]) when $h=0$, $\nu_{2}=\nu_{3}$ with $\nu_{1}
$ and $\nu_{3}$ being arbitrary. The application of the vector field $X$ generating the Rabinovich system (\[Rabi2\]) on the polynomial $g_{2}=x^{2}+y^{2}$ results with $$X\left( g_{2}\right) =4hxy-2(\nu_{1}x^{2}+\nu_{2}y^{2}).$$ Consequently, $g_{2}$ becomes a Darboux polynomial when $h=0,\nu_{1}=\nu_{2}
$. In this case, the eigenpolynomial being of degree zero *viz* $\lambda=-2\nu_{1}$. We are lead to the first integral$$\label{G7}I_{2}=e^{2\nu_{1}t}(x^{2}+y^{2})$$ of the system (\[Rabi2\]) when $h=0$, $\nu_{1}=\nu_{2}$ with $\nu_{1}
$,$\nu_{3}$ being arbitrary.
Let us transform the Rabinovich system (\[Rabi2\]) in a form where we can write it as a bi-Hamiltonian/Nambu system. For the case of $\nu_{1}=\nu
_{2}=v_{3}=v$, we have two integrals $I_{1}$ and $I_{2}$ of the system (\[Rabi2\]). In this case, we apply a coordinate change $$u=xe^{vt},v=ye^{vt},\text{ \ \ }w=ze^{vt}$$ with the time rescaling $\bar{t}=\frac{1}{v}e^{vt}$ with $v\neq0$, then the system turns out to be a divergence free system $$\acute{u}=vw,\text{ \ \ }\acute{v}=-uw,\text{ \ \ }\acute{w}=uv.
\label{DivfreeRabi}$$ In this case the integrals of motion given in (\[G6\]) and (\[G7\]) become the Hamiltonian functions of the system, namely $$H_{1}=\frac{1}{2}(v^{2}+w^{2}),\text{ \ \ }H_{2}=\frac{1}{2}(u^{2}+v^{2}).$$ Hence we can write (\[DivfreeRabi\]) in the bi-Hamiltonian (\[biHam\]) and Nambu-Poisson (\[NHamEqn\]) form $$\left( \acute{u},\acute{v},\acute{w}\right) ^{T}=\nabla H_{2}\times\nabla H_{1} \label{Rabinovich2}$$ with the Jacobi's last multiplier being the unity, see also [@CrPu07]. For another discussion of the case where $h$ is nonzero and $\nu_{1}=\nu_{2}=\nu_{3}=0$, we refer to [@Tu12].
In the following proposition, inspired by the bi-Hamiltonian/Nambu form (\[Rabinovich2\]) of the transformed system (\[DivfreeRabi\]), we exhibit a metriplectic realization of the Rabinovich system (\[Rabi2\]).
\[rabi\] The Rabinovich system (\[Rabi2\]) admits the bi-Hamiltonian/Nambu metriplectic formulation (\[metri1\]) given by $$\left( \dot{x},\dot{y},\dot{z}\right) ^{T}=\nabla H_{1}\times\nabla
H_{2}-G\nabla H_{1}. \label{metrirabi}$$ where the Hamiltonian functions are $H_{1}=\frac{1}{2}(x^{2}+y^{2})$, and $H_{2}=\frac{1}{2}(y^{2}+z^{2})$, and the metric tensor is $$G=\begin{pmatrix}
\nu_{1} & -h & 0\\
-h & \nu_{2} & \frac{z\nu_{3}}{y}\\
0 & \frac{z\nu_{3}}{y} & 0
\end{pmatrix}
.$$
The metriplectic formulation (\[metrirabi\]) of the Rabinovich system (\[Rabi2\]) is of the second kind. As in the case of the reduced three-wave interaction problem, one may generate (\[Rabi2\]) by the Hamiltonian $H_{2}$ instead of $H_{1}$ by adopting a new metric.
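The identity stated in Proposition \[rabi\] can also be confirmed symbolically. The sketch below (assuming SymPy is available) computes the residual between the Rabinovich vector field and the metriplectic right-hand side; it is included purely as an illustration of how such formulations may be checked.

```python
import sympy as sp

x, y, z, h, nu1, nu2, nu3 = sp.symbols('x y z h nu1 nu2 nu3')

# Rabinovich vector field (right-hand side of (Rabi2))
f = sp.Matrix([h*y - nu1*x + y*z,
               h*x - nu2*y - x*z,
               -nu3*z + x*y])

# Hamiltonian functions and metric tensor of Proposition [rabi]
H1 = (x**2 + y**2) / 2
H2 = (y**2 + z**2) / 2
G = sp.Matrix([[nu1, -h,       0],
               [-h,  nu2,      z*nu3/y],
               [0,   z*nu3/y,  0]])

grad = lambda H: sp.Matrix([sp.diff(H, v) for v in (x, y, z)])

# metriplectic right-hand side: grad(H1) x grad(H2) - G grad(H1)
residual = f - (grad(H1).cross(grad(H2)) - G*grad(H1))
print(residual.applyfunc(sp.simplify))  # expected output: the zero vector
```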
Hindmarsh-Rose model
--------------------
The Hindmarsh-Rose model of the action potential, which is a modification of the FitzHugh model, was proposed as a mathematical representation of the bursting behaviour of neurones, and was expected to simulate the repetitive, patterned and irregular activity seen in molluscan neurones [@HiRo84]. The Hindmarsh-Rose model consists of a system of three autonomous differential equations with mild nonlinearities, modelling neurons that exhibit triggered firing. The usual form of the equations is $$\begin{cases}
\dot{x}=y+\phi(x)-z-C\\
\dot{y}=\psi(x)-y\\
\dot{z}=r(s(x-x_{R})-z)
\end{cases}
\label{HR1}$$ where $\phi(x)=ax^{2}-x^{3}$ and $\psi(x)=1-bx^{2}$. Here $C$ is a control parameter, while, among the remaining five parameters, $s$ and $x_{R}$ are usually fixed. We rewrite the equations in the following form, appending two extra parameters: $$\begin{cases}
\dot{x}=y-z-ax^{3}+bx^{2}+\alpha\\
\dot{y}=\beta-dx^{2}-y\\
\dot{z}=px-rz-\gamma
\end{cases}
\label{HR2}$$ Here $\alpha,\beta,\gamma,a,b,d,p,r$ are parameters. Unfortunately, we have not found a first integral when $a\neq0$, that is, when the dominant nonlinear term $ax^{3}$ is present.
The reduced Hindmarsh-Rose system $$\begin{cases}
\dot{x}=y-z+bx^{2}+\alpha\\
\dot{y}=\beta-dx^{2}-y\\
\dot{z}=px-rz-\gamma
\end{cases}
\label{HR3}$$ has the following first integrals.
1. If $p=0$, then the first integral of the system (\[HR3\]) is $I=e^{rt}(rz+\gamma).$
2. If $d=0$ then $I=e^{t}(y-\beta).$
3. If $d,\beta,\gamma$ are arbitrary, $b=-d,p=-2,\alpha=\beta+\gamma$ and $r=1$, then $I=e^{2t}(x-y+z).$
4. If $\alpha,\gamma,p$ and $b$ are arbitrary, and $d=2b$, $r=-(p+1)$, $\beta=2(\frac{\gamma}{p}-\alpha)$, then $I=e^{-t}(2x+y+\frac
{2z}{p}).$
5. If $\beta,\gamma,r,b,d$ are arbitrary, and$$\alpha=-\frac{b(\gamma d+\beta d-b\beta+r\beta b)}{d(d-b+br)}\qquad
\text{and}\qquad p=\frac{(b-d)(d-b+br)}{b^{2}}$$ then the first integral becomes $$I=e^{\frac{2(b-d)}{b}}(Ax^{2}+By^{2}+Cz^{2}+Exy+Fxz+Gyz)$$ where the coefficients of the polynomial are given by $$\begin{aligned}
A & =-\frac{(b-d)(d-b+br)}{b(-d+2b+br)},\qquad B=-\frac{b(b-d)(d-b+br)}{d^{2}(-d+2b+br)},\\
C & =-\frac{b(b-d)}{(d-b+br)(-d+2b+br)},\qquad E=-2\frac{(b-d)(d-b+br)}{d(-d+2b+br)},\\
F & =2\frac{b-d}{-d+2b+br},\qquad G=2\frac{b(b-d)}{d(-d+2b+br)}.\end{aligned}$$
6. If $p=0$, $b=d$, $\beta,\gamma,r$ are arbitrary and $\alpha=-\frac{\beta r+\gamma}{r}$, then $I=rx+ry-z.$ When, additionally, $r=-1$, then $I=x+y+z.$
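Each of these assertions can be checked by direct differentiation; as an illustration, the following sketch (assuming SymPy is available, and given only as a sanity check) verifies cases 3 and 6 symbolically.

```python
import sympy as sp

t, alpha, beta, gamma, b, d, p, r = sp.symbols('t alpha beta gamma b d p r')
x, y, z = (sp.Function(s)(t) for s in ('x', 'y', 'z'))

# Right-hand side of the reduced Hindmarsh-Rose system (HR3)
rhs = {x: y - z + b*x**2 + alpha,
       y: beta - d*x**2 - y,
       z: p*x - r*z - gamma}

def dI_dt(I):
    # total time derivative of I along the flow of (HR3)
    d = sp.diff(I, t)
    for var, f in rhs.items():
        d = d.subs(sp.Derivative(var, t), f)
    return d

# Case 3: b = -d, p = -2, alpha = beta + gamma, r = 1
I3 = sp.exp(2*t)*(x - y + z)
print(sp.simplify(dI_dt(I3).subs({b: -d, p: -2, alpha: beta + gamma, r: 1})))  # 0

# Case 6: p = 0, b = d, alpha = -(beta*r + gamma)/r
I6 = r*x + r*y - z
print(sp.simplify(dI_dt(I6).subs({p: 0, b: d, alpha: -(beta*r + gamma)/r})))  # 0
```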
The remaining cases are proved in the same way, by taking the total time derivative of each integral and verifying that it vanishes identically. Starting from the integrals presented in the previous proposition, we now write the Hindmarsh-Rose model (\[HR2\]) in a metriplectic form of the second kind in the following proposition.
\[metriHR\] The Hindmarsh-Rose model (\[HR2\]) (with $r=-1$ and $\alpha=\gamma-\beta$) admits the bi-Hamiltonian/Nambu metriplectic formulation (\[metri1\]) given by $$\left( \dot{x},\dot{y},\dot{z}\right) ^{T}=\nabla H_{1}\times\nabla
H_{2}-G\nabla H_{1}.$$ where the Hamiltonian functions are $H_{1}=x+y+z$, and $H_{2}=yz-\gamma
y-\beta z$, and the metric tensor is $$G=\begin{pmatrix}
ax^{3}-bx^{2} & 0 & 0\\
0 & dx^{2} & 0\\
0 & 0 & -px
\end{pmatrix}
.$$
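As for the Rabinovich system, the identity asserted in the proposition can be confirmed symbolically; the sketch below (assuming SymPy is available) checks it with $r=-1$ and $\alpha=\gamma-\beta$.

```python
import sympy as sp

x, y, z, a, b, d, p, beta, gamma = sp.symbols('x y z a b d p beta gamma')

# Hindmarsh-Rose model (HR2) with r = -1 and alpha = gamma - beta
alpha, r = gamma - beta, -1
f = sp.Matrix([y - z - a*x**3 + b*x**2 + alpha,
               beta - d*x**2 - y,
               p*x - r*z - gamma])

# Hamiltonian functions and metric tensor of Proposition [metriHR]
H1 = x + y + z
H2 = y*z - gamma*y - beta*z
G = sp.diag(a*x**3 - b*x**2, d*x**2, -p*x)

grad = lambda H: sp.Matrix([sp.diff(H, v) for v in (x, y, z)])
residual = f - (grad(H1).cross(grad(H2)) - G*grad(H1))
print(residual.applyfunc(sp.expand))  # expected output: the zero vector
```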
Oregonator model
----------------
The Oregonator model was developed by Field and Noyes [@FN] to illustrate the mechanism of the Belousov-Zhabotinsky oscillatory reaction. The model can be expressed in terms of three coupled ordinary differential equations $$\left\{
\begin{array}
[c]{c}\dot{x}={\frac{1}{\epsilon}}(x+y-qx^{2}-xy)\\
\dot{y}=-y+2hz-xy\\
\dot{z}={\frac{1}{p}}(x-z).
\end{array}
\right. \label{Ore1}$$ that describe the complex dynamics of the reaction process. In the physical model considered, all the parameters $\epsilon,q,p,h$ are positive. However, from a purely mathematical point of view, allowing the parameters to be negative, we have obtained a first integral $$I=e^{2t}(x+y+z),$$ for the parameter values $q=0$, $\epsilon=p=-1$ and $h=-{\frac{3}{2}}$, as may be easily verified.
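A short symbolic sketch of this verification (assuming SymPy is available):

```python
import sympy as sp

t = sp.symbols('t')
x, y, z = (sp.Function(s)(t) for s in ('x', 'y', 'z'))

# Oregonator (Ore1) with q = 0, epsilon = p = -1, h = -3/2
eps, q, p, h = -1, 0, -1, sp.Rational(-3, 2)
rhs = {x: (x + y - q*x**2 - x*y)/eps,
       y: -y + 2*h*z - x*y,
       z: (x - z)/p}

I = sp.exp(2*t)*(x + y + z)
dI = sp.diff(I, t)
for var, f in rhs.items():
    dI = dI.subs(sp.Derivative(var, t), f)
print(sp.simplify(dI))  # expected output: 0
```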
We now write the Oregonator model in Hamiltonian form as follows. First, we change the coordinates according to $$u=xe^{2t},\text{ \ \ }v=ye^{2t},\text{ \ \ }w=ze^{2t},$$ which enables us to write the system (\[Ore1\]) in the following nonautonomous form $$\begin{aligned}
\dot{u} & =u-v+uve^{-2t}\nonumber\\
\dot{v} & =v-3w-uve^{-2t}\nonumber\\
\dot{w} & =3w-u\end{aligned}$$ with a time-independent first integral $H=u+v+w$. Then we introduce the non-autonomous Poisson matrix $$N=\begin{pmatrix}
0 & uve^{-2t}-v & u\\
v-uve^{-2t} & 0 & -3w\\
-u & 3w & 0
\end{pmatrix}$$ so that the transformed system takes the form of Hamilton’s equation (\[HamEqn\]), namely $\mathbf{\dot{u}}=N\nabla H$ with $\mathbf{u}=(u,v,w)^{T}$.
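One can check directly that $N$ is antisymmetric and that $N\nabla H$ reproduces the transformed vector field; a short sketch of this check (assuming SymPy is available) is given below.

```python
import sympy as sp

t, u, v, w = sp.symbols('t u v w')

# transformed (nonautonomous) Oregonator vector field
f = sp.Matrix([u - v + u*v*sp.exp(-2*t),
               v - 3*w - u*v*sp.exp(-2*t),
               3*w - u])

H = u + v + w
N = sp.Matrix([[0,                    u*v*sp.exp(-2*t) - v,  u],
               [v - u*v*sp.exp(-2*t), 0,                    -3*w],
               [-u,                   3*w,                   0]])

gradH = sp.Matrix([sp.diff(H, s) for s in (u, v, w)])
print((f - N*gradH).applyfunc(sp.expand))  # expected: zero vector
print((N + N.T).applyfunc(sp.expand))      # antisymmetry of N: zero matrix
```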
Conclusions
===========
In this paper, we have reviewed some technical details of the integrability and Hamiltonian representations of $3D$ systems. We have then applied these theoretical results, especially Darboux polynomials, to derive first integrals of the $3D$ polynomial systems considered: the reduced three-wave interaction problem, the Rabinovich system, the Hindmarsh-Rose model and the Oregonator model. Finally, we have exhibited Hamiltonian and metriplectic realizations of these systems.
[99]{} Ay, A., Gürses, M., & Zheltukhin, K. (2003). Hamiltonian equations in $\mathbb{R}^{3}$, J.Math. Phys. 44(12) 5688-5705.
Birtea, P., Boleantu, M., Puta, M., & Tudoran, R. M. (2007). Asymptotic stability for a class of metriplectic systems. Journal of Mathematical Physics, 48(8), 2703.
Bihlo, A. (2008). Rayleigh–Bénard convection as a Nambu-metriplectic problem. Journal of Physics A: Mathematical and Theoretical, 41(29), 292001.
Bloch, A. M., Morrison, P. J., & Ratiu, T. S. (2013). Gradient flows in the normal and Kähler metrics and triple bracket generated metriplectic systems. In Recent Trends in Dynamical Systems (pp. 371-415). Springer Basel.
Brockett, R. (1991). Dynamical systems that sort lists, solve linear programming problems and diagonalize symmetric matrices. In Proc. 1988 IEEE Conference on Decision and Control, Linear Algebra Appl (Vol. 146, pp. 79-91).
Bountis, T. C., Ramani, A., Grammaticos, B., & Dorizzi, B. (1984). On the complete and partial integrability of non-Hamiltonian systems. Physica A: Statistical Mechanics and its Applications, 128(1-2), 268-288.
Casati, P., Magri, F., & Pedroni, M. (1993). Bihamiltonian manifolds and Sato’s equations. In Integrable Systems (pp. 251-272). Birkhäuser Boston.
Chandrasekar, V. K., Senthilvelan, M., & Lakshmanan, M. (2009, February). On the complete integrability and linearization of nonlinear ordinary differential equations. III. Coupled first-order equations. In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences (Vol. 465, No. 2102, pp. 585-608). The Royal Society.
Chavarriga, J., & Grau, M. (2003). Some open problems related to 16b Hilbert problem. Scientia Series A: Mathematical Sciences, 9, 1-26.
Chiş, O., & Puta, M. (2008). The dynamics of Rabinovich system. Differential Geometry–Dynamical Systems.
Darboux, G. (1878). Mémoire sur les équations différentielles algébriques du premier ordre et du premier degré. Bull. Sci. Math. 2(1) 151–200.
Dorizzi, B., Grammaticos, B., Hietarinta, J., Ramani, A., & Schwarz, F. (1986). New integrable three-dimensional quartic potentials. Physics Letters A, 116(9), 432-436.
Dumortier, F., Llibre, J., & Artés, J. C. (2006). Qualitative theory of planar differential systems. Berlin: Springer.
Esen, O., Ghose Choudhury, A., & Guha, P. (2015). Bi-Hamiltonian Structures of Chaotic Dynamical Systems in 3D. arXiv preprint arXiv:1511.06899.
Field, R. J., & Noyes, R. M. (1974). Oscillations in chemical systems. IV. Limit cycle behavior in a model of a real chemical reaction. The Journal of Chemical Physics, 60(5), 1877-1884.
Fish, D. J. (2005). Metriplectic systems. PhD Thesis. Portland State University.
Gao, P. (2000). Hamiltonian structure and first integrals for the Lotka–Volterra systems. Physics Letters A, 273(1), 85-96.
Giacomini, H. J., Repetto, C. E., & Zandron, O. P. (1991). Integrals of motion for three-dimensional non-Hamiltonian dynamical systems. Journal of Physics A: Mathematical and General, 24(19), 4567-4574.
Gonera, C., & Nutku, Y. (2001). Super-integrable Calogero-type systems admit maximal number of Poisson structures. Physics Letters A, 285(5), 301-306.
Giacomini, H., & Neukirch, S. (1997). Integrals of motion and the shape of the attractor for the Lorenz model. Physics Letters A, 227(5), 309-318.
Giacomini, H., & Neukirch, S. (1997). Number of limit cycles of the Liénard equation. Phys. Rev. E (3), 56 (4), 3809-3813
Ghose Choudhury, A., Guha, P., & Khanra, B. (2009). On the Jacobi last multiplier, integrating factors and the Lagrangian formulation of differential equations of the Painlevé–Gambier classification. Journal of Mathematical Analysis and Applications, 360(2), 651-664.
Goriely, A. (2001). Integrability and nonintegrability of dynamical systems (Vol. 19). World Scientific.
Grammaticos, B., Moulin-Ollagnier, J., Ramani, A., Strelcyn, J. M., & Wojciechowski, S. (1990). Integrals of quadratic ordinary differential equations in $R^3$: the Lotka-Volterra system. Physica A: Statistical Mechanics and its Applications, 163(2), 683-722.
Grmela, M., & Öttinger, H. C. (1997). Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E, 56(6), 6620–6632.
Guha, P. (2007). Metriplectic structure, Leibniz dynamics and dissipative systems. Journal of Mathematical Analysis and Applications, 326(1), 121-136.
Guha, P. (2006). Quadratic Poisson structures and Nambu mechanics. Nonlinear Analysis: Theory, Methods & Applications, 65(11), 2025-2034.
Gümral, H. (2010). Existence of Hamiltonian Structure in 3D. Advances in Dynamical Systems and Applications, 5(2), 159-171.
Gümral, H., & Nutku, Y. (1993). Poisson structure of dynamical systems with three degrees of freedom. Journal of Mathematical Physics, 34(12), 5691-5723.
Guha, P., & Choudhury, A. G. (2010). On Planar and Non-planar Isochronous Systems and Poisson Structures. International Journal of Geometric Methods in Modern Physics, 7(07), 1115-1131.
Hernandez-Bermejo, B. (2001). New solutions of the Jacobi equations for three-dimensional Poisson structures. Journal of Mathematical Physics, 42(10), 4984-4996.
Hernández-Bermejo, B. (2001). One solution of the 3D Jacobi identities allows determining an infinity of them. Physics Letters A, 287(5), 371-378.
Hernández-Bermejo, B. (2007). New solution family of the Jacobi equations: Characterization, invariants, and global Darboux analysis. Journal of mathematical physics, 48(2), 022903.
Hirsch, M. W., Smale, S., & Devaney, R. L. (2012). Differential equations, dynamical systems, and an introduction to chaos. Academic press.
Hindmarsh, J. L., & Rose, R. M. (1984). A model of neuronal bursting using three coupled first order differential equations. Proceedings of the Royal Society of London B: Biological Sciences, 221(1222), 87-102.
Hietarinta, J. (1987). Direct methods for the search of the second invariant. Physics Reports, 147(2), 87-154.
Jacobi, C. G. J. (1844). Sul principio dell’ultimo moltiplicatore, e suo uso come nuovo principio generale di meccanica. Giornale Arcadico di Scienze, Lettere ed Arti 99, 129-146.
Jacobi, C. G. J. (1844). Theoria novi multiplicatoris systemati aequationum differentialium vulgarium applicandi. Journal für die reine und angewandte Mathematik, 27, 199-268.
Jouanolou, J. P. (1979). Équations de Pfaff algébriques. Lecture Notes in Mathematics, 708, Springer, Berlin. v+255 pp.
Laurent-Gengoux, C., Pichereau, A., & Vanhaecke, P. (2012). Poisson structures (Vol. 347). Springer Science & Business Media.
Libermann, P., & Marle, C. M. (2012). Symplectic geometry and analytical mechanics (Vol. 35). Springer Science & Business Media.
Kaufman, A. N. (1984). Dissipative Hamiltonian systems: a unifying principle. Physics Letters A, 100(8), 419-422.
Man, Y. K. (1994). First integrals of autonomous systems of differential equations and the Prelle-Singer procedure. Journal of Physics A: Mathematical and General, 27(10), L329.
Magri, F., & Morosi, C. (2008). A geometrical characterization of Hamiltonian systems through the theory of Poisson-Nijenhuis manifolds, Quaderno 19-1984, Univ. of Milan.
Man, Y. K., & MacCallum, M. A. (1997). A rational approach to the Prelle–Singer algorithm. Journal of Symbolic Computation, 24(1), 31-43.
Morrison, P. J. (1986). A paradigm for joined Hamiltonian and dissipative systems. Physica D: Nonlinear Phenomena, 18(1-3), 410-419.
Morrison, P. J. (2009). Thoughts on brackets and dissipation: old and new. In Journal of Physics: Conference Series (Vol. 169, No. 1, p. 012006). IOP Publishing.
Morrison, P. J. (1984). Bracket formulation for irreversible classical fields. Physics Letters A, 100(8), 423-427.
Nambu, Y. (1973). Generalized hamiltonian dynamics. Physical Review D, 7(8), 2405.
Olver, P. J. (2000). Applications of Lie groups to differential equations (Vol. 107). Springer Science & Business Media.
Ortega, J. P., & Planas-Bielsa, V. (2004). Dynamics on Leibniz manifolds. Journal of Geometry and Physics, 52(1), 1-27.
Pikovskii, A. S., & Rabinovich, M. I. (1981). Stochastic behavior of dissipative systems. Sov. Sci. Rev. C: Math. Phys. Rev., 2, 165-208.
Prelle, M. J., & Singer, M. F. (1983). Elementary first integrals of differential equations. Transactions of the American Mathematical Society, 279(1), 215-229.
Singer, M. F. (1992). Liouvillian first integrals of differential equations. Transactions of the American Mathematical Society, 333(2), 673-688.
Takhtajan, L. (1994). On foundation of the generalized Nambu mechanics. Communications in Mathematical Physics, 160(2), 295-315.
Teğmen, A., & Verçin, A. (2004). Superintegrable systems, multi-Hamiltonian structures and Nambu mechanics in an arbitrary dimension. International Journal of Modern Physics A, 19(03), 393-409.
Tudoran, R. A. (2012). On asymptotically stabilizing the Rabinovich dynamical system. International Journal of Geometric Methods in Modern Physics, 9(05), 1220008.
Weinstein, A. (1983). The local structure of Poisson manifolds. Journal of differential geometry, 18(3), 523-557.
Whittaker, E. T. (1988). A treatise on the analytical dynamics of particles and rigid bodies. Cambridge University Press.
[^1]: E-mail: [email protected]
[^2]: Email: [email protected]
[^3]: E-mail: [email protected]
---
abstract: 'We prove an equivariant version of Beilinson’s conjecture on non-critical $L$-values of strongly modular abelian varieties over number fields. As an application, we prove a weak version of Zagier’s conjecture on $L(E,2)$ and Deninger’s conjecture on $L(E,3)$ for non-CM strongly modular ${\mathbf{Q}}$-curves.'
address: 'ÉNS Lyon, Unité de mathématiques pures et appliquées, 46 allée d’Italie, 69007 Lyon, France'
author:
- François Brunault
bibliography:
- 'references.bib'
title: |
Non-critical equivariant $L$-values\
of modular abelian varieties
---
The purpose of this article is to use the full strength of Beilinson’s theorem on modular curves to prove the following result.
\[main thm\] Let $A$ be an abelian variety defined over a Galois number field $K$ whose Hasse-Weil $L$-function $L(A/K,s)$ is a product of $L$-functions of newforms of weight $2$ without complex multiplication. Then for every integer $n \geqslant 2$, the weak form of Beilinson’s conjecture on $L(A/K,n)$ holds.
We in fact prove a slightly stronger result, namely an equivariant version of Beilinson’s conjecture for the Chow motive $H^1(A/K)$ with coefficients in the endomorphism algebra of $A$, at every non-critical integer (see Corollary \[cor 1\]).
The abelian varieties satisfying the hypotheses of Theorem \[main thm\] are called strongly modular in [@guitart-quer]. Thanks to the work of Ribet and the proof of Serre’s conjecture, such abelian varieties are known to be modular in the sense that they arise as a quotient of the Jacobian $J_1(N)$ of the modular curve $X_1(N)$ over ${\overline{{\mathbf{Q}}}}$ for some integer $N \geqslant 1$. Given the profound results of Beilinson on values of $L$-functions associated to modular forms [@beilinson:2], it is natural to investigate Beilinson’s conjectures on values of $L$-functions for such abelian varieties. We do so using the equivariant formalism developed by Burns and Flach [@burns-flach]. The modular parametrization by $X_1(N)$, which is available only for curves, is replaced by a quotient of $J_1(N)$ over a suitable abelian extension of ${\mathbf{Q}}$. The main technical ingredient is to show that the aforementioned abelian varieties are cut out by Hecke operators in the motive of $J_1(N)$. In the course of doing so, we make slightly more precise a theorem of Ribet by offering a purely automorphic proof that every endomorphism of a modular abelian variety $A_f$ which is defined over an abelian extension of ${\mathbf{Q}}$ arises from the Hecke algebra (see §\[modular endo\]).
Theorem \[main thm\] has the following consequence on Zagier’s conjecture on $L(E,2)$, which generalizes [@brunault:LEF Thm 1, Cor] to the case of non-CM strongly modular ${\mathbf{Q}}$-curves (see [@wildeshaus:ezc] for the statement of Zagier’s conjecture). Recall that a ${\mathbf{Q}}$-curve is an elliptic curve $E$ over a number field $K$ which is isogenous to all its Galois conjugates.
\[main cor\] Let $E$ be a ${\mathbf{Q}}$-curve without complex multiplication over a number field $K$ such that $L(E/K,s)$ is a product of $L$-functions of newforms of weight $2$. Then the weak form of Zagier’s conjecture on $L(E/K,2)$ holds.
Corollary \[main cor\] applies in particular to every non-CM ${\mathbf{Q}}$-curve which is completely defined over a quadratic field. To our knowledge, this is the first instance where Zagier’s conjecture on $L(E,2)$ is proved for a non-CM elliptic curve $E$ which is genuinely defined over a number field.
Using Goncharov’s results [@goncharov:LE3], we also get the following consequence on Deninger’s conjecture on $L(E,3)$ (see [@deninger:higher; @goncharov:LE3] for an account of Deninger’s conjecture).
\[main cor 2\] Let $E$ be a ${\mathbf{Q}}$-curve without complex multiplication over a number field $K$ such that $L(E/K,s)$ is a product of $L$-functions of newforms of weight $2$. Then the weak form of Deninger’s conjecture on $L(E/K,3)$ holds.
This work originates in an invitation at Kyoto University in October 2010. I gave a lecture on the results of [@brunault:LEF] and Prof. Hida suggested that the same method could work for ${\mathbf{Q}}$-curves. I would like to thank Prof. Hida for his valuable suggestion. I would also like to thank Frédéric Déglise, Gabriel Dospinescu, Vincent Pilloni for stimulating discussions on these topics.
The equivariant Beilinson conjecture
====================================
For any ${\mathbf{Q}}$-vector space $V$ and any field $F$ of characteristic $0$, we put $V_F=V \otimes_{\mathbf{Q}}F$. For any ring $R$, we denote by $Z(R)$ the center of $R$.
Chow motives with coefficients
------------------------------
Let us review some background material on Chow motives. In view of our results, we allow Chow motives to be defined over arbitrary number fields and to have coefficients in an arbitrary subfield of ${\overline{{\mathbf{Q}}}}$. This setting is more convenient in order to apply Beilinson’s theorem on modular curves, which really is a theorem with ${\overline{{\mathbf{Q}}}}$-coefficients.
Let $K$ be a number field, and let $E$ be a subfield of ${\overline{{\mathbf{Q}}}}$. The category $\operatorname{CHM}_K(E)$ of Chow motives over $K$ with coefficients in $E$ consists of triples $(X_d,p,n)$ where $X_d$ denotes a $d$-dimensional smooth projective $K$-variety, $p \in \operatorname{CH}^d(X \times_K X) \otimes E$ is an idempotent and $n \in {\mathbf{Z}}$ ([@murre-nagel-peters Chapter 2], [@jannsen:deligne §4]). Morphisms between two objects $M=(X_d,p,m)$ and $N=(Y_e,q,n)$ in $\operatorname{CHM}_K(E)$ are given by $$\operatorname{Hom}(M,N) = q \circ \bigl(\operatorname{CH}^{d+n-m}(X \times_K Y) \otimes E\bigr) \circ p.$$ Note that $\operatorname{End}(M)$ is an $E$-algebra, and every idempotent $e \in \operatorname{End}(M)$ has kernel and image in $\operatorname{CHM}_K(E)$.
Let $M=(X_d,p,n) \in \operatorname{CHM}_K(E)$ be a Chow motive, and let $0 \leqslant i \leqslant 2d$ be an integer. We can attach to $M$ and $i$ the following system of realizations, which we denote by the formal notation $H^i(M)$:
- for any embedding $\sigma : K \hookrightarrow {\mathbf{C}}$, the Betti realization $$H^i_{{\mathrm{B}},\sigma}(M) = p^* H^i_{{\mathrm{B}}}(X_\sigma({\mathbf{C}}),E(n));$$
- the de Rham realization $$H^i_{{\mathrm{dR}}}(M) = p^* (H^i_{{\mathrm{dR}}}(X) \otimes_{\mathbf{Q}}E);$$
- for any prime $\ell$, the $\ell$-adic étale realization $$H^i_{{\textrm{ét}}}(M) = p^* \bigl[ H^i_{{\textrm{ét}}}(X_{\overline{K}},{\mathbf{Z}}_\ell(n)) \otimes_{{\mathbf{Z}}_\ell} (E \otimes_{\mathbf{Q}}{\mathbf{Q}}_\ell)\bigr].$$
These realizations are linked by the following comparison theorems. For any embedding $\sigma : K \hookrightarrow {\mathbf{C}}$, we have an isomorphism of $E \otimes {\mathbf{C}}$-modules (Grothendieck’s theorem) $$\label{I sigma}
I_{\sigma} : H^i_{{\mathrm{B}},\sigma}(M) \otimes_{\mathbf{Q}}{\mathbf{C}}\xrightarrow{\cong} H^i_{{\mathrm{dR}}}(M) \otimes_{K,\sigma} {\mathbf{C}},$$ and for any embedding $\tilde{\sigma} : \overline{K} \hookrightarrow {\mathbf{C}}$ extending $\sigma$, we have an isomorphism of $E \otimes {\mathbf{Q}}_\ell$-modules $$I_{\ell,\tilde{\sigma}} : H^i_{{\mathrm{B}},\sigma}(M) \otimes_{\mathbf{Q}}{\mathbf{Q}}_\ell \xrightarrow{\cong} H^i_{{\textrm{ét}}}(M).$$ We put $H^i_{\mathrm{B}}(M) = \bigoplus_{\sigma : K \hookrightarrow {\mathbf{C}}} H^i_{{\mathrm{B}},\sigma}(M)$, so that the various isomorphisms (\[I sigma\]) combine to give $$I_\infty : H^i_B(M) \otimes_{\mathbf{Q}}{\mathbf{C}}\xrightarrow{\cong} H^i_{\mathrm{dR}}(M) \otimes_{\mathbf{Q}}{\mathbf{C}}.$$
By definition, the weight of $H^i(M)$ is $i-2n$.
Let $A$ be an $E$-algebra. We denote by $\operatorname{CHM}_K(A)$ the category of Chow motives in $\operatorname{CHM}_K(E)$ endowed with an action of $A$. Its objects are pairs $(M,\rho)$ with $M \in \operatorname{CHM}_K(E)$ and $\rho : A \to \operatorname{End}(M)$ is a morphism of $E$-algebras. Morphisms in $\operatorname{CHM}_K(A)$ are morphisms in $\operatorname{CHM}_K(E)$ commuting with the action of $A$. The category $\operatorname{CHM}_K(A)$ is additive but not abelian. If $M \in \operatorname{CHM}_K(A)$ then all realizations of $M$ have natural structures of left $A$-modules, and the comparison isomorphisms are $A$-linear. If $e \in A$ is an idempotent then we may define $e(M) \in \operatorname{CHM}_K(eAe)$.
Assume $A$ is finite-dimensional and semisimple. Conjecturally, we then have an equivariant $L$-function $L({}_A H^i(M),s)$ ($s \in {\mathbf{C}}$) with values in the center $Z(A_{\mathbf{C}})$ of $A_{\mathbf{C}}:= A \otimes_{{\mathbf{Q}}} {\mathbf{C}}$. This function is meromorphic in the sense that for every embedding $\sigma$ of $E$ into ${\mathbf{C}}$, the function $s \mapsto L({}_A H^i(M),s)^\sigma \in Z(A \otimes_{E,\sigma} {\mathbf{C}})$ is meromorphic. In the case $A=E$, we recover the usual $L$-function $L^*(H^i(M),s)$. For any $M \in \operatorname{CHM}_K(A)$ and any integer $n \in {\mathbf{Z}}$, we denote by $M(n) := M \otimes E(n)$ the Tate twist of $M$ by $n$. Recall that $$\label{L shift}
L({}_A H^i(M(n)),s) = L({}_A H^i(M),s+n) \qquad (s \in {\mathbf{C}}).$$
For any motive $M=(X_d,p,n)$, we denote by $M^* = (X_d,{}^t p,d-n)$ the dual motive. If $A$ acts on $M$ then $A^{\operatorname{op}}$ acts on $M^*$. The conjectural functional equation of the equivariant $L$-function relates $L({}_A H^i(M),s)$ and $L({}_{A^{\operatorname{op}}} H^{2d-i}(M^*),1-s)$.
Let $X_d$ be a $d$-dimensional smooth projective $K$-variety. The motive $h(X)$ is defined as $(X,\Delta_X,0)$. The standard conjectures imply that there exists a direct sum decomposition $$h(X) = \bigoplus_{i=0}^{2d} h^i(X)$$ such that the realization functor $H^i$ factors through the projection $h(X) \to h^i(X)$. Such a decomposition is known in the case $X/K$ is an abelian variety [@deninger-murre], which is the only case we will consider in this paper. In this case, we even have canonical Chow-Künneth projectors $p_0,\ldots,p_{2d} \in \operatorname{End}(h(X))$ such that $(X,p_i,0) \cong h^i(X)$ for every $0 \leqslant i \leqslant 2d$. These projectors are compatible with morphisms of abelian varieties [@deninger-murre Prop 3.3].
\[rmk E0\] Although we do not assume that $E/{\mathbf{Q}}$ is finite, we may in practice reduce to this case thanks to the following fact. If $A$ is any finite-dimensional $E$-algebra acting on $M \in \operatorname{CHM}_K(E)$, there exists a finite subextension $E_0/{\mathbf{Q}}$ of $E/{\mathbf{Q}}$, a motive $M_0 \in \operatorname{CHM}_K(E_0)$ and a finite-dimensional $E_0$-algebra $A_0$ acting on $M_0$ such that $(M,A)$ arises from $(M_0,A_0)$ by extending the scalars from $E_0$ to $E$.
Relative $K$-theory
-------------------
Let $E$ be a subfield of ${\overline{{\mathbf{Q}}}}$, and let $A$ be a finite-dimensional semisimple $E$-algebra. The $A$-equivariant versions of the Beilinson conjectures are most conveniently formulated using the relative $K$-group $K_0(A,{\mathbf{R}})$. Recall that $K_0(A,{\mathbf{R}})$ is an abelian group generated by triples $(X,f,Y)$ where $X$ and $Y$ are finitely generated $A$-modules and $f : X_{\mathbf{R}}\to Y_{\mathbf{R}}$ is an isomorphism of $A_{\mathbf{R}}$-modules [@swan p. 215]. Note that $X$ and $Y$ are automatically projective since $A$ is semisimple (however, some care is needed since $A_{\mathbf{R}}$ need not be semisimple). This group sits in an exact sequence [@swan Thm 15.5] $$\label{K0AR long}
K_1(A) \to K_1(A_{\mathbf{R}}) \xrightarrow{\delta} K_0(A,{\mathbf{R}}) \to K_0(A) \to K_0(A_{\mathbf{R}}).$$
\[lem K0AR\] The map $K_0(A) \to K_0(A_{\mathbf{R}})$ is injective.
It suffices to consider the case where $A$ is simple, in other words $A=M_n(D)$ for some division algebra $D$ over $E$. Since $K_0(M_n(B))$ is canonically isomorphic to $K_0(B)$ for every ring $B$, it suffices to prove the injectivity of $K_0(D) \to K_0(D_{\mathbf{R}})$. Let $\sigma$ be an embedding of $E$ into ${\mathbf{C}}$. Since $D \otimes_{E,\sigma} {\mathbf{C}}$ is semisimple, there exists a ring morphism $D \otimes_{E,\sigma} {\mathbf{C}}\to M_n({\mathbf{C}})$. Now the composite map $${\mathbf{Z}}\cong K_0(D) \to K_0(D_{\mathbf{R}}) \to K_0(D \otimes_{E,\sigma} {\mathbf{C}}) \to K_0(M_n({\mathbf{C}})) \cong {\mathbf{Z}}$$ sends $1$ to $n$, thus is injective.
Since $A$ is semisimple, we have a reduced norm map $\operatorname{nr}: K_1(A) \to Z(A)^\times$ [@curtis-reiner2 §45A].
\[lem nr\] The reduced norm map $\operatorname{nr}$ is injective.
We may assume that $A$ is a central simple algebra over $E$. If $E$ is a number field, the result is proved in [@curtis-reiner2 (45.3)]. In the general case, write $A=A_0 \otimes_{E_0} E$ where $E_0$ is a finite subextension of $E/{\mathbf{Q}}$ and $A_0$ is a finite-dimensional semisimple $E_0$-algebra. Then $A = \varinjlim_{E'} A_0 \otimes_{E_0} E'$ where $E'$ runs through the finite subextensions of $E/E_0$. It follows that $K_1(A) = \varinjlim_{E'} K_1(A_0 \otimes_{E_0} E')$, and the result follows from the injectivity of the reduced norm map for $A_0 \otimes_{E_0} E'$.
If $E$ is a number field, then $A_{\mathbf{R}}$ and $A_{\mathbf{C}}$ are semisimple so we also have reduced norm maps on $K_1(A_{\mathbf{R}})$ and $K_1(A_{\mathbf{C}})$. In the general case, writing $A$ as a direct limit as in the proof of Lemma \[lem nr\], we construct reduced norm maps $\operatorname{nr}_{\mathbf{R}}: K_1(A_{\mathbf{R}}) \to Z(A_{\mathbf{R}})^\times$ and $\operatorname{nr}_{\mathbf{C}}: K_1(A_{\mathbf{C}}) \to Z(A_{\mathbf{C}})^\times$ which make the following diagram commute: $$\label{diag nr}
\begin{tikzcd}
K_1(A) \rar \dar{\operatorname{nr}} & K_1(A_{\mathbf{R}}) \rar \dar{\operatorname{nr}_{\mathbf{R}}} & K_1(A_{\mathbf{C}}) \dar{\operatorname{nr}_{\mathbf{C}}} \\
Z(A)^\times \rar & Z(A_{\mathbf{R}})^\times \rar & Z(A_{\mathbf{C}})^\times.
\end{tikzcd}$$ By Lemma \[lem nr\] and diagram (\[diag nr\]), the map $K_1(A) \to K_1(A_{\mathbf{R}})$ is injective. The exact sequence (\[K0AR long\]) thus simplifies to $$\label{K0AR short}
0 \to K_1(A) \to K_1(A_{\mathbf{R}}) \xrightarrow{\delta} K_0(A,{\mathbf{R}}) \to 0.$$
Let us consider the classical case, namely $A=E={\mathbf{Q}}$. Then $K_1(A)={\mathbf{Q}}^\times$ and $K_1(A_{\mathbf{R}})= {\mathbf{R}}^\times$ so that $K_0(A,{\mathbf{R}})$ can be identified with ${\mathbf{R}}^\times/{\mathbf{Q}}^\times$. Moreover, using this identification, the class of $(X,f,Y)$ in $K_0(A,{\mathbf{R}})$ is none other than the determinant of $f$ with respect to bases of $X$ and $Y$.
\[lem K1 cartesien\] The map $\operatorname{nr}_{\mathbf{R}}$ is injective and the map $\operatorname{nr}_{\mathbf{C}}$ is an isomorphism. Moreover, the left-hand square of diagram (\[diag nr\]) is Cartesian: identifying the groups $K_1(A)$, $K_1(A_{\mathbf{R}})$ and $Z(A)^\times$ with subgroups of $Z(A_{\mathbf{R}})^\times$, we have $$\label{eq K1 cartesien}
Z(A_{\mathbf{R}})^\times = Z(A)^\times \cdot K_1(A_{\mathbf{R}}) \qquad \textrm{and} \qquad K_1(A) = Z(A)^\times \cap K_1(A_{\mathbf{R}}).$$
Writing (\[diag nr\]) as a direct limit of commutative diagrams, we may assume that $E$ is a number field. We may also assume that $A$ is a central simple algebra over $E$. The injectivity of $\operatorname{nr}_{\mathbf{R}}$ and the bijectivity of $\operatorname{nr}_{\mathbf{C}}$ are proved in [@curtis-reiner2 (45.3)]. Let $\Sigma_\infty$ be the set of archimedean places of $E$. For any $v \in \Sigma_\infty$, let $A_v := A \otimes_E E_v$, so that $A_{\mathbf{R}}\cong \prod_{v \in \Sigma_\infty} A_v$ and $Z(A_{\mathbf{R}})^\times = (E \otimes_{\mathbf{Q}}{\mathbf{R}})^\times = \prod_{v \in \Sigma_\infty} E_v^\times$. Let $\Sigma$ be the set of places $v \in \Sigma_\infty$ such that $E_v = {\mathbf{R}}$ and $A_v$ is isomorphic to a matrix algebra over the real quaternions. By [@curtis-reiner2 (45.3)], we have $$\begin{aligned}
\label{nr} \operatorname{nr}(K_1(A)) & = \{x \in E^\times ; x_v>0 \textrm{ for every } v \in \Sigma\}\\
\label{nrR} \operatorname{nr}_{\mathbf{R}}(K_1(A_{\mathbf{R}})) & = \{(x_v)_{v \in \Sigma_\infty} ; x_v>0 \textrm{ for every } v \in \Sigma\}.\end{aligned}$$ In particular the image of $\operatorname{nr}_{\mathbf{R}}$ contains the connected component of identity in $Z(A_{\mathbf{R}})^\times$. Since $E^\times$ is dense in $(E \otimes_{\mathbf{Q}}{\mathbf{R}})^\times$, the first identity of (\[eq K1 cartesien\]) follows. The second equation is an immediate consequence of (\[nr\]) and (\[nrR\]).
Following the terminology of [@burns-flach §4.2], the *extended boundary map* $\hat{\delta} : Z(A_{\mathbf{R}})^\times \to K_0(A,{\mathbf{R}})$ is the unique extension of $\delta$ to $Z(A_{\mathbf{R}})^\times$ which vanishes on $Z(A)^\times$ (such an extension exists and is unique by Lemma \[lem K1 cartesien\]).
Let $X,Y,Z$ be finitely generated $A$-modules, together with a short exact sequence of $A_{\mathbf{R}}$-modules $$0 \to X_{\mathbf{R}}\xrightarrow{\alpha} Y_{\mathbf{R}}\xrightarrow{\beta} Z_{\mathbf{R}}\to 0.$$ Since $Z_{\mathbf{R}}$ is projective over $A_{\mathbf{R}}$, this sequence splits and the map $\beta$ admits a section $s : Z_{\mathbf{R}}\to Y_{\mathbf{R}}$. Then the element $\vartheta=(X \oplus Z,\alpha \oplus s,Y) \in K_0(A,{\mathbf{R}})$ is independent of the choice of $s$.
Statement of the conjecture in the region of convergence {#conj statement}
--------------------------------------------------------
In this section we state an equivariant version of Beilinson’s conjecture on special values of $L$-functions in the region of absolute convergence. This conjecture is a particular case of the equivariant Tamagawa number conjecture of Burns and Flach [@burns-flach §4.3]. Since we don’t consider the integrality part of the conjecture in this article, the formulation becomes in fact much simpler.
Let $A$ be a finite-dimensional semisimple $E$-algebra. Fix a Chow motive $M = (X_d,p,0,\rho) \in \operatorname{CHM}_K(A)$ and an integer $0 \leqslant i \leqslant 2d$. Whenever defined, the equivariant $L$-function $L({}_A H^i(M),s)$ converges absolutely in the region $\Re(s)>\frac{i}{2}+1$, and because of the Euler product, its values at integers in this region belong to $Z(A_{\mathbf{R}})^\times$.
Fix an integer $n >\frac{i}{2}+1$. The conjecture on $L({}_A H^i(M),n)$ involves the *Beilinson regulator map* $$r_{\BB} : H^{i+1}_{\mathcal{M}/{\mathcal{O}_K}}(M,E(n)) \otimes_{\mathbf{Q}}{\mathbf{R}}\to H^{i+1}_{\mathcal{D}}(M,E_{\mathbf{R}}(n)).$$
Let us briefly recall the definitions of the cohomology groups involved. The relevant motivic cohomology group is given by $$H^{i+1}_{\mathcal{M}}(M,E(n)) = p^* \bigl[H^{i+1}_{\mathcal{M}}(X,{\mathbf{Q}}(n)) \otimes E\bigr].$$ where $H^{i+1}_{\mathcal{M}}(X,{\mathbf{Q}}(n))$ is defined as Quillen’s $K$-group $K_{2n-i-1}^{(n)}(X)$. We denote by $H^{i+1}_{\mathcal{M}/{\mathcal{O}_K}}(M,E(n))$ the subspace of integral elements defined by Scholl [@scholl:integral_elements].
The Deligne cohomology group can be expressed as follows. Let $c \in \operatorname{Gal}({\mathbf{C}}/{\mathbf{R}})$ denote complex conjugation. The isomorphisms $c^* : X_\sigma({\mathbf{C}}) \xrightarrow{\cong} X_{\overline{\sigma}}({\mathbf{C}})$ together with complex conjugation on ${\mathbf{Q}}(n)=(2\pi i)^n {\mathbf{Q}}$ induce an $A$-linear involution $c_B : H^i_{{\mathrm{B}}}(M(n)) \to H^i_{{\mathrm{B}}}(M(n))$, which makes the following diagram commute $$\begin{tikzcd}
H^i_{B}(M(n)) \otimes_{\mathbf{Q}}{\mathbf{C}}\rar{I_\infty} \dar[swap]{c_B \otimes c} & H^i_{{\mathrm{dR}}}(M(n)) \otimes_{{\mathbf{Q}}} {\mathbf{C}}\dar{1 \otimes c} \\
H^i_{B}(M(n)) \otimes_{\mathbf{Q}}{\mathbf{C}}\rar{I_\infty} & H^i_{{\mathrm{dR}}}(M(n)) \otimes_{{\mathbf{Q}}} {\mathbf{C}}.
\end{tikzcd}$$ Let $H^i_{\mathrm{B}}(M(n))^{\pm}$ denote the subspace of $H^i_{\mathrm{B}}(M(n))$ where $c_B$ acts by $\pm 1$. The diagram above induces an isomorphism $$\begin{aligned}
\nonumber H^i_{\mathrm{dR}}(M(n)) \otimes_{\mathbf{Q}}{\mathbf{R}}& \cong \bigl(H^i_B(M(n))^+ \otimes_{\mathbf{Q}}{\mathbf{R}}\bigr) \oplus \bigl(H^i_B(M(n))^- \otimes_{\mathbf{Q}}{\mathbf{R}}(-1)\bigr)\\
& \cong \bigl(H^i_B(M(n))^+ \otimes_{\mathbf{Q}}{\mathbf{R}}\bigr) \oplus \bigl(H^i_B(M(n-1))^+ \otimes_{\mathbf{Q}}{\mathbf{R}}\bigr).\label{HdR HB}\end{aligned}$$ The *Deligne period map* is the canonical map $$\alpha : H^i_B(M(n))^+ \otimes_{\mathbf{Q}}{\mathbf{R}}\to (H^i_{\mathrm{dR}}(M)/\operatorname{Fil}^n) \otimes_{\mathbf{Q}}{\mathbf{R}}.$$ Since the motive $M(n)$ has weight $i-2n<0$, we have $$I_\infty (\ker(\alpha)) \subset (\operatorname{Fil}^0 \cap \overline{\operatorname{Fil}}^0) H^i_{\mathrm{dR}}(M(n)) \otimes {\mathbf{R}}= (\operatorname{Fil}^n \cap \overline{\operatorname{Fil}}^n) H^i_{\mathrm{dR}}(M) \otimes {\mathbf{R}}=0$$ so that $\alpha$ is injective. The Deligne cohomology group of $M$ is then given by the cokernel of $\alpha$ : $$\label{HD 1}
0 \to H^i_B(M(n))^+ \otimes_{\mathbf{Q}}{\mathbf{R}}\xrightarrow{\alpha} (H^i_{\mathrm{dR}}(M)/\operatorname{Fil}^n) \otimes_{\mathbf{Q}}{\mathbf{R}}\to H^{i+1}_{\mathcal{D}}(M,E_{\mathbf{R}}(n)) \to 0.$$
\[conj1\] The regulator map $r_\BB$ is an isomorphism.
The idea is that both the domain and codomain of the regulator map carry natural $A$-structures, and comparing these two $A$-structures is enough to determine the equivariant $L$-value up to an element of $Z(A)^\times$. The Deligne period map and the Beilinson regulator map are $A_{\mathbf{R}}$-linear, and (\[HD 1\]) is an exact sequence of $A_{\mathbf{R}}$-modules. Assuming Conjecture \[conj1\], the exact sequence (\[HD 1\]) together with $r_\BB$ yields a canonical element $\vartheta_\infty = \vartheta_\infty(M,i,n)$ in $K_0(A,{\mathbf{R}})$. We may now formulate the conjecture on the $L$-value as follows.
\[conj2\] Let $n > \frac{i}{2}+1$ be an integer. We have the following equality in $K_0(A,{\mathbf{R}})$ $$\label{conj2 eq}
\hat{\delta}(L({}_A H^i(M),n)) = \vartheta_\infty(M,i,n).$$
Conjecture \[conj2\] implies Beilinson’s conjecture on the classical $L$-value $L(H^i(M),n) \in E_{\mathbf{R}}^\times$ by taking norms down to $E_{\mathbf{R}}$. Note that in the case $A=E$, we have $K_0(A,{\mathbf{R}}) \cong E_{\mathbf{R}}^\times / E^\times$ and (\[conj2 eq\]) is just a restatement of the usual conjecture.
Assuming the meromorphic continuation of the equivariant $L$-function, we may reformulate Conjecture \[conj2\] using $L$-values at integers to the left of the central point. For this we use a different $A$-structure in Deligne cohomology. Using (\[HdR HB\]), we may also express Deligne cohomology as $$\label{HD 2}
0 \to \operatorname{Fil}^n H^i_{\mathrm{dR}}(M) \otimes_{\mathbf{Q}}{\mathbf{R}}\to H^i_B(M(n-1))^+ \otimes_{\mathbf{Q}}{\mathbf{R}}\to H^{i+1}_{\mathcal{D}}(M,E_{\mathbf{R}}(n)) \to 0$$ where the first arrow is induced by the projection on the second factor of (\[HdR HB\]). Assuming Conjecture \[conj1\], the exact sequence (\[HD 2\]) together with $r_\BB$ yields a canonical element $\vartheta'_\infty=\vartheta'_\infty(M,i,n)$ in $K_0(A,{\mathbf{R}})$.
Since $A$ is a semisimple algebra, we have a reduced rank morphism $\operatorname{rr}_A : K_0(A) \to H^0(\operatorname{Spec}Z(A),{\mathbf{Z}})$ with values in the group of ${\mathbf{Z}}$-valued functions on $\operatorname{Spec}Z(A)$ [@burns-flach §2.6, p. 510]. For any embedding $\sigma$ of $E$ into ${\mathbf{C}}$, we have a canonical morphism $\operatorname{Spec}Z(A_\sigma) \to \operatorname{Spec}Z(A)$, from which we get a morphism $ \operatorname{rr}_{A,\sigma} : K_0(A) \to H^0(\operatorname{Spec}Z(A_\sigma),{\mathbf{Z}})$.
\[conj3\] Let $n > \frac{i}{2}+1$ be an integer. For any embedding $\sigma : E \hookrightarrow {\mathbf{C}}$, we have $$\label{conj3 eq}
\operatorname{ord}_{s=1-n} L({}_{A^{\operatorname{op}}} H^{2d-i}(M^*),s)^\sigma = \operatorname{rr}_{A,\sigma} \bigl(H^{i+1}_{\mathcal{M}/{\mathcal{O}_K}}(M,E(n))\bigr).$$ Furthermore, let $L^* \in Z(A_{\mathbf{R}})^\times$ denote the leading term of the Taylor expansion of $L({}_{A^{\operatorname{op}}} H^{2d-i}(M^*),s)$ at $s=1-n$, defined componentwise. Then we have $$\label{conj3 eq2}
\hat{\delta}(L^*) = \vartheta'_\infty(M,i,n).$$
The reduced rank in (\[conj3 eq\]) depends only on the realization of $M$ in Deligne cohomology if we assume Conjecture \[conj1\].
In general these conjectures are out of reach as we cannot prove that the motivic cohomology groups are finite-dimensional. Therefore, one often uses the following weakened conjecture.
\[conj4\] There exists an $A$-submodule $W$ of $H^{i+1}_{\mathcal{M}/{\mathcal{O}_K}}(M,E(n))$ such that $$\label{rBW}
r_\BB(W) \otimes_{\mathbf{Q}}{\mathbf{R}}\cong H^{i+1}_\mathcal{D}(M,E_{\mathbf{R}}(n)).$$ Furthermore, let $\vartheta_\infty(W)$ (resp. $\vartheta'_\infty(W)$) be the element of $K_0(A,{\mathbf{R}})$ arising from $r_\BB(W)$ by means of the exact sequence (\[HD 1\]) (resp. (\[HD 2\])). Then we have the equalities $$\begin{aligned}
\hat{\delta}(L({}_A H^i(M),n)) & = \vartheta_\infty(W)\\
\hat{\delta}(L^*) & = \vartheta'_\infty(W).\end{aligned}$$
We may ask for a property which is stronger than (\[rBW\]), namely that $r_\BB$ induces an isomorphism $W \otimes_{\mathbf{Q}}{\mathbf{R}}\xrightarrow{\cong} H^{i+1}_\mathcal{D}(M,E_{\mathbf{R}}(n))$.
Finally, let us spell out the conjecture in the particular case of abelian varieties. Let $B$ be an abelian variety defined over a number field $K$. Consider the motive with ${\mathbf{Q}}$-coefficients $M=H^1(B)=(B,p_1,0)$, where $p_1$ the Chow-Künneth projector in degree 1. The usual $L$-function of $B/K$ is given by $L(B,s)=L(H^1(B),s)$. The semisimple algebra $A=\operatorname{End}_K(B) \otimes {\mathbf{Q}}$ acts on $M$, and we denote by $L({}_A B,s)=L({}_A H^1(B),s)$ the associated equivariant $L$-function. It converges for $\Re(s)>\frac32$ and takes values in $Z(A_{\mathbf{C}})$. Let $B^\vee$ be the dual abelian variety of $B$. The Poincaré bundle on $B \times B^\vee$ induces a canonical isomorphism $M^* \cong H^1(B^\vee)(1)$ in $\operatorname{CHM}_K(A^{\operatorname{op}})$. The (conjectural) functional equation thus relates $L({}_A B,s)$ and $L({}_{A^{\operatorname{op}}} B^\vee,2-s)$. Let $n \geqslant 2$ be an integer. We have isomorphisms $$H^2_{\mathcal{D}}(M,{\mathbf{R}}(n))=H^2_{\mathcal{D}}(B_{\mathbf{R}},{\mathbf{R}}(n)) \cong \frac{H^1_{\mathrm{dR}}(B) \otimes {\mathbf{R}}}{H^1_B(B({\mathbf{C}}),{\mathbf{R}}(n))^+} \cong H^1_B(B({\mathbf{C}}),{\mathbf{R}}(n-1))^+.$$ Let $B \sim \prod_{i=1}^r B_i^{e_i}$ be the decomposition of $B$ into $K$-simple factors up to isogeny, and let $D_i=\operatorname{End}_K(B_i) \otimes {\mathbf{Q}}$. We have $A \cong \prod_{i=1}^r M_{e_i}(D_i)$ so that $Z(A) = \prod_{i=1}^r Z(D_i)$. The reduced rank of $H^1_B(B({\mathbf{C}}),{\mathbf{Q}}(n-1))^+$ over $A$ is the function $i \mapsto \dim(B_i)$. Let $L^* \in Z(A_{\mathbf{R}})^\times$ denote the leading term of the Taylor expansion of $L({}_{A^{\operatorname{op}}} B^\vee,s)$ at $s=2-n$.
There exists an $A$-submodule $W$ of $H^2_{\mathcal{M}/{\mathcal{O}_K}}(B,{\mathbf{Q}}(n))$ such that $$\label{rBW ab}
r_\BB(W) \otimes_{\mathbf{Q}}{\mathbf{R}}\cong H^2_\mathcal{D}(B_{\mathbf{R}},{\mathbf{R}}(n)).$$
Furthermore, let $\vartheta_\infty(W)$ be the element of $K_0(A,{\mathbf{R}})$ arising from the exact sequence $$0 \to H^1_B(B({\mathbf{C}}),{\mathbf{Q}}(n))^+ \otimes {\mathbf{R}}\xrightarrow{\alpha} H^1_{\mathrm{dR}}(B) \otimes {\mathbf{R}}\to r_\BB(W) \otimes {\mathbf{R}}\to 0,$$ and let $\vartheta'_\infty(W)$ be the element of $K_0(A,{\mathbf{R}})$ arising from the isomorphism $$r_\BB(W) \otimes {\mathbf{R}}\xrightarrow{\cong} H^1_B(B({\mathbf{C}}),{\mathbf{Q}}(n-1))^+ \otimes {\mathbf{R}}.$$ Then we have $$\begin{aligned}
\hat{\delta}(L({}_A B,n)) & = \vartheta_\infty(W)\\
\hat{\delta}(L^*) & = \vartheta'_\infty(W).\end{aligned}$$
Base changes of Chow motives
----------------------------
If $R$ is any ring and $G$ is any group acting on $R$ by ring automorphisms, the *twisted group ring* $R\{G\}$ is the free $R$-module with basis $G$, endowed with the product $$\bigl(\sum_{\sigma \in G} a_\sigma \cdot \sigma\bigr) \bigl(\sum_{\tau \in G} b_\tau \cdot \tau\bigr) = \sum_{\sigma, \tau \in G} a_\sigma \sigma(b_\tau) \cdot \sigma \tau.$$
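For instance, if $R={\mathbf{C}}$ and $G=\operatorname{Gal}({\mathbf{C}}/{\mathbf{R}})=\{1,c\}$ acts by complex conjugation, then ${\mathbf{C}}\{G\}={\mathbf{C}}\oplus {\mathbf{C}}\cdot c$ with $c^2=1$ and $c\,a=\overline{a}\,c$; the map $a+b\cdot c \mapsto (z \mapsto az+b\overline{z})$ identifies ${\mathbf{C}}\{G\}$ with $\operatorname{End}_{\mathbf{R}}({\mathbf{C}}) \cong M_2({\mathbf{R}})$, whereas the ordinary group ring ${\mathbf{C}}[G]$ is isomorphic to ${\mathbf{C}}\times{\mathbf{C}}$.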
Let $L/K$ be a Galois extension of number fields, with Galois group $G$. There is a canonical base change functor $\operatorname{CHM}_K(E) \to \operatorname{CHM}_L(E)$ sending a Chow motive $M=(X,p,n)$ to $M_L=(X_L,p_L,n)$. In particular, we have a canonical morphism of $E$-algebras $\operatorname{End}_K(M) \to \operatorname{End}_L(M_L)$. Note that $M_L$ is a Chow motive over $L$, but we may also consider it as a Chow motive over $K$.
\[EndK ML\] For every $M \in \operatorname{CHM}_K(E)$, there is a canonical isomorphism $\operatorname{End}_K(M_L) \cong \operatorname{End}_L(M_L)\{G\}$.
For any smooth projective $L$-variety $Y$, we have $$Y \times_K Y \cong \bigsqcup_{\sigma \in G} Y \times_L Y^\sigma$$ where $Y^\sigma/L$ denotes the conjugate variety. Thus we have an isomorphism of abelian groups $$\label{CHYY}
\operatorname{CH}(Y \times_K Y) \cong \bigoplus_{\sigma \in G} \operatorname{CH}(Y \times_L Y^\sigma).$$ Now if $Y=X_L$ is the base change of a smooth projective $K$-variety $X$, then $Y^\sigma=Y$ so that $\operatorname{CH}(Y \times_K Y)$ is the direct sum of copies of $\operatorname{CH}(Y \times_L Y)$. The ring structure can be described as follows. For any $\sigma \in G$, let $\phi_\sigma : X_L \to X_L$ denote the $K$-automorphism induced by $\sigma$, and let $\Gamma_\sigma = \phi_\sigma^* \subset X_L \times_K X_L$ denote the transpose of the graph of $\phi_\sigma$. We have $\Gamma_\sigma \Gamma_\tau = \phi_\sigma^* \phi_\tau^* = (\phi_\tau \phi_\sigma)^* = \phi_{\sigma \tau}^* = \Gamma_{\sigma \tau}$, so we get a group morphism $\Gamma : G \to \operatorname{Aut}_K(H(X_L))$ where $H(X_L)$ is the total motive of $X_L$. By (\[CHYY\]), we get $$\operatorname{End}_K(H(X_L)) = \bigoplus_{\sigma \in G} \operatorname{End}_L(H(X_L)) \cdot \Gamma(\sigma) = \operatorname{End}_L(H(X_L)) \{G\}.$$ For an arbitrary $M=(X,p,n) \in \operatorname{CHM}_K(E)$, the idempotent $p_L \in \operatorname{End}_L(H(X_L)) \subset \operatorname{End}_K(H(X_L))$ commutes with the action of $G$, so that we get a corresponding decomposition $$\operatorname{End}_K(M_L) = p_L \operatorname{End}_K(H(X_L)) p_L = \bigl(p_L \operatorname{End}_L(H(X_L)) p_L\bigr) \{G\} = \operatorname{End}_L(M_L)\{G\}.$$
We will also need the following lemma from non-commutative algebra.
If $A$ is a semisimple ${\mathbf{Q}}$-algebra and $G$ is a finite group acting on $A$ by ${\mathbf{Q}}$-automorphisms, then $A\{G\}$ is semisimple.
Let $M$ be an arbitrary $A\{G\}$-module. Let us show that every submodule $N$ of $M$ is a direct factor. Since $A$ is semisimple, there exists an $A$-linear map $p : M \to N$ such that $p(x)=x$ for all $x \in N$. Define $p' : M \to N$ by $$p' = \frac{1}{|G|} \sum_{\sigma \in G} \sigma p \sigma^{-1}.$$ It is easy to check that $p'$ is $A$-linear and commutes with the action of $G$, so that $p'$ is $A\{G\}$-linear. Moreover $p'(x)=x$ for all $x \in N$, so that $N$ is a direct factor of $M$.
Let $B$ be an abelian variety defined over $K$, and let $B_L = B \times_{\operatorname{Spec}K} \operatorname{Spec}L$ be its base change to $L$. Let $A=\operatorname{End}_L(B_L) \otimes {\mathbf{Q}}$ be the algebra of endomorphisms of $B$ defined over $L$. By Lemma \[EndK ML\], we have an isomorphism $\operatorname{End}_K(H^1(B_L)) \cong A\{G\}$. Note that $A$ and $G$ commute if and only if all endomorphisms of $B_L$ are defined over $K$. We may consider the equivariant $L$-function $L({}_{A\{G\}} B_L,s)$ and formulate a conjecture on the values $L({}_{A\{G\}} B_L,n)$, $n \geqslant 2$ as in §\[conj statement\]. Note that this conjecture specializes to a conjecture on all Artin-twisted $L$-values $L(B \otimes \rho,n)$ for any finite-dimensional complex representation $\rho$ of $G$ and any integer $n \geqslant 2$.
Functoriality
-------------
In this section we recall functoriality results for the equivariant Beilinson conjecture. Note that all compatibility results below are studied and proved by Burns and Flach in the more general setting of the equivariant Tamagawa number conjecture [@burns-flach]. In the following results, the « equivariant Beilinson conjecture » means any of the Conjectures \[conj1\], \[conj2\], \[conj3\], \[conj4\].
As a first step, the equivariant Beilinson conjecture is clearly compatible with taking direct sums of Chow motives. We next study the behaviour of the conjecture under change of coefficients.
\[func1\] Let $E,E'$ be subfields of ${\overline{{\mathbf{Q}}}}$ with $E \subset E'$. Let $A$ be a finite-dimensional semisimple $E$-algebra, and let $A'=A \otimes_E E'$. Let $M=(X,p,0,\rho) \in \operatorname{CHM}_K(A)$ be a Chow motive, and let $M'=M \otimes_E E' \in \operatorname{CHM}_K(A')$. Let $i,n$ be integers such that $0 \leqslant i \leqslant 2\dim X$ and $n > \frac{i}{2}+1$. Then the equivariant Beilinson conjecture holds for $L({}_A H^i(M),n)$ if and only if it holds for $L({}_{A'} H^i(M'),n)$.
The equivariant $L$-function of $H^i(M')$ is the image of the equivariant $L$-function of $H^i(M)$ under the canonical map $Z(A_{\mathbf{C}}) \to Z(A'_{\mathbf{C}})$. Moreover, the regulator map associated to $(M',i,n)$ is obtained from the regulator map associated to $(M,i,n)$ by tensoring with $E'$ over $E$. Since the extended boundary map is functorial, we are thus reduced to show that the canonical map $\iota : K_0(A,{\mathbf{R}}) \to K_0(A',{\mathbf{R}})$ is injective. Writing $A$ as a direct limit, we may assume that $E$ is a number field and that $A$ is a central simple algebra over $E$. We have a commutative diagram $$\label{diag K0 rel}
\begin{tikzcd}
0 \rar & K_1(A) \rar \dar & K_1(A_{\mathbf{R}}) \rar{\delta} \dar & K_0(A,{\mathbf{R}}) \dar{\iota} \rar & 0 \\
0 \rar & K_1(A') \rar & K_1(A'_{\mathbf{R}}) \rar{\delta'} & K_0(A',{\mathbf{R}}) \rar & 0.
\end{tikzcd}$$ We may identify all the $K_1$-groups with subgroups of $Z(A'_{\mathbf{R}})^\times$. Let $x \in K_0(A,{\mathbf{R}})$ be in the kernel of $\iota$, and let $z \in K_1(A_{\mathbf{R}})$ such that $\delta(z)=x$. Since $Z(A_{\mathbf{R}})^\times \cap Z(A')^\times = Z(A)^\times$, we have $\operatorname{nr}_{\mathbf{R}}(z) \in Z(A)^\times$. Looking at the conditions (\[nr\]) and (\[nrR\]) describing the image of the reduced norm maps, we see that $z$ comes from $K_1(A)$ and thus $x=0$.
\[func2\] Let $E$ be a subfield of ${\overline{{\mathbf{Q}}}}$, and let $\rho : A \to B$ be a morphism between finite-dimensional semisimple $E$-algebras. Let $M=(X,p,0) \in \operatorname{CHM}_K(B)$ be a Chow motive, and let $\rho^* M \in \operatorname{CHM}_K(A)$ be the motive obtained by restricting the action to $A$. Let $i,n$ be integers such that $0 \leqslant i \leqslant 2\dim X$ and $n > \frac{i}{2}+1$. Then the equivariant Beilinson conjecture for $L({}_B H^i(M),n)$ implies the equivariant Beilinson conjecture for $L({}_{A} H^i(\rho^* M),n)$. Moreover, if $\rho$ is surjective then the converse holds.
The map $\rho$ induces an exact functor from the category of finitely generated $B$-modules to the category of finitely generated $A$-modules, which in turn induces maps $\rho^*$ on $K$-groups. Assume the equivariant Beilinson conjecture for $L({}_B H^i(M),n)$. Let ${}_B \vartheta_\infty$ be the corresponding element of $K_0(B,{\mathbf{R}})$, and let ${}_A \vartheta_\infty = \rho^*({}_B \vartheta_\infty)$. By Lemma \[lem K1 cartesien\], we have isomorphisms $K_1(A_{\mathbf{C}}) \cong Z(A_{\mathbf{C}})^\times$ and $K_1(B_{\mathbf{C}}) \cong Z(B_{\mathbf{C}})^\times$. We use these to define a norm map $\rho^* : Z(B_{\mathbf{C}})^\times \to Z(A_{\mathbf{C}})^\times$. By construction of the equivariant $L$-function, we then have $\rho^* (L({}_B H^i(M),s))=L({}_A H^i(\rho^* M),s)$ (see [@burns-flach Thm 4.1]). Taking invariants under $\operatorname{Gal}({\mathbf{C}}/{\mathbf{R}})$, we also have a map $\rho^* : Z(B_{\mathbf{R}})^\times \to Z(A_{\mathbf{R}})^\times$, and we are left to show that $\rho^*$ commutes with the extended boundary map, in other words that $\rho^* \circ \hat{\delta}_B = \hat{\delta}_A \circ \rho^*$. This identity is true on $K_1(B_{\mathbf{R}})$ because the boundary map is functorial, and it is true on $Z(B)^\times$ because $\rho^*(Z(B)^\times) \subset Z(A)^\times$.
Assume $\rho$ is surjective. By the discussion above, it suffices to prove that $\rho^* : K_0(B,{\mathbf{R}}) \to K_0(A,{\mathbf{R}})$ is injective. Since $A$ is semisimple, we must have an isomorphism $A \cong B \times B'$ such that $\rho$ becomes the canonical projection. Then $K_0(A,{\mathbf{R}}) \cong K_0(B,{\mathbf{R}}) \oplus K_0(B',{\mathbf{R}})$ and the result is clear.
\[func3\] Let $E$ be a subfield of ${\overline{{\mathbf{Q}}}}$, and let $A$ be a finite-dimensional semisimple $E$-algebra. Let $e$ be a nonzero idempotent of $A$, and let $A'=eAe$. Let $M=(X,p,0) \in \operatorname{CHM}_K(A)$ be a Chow motive, and let $i,n$ be integers such that $0 \leqslant i \leqslant 2\dim X$ and $n > \frac{i}{2}+1$. If the equivariant Beilinson conjecture holds for $L({}_A H^i(M),n)$, then it holds for $L({}_{A'} H^i(e(M)),n)$.
The algebra $A'$ is semisimple (see [@bourbaki-alg8 §9, Exerc. 10d, p. 162]). We have an exact functor $e^*$ sending a finitely generated $A$-module $V$ to the $A'$-module $V'=e(V)$. It induces maps $e^* : K_1(A_{\mathbf{R}}) \to K_1(A'_{\mathbf{R}})$ and $e^* : K_0(A,{\mathbf{R}}) \to K_0(A',{\mathbf{R}})$. Moreover, we have a morphism of $E$-algebras $e^* : Z(A) \to Z(A')$ sending $x$ to $exe$. By definition of the reduced norm map, the diagram $$\begin{tikzcd}
K_1(A_{\mathbf{R}}) \rar{\operatorname{nr}_{\mathbf{R}}} \dar{e^*} & Z(A_{\mathbf{R}})^\times \dar{e^*} \\
K_1(A'_{\mathbf{R}}) \rar{\operatorname{nr}'_{\mathbf{R}}} & Z(A'_{\mathbf{R}})^\times
\end{tikzcd}$$ is commutative. It follows that $e^*$ commutes with the extended boundary maps. By definition of the equivariant $L$-function [@burns-flach §4.1], we have $e^*(L({}_A H^i(M),s)) = L({}_{A'} H^i(e(M)),s)$. Assume the equivariant Beilinson conjecture for $L({}_A H^i(M),n)$, and let ${}_A \vartheta_\infty$ be the corresponding element of $K_0(A,{\mathbf{R}})$. Applying $e^*$ to all objects appearing in the Beilinson regulator map, we see that the element of $K_0(A',{\mathbf{R}})$ associated to the regulator map for ${}_{A'} H^i(e(M))$ is simply $e^*({}_A \vartheta_\infty)$. Thus the equivariant conjecture for $L({}_{A'} H^i(e(M)),n)$ holds.
Modular abelian varieties {#sec modular abvar}
=========================
In this section and §\[sec modular curves\], we fix a newform $f$ of weight $2$ on $\Gamma_1(N)$. We always assume that $f$ doesn’t have complex multiplication. Let $K_f \subset {\mathbf{C}}$ be the number field generated by the Fourier coefficients of $f$.
Let $A_f/{\mathbf{Q}}$ be the modular abelian variety attached to $f$. It is defined as the quotient $J_1(N)/I_f J_1(N)$, where $J_1(N)$ is the Jacobian of the modular curve $X_1(N)$, and $I_f$ is the annihilator of $f$ in the Hecke algebra. There is a natural isomorphism $K_f \cong \operatorname{End}_{\mathbf{Q}}(A_f) \otimes {\mathbf{Q}}$, which shows that $A_f$ is simple over ${\mathbf{Q}}$. In general, the abelian variety $A_f$ is not absolutely simple. We first recall a standard result on the simple factors of $A_f$ over a given extension of ${\mathbf{Q}}$.
Fix a subfield $F$ of ${\overline{{\mathbf{Q}}}}$. Let $X=\operatorname{End}_F(A_f) \otimes {\mathbf{Q}}$ be the endomorphism algebra of $(A_f)_F$. The following theorem was proved by Ribet [@ribet:twists Thm 5.1] in the case $F={\overline{{\mathbf{Q}}}}$. The general case follows rather easily from this case.
\[thm BfF\]
1. The center $k$ of $X$ is a subfield of $K_f$.
2. The dimension of $X$ over $k$ is $[K_f:k]^2$.
3. The abelian variety $A_f$ is isogenous over $F$ to the power of a simple abelian variety $B_{f,F}/F$.
4. The abelian variety $B_{f,F}$ is unique up to $F$-isogeny. Moreover, if $F/{\mathbf{Q}}$ is Galois, then $B_{f,F}$ is $F$-isogenous to all its $\operatorname{Gal}(F/{\mathbf{Q}})$-conjugates.
Since $f$ doesn’t have complex multiplication, the abelian variety $(A_f)_{{\overline{{\mathbf{Q}}}}}$ has no abelian subvariety of CM-type. This implies that $K_f$ is its own commutant in $\operatorname{End}_{{\overline{{\mathbf{Q}}}}}(A_f) \otimes {\mathbf{Q}}$ (see the proof of [@ribet Prop. 5.2]), which proves $(a)$. Now $X$ is a central simple algebra over $k$, and $K_f$ is a (semisimple) maximal commutative subalgebra of $X$, so that $[X:k]=[K_f:k]^2$ by [@bourbaki-alg8 §14, N°6, Prop. 3], which proves $(b)$. Moreover $k$ being a field means precisely that $A_f$ is $F$-isogenous to the power of a simple abelian variety over $F$, which proves $(c)$. Finally $(d)$ follows from the unicity of decomposition of $(A_f)_F$ into simple factors up to isogeny, together with the fact that $A_f$ is defined over ${\mathbf{Q}}$.
In the particular case where $F/{\mathbf{Q}}$ is Galois and $B_{f,F}$ is an elliptic curve, Theorem \[thm BfF\]$(d)$ says precisely that $B_{f,F}$ is a ${\mathbf{Q}}$-curve completely defined over $F$ in the terminology of [@quer:Qcurves p. 286].
It is known that the minimal number field over which all endomorphisms of $A_f$ are defined is an abelian extension of ${\mathbf{Q}}$ [@gonzalez-lario Prop. 2.1].
We next show that the $L$-function of $B_{f,F}$ can be expressed as a product of twists of $L$-functions of conjugates of $f$. Note that $B_{f,F}$ is defined only up to $F$-isogeny, but it makes sense to speak of its $L$-function.
Let $V_\ell$ be the Tate module of $A_f$ with coefficients in ${\mathbf{Q}}_\ell$. It carries an action of $G_{\mathbf{Q}}=\operatorname{Gal}({\overline{{\mathbf{Q}}}}/{\mathbf{Q}})$. After choosing an isomorphism $\overline{{\mathbf{Q}}_\ell} \cong {\mathbf{C}}$, we have a decomposition $$\label{decomp_Vell}
\overline{V_\ell} := V_\ell \otimes_{{\mathbf{Q}}_\ell} \overline{{\mathbf{Q}}_\ell} \cong \prod_{\sigma : K_f \hookrightarrow {\mathbf{C}}} V_{f^\sigma}$$ where $V_{f^\sigma}$ denotes the $2$-dimensional $\overline{{\mathbf{Q}}_\ell}$-representation of $G_{\mathbf{Q}}$ associated to $f^\sigma$. This decomposition is compatible with the action of $K_f$, where $K_f$ acts on $V_{f^\sigma}$ through $\sigma$. Let $G = \operatorname{Gal}(F/{\mathbf{Q}})$, and let $\hat{G}$ be the group of complex-valued characters of $G$. We will identify elements of $\hat{G}$ with Dirichlet characters in the usual way.
\[emb\_equiv\_F\] Let $\sigma,\tau : K_f \hookrightarrow {\mathbf{C}}$. The following conditions are equivalent :
1. The restrictions of $V_{f^\sigma}$ and $V_{f^\tau}$ to $G_F=\operatorname{Gal}({\overline{{\mathbf{Q}}}}/F)$ are isomorphic.
2. There exists a character $\chi \in \hat{G}$ such that $f^\tau = f^\sigma \otimes \chi$.
3. We have $\sigma |_k = \tau|_k$.
If these conditions are satisfied, then the character $\chi$ in (b) is unique.
Since $f$ doesn’t have complex multiplication and the action of $G_F$ on $V_\ell$ is semisimple [@faltings:endlichkeit Satz 3], each $V_{f^\sigma}$ is a simple $\overline{{\mathbf{Q}}_\ell}[G_F]$-module.
$(a) \Rightarrow (b)$. If $V_{f^\sigma}|_{G_F} \cong V_{f^\tau}|_{G_F}$, then there exists a character $\chi \in \hat{G}$ such that $V_{f^\tau} \cong V_{f^\sigma} \otimes \chi$ as $G_{\mathbf{Q}}$-modules. Taking the traces of Frobenius elements, we get $\tau(a_p) = \sigma(a_p) \chi(p)$ for almost all primes $p$, which implies $f^\tau = f^\sigma \otimes \chi$.
$(b) \Rightarrow (a)$. Obvious.
$(a) \Rightarrow (c)$. By Faltings’s theorem [@faltings:endlichkeit Satz 4], we have $$\label{XQell}
X \otimes \overline{{\mathbf{Q}}_\ell} \cong \operatorname{End}_{\overline{{\mathbf{Q}}_\ell}[G_F]}(\overline{V_\ell}).$$ The field $k$ acts on $V_{f^\sigma}$ and $V_{f^\tau}$ through $\sigma$ and $\tau$ respectively. If $V_{f^\sigma} |_{G_F} \cong V_{f^\tau} |_{G_F}$, then these actions should match, and this means that $\sigma |_k = \tau |_k$.
$(c) \Rightarrow (a)$. Since $(a)$ implies $(c)$, there are at least $[k:{\mathbf{Q}}]$ distinct $G_F$-isomorphism classes among the $V_{f^\sigma}$’s. Should there be more, then the center of $X \otimes \overline{{\mathbf{Q}}_\ell}$ would have dimension greater than $[k:{\mathbf{Q}}]$. But this center is $k \otimes_{\mathbf{Q}}\overline{{\mathbf{Q}}_\ell}$, which gives a contradiction.
The uniqueness of $\chi$ follows from the fact that $f$ has no complex multiplication.
Consider the equivalence relation on $\operatorname{Hom}(K_f,{\mathbf{C}})$ given by Lemma \[emb\_equiv\_F\], namely $\sigma \sim \tau \Leftrightarrow \sigma |_k = \tau |_k$. Fix a system $\Sigma$ of representatives of $\operatorname{Hom}(K_f,{\mathbf{C}})/\sim$. We have $|\Sigma| = [k:{\mathbf{Q}}]$.
\[LBfF-pro\] Let $D=\operatorname{End}_F(B_{f,F}) \otimes {\mathbf{Q}}$, and let $t =[D:k]^{1/2}$ be the degree of $D$. We have $$\label{LBfF-formula}
L(B_{f,F}/F,s) = \prod_{\sigma \in \Sigma} \prod_{\chi \in \hat{G}} L(f^\sigma \otimes \chi,s)^t.$$
Write $A_f \sim_F B_{f,F}^n$, so that $X \cong M_n(D)$. By Theorem \[thm BfF\]$(b)$, we know that $[K_f:k]=nt$. By Lemma \[emb\_equiv\_F\], the $\overline{{\mathbf{Q}}_\ell}$-Tate module of $B_{f,F}$ is isomorphic as a $G_F$-module to $\prod_{\sigma \in \Sigma} V_{f^\sigma}^t$. For a given embedding $\sigma : K_f \hookrightarrow {\mathbf{C}}$, we have $$\operatorname{Ind}_{G_F}^{G_{\mathbf{Q}}} (V_{f^\sigma} |_{G_F}) \cong \bigoplus_{\chi \in \hat{G}} V_{f^\sigma} \otimes \chi.$$ Taking $L$-functions of both sides, and using Artin formalism, we get $$L(V_{f^\sigma} |_{G_F},s) = \prod_{\chi \in \hat{G}} L(f^\sigma \otimes \chi,s).$$ The formula for $L(B_{f,F}/F,s)$ follows.
Conversely, we have the following result, which follows from the work of Guitart and Quer [@guitart-quer].
\[strongly modular\] Let $A$ be an abelian variety over a Galois number field $F$ such that $L(A/F,s)$ is a product of $L$-functions of newforms of weight $2$ without complex multiplication. Then $F/{\mathbf{Q}}$ is abelian, and there exist newforms $f_1,\ldots,f_r$ of weight $2$ without complex multiplication such that $A$ is $F$-isogenous to $B_{f_1,F} \times \cdots \times B_{f_r,F}$.
Let $B=\operatorname{Res}_{F/{\mathbf{Q}}} A$ be the restriction of scalars of $A$. Let $f_1,\ldots,f_r$ be newforms of weight $2$ such that $L(A/F,s)=L(B/{\mathbf{Q}},s)=L(f_1,s) \cdots L(f_r,s)$. By the proof of [@guitart-quer Prop 2.3], the abelian variety $B$ is ${\mathbf{Q}}$-isogenous to $A_{f_1}^{n_1} \times \cdots \times A_{f_r}^{n_r}$ for some integers $n_1,\ldots,n_r \geqslant 1$. Let $C$ be an $F$-simple factor of $A$. The abelian variety $D=\operatorname{Res}_{F/{\mathbf{Q}}} C$ is a factor of $B$, thus is also ${\mathbf{Q}}$-isogenous to $A_{f_1}^{m_1} \times \cdots \times A_{f_r}^{m_r}$ for some integers $0 \leqslant m_i \leqslant n_i$. Moreover $C$ is an $F$-simple factor of $D_F$ and thus a factor of $(A_f)_F$ for some newform $f$. It now suffices to prove that $F/{\mathbf{Q}}$ is abelian, and this follows from [@guitart-quer Thm 5.3].
The abelian varieties whose $L$-functions are products of $L$-functions of newforms of weight 2 are called *strongly modular* in [@guitart-quer]. By Theorem \[strongly modular\], every non-CM strongly modular abelian variety over a Galois number field is a ${\mathbf{Q}}$-variety, in the sense that it is isogenous to all its Galois conjugates. In the particular case of elliptic curves, this gives the following result.
\[strongly modular 2\] Let $E$ be an elliptic curve without complex multiplication over a Galois number field $F$ such that $L(E/F,s)$ is a product of $L$-functions of newforms of weight $2$. Then $F/{\mathbf{Q}}$ is abelian, and there exists a newform $f$ of weight $2$ without complex multiplication such that $E$ is $F$-isogenous to $B_{f,F}$. In particular $E$ is a ${\mathbf{Q}}$-curve completely defined over $F$.
By the work of Quer [@quer Thm 3.1], we can drop the assumption that $F/{\mathbf{Q}}$ is Galois in Corollary \[strongly modular 2\].
It was predicted by Serre that the ${\mathbf{Q}}$-curves are precisely the elliptic curves which arise as quotients of $J_1(N)$ over ${\overline{{\mathbf{Q}}}}$. This is now a theorem thanks to the work of Ribet [@ribet] and the proof of Serre’s modularity conjecture (see [@khare Thm 7.2]). It follows that every ${\mathbf{Q}}$-curve $E/{\overline{{\mathbf{Q}}}}$ is isogenous over ${\overline{{\mathbf{Q}}}}$ to $B_{f,{\overline{{\mathbf{Q}}}}}$ for some newform $f$ of weight $2$. It seems an interesting question to determine a minimal field of definition for this isogeny in terms of the arithmetic of $E$. By Corollary \[strongly modular 2\], every non-CM strongly modular ${\mathbf{Q}}$-curve $E/F$ is completely defined over $F$. The converse is not true, even if $F/{\mathbf{Q}}$ is abelian : see the introduction of [@guitart-quer] for a counterexample with $F={\mathbf{Q}}(\sqrt{-2},\sqrt{-3})$. However, if $F$ is a quadratic field, then every non-CM ${\mathbf{Q}}$-curve completely defined over $F$ is strongly modular, so that our results apply to these ${\mathbf{Q}}$-curves. In the general case, necessary and sufficient conditions for strong modularity in terms of splittings of $2$-cocycles are worked out in [@guitart-quer Thm 5.3, Thm 5.4].
Modular curves in the adelic setting {#sec modular curves}
====================================
Notations and standard results
------------------------------
Let us recall the notations of [@brunault:LEF §4]. Let ${\mathbf{A}}_f$ be the ring of finite adèles of ${\mathbf{Q}}$. To any compact open subgroup $K$ of $\operatorname{GL}_2({\mathbf{A}}_f)$ is associated a smooth projective modular curve $\overline{M}_K$ over ${\mathbf{Q}}$, whose set of complex points $\overline{M}_K({\mathbf{C}})$ is the compactification of the Riemann surface $\operatorname{GL}_2({\mathbf{Q}}) \backslash (\mathfrak{h}^{\pm} \times \operatorname{GL}_2({\mathbf{A}}_f)) / K$. There are natural projections $\pi_{K',K} : \overline{M}_{K'} \to \overline{M}_K$ for any compact open subgroups $K' \subset K$ of $\operatorname{GL}_2({\mathbf{A}}_f)$. For any $g \in \operatorname{GL}_2({\mathbf{A}}_f)$, there is a canonical isomorphism $g : \overline{M}_K \xrightarrow{\cong} \overline{M}_{g^{-1}Kg}$, given at the level of complex points by $(\tau,h) \mapsto (\tau,hg)$.
The Hecke algebra $\tilde{{\mathbf{T}}}_K$ is the space of functions $K \backslash \operatorname{GL}_2({\mathbf{A}}_f)/K \to {\mathbf{Q}}$ with finite support, equipped with the convolution product [@cartier:corvallis]. We may identify $\tilde{{\mathbf{T}}}_K$ with its image in the ${\mathbf{Q}}$-algebra of finite correspondences on $\overline{M}_K$ by sending the characteristic function of $KgK$ to the correspondence $\tilde{T}(g)=\tilde{T}(g)_K$ defined by the diagram $$\label{Ttildeg}
\begin{tikzcd}
& \overline{M}_{K \cap g^{-1}Kg} \dlar[swap]{\pi} \drar{\pi'} \\
\overline{M}_K \ar[dashed]{rr}{\tilde{T}(g)} & & \overline{M}_K
\end{tikzcd}$$ where $\pi = \pi_{K \cap g^{-1} K g, K}$ and $\pi' = \pi_{gKg^{-1} \cap K,K} \circ g^{-1}$.
The space $\Omega^1(\overline{M}_K)$ carries a natural structure of left $\tilde{{\mathbf{T}}}_K$-module, and we denote by ${\mathbf{T}}_K$ the image of $\tilde{{\mathbf{T}}}_K$ in $\operatorname{End}_{{\mathbf{Q}}} (\Omega^1(\overline{M}_K))$. We denote by $T(g)=T(g)_K$ the canonical image of $\tilde{T}(g)$ in ${\mathbf{T}}_K$. Using notations of (\[Ttildeg\]), we have $T(g) = \pi'_* \circ \pi^*$.
The ring $\tilde{{\mathbf{T}}}_K$ also acts from the left on $H^1(\overline{M}_K({\mathbf{C}}),{\mathbf{Q}})$, and this action factors through ${\mathbf{T}}_K$. In fact, Poincaré duality induces a perfect bilinear pairing $$\label{poincare duality}
\langle \cdot,\cdot \rangle : H^1(\overline{M}_K({\mathbf{C}}),{\mathbf{R}})^- \times \bigl(\Omega^1(\overline{M}_K) \otimes {\mathbf{R}}\bigr) \to {\mathbf{R}}$$ satisfying $\langle \tilde{T}(g) \eta, \omega \rangle = \langle \eta, \tilde{T}(g^{-1}) \omega \rangle$ for every $g \in \operatorname{GL}_2({\mathbf{A}}_f)$, $\eta \in H^1(\overline{M}_K({\mathbf{C}}),{\mathbf{R}})^-$ and $\omega \in \Omega^1(\overline{M}_K) \otimes {\mathbf{R}}$.
Let us define $\Omega = \varinjlim_{K} \Omega^1(\overline{M}_K) \otimes {\overline{{\mathbf{Q}}}}$, where the direct limit is taken with respect to the pull-back maps $\pi_{K',K}^*$. This space carries a natural $\operatorname{GL}_2({\mathbf{A}}_f)$-action, and for any $K$ we have $\Omega^K = \Omega^1(\overline{M}_K) \otimes {\overline{{\mathbf{Q}}}}$. The space $\Omega$ decomposes as a direct sum of irreducible admissible representations $\Omega(\pi)$ of $\operatorname{GL}_2({\mathbf{A}}_f)$. Let $\Pi(K)$ be the set of those representations $\pi$ satisfying $\Omega(\pi)^K \neq \{0\}$. We have a direct sum decomposition $$\label{dec Omega1}
\Omega^1(\overline{M}_K) \otimes {\overline{{\mathbf{Q}}}}= \bigoplus_{\pi \in \Pi(K)} \Omega(\pi)^K$$ where the $\Omega(\pi)^K$ are pairwise non-isomorphic simple ${\mathbf{T}}_K \otimes {\overline{{\mathbf{Q}}}}$-modules [@langlands p. 393].
\[TK semisimple\] The natural map $${\mathbf{T}}_K \otimes {\overline{{\mathbf{Q}}}}\to \prod_{\pi \in \Pi(K)} \operatorname{End}_{{\overline{{\mathbf{Q}}}}}(\Omega(\pi)^K)$$ is an isomorphism. In particular ${\mathbf{T}}_K$ is a semisimple algebra.
The above map is injective by definition of ${\mathbf{T}}_K$. The surjectivity follows from Burnside’s Theorem [@bourbaki-alg8 §5, N°3, Cor. 1 of Prop. 4, p. 79]. The algebra ${\mathbf{T}}_K \otimes {\overline{{\mathbf{Q}}}}$, being a product of matrix algebras over ${\overline{{\mathbf{Q}}}}$, is semisimple. This implies that ${\mathbf{T}}_K$ is semisimple [@bourbaki-alg8 §12, N° 7, Cor. 2 a), p. 218].
As a consequence of Lemma \[TK semisimple\], note that for each $\pi \in \Pi(K)$, the center $Z({\mathbf{T}}_K)$ acts on $\Omega(\pi)^K$ through a character $\theta_{\pi,K} : Z({\mathbf{T}}_K) \to {\overline{{\mathbf{Q}}}}$.
Let $p$ be a prime number, and let $\varpi_p$ be the element of ${\mathbf{A}}_f^\times$ whose component at $p$ is equal to $p$, and whose other components are equal to $1$. The Hecke operator $\tilde{T}(p) = \tilde{T}(p)_K \in \tilde{{\mathbf{T}}}_K$ is defined as the characteristic function of the double coset $K \begin{pmatrix} \varpi_p & 0 \\ 0 & 1 \end{pmatrix} K$, and the Hecke operator $\tilde{T}(p,p)=\tilde{T}(p,p)_K \in \tilde{{\mathbf{T}}}_K$ is defined as the characteristic function of $K \begin{pmatrix} \varpi_p & 0 \\ 0 & \varpi_p \end{pmatrix}$. We let $T(p)=T(p)_K$ and $T(p,p)=T(p,p)_K$ be their respective images in ${\mathbf{T}}_K$. If $p$ doesn’t divide the level of $K$, meaning that $K$ contains $\operatorname{GL}_2({\mathbf{Z}}_p)$ (this happens for all but finitely many $p$), then $\tilde{T}(p)$ and $\tilde{T}(p,p)$ belong to the center of $\tilde{{\mathbf{T}}}_K$. In this case $T(p)$ and $T(p,p)$ act by scalar multiplication on each $\Omega(\pi)^K$.
Base changes of Hecke correspondences {#hecke base change}
-------------------------------------
In this subsection, we assume that $\det(K)=\hat{{\mathbf{Z}}}^\times$, which means that $\overline{M}_K$ is geometrically connected.
Let $F$ be a finite abelian extension of ${\mathbf{Q}}$, with Galois group $G=\operatorname{Gal}(F/{\mathbf{Q}})$. Let $U_F$ be the subgroup of $\hat{{\mathbf{Z}}}^\times$ corresponding to $F$ by abelian class field theory. We have an isomorphism $\hat{{\mathbf{Z}}}^\times /U_F \cong G$. Let us define $$K_F = \{g \in K : \det(g) \in U_F\}.$$ The determinant map induces an isomorphism $K/K_F \cong G$. The modular curve $\overline{M}_{K_F}$ is canonically isomorphic to the base change $\overline{M}_K \otimes_{{\mathbf{Q}}} F$. The group $G$ acts on the right on $\operatorname{Spec}F$ and $\overline{M}_{K_F}$. This induces a left action of $G$ on $\Omega^1(\overline{M}_{K_F})$. The action of an element $\sigma \in G$ on $\Omega^1(\overline{M}_{K_F})$ coincides with $T(g)_{K_F}$, where $g$ is any representative of $\sigma$ in $K$.
Let $\delta : \overline{M}_{K_F} \to \operatorname{Spec}F$ be the structural morphism. Let $T=(X,\alpha,\beta)$ be a finite correspondence on $\overline{M}_{K_F}$, defined by the diagram $$\begin{tikzcd}
& X \dlar[swap]{\alpha} \drar{\beta} \\
\overline{M}_{K_F} \ar[dashed]{rr}{T} & & \overline{M}_{K_F}.
\end{tikzcd}$$ There exists a unique element $\sigma \in G$ such that $\delta \circ \beta = \sigma^* \circ \delta \circ \alpha$. We say that $T$ is *defined over $F$* if $\sigma=\operatorname{id}_G$, which amounts to say that $\delta \circ \alpha = \delta \circ \beta$.
Let $g \in \operatorname{GL}_2({\mathbf{A}}_f)$. The correspondence $\tilde{T}(g)$ on $\overline{M}_{K_F}$ is defined over $F$ if and only if $\det(g) \in {\mathbf{Q}}_{>0} \cdot U_F$.
We denote by $\tilde{{\mathbf{T}}}'_{K_F}$ the subalgebra of $\tilde{{\mathbf{T}}}_{K_F}$ generated by those correspondences $\tilde{T}(g)_{K_F}$ which are defined over $F$. Let ${\mathbf{T}}'_{K_F}$ be the canonical image of $\tilde{{\mathbf{T}}}'_{K_F}$ in ${\mathbf{T}}_{K_F}$. The elements of ${\mathbf{T}}'_{K_F}$ are precisely those elements of ${\mathbf{T}}_{K_F}$ which are $F$-linear endomorphisms of $\Omega^1(\overline{M}_{K_F}) \cong \Omega^1(\overline{M}_K) \otimes F$, and we have an isomorphism $$\label{decompT}
{\mathbf{T}}_{K_F} = {\mathbf{T}}'_{K_F} \{G\}.$$
We now restrict to the case $$K = K_1(N) = \biggl\{g \in \operatorname{GL}_2({\hat{{\mathbf{Z}}}}) : g \equiv \begin{pmatrix} * & * \\ 0 & 1 \end{pmatrix} \pmod{N}\biggr\}.$$ The associated modular curves are $\overline{M}_{K_1(N)} = X_1(N)$ and $\overline{M}_{K_1(N)_F} = X_1(N)_F$.
Let us recall the relation between Hecke operators on $X_1(N)$ and $X_1(N)_F$. Define the base change morphism $\nu_F : \operatorname{End}_{\mathbf{Q}}(\Omega^1(X_1(N))) \to \operatorname{End}_F(\Omega^1(X_1(N)) \otimes F)$ by $\nu_F(T) = T \otimes \operatorname{id}_F$. Fix an integer $m \geqslant 1$ such that $F \subset {\mathbf{Q}}(\zeta_m)$. For any element $\alpha \in ({\mathbf{Z}}/m{\mathbf{Z}})^\times$, let $\sigma_\alpha$ denote its canonical image in $G$.
The following lemma was proved in [@brunault:LEF Lemma 13].
\[lem nuF\] For any prime $p$ not dividing $Nm$, we have $$\begin{aligned}
\label{nuF Tp}\nu_F \bigl(T(p)_{K_1(N)}\bigr) & = T(p)_{K_1(N)_F} \cdot \sigma_p\\
\label{nuF Tpp}\nu_F \bigl(T(p,p)_{K_1(N)}\bigr) & = T(p,p)_{K_1(N)_F} \cdot \sigma_p^2.\end{aligned}$$
Now let $f$ be a newform of weight $2$ on $\Gamma_1(N)$. Fix an embedding $\sigma : K_f \hookrightarrow {\mathbf{C}}$ and a character $\chi \in \hat{G}$, and let $\pi (f^\sigma \otimes \chi)$ be the automorphic representation of $\operatorname{GL}_2({\mathbf{A}}_f)$ associated to the newform $f^\sigma \otimes \chi$. We have $\pi (f^\sigma \otimes \chi) \cong \pi(f^\sigma) \otimes (\tilde{\chi} \circ \det)$, where $\tilde{\chi} : {\mathbf{A}}_f^\times/{\mathbf{Q}}_{>0} \to {\mathbf{C}}^\times$ denotes the adèlization of $\chi$, sending $\varpi_p$ to $\chi(p)$ for every prime $p$ not dividing $m$. Since $\pi(f^\sigma) \in \Pi(K_1(N))$, we have $\pi(f^\sigma \otimes \chi) \in \Pi(K_1(N)_F)$.
The following lemma was proved in [@brunault:LEF Lemma 15].
\[lem theta\] Let $\sigma : K_f \hookrightarrow {\mathbf{C}}$ and $\chi \in \hat{G}$. For any prime $p$ not dividing $Nm$, the operator $T(p)_{K_1(N)_F}$ (resp. $T(p,p)_{K_1(N)_F}$) acts as $\sigma(a_p) \chi(p)$ (resp. $\chi(p)^2$) on $\Omega(\pi(f^\sigma \otimes \chi))^{K_1(N)_F}$.
Modularity of endomorphism algebras {#modular endo}
-----------------------------------
In this section, we show that all endomorphisms of $A_f$ defined over abelian extensions of ${\mathbf{Q}}$ are modular, in the sense that they come from the Hecke algebra. This is the main technical ingredient in order to apply Beilinson’s theorem on modular curves. That all endomorphisms of $A_f$ over ${\overline{{\mathbf{Q}}}}$ are modular was proved by Ribet [@ribet:twists] using a construction of Shimura [@shimura:modular_jacobian]. It also appears in the work of González-Lario on ${\mathbf{Q}}$-curves [@gonzalez-lario]. Our approach is different in that we study endomorphisms defined over a given abelian extension of ${\mathbf{Q}}$. Moreover, the statement and proof are completely automorphic and don’t involve explicit computation of Hecke operators.
In this section, we fix a finite abelian extension $F$ of ${\mathbf{Q}}$. Let $\Omega_{N,F} = \Omega^1(X_1(N)_F) \cong \Omega^1(X_1(N))_F$. In order to ease notations, let ${\mathbf{T}}_{N,F} = {\mathbf{T}}_{K_1(N)_F} \subset \operatorname{End}_{\mathbf{Q}}(\Omega_{N,F})$ and ${\mathbf{T}}'_{N,F} = {\mathbf{T}}'_{K_1(N)_F} \subset \operatorname{End}_F(\Omega_{N,F})$. By (\[decompT\]) we have an isomorphism ${\mathbf{T}}_{N,F} \cong {\mathbf{T}}'_{N,F}\{G\}$.
There is a commutative diagram $$\label{cd EndJ1N}
\begin{tikzcd}
{\mathbf{T}}'_{N,F} \arrow{r}{\rho'} \arrow[hook]{d} & \operatorname{End}_F(J_1(N)) \otimes {\mathbf{Q}}\arrow[hook]{r} \arrow[hook]{d} & \operatorname{End}_F(\Omega_{N,F}) \arrow[hook]{d} \\
{\mathbf{T}}_{N,F} \arrow{r}{\rho} & \operatorname{End}_F(J_1(N)) \otimes {\mathbf{Q}}\{G\} \arrow[hook]{r} & \operatorname{End}_{\mathbf{Q}}(\Omega_{N,F})
\end{tikzcd}$$ such that for any $T \in {\mathbf{T}}'_{N,F}$, we have $\rho'(T)^* = T$ and for any $\sigma \in G$, we have $\rho(\sigma) = \sigma$.
The cotangent space of $J_1(N)_F$ at the origin is given by $\Omega^1(J_1(N))_F$ and can be identified canonically with $\Omega_{N,F}$. We define the map $\operatorname{End}_F(J_1(N)) \to \operatorname{End}_F(\Omega_{N,F})$ by sending an endomorphism $\varphi$ of $J_1(N)_F$ to its cotangent map $\operatorname{Cot}(\varphi)$ at the origin. If $\tilde{T}$ is a finite correspondence on $X_1(N)_F$ defined over $F$, and $T$ is the canonical image of $\tilde{T}$ in $\operatorname{End}_F (\Omega_{N,F})$, then by definition of the Jacobian variety, there is a unique endomorphism $\varphi(\tilde{T}) \in \operatorname{End}_F (J_1(N)) \otimes {\mathbf{Q}}$ such that $\operatorname{Cot}(\varphi(\tilde{T})) = T$. In particular, the restriction of the map $\tilde{T} \mapsto \varphi(\tilde{T})$ to $\tilde{{\mathbf{T}}}'_{N,F}$ factors through ${\mathbf{T}}'_{N,F}$. This defines the map $\rho'$ of (\[cd EndJ1N\]). We define $\rho$ by extending $\rho'$ linearly using ${\mathbf{T}}_{N,F} \cong {\mathbf{T}}'_{N,F}\{G\}$.
We next give a criterion for an endomorphism of $J_1(N)$ to induce an endomorphism of $A_f$. Let $\pi : J_1(N) \to A_f$ denote the canonical projection, and let $\pi_F : J_1(N)_F \to (A_f)_F$ be its base change to $F$. Let $\Omega_{f,F} = \Omega^1(A_f)_F$. We may and will identify $\Omega_{f,F}$ with its image in $\Omega_{N,F}$ by means of the canonical injection $\pi_F^* : \Omega^1(A_f)_F \to \Omega^1(J_1(N))_F$.
\[lem EndAf\] Let $T$ be an element of ${\mathbf{T}}'_{N,F}$. Then $\rho'(T)$ induces an element of $\operatorname{End}_F(A_f) \otimes {\mathbf{Q}}$ if and only if $T$ leaves stable $\Omega_{f,F}$.
Since $A_f=J_1(N)/I_f J_1(N)$, we have an exact sequence $$0 \to \operatorname{Lie}(I_f J_1(N)) \to \operatorname{Lie}(J_1(N)) \to \operatorname{Lie}(A_f) \to 0.$$ Base changing to $F$, we get an exact sequence $$0 \to \operatorname{Lie}(I_f J_1(N))_F \to \operatorname{Lie}(J_1(N))_F \to \operatorname{Lie}(A_f)_F \to 0.$$ The dual exact sequence is $$0 \to \Omega_{f,F} \to \Omega_{N,F} \to \Omega^1(I_f J_1(N))_F \to 0.$$ Let $D \in \operatorname{End}_F (\operatorname{Lie}(J_1(N))_F)$ be the differential of $\rho'(T)$ at the origin. The operators $T$ and $D$ are dual to each other. Then $\rho'(T)$ induces an endomorphism of $(A_f)_F$ if and only if $D$ leaves stable $\operatorname{Lie}(I_f J_1(N))_F$, which means exactly that $T$ leaves stable $\Omega_{f,F}$.
As a next step, we determine how $A_f$ interacts with the Hecke algebra. Fix an embedding of ${\overline{{\mathbf{Q}}}}$ into ${\mathbf{C}}$. For any $\sigma : K_f \hookrightarrow {\mathbf{C}}$, the differential form $\omega_{f^{\sigma}}=2\pi i f^{\sigma}(z) dz$ defines an element of $\Omega^1(X_1(N)) \otimes {\overline{{\mathbf{Q}}}}$, and the elements $(\omega_{f^\sigma})_{\sigma : K_f \hookrightarrow {\mathbf{C}}}$ form a ${\overline{{\mathbf{Q}}}}$-basis of $\Omega^1(A_f) \otimes {\overline{{\mathbf{Q}}}}$. By the normal basis theorem, the ${\overline{{\mathbf{Q}}}}$-vector space $F \otimes {\overline{{\mathbf{Q}}}}$ splits into ${\overline{{\mathbf{Q}}}}$-lines $(L_\chi)_{\chi \in \hat{G}}$ such that $\sigma \in G$ acts as $\overline{\chi}(\sigma)$ on $L_\chi$.
\[pro decomp Omegaf\] We have a direct sum decomposition $$\label{eq decomp Omegaf}
\Omega_{f,F} \otimes_{{\mathbf{Q}}} {\overline{{\mathbf{Q}}}}= \bigoplus_{\substack{\sigma : K_f \hookrightarrow {\mathbf{C}}\\ \chi \in \hat{G}}} \omega_{f^\sigma} \cdot L_\chi$$ and for every $\sigma : K_f \hookrightarrow {\mathbf{C}}$ and $\chi \in \hat{G}$, we have $\omega_{f^\sigma} \cdot L_\chi \subset \Omega(\pi(f^\sigma \otimes \chi))$.
The decomposition (\[eq decomp Omegaf\]) follows from the equality $\Omega_{f,F} \otimes {\overline{{\mathbf{Q}}}}= \Omega^1(A_f) \otimes F \otimes {\overline{{\mathbf{Q}}}}$. Let $L=\omega_{f^\sigma} \cdot L_\chi$. Let $p$ be a prime not dividing $Nm$. We know that $T(p)_{X_1(N)}(\omega_{f^\sigma})=\sigma(a_p) \omega_{f^\sigma}$. It follows that $\nu_F(T(p)_{X_1(N)})$ acts as $\sigma(a_p)$ on $L$. Moreover $\sigma_p$ acts as $\overline{\chi}(p)$ on $L$. By Lemma \[lem nuF\], we deduce that $T(p)_{X_1(N)_F}$ acts as $\sigma(a_p) \chi(p)$ on $L$. Similarly $T(p,p)_{X_1(N)_F}$ acts as $\chi(p)^2$ on $L$. The result now follows from Lemma \[lem theta\] together with the multiplicity one theorems [@piatetski-shapiro].
\[pro ef\] There exists an idempotent $e_f \in {\mathbf{T}}_{N,F}$ whose image is precisely $\Omega_{f,F}$.
By Galois descent, it is sufficient to prove the existence of an idempotent $e_f \in {\mathbf{T}}_{N,F} \otimes {\overline{{\mathbf{Q}}}}$ whose image is $\Omega_{f,F} \otimes {\overline{{\mathbf{Q}}}}$. This follows from Lemma \[TK semisimple\] and Proposition \[pro decomp Omegaf\].
Let $\iota : I_f J_1(N) \to J_1(N)$ be the canonical inclusion and consider the dual map $\iota^\vee : J_1(N)^\vee = J_1(N) \to (I_f J_1(N))^\vee$. Since the map $(\pi,\iota^\vee) : J_1(N) \to A_f \times (I_f J_1(N))^\vee$ is an isogeny, there exists a canonical projector $e_f^{\textrm{can}} \in \operatorname{End}_{\mathbf{Q}}(J_1(N)) \otimes {\mathbf{Q}}$ with image $A_f$. It seems reasonable to hope that $\nu_F(e_f^{\textrm{can}}) \in \operatorname{End}_F(J_1(N)) \otimes {\mathbf{Q}}$ belongs to the image of $\rho'$ from diagram (\[cd EndJ1N\]), but I haven’t tried to prove this.
Now, let us consider the semisimple algebra ${\mathbf{T}}_{f,F} =e_f {\mathbf{T}}_{N,F} e_f$. It leaves stable $\Omega_{f,F}$, so that by Lemma \[lem EndAf\], we have an induced map $\rho_f : {\mathbf{T}}_{f,F} \to \operatorname{End}_F(A_f) \otimes {\mathbf{Q}}\{G\}$.
\[thm EndAf\] Assume $f$ doesn’t have CM. Then the map $\rho_f : {\mathbf{T}}_{f,F} \to \operatorname{End}_F(A_f) \otimes {\mathbf{Q}}\{G\}$ is bijective. In particular, every endomorphism of $A_f$ defined over $F$ comes from the Hecke algebra.
Since ${\mathbf{T}}_{f,F}$ embeds in $\operatorname{End}_{\mathbf{Q}}(\Omega_{f,F})$, the map $\rho_f$ is injective. Let us prove that $\rho_f$ is surjective. Let $\mathcal{F}$ be the set of newforms $f^\sigma \otimes \chi$ with $\sigma : K_f \hookrightarrow {\mathbf{C}}$ and $\chi \in \hat{G}$. For any $g \in \mathcal{F}$, let $$\Omega_{f,F}[g] = (\Omega_{f,F} \otimes {\overline{{\mathbf{Q}}}}) \cap \Omega(\pi(g))$$ denote the $g$-eigenspace of $\Omega_{f,F}$. By Proposition \[pro decomp Omegaf\], we have direct sum decompositions $$\begin{aligned}
\label{decomp omegafF} \Omega_{f,F} \otimes {\overline{{\mathbf{Q}}}}& = \bigoplus_{g \in \mathcal{F}} \Omega_{f,F}[g],\\
\label{decomp omegafGg} \Omega_{f,F}[g] & = \bigoplus_{\substack{\sigma,\chi \\ f^\sigma \otimes \chi = g}} \omega_{f^\sigma} \cdot L_\chi.\end{aligned}$$
By Lemma \[emb\_equiv\_F\] and since $f$ doesn’t have CM, we have $|\mathcal{F}| = |\Sigma| \cdot |\hat{G}| = [k:{\mathbf{Q}}] \cdot [F:{\mathbf{Q}}]$, and $\dim_{{\overline{{\mathbf{Q}}}}} \Omega_{f,F}[g] = [K_f:k]$ for every $g \in \mathcal{F}$, using notations from §\[sec modular abvar\]. By Lemma \[TK semisimple\], the map $${\mathbf{T}}_{f,F} \otimes {\overline{{\mathbf{Q}}}}\to \prod_{g \in \mathcal{F}} \operatorname{End}_{{\overline{{\mathbf{Q}}}}} \Omega_{f,F}[g]$$ is bijective. It follows that the rank of $\rho_f$ is $$\sum_{g \in \mathcal{F}} (\dim_{{\overline{{\mathbf{Q}}}}} \Omega_{f,F}[g])^2 = [k:{\mathbf{Q}}] \cdot [F:{\mathbf{Q}}] \cdot [K_f:k]^2$$ which agrees with the dimension of $\operatorname{End}_F(A_f) \otimes {\mathbf{Q}}\{G\}$ given by Theorem \[thm BfF\].
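For the reader’s convenience, the dimension on the right-hand side can be made explicit using only Theorem \[thm BfF\]: since $X$ is central simple over $k$ with $[X:k]=[K_f:k]^2$, and $|G|=[F:{\mathbf{Q}}]$, we have $$\dim_{\mathbf{Q}} \operatorname{End}_F(A_f) \otimes {\mathbf{Q}}\{G\} = [K_f:k]^2\,[k:{\mathbf{Q}}]\,[F:{\mathbf{Q}}],$$ which is exactly the rank of $\rho_f$ computed above.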
Proofs of the main results
==========================
Let us first recall Beilinson’s theorem on modular curves [@beilinson:2]. Let $K$ be a compact open subgroup of $\operatorname{GL}_2({\mathbf{A}}_f)$. For every $\pi \in \Pi(K)$, let $L(\pi,s)$ denote the Jacquet-Langlands $L$-function of $\pi$, with values in ${\overline{{\mathbf{Q}}}}\otimes {\mathbf{C}}$, and shifted by $\frac12$ so that the functional equation corresponds to $s \leftrightarrow 2-s$. If $f$ is a newform of weight $2$ with Fourier coefficients in ${\overline{{\mathbf{Q}}}}$ and $\pi(f)$ is the automorphic representation of $\operatorname{GL}_2({\mathbf{A}}_f)$ associated to $f$, then we have $L(\pi(f),s)^\sigma = L(f^\sigma,s)$ for every embedding $\sigma : {\overline{{\mathbf{Q}}}}\to {\mathbf{C}}$. The functional equation implies that the $L$-function $L(\pi,s)$ has a simple zero at each integer $m \leqslant 0$, with $L'(\pi,m) \in ({\overline{{\mathbf{Q}}}}\otimes {\mathbf{R}})^\times$. Fix an integer $n \geqslant 2$. We have an isomorphism $$H^2_{\mathcal{D}}(\overline{M}_K/{\mathbf{R}},{\mathbf{R}}(n)) \cong H^1_B(\overline{M}_K({\mathbf{C}}),{\mathbf{R}}(n-1))^+.$$ The Betti cohomology group decomposes with respect to the action of the Hecke algebra: $$H^1_B(\overline{M}_K({\mathbf{C}}),{\mathbf{Q}}(n-1))^+ \otimes {\overline{{\mathbf{Q}}}}= \bigoplus_{\pi \in \Pi(K)} H(\pi)$$ where $H(\pi)$ is the subspace cut out by the character $\theta_{\pi,K} : Z({\mathbf{T}}_K) \to {\overline{{\mathbf{Q}}}}$ acting on $\Omega(\pi)^K$.
Beilinson constructs a subspace $W_n \subset H^2_{\mathcal{M}}(\overline{M}_K,{\mathbf{Q}}(n))$ with the following property.
Let $R=r_\BB(W_n) \subset H^1_B(\overline{M}_K({\mathbf{C}}),{\mathbf{R}}(n-1))^+$. We have a direct sum decomposition $R \otimes {\overline{{\mathbf{Q}}}}= \bigoplus_{\pi \in \Pi(K)} R(\pi)$ with $R(\pi)=L'(\pi,2-n) \cdot H(\pi)$ inside $H(\pi) \otimes {\mathbf{R}}$.
If $n \geqslant 3$ then the localization sequence in $K$-theory implies that $H^2_{\mathcal{M}/{\mathbf{Z}}}(\overline{M}_K,{\mathbf{Q}}(n)) = H^2_{\mathcal{M}}(\overline{M}_K,{\mathbf{Q}}(n))$. In the case $n=2$, Schappacher and Scholl [@schappacher-scholl Thm 1.1.2(iii)] later proved that $W_2 \subset H^2_{\mathcal{M}/{\mathbf{Z}}}(\overline{M}_K,{\mathbf{Q}}(2))$.
Let us now reformulate Beilinson’s theorem using the equivariant formalism of §1. The Hecke algebra ${\mathbf{T}}_K$ acts on the Chow motive $H^1(\overline{M}_K)(n)$, thereby defining an element of $\operatorname{CHM}_{{\mathbf{Q}}}({\mathbf{T}}_K)$. The following result is probably well-known to the experts, but doesn’t seem to appear in the literature.
\[beilinson equiv\] Conjecture \[conj4\] holds for $L({}_{{\mathbf{T}}_K} H^1(\overline{M}_K),n)$.
By Proposition \[func1\], it suffices to prove Conjecture \[conj4\] for $L({}_A M,n)$ where $M=H^1(\overline{M}_K) \otimes {\overline{{\mathbf{Q}}}}$ and $A={\mathbf{T}}_K \otimes {\overline{{\mathbf{Q}}}}$. We have a direct sum decomposition $M = \bigoplus_{\pi \in \Pi(K)} M(\pi)$ in $\operatorname{CHM}_{\mathbf{Q}}(A)$, where the structural morphism $A \to \operatorname{End}(M(\pi))$ factors through $A_\pi := \operatorname{End}_{{\overline{{\mathbf{Q}}}}}(\Omega(\pi)^K)$ (see Lemma \[TK semisimple\]). Moreover $L({}_{A_\pi} M(\pi),s)=L(\pi,s)$. By Proposition \[func2\], it suffices to establish Conjecture \[conj4\] for $L({}_{A_\pi} M(\pi),n)$.
By construction, the Beilinson subspace $W_n$ is stable under ${\mathbf{T}}_K$. For any $\pi \in \Pi(K)$, let $W_n(\pi)$ be the subspace of $W_n \otimes {\overline{{\mathbf{Q}}}}$ cut out by the character $\theta_{\pi,K}$. We may identify $W_n(\pi)$ with a subspace of $H^2_{\mathcal{M}/{\mathbf{Z}}}(M(\pi),{\overline{{\mathbf{Q}}}}(n))$. Since the Beilinson regulator map is ${\mathbf{T}}_K$-equivariant, we have $r_\BB(W_n(\pi))=R(\pi)$. But Beilinson’s theorem $R(\pi)=L'(\pi,2-n) \cdot H(\pi)$ means precisely that the element $\vartheta_\infty(W_n(\pi))$ of $K_0(A_\pi,{\mathbf{R}})$ is given by $\hat{\delta}(L'(\pi,2-n))$.
\[main thm 1\] Let $f$ be a newform of weight $2$ without complex multiplication, and let $F$ be a finite abelian extension of ${\mathbf{Q}}$. Let $X=\operatorname{End}_F(A_f) \otimes{\mathbf{Q}}$ and $G=\operatorname{Gal}(F/{\mathbf{Q}})$. For every integer $n \geqslant 2$, Conjecture \[conj4\] holds for $L({}_{X\{G\}} H^1(A_f/F),n)$.
Assume $f \in S_2(\Gamma_1(N))$ is a newform of level $N$. We use Theorem \[beilinson equiv\] with the subgroup $K=K_1(N)_F$ defined in \[hecke base change\], so that $\overline{M}_K=X_1(N)_F$. Let $J_1(N)_F$ be the Jacobian of $X_1(N)_F$. We have an isomorphism $H^1(X_1(N)_F) \cong H^1(J_1(N)_F)$ in $\operatorname{CHM}_{\mathbf{Q}}({\mathbf{T}}_{N,F})$ (see for instance [@scholl:motives Prop 4.5] applied to $X=X_1(N)_F$ and $X'=J_1(N)_F$). Let $e_f \in {\mathbf{T}}_{N,F}$ be the idempotent from Proposition \[pro ef\], and let ${\mathbf{T}}_{f,F} =e_f {\mathbf{T}}_{N,F} e_f$. By Theorem \[thm EndAf\], we have an isomorphism of Chow motives $e_f(H^1(J_1(N)_F)) = H^1(A_f/F)$ in $\operatorname{CHM}_{\mathbf{Q}}(X\{G\})$. The result now follows from Proposition \[func3\].
\[main thm 2\] Let $f$ be a newform of weight $2$ without complex multiplication, and let $F,F'$ be finite abelian extensions of ${\mathbf{Q}}$ such that $F \subset F'$. Let $X=\operatorname{End}_{F'}(B_{f,F}) \otimes {\mathbf{Q}}$ and $G=\operatorname{Gal}(F'/F)$. For every integer $n \geqslant 2$, Conjecture \[conj4\] holds for $L({}_{X\{G\}} H^1(B_{f,F}/F'),n)$.
By definition of $B_{f,F}$, we have an isogeny $A_f \sim_F B_{f,F}^m$ for some $m \geqslant 1$, and thus an isomorphism of Chow motives $H^1(A_f/F') \cong H^1(B_{f,F}/F')^{\oplus m}$. Let $X' = \operatorname{End}_{F'}(A_f) \otimes {\mathbf{Q}}$ and $G'=\operatorname{Gal}(F'/{\mathbf{Q}})$. We have a canonical embedding $R = M_m(X\{G\}) \cong M_m(X) \{G\} \hookrightarrow X'\{G'\}$. By Theorem \[main thm 1\] and Proposition \[func2\], Conjecture \[conj4\] holds for $L({}_R H^1(B_{f,F}/F')^{\oplus m},n)$. We conclude by projecting onto $H^1(B_{f,F}/F')$ using Proposition \[func3\].
Putting together Theorems \[strongly modular\] and \[main thm 2\], we deduce the following result.
\[cor 1\] Let $A$ be an abelian variety over a Galois number field $K$ such that $L(A/K,s)$ is a product of $L$-functions of newforms of weight $2$ without complex multiplication. Let $X=\operatorname{End}_K(A) \otimes {\mathbf{Q}}$. For every integer $n \geqslant 2$, Conjecture \[conj4\] holds for $L({}_X A,n)$.
In the particular case of ${\mathbf{Q}}$-curves, this gives the following result.
\[cor 2\] Let $E$ be a ${\mathbf{Q}}$-curve without complex multiplication over a number field $K$ such that $L(E/K,s)$ is a product of $L$-functions of newforms of weight $2$. For every integer $n \geqslant 2$, Conjecture \[conj4\] holds for $L(E/K,n)$.
This result has the following consequence for Zagier’s conjecture on $L(E,2)$ and Deninger’s conjecture on $L(E,3)$ for ${\mathbf{Q}}$-curves (see [@brunault:LEF] and [@goncharov:LE3] for how to derive Corollary \[cor 3\] from Corollary \[cor 2\]).
\[cor 3\] Let $E$ be a ${\mathbf{Q}}$-curve without complex multiplication over a number field $K$ such that $L(E/K,s)$ is a product of $L$-functions of newforms of weight $2$. Then the weak forms of Zagier’s conjecture on $L(E/K,2)$ and Deninger’s conjecture on $L(E/K,3)$ hold.
---
author:
- 'R. Srikanth'
- Subhashish Banerjee
date: 'Received: date / Revised version: date'
title: 'Complementarity in atomic (finite-level quantum) systems: an information-theoretic approach'
---
Introduction
============
Two observables $A$ and $B$ of a $d$-level system are called complementary if knowledge of the measured value of $A$ implies maximal uncertainty of the measured value of $B$, and vice versa [@kraus; @mu88]. Complementarity is an aspect of the Heisenberg uncertainty principle, which says that for any state $\psi$, the probability distributions obtained by measuring $A$ and $B$ cannot both be arbitrarily peaked if $A$ and $B$ are sufficiently non-commuting. Heisenberg uncertainty is traditionally expressed by the relation $$\triangle_\psi A
\triangle_\psi B \ge \frac{1}{2} |\langle [A,B]\rangle_\psi|,
\label{eq:hu}$$ where $(\triangle_\psi A)^2 = \langle A^2\rangle_\psi - (\langle
A\rangle_\psi)^2$. However, this representation of the Heisenberg uncertainty relation has the disadvantage that the right hand side of Eq. (\[eq:hu\]) is not a fixed lower bound but is state dependent. For example, if $\psi$ is an eigenstate of $A$, then both $\triangle_\psi A$ and the right hand side of Eq. (\[eq:hu\]) vanish, so that no restriction is imposed on the uncertainty in $B$. To improve this situation, an information theoretic (or “entropic") version of the Heisenberg uncertainty relationship has been proposed [@kraus; @mu88; @deu83], which relies on Shannon entropy of measurement outcomes as a measure of uncertainty [@nc00; @delg]. An application of this idea to obtain an entropic uncertainty relation for oscillator systems in the Pegg-Barnett scheme [@pb89] has been made in Ref. [@abe], and for entropic uncertainty relations among more than two complementary variables, in Ref. [@wiwe].
Given two observables $A \equiv \sum_a a|a\rangle\langle a|$ and $B
\equiv \sum_b b|b\rangle\langle b|$, let the entropy generated by measuring $A$ or $B$ on a state $|\psi\rangle$ be given by, respectively, $H(A)$ and $H(B)$. The information theoretic representation of the Heisenberg uncertainty principle states that $H(A) + H(B) \ge 2\log\left(\frac{1}{f(A,B)}\right)$, where $f(A,B) =
\max_{a,b}|\langle a|b\rangle|$, and $H(\cdot)$ is the Shannon binary entropy. We note that $f(A,B) \ge d^{-1/2}$, where $d$ is the (finite) dimension of the system. A pair of observables, $A$ and $B$, for which $f(A,B)=d^{-1/2}$ are said to form mutually unbiased bases (MUB) [@ii81; @dur05]. Thus, any $|a\rangle$ is an equal amplitude superposition in the basis $\{|b\rangle\}$ and vice versa. Conventionally, two Hermitian observables are called complementary only if they are mutually unbiased. Given a mutually unbiased pair of Hermitian observables, $A$ and $B$, the Heisenberg uncertainty relation takes the form $$H(A) + H(B) \ge \log d.
\label{eq:hu0}$$ A further advantage of the entropic version of the uncertainty principle over (\[eq:hu\]) is that unlike the latter, it is insensitive to eigenvalue relabeling, and depends only on the probability distribution obtained by measuring $A$ or $B$ on a given state [@deu83].
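As a quick numerical illustration of (\[eq:hu0\]) (a sketch only; the sample size, random seed and the choice of the mutually unbiased pair $\sigma_z$, $\sigma_x$ are ours), one can draw random pure qubit states and evaluate the entropy sum:

``` python
# Check H(A) + H(B) >= log d (in bits, d = 2) for the mutually unbiased
# qubit pair A = sigma_z, B = sigma_x, over randomly sampled pure states.
import numpy as np

rng = np.random.default_rng(0)

def shannon(p):
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

worst = np.inf
for _ in range(10000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    p_z = np.abs(psi) ** 2                                   # sigma_z outcome probabilities
    p_x = np.array([abs(np.vdot(plus, psi)) ** 2,
                    abs(np.vdot(minus, psi)) ** 2])          # sigma_x outcome probabilities
    worst = min(worst, shannon(p_z) + shannon(p_x))

print(worst)   # never drops below log2(2) = 1; the bound is saturated by eigenstates of either observable
```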
Even the information theoretic representation (\[eq:hu0\]) may not in general be suitable if $A$ or $B$ is not discrete, because the continuous analog of $H(A)$, which is $H_c(p) \equiv -\int_x
dx~p(x)\log[p(x)]$, is not positive definite, as can be seen from the case where the probability distribution is given by $p(x)=2$ for $x
\in [0,\frac{1}{2}]$ and $p(x)=0$ for $x \in (\frac{1}{2},1]$, where we find $H_c(p)=-\log2$. It is quite possible that this pathological behaviour does not afflict classes of physical states of interest. In particular, we verified this in the case of the [*phase distribution*]{} of two- and four-level atomic systems. However, we are not aware that this is generically true. In any case, this potential problem can be generally overcome if the uncertainty principle is expressed in terms of relative entropy (also called Kullback-Leibler divergence, which is always positive) [@kl51], instead of Shannon entropy. An example of where this finds application would be when one of the observables, say $A$, is bounded, and its conjugate $B$ is described not as a Hermitian operator but as a continuous-valued POVM. An instance of this kind, considered below in detail, is the number and phase of an atomic system. Here we show that the relative entropic definition can be used to express complementarity of number and phase, where the notion of complementarity is extended to accommodate POVMs. We thus make this intuitive notion more concrete. Here the ‘number’ variable is analogous to energy in oscillator systems (in the sense of having discrete eigenvalues with fixed difference between consecutive values) and to the amplitude of a light field (e.g., a laser, in the sense of being conjugate to a phase variable). We note that recourse to relative entropy is not necessary for a POVM of discrete variables [@mas07], since Shannon entropy is well defined in this case.
The quantum description of phases [@pp98] has a long history [@pb89; @pad27; @sg64; @cn68; @ssw90; @scr93]. Pegg and Barnett [@pb89], following Dirac [@pad27], carried out a polar decomposition of the annihilation operator and defined a Hermitian phase operator in a finite-dimensional Hilbert space. In their scheme, the expectation value of a function of the phase operator is first carried out in a finite-dimensional Hilbert space, and then the dimension is taken to the limit of infinity. However, it is not possible to interpret this expectation value as that of a function of a Hermitian phase operator in an infinite-dimensional Hilbert space [@ssw91; @mh91]. To circumvent this problem, the concept of phase distribution for the quantum phase has been introduced [@ssw91; @as92]. In this scheme, one associates a phase distribution to a given state such that the average of a function of the phase operator in the state, computed with the phase distribution, reproduces the results of Pegg and Barnett.
An interesting question to ask is how mutually unbiased observables behave in the presence of noise. Intuitively, one would expect that the uncertainty or entropy of each observable should be non-decreasing under the effect of noise. However, this is not generally true, as seen for example in the case of a quantum deleter [@qdele; @sr07], where uncertainty in the computational basis vanishes asymptotically during a qubit’s dissipative interaction with a vacuum bath. Here we study number and phase of atomic systems subjected to both non-dissipative and dissipative noise. Noise can be thought of as a manifestation of an open system effect [@bp02]. The total Hamiltonian is $H = H_S + H_R + H_{SR}$, where $S$ stands for the system, $R$ for the reservoir and $SR$ for the system-reservoir interaction. The evolution of the system of interest $S$ (in this case the atomic system) is studied taking into account the effect of its environment $R$, through the $SR$ interaction term, making the resulting dynamics non-unitary. The open system effects can be broadly classified into non-dissipative, corresponding to the case where $[H_S, H_{SR}] = 0$, resulting in decoherence without dissipation, or dissipative, corresponding to the case where $[H_S, H_{SR}] \neq 0$, resulting in decoherence along with dissipation [@bg07].
A class of observables that may be measured repeatedly with arbitrary precision, with the influence of the measurement apparatus on the system being confined strictly to the conjugate observables, is called quantum non-demolition (QND) or back-action evasive observables [@bvt80; @bk92; @wm94; @zu84]. Such a measurement scheme was originally introduced in the context of the detection of gravitational waves [@ct80; @bo96]. The non-dissipative open system effect described above would be a QND effect. Since such processes describe dephasing without dissipation, a study of phase diffusion in this situation is important in the context of a number of experimental situations. A study of the quantum phase diffusion in a number of QND systems was carried out in Ref. [@sb06] using the phase distribution approach. In Ref. [@sr07], the above study was extended to include the effect of dissipation on phase diffusion. This would be under the rubric of a dissipative open system effect, described above.
In this paper we study three broad, related problems: first, we formulate a novel characterization of the Heisenberg uncertainty relationship in terms of Kullback-Leibler divergence (or relative entropy); second, we motivate it by applying it to a study of complementarity in an angular momentum system, which involves a continuous variable POVM; lastly, we study the behavior of complementary variables when subjected to dissipative and non-dissipative (purely dephasing) noise.
The plan of the paper is as follows. In Section \[sec:phasdistr\], we introduce the concept of phase distribution in an atomic system, which will be used subsequently. In Section \[sec:qinf\], we motivate and develop an information theoretic representation of complementarity as applied to a two-level atomic system, with a brief discussion of a four-level atomic system. Since any system of interest would, inevitably, be surrounded by an environment which would affect its dynamics, it is of relevance to discuss the above ideas of complementarity in the context of open quantum systems. We do this in Section \[sec:open\] by recapitulating relevant work [@sr07; @bg07; @sb06; @gp; @sqgen] on open quantum systems. Section \[sec:openphase\] deals with the non-dissipative open system effect, described by the phase damping channel [@nc00; @bg07; @sb06; @gp], and Section \[sec:opengen\] discusses the dissipative open system effect, described by the squeezed generalized amplitude damping channel [@gp; @sqgen]. The reason for the above terminologies is the connection of the dynamics generated by these processes with the noise effects pertinent to quantum information [@gp]. For completeness, we relegate some technical details pertaining to these noisy channels to Appendices A and B, where the physical processes underlying these channels are also briefly discussed. In Section \[sec:concl\] we make our conclusions and discuss some open questions coming out of our work.
Phase distribution \[sec:phasdistr\]
====================================
It is not possible to interpret the expectation value of a function of the phase operator, in the Pegg and Barnett scheme [@pb89], as the expectation value of a function of a Hermitian phase operator in an infinite-dimensional Hilbert space [@ssw91; @mh91]. This motivates the introduction of the phase distribution for oscillator systems [@ssw91; @as92]. Interestingly, the concept of phase distribution can also be extended to atomic systems [@as96], which we study here. The phase distribution ${\cal P}(\phi)$, $\phi$ being related to the phase of the dipole moment of the system, is given by $${\cal P}(\phi) = {2j+1 \over 4 \pi} \int_{0}^{\pi} d\theta
\sin(\theta) Q(\theta, \phi), \label{2a.4}$$ where ${\cal P}(\phi)> 0$ and is normalized to unity, i.e., $\int_{0}^{2\pi} d\phi {\cal P}(\phi) = 1$. In the above, $j$ is the angular momentum of the atomic system. The quantity $\phi$ is important in the context of atomic coherences and the interferometry based on such coherences [@as96]. Here $Q(\theta, \phi)$ is defined as $$Q(\theta, \phi) = \langle \theta, \phi|\rho^s| \theta, \phi \rangle,
\label{2a.5}$$ where $|\theta, \phi \rangle$ are the atomic coherent states [@mr78; @ap90] given by an expansion over the Wigner-Dicke states [@at72], which are the simultaneous eigenstates of the angular momentum operators $J^2$ and $J_Z$, as $$|\theta, \phi \rangle = \sum\limits_{m= -j}^j
\binom{2j}{j + m}^{1 \over 2}
(\sin(\theta / 2))^{j+m}
(\cos(\theta / 2))^{j-m} |j, m \rangle e^{-i(j + m) \phi}.
\label{2a.6}$$ It can be shown that the angular momentum operators $J_\xi, J_\eta$ and $J_\zeta$ (obtained by rotating the operators $J_x,
J_y$ and $J_z$ through an angle $\theta$ about an axis $\hat{n}
= (\sin\phi, -\cos\phi,0)$), being mutually non-commuting, obey an uncertainty relationship of the type $\langle J_\xi^2 \rangle
\langle J_\eta^2 \rangle \ge \frac{1}{4}\langle J_\zeta^2 \rangle$. Atomic coherent states (obtained by rotating the Wigner-Dicke states via $\theta$ and $\phi$ as above) are precisely those states that saturate this bound, from which the name is derived [@as96]. For two level systems, they exhaust all pure states, whereas for larger dimensions, this is no longer true. Using Eq. (\[2a.5\]) in Eq. (\[2a.4\]), with insertions of partitions of unity in terms of the Wigner-Dicke states, we can write the phase distribution function as [@sb06] $$\begin{aligned}
{\cal P}(\phi) &=& {2j+1 \over 4 \pi} \int_{0}^{\pi} d\theta \sin
\theta \sum\limits_{n,m= -j}^{j} \langle \theta, \phi |j, n \rangle
\langle j, n| \rho^s (t)| j, m \rangle \langle
j, m| \theta, \phi \rangle. \label{2a.7}\end{aligned}$$ The phase distribution ${\cal P}(\phi)$, taking into account the environmental effects, has been studied in detail for QND as well as dissipative systems in [@sb06; @sr07] for physically interesting initial conditions of the system $S$, i.e., (a) the Wigner-Dicke state, (b) the atomic coherent state and (c) the atomic squeezed state.
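To make Eq. (\[2a.7\]) concrete, the following sketch evaluates it numerically for a spin-$1/2$ atomic coherent state and compares with the closed-form distribution ${\cal P}(\phi) = \frac{1}{2\pi}\left[1 + \frac{\pi}{4}\sin\alpha^{\prime}\cos(\beta^{\prime}-\phi)\right]$ quoted in the next section; the angles and the evaluation point are illustrative choices of ours.

``` python
# Evaluate the phase distribution (2a.7) for a spin-1/2 atomic coherent state
# |alpha', beta'> and compare with the closed form
# P(phi) = (1/2pi)[1 + (pi/4) sin(alpha') cos(beta' - phi)].
import numpy as np
from math import comb
from scipy.integrate import quad

j = 0.5
alpha, beta = 0.7, 1.3                 # illustrative coherent-state angles
m_vals = [-j, j]

def coeff(theta, phi, m):
    # <j,m | theta,phi>, read off from Eq. (2a.6)
    return (np.sqrt(comb(int(2 * j), int(j + m)))
            * np.sin(theta / 2) ** (j + m) * np.cos(theta / 2) ** (j - m)
            * np.exp(-1j * (j + m) * phi))

psi = np.array([coeff(alpha, beta, m) for m in m_vals])   # state in the Wigner-Dicke basis

def P(phi):
    def integrand(theta):
        overlap = sum(np.conj(coeff(theta, phi, m)) * psi[i]
                      for i, m in enumerate(m_vals))       # <theta,phi | psi>
        return np.sin(theta) * abs(overlap) ** 2
    return (2 * j + 1) / (4 * np.pi) * quad(integrand, 0, np.pi)[0]

phi0 = 2.0
closed = (1 / (2 * np.pi)) * (1 + (np.pi / 4) * np.sin(alpha) * np.cos(beta - phi0))
print(P(phi0), closed)                 # the two values agree
```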
Information theoretic representation of complementarity \[sec:qinf\]
====================================================================
The relative entropy associated with a discrete distribution $f(j)$ with respect to a distribution $g(j)$ defined over the same index set, is given by $$S(f||g) = \sum_j f(j)\log\left(\frac{f(j)}{g(j)}\right).
\label{eq:re0}$$ It can be thought of as a measure of ‘distance’ of distribution $f$ from distribution $g$ in that $S(f||g) \ge 0$, where the equality holds if and only if $f(j)=g(j)$ [@nc00]. Consider random variable $F$ with probability distribution $f$. We will define $R(F)$ as the relative entropy of $f$ with respect to the uniform distribution $\frac{1}{d}$, i.e., $$R(F) \equiv R[f(j)] = \sum_j f(j)\log(df(j)).
\label{eq:rf}$$ As a measure of distance from a uniform distribution, which has maximal entropy, $R(F)$ can be interpreted as a measure of [*knowledge*]{}, as against uncertainty, of the random variable described by distribution $f$. The following theorem re-casts Heisenberg uncertainty principle in terms of relative entropy.
Given two mutually unbiased Hermitian observables $A$ and $B$, the uncertainty relation (\[eq:hu0\]) is equivalent to $$R(A) + R(B) \le \log d,
\label{eq:ra}$$ where $d$ is the (finite) dimension of the system.
[**Proof.**]{} Let the distribution obtained by measuring $A$ and $B$ on a given state be, respectively, $\{p_j\}$ and $\{q_k\}$. The l.h.s is given by $$\begin{aligned}
S\left(A||\frac{1}{d}\right) + S\left(B||\frac{1}{d}\right) &=&
\sum_j p_j \log( dp_j) +
\sum_k q_k \log( dq_k) \nonumber \\
&=& -[H(A) + H(B)] + 2\log d \label{eq:sub} \\
&\le& -2\log\left(\frac{1}{f(A,B)}\right) + 2\log d.\end{aligned}$$ This is the general result for any two non-commuting Hermitian observables. If $A$ and $B$ are mutually unbiased, then $f(A,B)=d^{-{\frac{1}{2}}}$, and the theorem follows. It follows from the concavity of $H$, and thus from the convexity of $R$, that the inequality Eq. (\[eq:ra\]) derived for pure states holds also for mixed states. $\blacksquare$
Physically, Eq. (\[eq:ra\]) expresses the fact that simultaneous knowledge of $A$ and $B$ is bounded above by $\log d$. This is in contrast to inequality (\[eq:hu0\]), which is bounded below, being a statement on the sum of ignorances or uncertainties. Both are equivalent ways of expressing the fact that the probability distributions obtained by measuring $A$ and $B$ on several identical copies of a given state cannot both peak simultaneously.
In terms of $R$, two Hermitian observables $A$ and $B$ of a $d$-level system are called mutually unbiased if the maximal knowledge of the measured value of $A$, given by $\log d$ bits, implies minimal knowledge of the measured value of $B$, given by 0 bits, and vice versa. In anticipation of the introduction of POVMs instead of Hermitian observables, we will find it convenient to weaken the definition of mutual unbiasedness and call two variables $A$ and $B$ (one or both of which may be a POVM) [*quasi-mutually unbiased*]{} if the maximal knowledge of the measured value of $A$ implies minimal knowledge of the measured value of $B$, and vice versa. Since the maximum knowledge is no longer $\log d$ bits, but less, the pair $A$ and $B$ may be called quasi-mutually unbiased bases (quasi-MUB’s), an extension of the concept of MUB from the case of orthonormal bases to that of non-orthonormal bases.
If two observables are not mutually unbiased, then $\log d$ does not bound from above the knowledge sum $R_T \equiv R(A) + R(B)$, and there exist states such that the corresponding sum satisfies $R_T > \log d$. Intuitively, this is because in the case of two observables that are not mutually unbiased, knowledge of the two observables pertaining to a given state may simultaneously peak. For example, consider the qubit observables $\sigma_z$ and ${\bf n}\cdot{\bf \sigma}$ in the Hilbert space $\mathbb{C}^2$, where ${\bf n} =
(\sin\theta,0,\cos\theta)$ and ${\bf \sigma}$ is the vector of Pauli matrices. It can be seen using Eq. (\[eq:sub\]) that any eigenstate of ${\bf n}\cdot{\bf \sigma}$ corresponds to the knowledge sum $R_T =
2 - H(\cos^2(\theta/2))$. This sum is greater than one, except for $\theta=\pi/2$, which corresponds to the mutually unbiased observable $\sigma_x$.
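The following sketch (the values of $\theta$ are illustrative) checks this expression for eigenstates of ${\bf n}\cdot{\bf \sigma}$, and shows that only the mutually unbiased case $\theta=\pi/2$ brings the knowledge sum down to the bound of Eq. (\[eq:ra\]).

``` python
# Knowledge sum R(sigma_z) + R(n.sigma) for the +1 eigenstate of n.sigma,
# n = (sin t, 0, cos t): equals 2 - H(cos^2(t/2)) and drops to 1 (= log2 d)
# only in the mutually unbiased case t = pi/2.
import numpy as np

def R(p):    # knowledge: relative entropy w.r.t. the uniform distribution, in bits
    d = len(p)
    p = np.asarray(p, float)
    p = p[p > 1e-15]
    return float(np.sum(p * np.log2(d * p)))

for t in [0.3, 1.0, np.pi / 2]:
    psi = np.array([np.cos(t / 2), np.sin(t / 2)])   # +1 eigenstate of n.sigma
    p_z = psi ** 2                                   # measuring sigma_z
    p_n = np.array([1.0, 0.0])                       # measuring n.sigma itself
    c = np.cos(t / 2) ** 2
    predicted = 2 - (-c * np.log2(c) - (1 - c) * np.log2(1 - c))
    print(t, R(p_z) + R(p_n), predicted)             # the last two columns agree
```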
Eq. (\[eq:re0\]) has a natural extension to the continuous case, given by $$S(f||g) = \int dp~ f(p)\log\left(\frac{f(p)}{g(p)}\right).
\label{eq:re1}$$ As in the discrete case, we define $R(f)$ as relative entropy setting $g(p)$ to a continuous constant function. In particular, the relative entropy of ${\cal P}(\phi)$ with respect to a uniform distribution $\frac{1}{2\pi}$ [@sb06; @sr07] over $\phi$ is given by the functional $$R[{\cal P}(\phi)] = \int_0^{2\pi}d\phi~
{\cal P}(\phi)\log[2\pi {\cal P}(\phi)],
\label{eq:phient}$$ where the $\log(\cdot)$ refers to the binary base.
We define minimum entropy states with respect to an observable as states that yield the minimum entropy when the observable is measured on them. In the context of relative entropy, these states can be generalized to what may be called maximum knowledge (MXK) states, which are applicable even when the measured variable is continuous. For projector valued measurements (PVMs), clearly any eigenstate is a MXK state, with a corresponding entropy of zero and knowledge $R =
\log d$. PVMs, projectors to the eigenstates of a Hermitian operator representing an observable, satisfy three axiomatic requirements: they are positive operators that form a partition of unity; further, they satisfy the orthonormality condition $\hat{P}_j\hat{P}_k =
\delta_{jk}\hat{P}_j$, where $\hat{P}_j$ is a measurement operator. The last property implies the idempotency of projectors, which captures the idea that projective measurements are repeatable. From a quantum information perspective, it is useful to consider generalized measurements in which the operator elements $M_m$ may not be orthonormal, but satisfy the completeness condition $\sum_m
M_m^{\dag}M_m = I$ and $M^{\dag}_mM_m \ge 0$ [@nc00]. In the context of a qubit, for a generalized measurement, the knowledge corresponding to an MXK state can be less than 1, i.e., $R(|{\rm
MXK}\rangle) \le 1$. For a PVM, we have $R(|{\rm MXK}\rangle) = 1$, whereas a POVM considered here is a measurement strategy such that $R(|{\rm MXK}\rangle) < 1$. The reason is that whereas PVM is an orthonormal resolution of unity, a POVM forms a non-orthonormal resolution of unity [@holevo]. POVMs are useful elsewhere, in quantum information, as general measurement strategies for optimally distinguishing states [@nc00].
A plot of $R_\phi \equiv R[{\cal P}(\phi)]$ for a two-level atomic system in an atomic coherent state $|\alpha^{\prime},\beta^{\prime}\rangle$ with ${\cal P}(\phi) = {1
\over 2 \pi}\left[1 + {\pi \over 4} \sin(\alpha^{\prime})
\cos(\beta^{\prime} - \phi)\right]$ [@sb06; @sr07], is given by the dashed curve in Figure (\[fig:minentcoh\]). We note that $R_\phi$ has no dependence on $\beta^{\prime}$ because $\beta^\prime$ occurs in ${\cal P}(\phi)$ only as the translation $\phi-\beta^{\prime}$, and $R_\phi$ is translation invariant, i.e., unchanged under the transformation $\phi \longrightarrow \phi + \Delta$. The maximum knowledge $R_\phi$ of about 0.245 occurs at $\alpha^{\prime}=\pi/2$. The corresponding continuous family of states $|\pi/2,\beta^{\prime}\rangle$ forms the MXK states or [*quasi-eigenstates*]{} of the phase observable. These are equatorial states on the Bloch sphere, having the form $\frac{1}{\sqrt{2}}(|0\rangle + e^{i\phi_0}|1\rangle)$. That $R_\phi$ is less that 1 for these states reflects the fact that here phase $\phi$ is a POVM.
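The dashed curve just described can be reproduced by direct numerical integration of Eq. (\[eq:phient\]); the grid below is an illustrative choice.

``` python
# Knowledge R_phi of Eq. (eq:phient) for the two-level atomic coherent state,
# P(phi) = (1/2pi)[1 + (pi/4) sin(alpha') cos(beta' - phi)]; beta' drops out
# by translation invariance, so it is set to zero.
import numpy as np
from scipy.integrate import quad

def R_phi(alpha):
    P = lambda phi: (1 / (2 * np.pi)) * (1 + (np.pi / 4) * np.sin(alpha) * np.cos(phi))
    return quad(lambda phi: P(phi) * np.log2(2 * np.pi * P(phi)), 0, 2 * np.pi)[0]

alphas = np.linspace(0, np.pi, 101)
values = [R_phi(a) for a in alphas]
print(max(values), alphas[int(np.argmax(values))])   # about 0.245, attained at alpha' = pi/2
```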
In analogy with the oscillator case, the Wigner-Dicke or excitation states may be thought of as ‘number states’, thereby making $J_z$ the ‘number observable’, whose distribution is $p(m)$, given in Eq. (\[pnum\]). The ‘number’ distribution given by $$p(m) = \langle j,m|\rho^s(t)|j,m\rangle, \label{pnum}$$ is considered as complementary to ${\cal P}(\phi)$ [@as96]. It is of interest to ask whether they are complementary in the sense of MUBs.
In the manner of Eq. (\[eq:rf\]), we can define $R_m \equiv R[p(m)]$ as knowledge of the number variable. We note that $J_z$ and phase $\phi$ have a reciprocal behavior reminiscent of MUBs: the eigenstates of $J_z$, i.e., Wigner-Dicke states, correspond to minimal knowledge $R_\phi (= 0)$, as seen from the dashed curve in Figure (\[fig:minentcoh\]). This can be seen by noting that for the Wigner-Dicke states $|j, \tilde{m} \rangle$, the phase distribution is [@sb06] $${\cal P}(\phi) = {2j+1 \over 2 \pi} \left(\begin{array}{c}
2j \\ j + \tilde{m} \end{array}
\right) {\cal B}\left[j + \tilde{m} + 1, j - \tilde{m} + 1
\right] = \frac{1}{2\pi}, \label{pwd}$$ where ${\cal B}$ stands for the Beta function. Thus, it follows via Eq. (\[eq:phient\]) that the knowledge $R_\phi$ vanishes. Conversely, we note that the states which minimize $R_\phi$ are the Wigner-Dicke states. To see this, we observe that if ${\cal P}(\phi)$ is constant, then in Eq. (\[2a.7\]), each term in the summation, which is proportional to $e^{i(m-n)\phi}$, must individually be independent of $\phi$. Since $\phi$ is arbitrary, this is possible only if $m=n$, i.e., the state $\rho^s$ is diagonal in the Wigner-Dicke basis. Thus, MXK states of $m$ correspond precisely to minimum knowledge (MNK) states of $\phi$.
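For completeness, the constancy in Eq. (\[pwd\]) is just the elementary identity $$\binom{2j}{j + \tilde{m}}\, {\cal B}\left[j + \tilde{m} + 1, j - \tilde{m} + 1\right] = \frac{(2j)!}{(j+\tilde{m})!\,(j-\tilde{m})!} \cdot \frac{(j+\tilde{m})!\,(j-\tilde{m})!}{(2j+1)!} = \frac{1}{2j+1},$$ so that the prefactor $(2j+1)/2\pi$ in Eq. (\[pwd\]) reduces to the uniform value $1/2\pi$.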
The plot of relative entropy $R_m$ for all atomic coherent states is given by the bold curve in Figure (\[fig:minentcoh\]). The equatorial states on the Bloch sphere, the MXK states of $\phi$, are precisely the MNK states of $m$ (characterized by $R_m=0$), as can be seen from comparing the dashed and bold curves in Figure (\[fig:minentcoh\]). Thus number and phase share with MUBs the reciprocal property that maximum knowledge of one of them is simultaneous with minimal knowledge of the other, but differ from MUBs in that the maximum possible knowledge of $\phi$ is less than $\log(d) = 1$ bit, essentially on account of its POVM nature.
Two variables form a quasi-MUB if any MXK state of either variable is an MNK state of the other, where the knowledge of the MXK state may be less than $\log d$ bits. Thus, $J_z$ and $\phi$ are quasi-MUBs (but not MUBs), and are complementary in this extended sense.
From the dot-dashed curve in this Figure, we numerically find an expression of the uncertainty principle to be $$\label{eq:1bit}
R_T \equiv R_\phi + R_m \le 1$$ for all states (pure or in general mixed) in $\mathbb{C}^2$, in analogy with Eq. (\[eq:ra\]). The inequality is saturated for the Wigner-Dicke states.
As an expression of the uncertainty principle, the relation (\[eq:1bit\]) still leaves some room for improvement. First, it is not a tight bound. In particular, for equatorial states it permits $R_\phi$ to be as high as 1, whereas, as seen from the dashed curve in Figure \[fig:minentcoh\], the maximum value of $R_\phi$ is only about $0.245$. We note that the bound cannot be tightened simply by decreasing the r.h.s., since it is saturated for Wigner-Dicke states. Further, the variable $\phi$ takes values in the interval $[0,2\pi]$ irrespective of the dimensionality of the Hilbert space, unlike $m$, which takes $d$ values. Consequently, $R_\phi$, unlike $R_m$, is not bounded by the dimension of the Hilbert space in a straightforward way. To see that in general $R[p(x)]$ increases without bound, consider the probability density $p(x)=x_0$ (with $x_0 > 1$) on $x \in [0,\frac{1}{x_0}]$ and $p(x)=0$ on $x \in (\frac{1}{x_0},1]$, for which we find $R(p(x)||1) =\log x_0$.
One way to address these problems is to generalize (\[eq:1bit\]) to a family of inequalities, parametrized by $\mu>0$, of the form $$\label{eq:2bit}
R_S(\mu) \equiv \mu R_\phi + R_m \le 1$$ for all states in $\mathbb{C}^2$. We find that the largest value of $\mu$ such that inequality (\[eq:2bit\]) is satisfied over the whole state space is $(r_\phi)^{-1} \approx 4.085$, where $r_\phi$ is the value of $R_\phi$ for the equatorial states, the MXK states of $\phi$. A plot of $R_S(1/r_\phi)$ over pure states is shown as the dotted curve in Figure \[fig:minentcoh\]. Comparing this with the dot-dashed curve in Figure \[fig:minentcoh\], we find that $R_S(\mu)$ is bounded more tightly than $R_T \equiv R_S(1)$.
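A minimal numerical sketch of the weighted knowledge sum, under the same assumptions as above (relative entropies in bits, hypothetical helper names): it estimates $r_\phi$ from the equatorial states, sets $\mu = 1/r_\phi$, and sweeps $\alpha^{\prime}$ to check that $R_S(\mu)$ should not exceed 1 bit.

```python
import numpy as np

def phase_knowledge(alpha_p, n_grid=20001):
    # R_phi in bits for the atomic coherent state |alpha', beta'> (beta' irrelevant).
    phi = np.linspace(0.0, 2*np.pi, n_grid, endpoint=False)
    P = (1/(2*np.pi)) * (1 + (np.pi/4)*np.sin(alpha_p)*np.cos(phi))
    return float(np.sum(P*np.log2(2*np.pi*P)) * (2*np.pi/n_grid))

def number_knowledge(alpha_p):
    # R_m in bits: 1 bit minus the Shannon entropy of p(m) for j = 1/2.
    p = np.array([np.sin(alpha_p/2)**2, np.cos(alpha_p/2)**2])
    p = p[p > 0]
    return 1.0 + float(np.sum(p*np.log2(p)))

r_phi = phase_knowledge(np.pi/2)       # maximum phase knowledge (about 0.245 bits)
mu = 1.0 / r_phi                       # about 4.085
alphas = np.linspace(0.0, np.pi, 401)
R_S = np.array([mu*phase_knowledge(a) + number_knowledge(a) for a in alphas])
print(f"r_phi = {r_phi:.4f}, mu = {mu:.3f}, max of R_S(mu) = {R_S.max():.4f}")
# R_S(mu) is expected to stay at or below 1 bit, saturating at the poles
# (Wigner-Dicke states) and at the equator (MXK states of phase).
```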
From Figure \[fig:minentcoh\], we find that the two Wigner-Dicke states and all equatorial states may be regarded as [*coherent*]{} with respect to the number-phase pair, in that they maximize the knowledge sum and are thus closest to classical states. We note of course that this definition of state coherence differs from the conventional one for atomic states, defined with respect to angular momentum operators. Unless we use $\mu R_\phi$ in place of $R_\phi$, only the Wigner-Dicke states could be called coherent in the new sense.
We now briefly extend the entropic version of complementarity to a higher spin system, which is seen to present a new feature. We consider a spin-3/2 (four-level) system, whose general state is given by the ansatz $$|\psi\rangle = r_\alpha e^{i\theta_\alpha}|\frac{3}{2},-\frac{3}{2}\rangle +
r_\beta e^{i\theta_\beta}|\frac{3}{2},-\frac{1}{2}\rangle +
r_\gamma e^{i\theta_\gamma}|\frac{3}{2},+\frac{1}{2}\rangle +
r_\delta |\frac{3}{2},+\frac{3}{2}\rangle
\label{eq:ansatz}$$ where $r_\alpha^2 + r_\beta^2+r_\gamma^2+r_\delta^2=1$, and a global phase is omitted. Using Eq. (\[eq:ansatz\]) in Eq. (\[2a.7\]), we find $$\begin{aligned}
P(\phi) &=& \frac{1}{\pi}\left[\frac{1}{2} +
\left(\frac{15\pi r_\alpha r_\beta}{32\sqrt{3}}\right)
\cos(\phi-\theta_\alpha+\theta_\beta) +
\left(\frac{r_\alpha r_\gamma}{\sqrt{3}}\right)
\cos(2\phi-\theta_\alpha+\theta_\gamma) \right. \nonumber \\ &+&
\left(\frac{ 3\pi r_\alpha r_\delta}{32}\right)
\cos(3\phi-\theta_\alpha) +
\left(\frac{ 9\pi r_\beta r_\gamma}{32}\right)
\cos(\phi-\theta_\beta+\theta_\gamma) +
\left(\frac{r_\beta r_\delta}{\sqrt{3}}\right)
\cos(2\phi-\theta_\beta) \nonumber \\ &+&
\left. \left(\frac{15\pi r_\gamma r_\delta}{32\sqrt{3}}\right)
\cos(\phi-\theta_\gamma)\right].\end{aligned}$$
As before, we compute ‘number’ knowledge $R_m(r_\alpha,r_\beta,r_\gamma)$ using Eq. (\[eq:rf\]), and phase knowledge $R_\phi(r_\alpha,r_\beta, r_\gamma,
\theta_\alpha,\theta_\beta,\theta_\gamma)$ using Eq. (\[eq:phient\]). It may be verified that for ‘number’ states (for which $r_\alpha$ or $r_\beta$ or $r_\gamma$ or $r_\delta$ is 1), $R_\phi=0$. In fact, it may be seen from Eqs. (\[2a.6\]), (\[2a.7\]) and (\[pwd\]), that a general property of atomic systems is that a Wigner-Dicke state is equivalent to an MNK phase state in any finite dimension. On the other hand, numerically searching over all possible states of the form (\[eq:ansatz\]), we find that the maximum value of $R_\phi$, about 0.86 bits, occurs at $\psi(r_\alpha=0.36,r_\beta=0.61,r_\gamma=0.61,
\theta_\alpha=\pi,\theta_\beta=0,\theta_\gamma=\pi)$, which is not an equal amplitude superposition of ‘number’ states. Thus, remarkably, for the spin-3/2 case, MXK phase states do not correspond to MNK ‘number’ states, even though the converse is true. We expect that this unidirectional (as against mutual) unbiasedness will persist even for higher spin systems. Phase and ‘number’ therefore do not here form a quasi-MUB as defined for the single qubit case, and may be considered complementary only in an even weaker sense. This is in contrast to the case where the observables are Hermitian, where five MUBs are known to exist in four dimensions [@dur05].
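The following sketch evaluates the spin-3/2 phase distribution displayed above and the corresponding knowledges at the reported state. The helper names, the grid size, the clipping guard, the normalization used to fix $r_\delta$, and the reconstruction of the arguments in the first two mixed cosine terms are assumptions; the printed values are only expected to be close to those reported in the text.

```python
import numpy as np

def P_phi(phi, r, th):
    """Spin-3/2 phase distribution displayed above; r = (r_a, r_b, r_c, r_d),
    th = (th_a, th_b, th_c)."""
    ra, rb, rc, rd = r
    ta, tb, tc = th
    s3 = np.sqrt(3.0)
    return (1/np.pi) * (0.5
        + (15*np.pi*ra*rb/(32*s3)) * np.cos(phi - ta + tb)
        + (ra*rc/s3)               * np.cos(2*phi - ta + tc)
        + (3*np.pi*ra*rd/32)       * np.cos(3*phi - ta)
        + (9*np.pi*rb*rc/32)       * np.cos(phi - tb + tc)
        + (rb*rd/s3)               * np.cos(2*phi - tb)
        + (15*np.pi*rc*rd/(32*s3)) * np.cos(phi - tc))

def R_phi(r, th, n_grid=40001):
    phi = np.linspace(0.0, 2*np.pi, n_grid, endpoint=False)
    P = np.clip(P_phi(phi, r, th), 1e-300, None)   # numerical guard against log(0)
    return float(np.sum(P * np.log2(2*np.pi*P)) * (2*np.pi/n_grid))

def R_m(r):
    # 'Number' knowledge in bits: 2 bits minus the Shannon entropy of p(m) = r_m^2.
    p = np.array(r, dtype=float)**2
    p = p[p > 0]
    return 2.0 + float(np.sum(p * np.log2(p)))

# Amplitudes reported in the text for the state maximizing R_phi; r_d is taken
# from the normalization constraint.
r = (0.36, 0.61, 0.61, np.sqrt(max(0.0, 1 - 0.36**2 - 2*0.61**2)))
th = (np.pi, 0.0, np.pi)
print(f"R_phi = {R_phi(r, th):.3f} bits,  R_m = {R_m(r):.3f} bits")
```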
As in the two-level case, one way to address this problem is to generalize (\[eq:1bit\]) to a family of inequalities, parametrized by $\mu_2>0$, of the form $$\label{eq:4bit}
R_S(\mu_2) \equiv \mu_2 R_\phi + R_m \le 2$$ over all states in $\mathbb{C}^4$. Our strategy is to numerically search over all states of the form (\[eq:ansatz\])– other than the Wigner-Dicke states, where $R_m=2$ and $R_\phi=0$– in order to determine the largest value of $\mu_2$ such that inequality (\[eq:4bit\]) is [*just*]{} satisfied, i.e., the inequality must be satisfied at all points, with the equality being valid for at least one point. By this method, we find $\mu_2 = 1.973$ with the maximum $R_S(\mu_2)$ of 2 occurring at $\psi_p \equiv
\psi(r_\alpha=0.24,r_\beta=0.64,r_\gamma=0.68,
\theta_\alpha=\pi,\theta_\beta=0,\theta_\gamma=\pi)$. As states that maximize the knowledge sum $R_S(\mu_2)$, we may regard $\psi_p$ and the Wigner-Dicke states as coherent states from the viewpoint of number-phase entropy.
Application to open systems \[sec:open\]
========================================
Here we study the effect of noise coming from open quantum system effects, on the atomic number-phase complementarity developed in the previous section. The noise effects we consider come from non-dissipative as well as dissipative interactions of the atomic system $S$ with its environment which is modelled as a bath of harmonic oscillators starting in a squeezed thermal state [@bg07; @gp; @sqgen]. This enables us to consider the effect of bath squeezing on the complementarity. We briefly recapitulate previous work [@sr07; @bg07; @sb06; @gp; @sqgen] related to the effect of various noisy channels on the ‘number’ and phase distributions. In Section \[sec:openphase\] we consider the effect of the phase damping channel which is the information theoretic analogue of the non-dissipative open system effect [@bg07; @gp] while in Section \[sec:opengen\] we consider the effect of the squeezed generalized amplitude damping channel which is the information theoretic analogue of the dissipative open system effect [@gp; @sqgen]. Intuitively, one would expect that open system effects, like measurements, cannot increase the knowledge sum. Interestingly, we find that this is not true for certain regimes of the squeezed generalized amplitude damping channel.
Phase damping channel \[sec:openphase\]
---------------------------------------
The ‘number’ and phase distributions for a qubit starting from an atomic coherent state $|\alpha^\prime,\beta^\prime\rangle$, and subjected to a phase damping channel due to its interaction with a squeezed thermal bath, are [@sr07; @bg07; @sb06] $$\begin{aligned}
p(m) &=& \left( \begin{array}{c} 2j \\ j+m\end{array}\right)
(\sin(\alpha^{\prime}/2))^{2(j+m)}
(\cos(\alpha^{\prime}/2))^{2(j-m)} \nonumber \\
{\cal P}(\phi) &=& {1 \over 2 \pi}\left[1 + {\pi \over 4}
\sin(\alpha^{\prime}) \cos(\beta^{\prime} + \omega t - \phi) e^{-
(\hbar \omega)^2 \gamma(t)}\right]. \label{2a.9}
\label{eq:atomcohqnd}\end{aligned}$$ $R_\phi$ (Eq. (\[eq:phient\])) is invariant under the translation $\phi \longrightarrow \phi+a$. Setting $a= -\beta^\prime - \omega t$, we find that $R_\phi$ is independent of $\beta^\prime$. A derivation of Eq. (\[eq:atomcohqnd\]) can be found in Refs. [@sb06; @sr07]. For completeness, the expression for $\gamma(t)$ in Eq. (\[2a.9\]) is given in Appendix \[secap:qnd\], where the physical process underlying the phase damping channel is also discussed.
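As an illustration of how dephasing degrades phase knowledge, the sketch below lumps the factor $e^{-(\hbar \omega)^2 \gamma(t)}$ into a single damping parameter $D \in [0,1]$ (a simplification made only for this illustration; helper name hypothetical) and evaluates $R_\phi$ for the equatorial state as $D$ decreases.

```python
import numpy as np

def phase_knowledge_damped(alpha_p, D, n_grid=20001):
    """R_phi (bits) when the dephasing factor exp[-(hbar*omega)^2*gamma(t)] is
    lumped into a single parameter D in [0, 1]."""
    phi = np.linspace(0.0, 2*np.pi, n_grid, endpoint=False)
    P = (1/(2*np.pi)) * (1 + (np.pi/4)*np.sin(alpha_p)*np.cos(phi)*D)
    return float(np.sum(P*np.log2(2*np.pi*P)) * (2*np.pi/n_grid))

for D in [1.0, 0.75, 0.5, 0.25, 0.0]:
    print(f"D = {D:.2f}   R_phi(equatorial) = {phase_knowledge_damped(np.pi/2, D):.4f} bits")
# R_phi decreases monotonically with the damping factor D, whereas p(m), and
# hence R_m, is unaffected by the phase damping channel.
```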
Figure \[fig:sqminfiqnd\] depicts the effect of phase damping noise on the knowledge sum $R_S$. Comparing it with the noiseless case (dotted curve in Figure \[fig:minentcoh\]), we find a reduction in the total knowledge $R_S$, as expected. It follows from Eq. (\[eq:atomcohqnd\]) that $R_m$ remains unaffected under the action of this channel. Thus, the effect of noise on $R_S$ is due entirely to its effect on $R_\phi$, which decreases in the presence of noise [*for all pure states*]{} (because $\beta^\prime$ does not play any role and because the plot represents all possible values of $\alpha^\prime$).
Figure \[fig:sqminfiqnd\] shows that squeezing has the detrimental effect of impairing phase knowledge for all regimes of the parameter space. This is in marked contrast to the case of the squeezed generalized amplitude damping noise, discussed in Section (\[sec:opengen\]). Thus, squeezing, like temperature, has the overall detrimental effect of impairing $R_S$. This is consistent with the case of a QND interaction of the system with its environment (which generates a phase damping channel [@bg07; @gp]), i.e., $[H_S, H_{SR}] = 0$, as also corroborated by the observation that squeezing and temperature concurrently impair geometric phase [@gp] and phase diffusion [@sb06; @sr07]. This suggests that squeezing, like temperature, should adversely affect the channel capacity for phase damping noise.
Squeezed generalized amplitude damping channel \[sec:opengen\]
--------------------------------------------------------------
The ‘number’ and phase distributions for a qubit starting from an atomic coherent state $|\alpha^\prime,\beta^\prime\rangle$, and subjected to a squeezed generalized amplitude damping channel [@sqgen] due to its interaction with a squeezed thermal bath, are [@sr07], $$p(m=1/2,t)
= \frac{1}{2}\left[\left(1-\frac{\gamma_0}{\gamma^\beta}\right)
+ \left(1+\frac{\gamma_0}{\gamma^\beta}\right)e^{-\gamma^\beta t}\right]
\sin^2(\alpha^\prime/2) +
\frac{\gamma_-}{\gamma^\beta}\left(1-e^{-\gamma^\beta t}\right)
\cos^2(\alpha^\prime/2),
\label{eq:pmc}$$ and $$\begin{aligned}
{\cal P}(\phi) &=& \frac{1}{2 \pi} \left[1 + \frac{\pi}{4 \alpha}
\sin(\alpha^{\prime}) \Big\{\alpha \cosh(\alpha t)\cos(\phi -
\beta^{\prime}) + \omega \sinh(\alpha t) \sin(\phi - \beta^{\prime})
\right. \nonumber\\ & & \left. - \gamma_0 \chi \sinh(\alpha t) \cos(\Phi
+ \beta^{\prime} + \phi) \Big\} e^{-\frac{\gamma^{\beta} t}{2}}
\right]. \label{3p}\end{aligned}$$ A derivation of Eqs. (\[eq:pmc\]) and (\[3p\]) can be found in Ref. [@sr07]. For completeness, the parameters appearing in these equations are given in Appendix \[secap:disi\], where the physical process behind the squeezed generalized amplitude damping channel is also briefly discussed.
Figures \[fig:sqminfi\](a) and (b) depict the effect of squeezed generalized amplitude damping noise on $\mu R_\phi$ (Eq. (\[eq:2bit\])), without and with bath squeezing, respectively. Comparing them with the noiseless case of Figure \[fig:minentcoh\] (which, it may be noted, is unscaled by $\mu$), we find as expected that noise impairs phase knowledge. However, comparing Figure \[fig:sqminfi\](b) with (a), we find that squeezing has the beneficial effect of relatively improving phase knowledge for certain regimes of the parameter space, and the detrimental effect of relatively impairing it in others. This property can be shown to improve the classical channel capacity [@sqgen]. Further, bath squeezing is seen to render $R_\phi$ dependent on $\beta^{\prime}$, because, as evident from Eq. (\[3p\]), $\beta^\prime$ no longer appears as a translation in $\phi$ when the squeezing parameter $\chi$ (Eq. (\[eq:M\])) is non-vanishing. On the other hand, it follows from Eq. (\[eq:pmc\]) that $R_m$ is independent of $\beta^\prime$, so that $R_S$ is dependent on $\beta^\prime$. This stands in contrast to the phase damping channel, where, in spite of squeezing, $R_S$ remains independent of $\beta^{\prime}$ and, furthermore, squeezing impairs knowledge of $\phi$ in all regimes of the parameter space.
A point worth noting is that, in contrast to the phase damping channel, in a squeezed generalized amplitude damping channel, $R_m$ and $R_S$ are not necessarily non-increasing functions of time. Figure \[fig:usq\](a) depicts the effect of squeezed generalized amplitude damping channel on $R_S$, by bringing out the behavior of $R_S$ as a function of bath exposure time. The dashed curve shows that squeezing has a detrimental effect on the knowledge sum, as one would usually expect. A surprising departure from this behavior may be noted for the case of the bold curve, which corresponds to the action of a dissipative interaction with an unsqueezed vacuum bath, where the knowledge sum $R_S$ increases to 1. This counterintuitive behavior is due to the quantum deleting action, a contractive map whereby any initial state, including a mixed state, is asymptotically prepared in the pure state $|{\frac{1}{2}},-{\frac{1}{2}}\rangle$ for vanishing temperature, and a mixture of $|0\rangle$ and $|1\rangle$ states for finite temperature, where the asymptotic mixture is determined purely by the environmental parameters of $T$ and $r$, and not by the system’s initial state [@qdele]. A similar effect was noted in [@gp93], where in a study of quantum state diffusion of an open system it was shown that for a specific noise, due to a particular system-reservoir interaction, there can be a reduction in the quantum dispersion entropy leading to localization.
It follows from the complementarity relation Eq. (\[eq:2bit\]), that in the asymptotic limit of the deleting action, $R_\phi$ goes to 0 for both $|0\rangle$ and $|1\rangle$, and hence also, by the convexity of $R$, for any mixture that is diagonal in this basis. More generally, it is seen from Figure \[fig:usq\](b) that for all initial pure states, $R_\phi$ falls monotonically. This is to be expected since this noise prepares an asymptotic state that lies on the $z$-axis of the Bloch sphere, which implies by the convexity property of $R_\phi$ and the fact that $R_\phi=0$ for the north and south pole states, that asymptotically $R_\phi=0$ for [*all initial pure states*]{}.
Conclusions \[sec:concl\]
=========================
In this work, we have investigated the number-phase complementarity in atomic systems from an entropic perspective through the number and phase distributions. Here number distribution refers to the probability distribution of measurement outcomes in the Wigner-Dicke basis (Eq. (\[pnum\])), while phase distribution is defined by Eq. (\[2a.7\]). We derive an uncertainty principle in terms of the Kullback-Leibler or relative entropy $R$ of number and phase with respect to a uniform distribution. Since $R$ can be regarded as a measure of knowledge of a random variable, the entropic uncertainty principle takes the form of an upper bound on the sum of number knowledge ($R_m$) and phase knowledge ($R_\phi$). The choice of relative entropy over Shannon entropy was motivated by the fact that the latter is not strictly positive when applied to continuous probability distributions.
In the single-qubit case, number and phase are regarded as quasi-MUBs in the sense that any state maximizing knowledge of one variable simultaneously minimizes knowledge of the other, but maximum phase knowledge is strictly less than 1 bit (and less than $\log d$ bits in $d$ dimensions).
Since $R_\phi$ is strictly less than one bit, the relative entropic formulation (\[eq:1bit\]) of the uncertainty principle is not a tight bound. We define a family of inequalities, parametrized by $\mu$ (Eq. (\[eq:2bit\])), that improves the upper bound on $R_m$. For $\mu=1$ we recover Eq. (\[eq:1bit\]), while the tightest bound, saturated for the equatorial states, is obtained when $\mu \approx 4.085$. We briefly study the extension of the above concepts to a four-level system, where we find that the sense in which number and phase are said to be complementary must be further weakened to include unidirectional (but not mutual) unbiasedness. In particular, whereas phase is unbiased with respect to number, the converse is not true.
Finally, we study the complementary behavior of number and phase of a qubit subjected to the influence of its environment. For a qubit starting from an atomic coherent state $|\alpha^\prime,\beta^\prime\rangle$, the translation symmetry of $R_\phi$ in $\beta^\prime$ is broken by the introduction of squeezing in the bath, for the case of a dissipative system-bath interaction (Figure \[fig:sqminfi\](b)), but not in the case of a non-dissipative interaction. In the case of a purely decohering interaction, characterized by a phase damping channel, we find that noise invariably impairs the knowledge sum for these complementary variables (Eq. (\[eq:2bit\])), as expected. However, in the case of a squeezed generalized amplitude damping channel, the knowledge sum can increase in certain regimes. As a particularly dramatic illustration, when an initially maximally mixed state ${\frac{1}{2}}(|{\frac{1}{2}},{\frac{1}{2}}\rangle\langle{\frac{1}{2}},{\frac{1}{2}}| +
|{\frac{1}{2}},{\rm-}{\frac{1}{2}}\rangle\langle{\frac{1}{2}},{\rm-}{\frac{1}{2}}|)$ is subjected to an unsqueezed vacuum bath, $R_S$ rises from 0 to 1 asymptotically.
These results could be potentially useful for applications in quantum communication and quantum cryptography [@gis02] involving atomic systems. The present work brings forth a number of open questions concerning an information theoretic study of complementarity in atomic systems involving continuous-valued POVMs, of which we list some here. Of immediate interest is the question of whether the Shannon entropy of ${\cal P}(\phi)$ remains positive for all possible pure and mixed states. If so, one may revert from the use of the knowledge variable $R$ to that of entropy $S$. It would also be of interest to derive analytically the bounds on the weighted knowledge sum, which we have obtained here numerically. Finally, it is of interest to explore the full scope and implications of one-way unbiasedness, and its connection to complementarity.
Phase damping channel \[secap:qnd\]
===================================
Consider the Hamiltonian $$\begin{aligned}
H & = & H_S + H_R + H_{SR} \nonumber \\ & = & H_S +
\sum\limits_k \hbar \omega_k b^{\dagger}_k b_k + H_S
\sum\limits_k g_k (b_k+b^{\dagger}_k) + H^2_S \sum\limits_k
{g^2_k \over \hbar \omega_k}. \end{aligned}$$ Here $H_S$, $H_R$ and $H_{SR}$ stand for the Hamiltonians of the system, reservoir and system-reservoir interaction, respectively. $H_S$ is a generic system Hamiltonian which can be specified depending on the physical situation. $b^{\dagger}_k$, $b_k$ denote the creation and annihilation operators for the reservoir oscillator of frequency $\omega_k$, $g_k$ stands for the coupling constant (assumed real) for the interaction of the oscillator field with the system. The last term on the right-hand side of Eq. (1) is a renormalization inducing ‘counter term’. Since $[H_S, H_{SR}]=0$, the Hamiltonian (1) is of QND type. The system-plus-reservoir composite is closed and hence obeys a unitary evolution given by $$\rho (t) = e^{- iHt / \hbar} \rho (0) e^{iHt / \hbar} ,$$ where $$\rho (0) = \rho^s (0) \rho_R (0),$$ i.e., we assume separable initial conditions. Here $\rho_R (0)$ is the initial density matrix of the reservoir which we take to be a squeezed thermal bath given by $$\rho_R(0) = S(r, \Phi) \rho_{th} S^{\dagger} (r, \Phi), \label{rhorin}$$ where $$\rho_{th} = \prod_k \left[ 1 - e^{- \beta \hbar \omega_k}
\right] e^{-\beta \hbar \omega_k b^{\dagger}_k b_k} \label{rhoth}$$ is the density matrix of the thermal bath at temperature $T$, with $\beta \equiv 1/(k_B T)$, $k_B$ being the Boltzmann constant, and $$S(r_k, \Phi_k) = \exp \left[ r_k \left( {b^2_k \over 2} e^{-2i
\Phi_k} - {b^{\dagger 2}_k \over 2} e^{2i \Phi_k} \right)
\right] \label{sqop}$$ is the squeezing operator with $r_k$, $\Phi_k$ being the squeezing parameters [@cs85]. Here we take the system to be a two-level atomic system, with the Hamiltonian $$H_S = {\hbar \omega \over 2} \sigma_z, \label{4a1}$$ $\sigma_z$ being the usual Pauli matrix. The reduced density matrix of the system, in the basis of the Wigner-Dicke states $|j, m
\rangle$, after time $t$ is [@gp] $$\rho^s_{m,n}(t) =
\pmatrix {\cos^2({\theta_0 \over 2})
& {1 \over 2} \sin(\theta_0)
e^{-i (\omega t + \phi_0)} e^{-(\hbar \omega)^2 \gamma(t)}
\cr {1 \over 2} \sin(\theta_0)
e^{i(\omega t + \phi_0)} e^{-(\hbar \omega)^2 \gamma(t)}
& \sin^2({\theta_0 \over 2})}, \label{4a4}$$ from which the Bloch vectors can be extracted to yield $$\begin{aligned}
\langle \sigma_x (t) \rangle &=& \sin(\theta_0) \cos(\omega t +
\phi_0) e^{-(\hbar \omega)^2 \gamma(t)}, \nonumber\\ \langle
\sigma_y (t) \rangle &=& \sin(\theta_0) \sin(\omega t + \phi_0)
e^{-(\hbar \omega)^2 \gamma(t)}, \nonumber\\ \langle \sigma_z
(t) \rangle &=& \cos(\theta_0). \label{4a5} \end{aligned}$$ Here $\gamma(t)$ arises from the interaction with the environment. For the case of an Ohmic bath with spectral density $$I(\omega) = {\gamma_0 \over \pi} \omega e^{-\omega/\omega_c},
\label{2.5}$$ where $\gamma_0$ and $\omega_c$ are two bath parameters characterizing the quantum noise, it can be shown using Eq. (\[2.5\]) that, in the $T = 0$ limit [@bg07], $$\gamma (t) = {\gamma_0 \over 2\pi} \cosh (2r) \ln
(1+\omega^2_c t^2) - {\gamma_0 \over 4\pi} \sinh (2r) \ln
\left[ {\left( 1+4\omega^2_c(t-a)^2\right) \over \left( 1+
\omega^2_c (t-2a)^2 \right)^2} \right] -
{\gamma_0 \over 4\pi} \sinh (2r) \ln (1+4a^2\omega^2_c) ,
\label{2.7}$$ where the resulting integrals are defined only for $t > 2a$. In the high $T$ limit, $\gamma (t)$ can be shown to be [@bg07] $$\begin{aligned}
\gamma (t) & = & {\gamma_0 k_BT \over \pi \hbar \omega_c} \cosh
(2r) \left[ 2\omega_c t \tan^{-1} (\omega_c t) + \ln \left( {1
\over 1+\omega^2_c t^2} \right) \right] -
{\gamma_0 k_BT \over 2\pi \hbar \omega_c} \sinh (2r) \nonumber \\
&\times& \Bigg[
4\omega_c (t-a) \tan^{-1} \left( 2\omega_c (t-a) \right)
- 4\omega_c (t-2a) \tan^{-1} \left( \omega_c
(t-2a) \right) + 4a\omega_c \tan^{-1} \left( 2a\omega_c \right) \nonumber \\
&+& \ln \left( {\left[ 1+\omega^2_c (t-2a)^2
\right]^2 \over \left[ 1+4\omega^2_c (t-a)^2 \right]} \right) +
\ln \left( {1 \over 1+4a^2\omega^2_c} \right) \Bigg] ,
\label{eq:gamma} \end{aligned}$$ where, again, the resulting integrals are defined for $t > 2a$. Here we have for simplicity taken the squeezed bath parameters as $$\begin{aligned}
\cosh \left( 2r(\omega) \right) & = & \cosh (2r),~~ \sinh
\left( 2r (\omega) \right) = \sinh (2r), \nonumber\\ \Phi
(\omega) & = & a\omega, \label{eq:a} \end{aligned}$$ where $a$ is a constant depending upon the squeezed bath. The results pertaining to a thermal bath can be obtained from the above equations by setting the squeezing parameters $r$ and $\Phi$ to zero. $\sigma_x$, $\sigma_y$, $\sigma_z$ are the standard Pauli matrices. It can be easily seen from the above Bloch vector equations that the QND evolution causes an in-spiral towards the $z$-axis of the Bloch sphere, within the plane fixed by the polar angle $\theta_0$. This is characteristic of a phase-damping channel [@nc00].\
\
Squeezed generalized amplitude damping channel \[secap:disi\]
=============================================================
Here we study the reduced dynamics of the two-level atomic system (\[4a1\]) interacting with a squeezed thermal bath under the weak Born-Markov and rotating wave approximations. This implies that the system interacts with its environment via a non-QND interaction, i.e., $[H_S, H_{SR}] \ne 0$, so that, along with a loss of phase information, energy dissipation also takes place. The evolution has a Lindblad form which in the interaction picture is given by [@bp02] $$\begin{aligned}
{d \over dt}\rho^s(t) & = & \gamma_0 (N + 1) \left(\sigma_-
\rho^s(t) \sigma_+ - {1 \over 2}\sigma_+ \sigma_- \rho^s(t) -{1
\over 2} \rho^s(t) \sigma_+ \sigma_- \right) \nonumber \\ & & +
\gamma_0 N \left( \sigma_+ \rho^s(t) \sigma_- - {1 \over
2}\sigma_- \sigma_+ \rho^s(t) -{1 \over 2} \rho^s(t) \sigma_-
\sigma_+ \right) \nonumber \\ & & - \gamma_0 M \sigma_+
\rho^s(t) \sigma_+ -\gamma_0 M^* \sigma_- \rho^s(t) \sigma_- .
\label{4a6} \end{aligned}$$ Here $$N = N_{\rm th}(\cosh^2 r + \sinh^2 r) + \sinh^2 r, \label{4a7}$$ $$M = -{1 \over 2} \sinh(2r) e^{i\Phi} (2 N_{th} + 1),
\label{eq:M}$$ and $$N_{\rm th} = {1 \over e^{\hbar \omega /(k_B T)} - 1},
\label{4a9}$$ where $N_{\rm th}$ is the Planck distribution giving the number of thermal photons at the frequency $\omega$, and $r$, $\Phi$ are squeezing parameters of the bath. The case of a thermal bath without squeezing can be obtained from the above expressions by setting these squeezing parameters to zero. $\gamma_0$ is a constant typically denoting the system-environment coupling strength. This equation can be expressed in a manifestly Lindblad form as $$\frac{d}{dt}\rho^s(t) = \sum_{j=1}^2\left(
2R_j\rho^s R^{\dag}_j - R_j^{\dag}R_j\rho^s - \rho^s R_j^{\dag}R_j\right),
\label{Lindblad}$$ where $R_1 = (\gamma_0(N_{\rm th}+1)/2)^{1/2}R$, $R_2 =
(\gamma_0N_{\rm th}/2)^{1/2}R^{\dag}$. Here $R = \sigma_-\cosh(r) +
e^{i\Phi}\sigma_+\sinh(r)$, and $\sigma_{\pm} =
\frac{1}{2}\left(\sigma_x \pm i\sigma_y\right)$. If $T=0$, so that $N_{\rm th}=0$, then $R_2$ vanishes, and a single Lindblad operator suffices. The fact that the above equation can be expressed in the form (\[Lindblad\]) guarantees a Kraus or operator-sum representation [@nc00] for the evolution of the reduced density matrix. It can be seen that the reduced density matrix, obtained by solving Eq. (\[4a6\]) in the Bloch form, relaxes towards the asymptotic equilibrium state $\rho_{asymp}$, given by $$\rho_{asymp} = \pmatrix{1-p & 0 \cr 0 & p}, \label{4a13}$$ where $p = \frac{1}{2}\left[1 + \frac{1}{(2N+1)}\right]$. For the case of zero squeezing and zero temperature, this action corresponds to an amplitude-damping channel [@nc00; @gp] with the Bloch sphere shrinking to a point representing the state $|0 \rangle$ (the south pole of the Bloch sphere), while for the case of finite $T$ but zero squeezing, the above action corresponds to a generalized amplitude-damping channel [@nc00; @gp] with the Bloch sphere shrinking to a point along the line joining the south pole to the center of the Bloch sphere. The center of the Bloch sphere is reached in the limit of infinite temperature. Thus, the interaction with the environment provides a contractive map, such that the asymptotic state is pure ($p=1$), corresponding to the deletion action [@qdele], or mixed ($p < 1$), depending on environmental conditions. For finite $T$ and bath squeezing, the above corresponds to a squeezed generalized amplitude damping channel [@sqgen].
In Eq. (\[3p\]), the parameter $\alpha$ is given by $$\alpha = \sqrt{\gamma^2_0 |M|^2 - \omega^2}, \label{3n}$$ while $$\gamma^{\beta} = \gamma_0 (2 N + 1).
\label{eq:gammabeta}$$
[100]{} K. Kraus, Phys. Rev. D **35**, (1987) 3070.
H. Maassen and J. B. M. Uffink, Phys. Rev. Lett. **60**, (1988) 1103.
D. Deutsch, Phys. Rev. Lett. **50**, (1983) 631.
M. Nielsen and I. Chuang, *Quantum Computation and Quantum Information* (Cambridge 2000).
A. Galindo, M.A. Martin-Delgado, Rev. Mod. Phys. [**74**]{}, (2000) 347.
D. T. Pegg and S. M. Barnett, J. Mod. Opt. [**36**]{}, (1989) 7; Phys. Rev. A **39**, (1989) 1665.
S. Abe, Phys. Lett. A **166**, (1992) 163.
S. Wehner and A. Winter, eprint arXiv:0710.1185.
I. D. Ivanovic, J. Phys. A **14**, (1981) 3241.
T. Durt, J. Phys. A: Math. Gen. **38**, (2005) 5267.
S. Kullback and R. A. Leibler, Ann. of Math. Stat. **22**, (1951) 79.
S. Massar, eprint quant-ph/0703036.
V. Perinova, A. Luks and J. Perina, *Phase in Optics* (World Scientific, Singapore 1998).
P. A. M. Dirac, Proc. R. Soc. Lond. A [**114**]{}, (1927) 243.
L. Susskind and J. Glogower, Physics **1**, (1964) 49.
P. Carruthers and M. M. Nieto, Rev. Mod. Phys. **40**, (1968) 411.
J. H. Shapiro, S. R. Shepard and N. C. Wong, Phys. Rev. Lett. **62**, (1989) 2377.
*Quantum Phase and Phase Dependent Measurements*, Eds. W. P. Schleich and S. M. Barnett, Phys. Scr. (Special issue) [**T48**]{}, (1993) 1-144.
J. H. Shapiro and S. R. Shepard, Phys. Rev. A **43**, (1991) 3795.
M. J. W. Hall, Quantum Opt. **3**, (1991) 7.
G. S. Agarwal, S. Chaturvedi, K. Tara and V. Srinivasan, Phys. Rev. A **45**, (1992) 4904.
R. Srikanth and S. Banerjee, Phys. Lett. A **367**, (2007) 295; quant-ph/0611263.
S. Banerjee and R. Srikanth, Phys. Rev. A:**76**, (2007) 062109; eprint arXiv:0706.3633.
H.-P. Breuer and F. Petruccione, *The Theory of Open Quantum Systems* (Oxford University Press 2002).
S. Banerjee and R. Ghosh, J. Phys. A: Math. Theo. **40**, (2007) 13735; eprint quant-ph/0703054.
V. B. Braginsky, Yu. I. Vorontsov and K. S. Thorne, Science **209**, (1980) 547.
V. B. Braginsky and F. Ya. Khalili, in *Quantum Measurements*, edited by K. S. Thorne (Cambridge University Press, Cambridge 1992).
D. F. Walls and G. J. Milburn, *Quantum Optics* (Springer, Berlin 1994).
W. H. Zurek, in *The Wave-Particle Dualism*, edited by S. Diner, D. Fargue, G. Lochak and F. Selleri (D. Reidel Publishing Company, Dordrecht 1984).
C .M. Caves, K. D. Thorne, R. W. P. Drever, V. D. Sandberg and M. Zimmerman, Rev. Mod. Phys. **52**, (1980) 341.
M. F. Bocko and R. Onofrio, Rev. Mod. Phys. [**68**]{}, (1996) 755.
S. Banerjee, J. Ghosh and R. Ghosh, Phys. Rev. A **75**, (2007) 062106; eprint quant-ph/0703055.
S. Banerjee and R. Srikanth, Euro. Phys. J. D **46**, (2008) 335; eprint quant-ph/0611161.
R. Srikanth and S. Banerjee, Phys. Rev. A **77**, (2008) 012318; arXiv:0707.0059.
G. S. Agarwal and R. P. Singh, Phys. Lett. A **217**, (1996) 215.
M. A. Rashid, J. Math. Phys. **19**, (1978) 1391.
G. S. Agarwal and R. R. Puri, Phys. Rev. A **41**, (1990) 3782.
F. T. Arecchi, E. Courtens, R. Gilmore and H. Thomas, Phys. Rev. A **6**, (1972) 2211.
A. S. Holevo, *Probabilistic and Statistical Aspects of Quantum Theory* (North Holland 1982).
N. Gisin and I. C. Percival, J. Phys. A: Math. Gen. **26**, (1993) 2233.
N. Gisin, G. Ribordy, W. Tittel and H. Zbinden, Rev. Mod. Phys. **74**, (2002) 145.
C. M. Caves and B. L. Schumaker, Phys. Rev. A **31**, (1985) 3068; B. L. Schumaker and C. M. Caves, Phys. Rev. A **31**, (1985) 3093.
---
author:
- 'Marc Boullé, Fabrice Clérot, Carine Hue'
title: |
Revisiting enumerative two-part crude MDL\
for Bernoulli and multinomial distributions\
(Extended version)
---
Introduction {#secIntroduction}
============
Model selection is a key problem in statistics and data mining, and the MDL approaches [@Rissanen78] to model selection have been extensively studied in the literature [@Grunwald07], with successful applications in many practical problems. Simple models such as the Bernoulli and multinomial distributions are important because they are easier to analyze theoretically and useful in many applications. For example, the multinomial distribution has been used as a building block in more complex models, such as naive Bayes classifiers [@MononenEtAl07], Bayesian networks [@RoosEtAl08], decision trees [@VoisineEtAlAKDM09] or coclustering models [@BoulleHOPR10; @GuigouresEtAlECML15]. These models involve up to thousands of multinomial blocks, some of them with potentially very large numbers of occurrences and outcomes. For example, the text $\times$ word coclustering of the 20-newsgroup dataset described in [@BoulleHOPR10] exploits a main multinomial block with around two million words (occurrences) distributed over 200,000 coclusters (outcomes). In [@GuigouresEtAlECML15], half a billion call detail records (occurrences) are distributed over one million coclusters (outcomes). These various and numerous applications critically rely on the use of effective and efficient MDL code lengths to get a robust and accurate summary of the data.
The MDL approaches come with several flavors, ranging from theoretical but not computable to practical but sub-optimal. Ideal MDL [@VitanyiEtAl00] relies on the Kolmogorov complexity, that is, the ability to compress data using a computer program. However, it suffers from large constants depending on the description method used and cannot be computed, not even approximated in the case of two-part codes [@AdriaansEtAl07]. Practical MDL leverages description methods that are less expressive than general-purpose computer languages. It has been employed to retrieve the best model given the data in the case of families of parametrized statistical distributions. Crude MDL is a basic MDL approach with appealing simplicity. In two-part crude MDL, one simply encodes the model parameters and the data given these parameters, focusing on the code length only. However, crude MDL suffers from arbitrary coding choices. Modern MDL relies on universal coding resulting in Refined MDL [@Grunwald07], with much stronger foundations and interesting theoretical properties. In this paper, we investigate the enumerative two-part crude MDL code for the Bernoulli and multinomial models and exhibit a strong connection with the NML approach, with surprising impacts on the estimation of the model complexity and superior compression performance.
The rest of the paper is organized as follows. For self-containment reasons, Section \[secBernoulli\] presents standard codes for the Bernoulli distribution: one simplistic two-part crude MDL code as well as a refined MDL code based on the NML approach. Section \[secRevisitedEnumerative\] describes a particular two-part crude MDL code based on enumerations and establishes the connection of its parameter coding length with its NML parametric complexity. Section \[secComparison\] proceeds with a deep comparison between this enumerative MDL code and the NML code presented in Section \[secBernoulli\]. Section \[secMultinomial\] suggests an extension of the enumerative two-part crude MDL code to multinomial distributions and Section \[secComparisonM\] compares this code with the alternative NML code. Finally, Section \[secConclusion\] summarizes this paper.
Standard MDL codes for Bernoulli strings {#secBernoulli}
========================================
We briefly present one simplistic example of two-part crude MDL code for encoding binary strings using the Bernoulli model, as well as a modern MDL code based on NML. This has been presented many times in the literature, e.g. [@HansenEtAl01b; @Grunwald07; @RooijEtAl09].
Let us consider the Bernoulli model with $\theta \in [0, 1]$ in the case of binary sequences $x^n \in X^n$ of size $n$. Let $k(x^n)$ be the number of ones in $x^n$.
Simplistic two-part crude MDL approach {#secSimplistic}
--------------------------------------
Using a two-part version of the MDL principle [@Grunwald07], we select the best hypothesis $H$ that explains the data $D$ by minimizing the sum $L(H) + L(D|H)$, where $L(H)$ is the coding length of the hypothesis and $L(D|H)$ is the coding length of the data encoded with the help of the hypothesis.
In the case of the Bernoulli model, we have to encode the parameter $\theta$ and the data $x^n$ given $\theta$. The number of ones in the binary string $x^n$ is between 0 and $n$. The $\theta$ parameter can thus be chosen among $(n+1)$ values $\theta=\frac {0} {n}, \frac {1} {n}, \frac {2} {n}, \ldots, \frac {n} {n}$, and be encoded using $L(\theta) = \log (n+1)$ bits.
For $\theta \in \{0, 1 \}$, the string $x^n$ is degenerated with only zeros or ones, and its coding length given $\theta$ is $L(x^n|\theta) = 0$.
For $\theta = \frac {k} {n},\; 0 < k < n$, every symbol of the string $x^n$ can be encoded using $-\log \frac {k} {n}$ bits for a one and $-\log \frac {n-k} {n}$ bits for a zero, leading to $L(x^n|\theta) = -k \log \frac {k} {n} - (n-k) \log \frac {n-k} {n}$.
This gives a total code length of
$$L(\theta = \frac {k} {n}, x^n) = \log(n+1) + \left(-k \log \frac {k} {n} - (n-k) \log \frac {n-k} {n}\right).$$
Equivalently, the likelihood of the whole string $x^n$ can be estimated as $P(x^n|\theta = \frac {k} {n}) = (\frac {k} {n})^k (\frac {n-k} {n})^{n-k}$, with $L(x^n|\theta) = - \log P(x^n|\theta = \frac {k} {n})$.
Using the Shannon entropy $H(\frac {k} {n}) = -\frac {k} {n} \log (\frac {k} {n}) - \frac {n-k} {n} \log (\frac {n-k} {n})$, we also have $L(x^n|\theta) = n H(\theta)$.
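A minimal sketch of this simplistic two-part code length (Python, base-2 logarithms, hypothetical helper name):

```python
import numpy as np

def L_simplistic(n, k):
    """Total code length (bits) of the simplistic two-part code:
    log(n+1) bits for theta = k/n plus n*H(k/n) bits for the data."""
    L_param = np.log2(n + 1)
    if k in (0, n):            # degenerate strings cost nothing given theta
        return L_param
    theta = k / n
    H = -theta*np.log2(theta) - (1 - theta)*np.log2(1 - theta)
    return L_param + n*H

print(L_simplistic(100, 50))   # about log2(101) + 100, i.e., 106.66 bits
print(L_simplistic(100, 0))    # about 6.66 bits (parameter only)
```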
Standard NML Approach {#secNML}
---------------------
The simplistic two-part MDL code defined previously suffers from some arbitrary choices and may be suboptimal at best, with arbitrary bad behavior for small sample sizes [@Grunwald07].
In the case of the Bernoulli model, this is pointed out in [@RooijEtAl09],
> “Example 5. $\ldots$ A uniform code uses $L(\theta) = \log(n + 1)$ bits to identify an element of this set. Therefore the resulting regret is always exactly $\log(n + 1)$. By using a slightly more clever discretisation we can bring this regret down to about $\frac {1} {2} \log n + O(1)$, which we mentioned is usually achievable for uncountable single parameter models.”
Using universal coding, a much more grounded approach is proposed to better evaluate the model complexity, based on the Shtarkov NML code, which provides strong theoretical guarantees [@Rissanen00].
It exploits the following NML distribution $\overline{P}_{nml}^{(n)}$ on $X^n$:
$$\label{eqnNML}
\overline{P}_{nml}^{(n)}(x^n) = \frac { P_{\widehat{\theta}(x^n)}(x^n)}
{\sum_{y^n \in X^n} {P_{\widehat{\theta}(y^n)}(y^n)}}$$
where $\widehat{\theta}(x^n)$ is the model parameter that maximizes the likelihood of $x^n$.
The log of the denominator stands for the *parametric complexity* $COMP^{(n)}(\theta)$ of the model whereas the negative log of the numerator is the *stochastic complexity* of the data given the model. The sum of both terms provides the NML code. It is noteworthy that the NML code is a one-part rather than two-part code: data is encoded with the help of all the model hypotheses rather than the best hypothesis.
In the case of the Bernoulli model, $\widehat{\theta}(x^n) = k(x^n)/n$. We have
$$\begin{aligned}
COMP^{(n)}(\theta) &=& \log \sum_{y^n \in X^n} {P_{\widehat{\theta}(y^n)}(y^n)}, \\
&=& \log \sum_{k=0}^n {{{n}\choose{k}} \left(\frac {k}{n}\right)^k \left(\frac {n-k}{n}\right)^{n-k}}.\end{aligned}$$
Using Stirling’s formula together with the Fisher information provides the following accurate approximation [@Rissanen96]: $$\begin{aligned}
COMP^{(n)}(\theta) &=& \frac{1}{2} \log \frac{n}{2 \pi} + \log \int_{\theta} {\sqrt{\det I(\theta)}\, d\theta} +o(1),\\
&=& \frac{1}{2} \log \frac{n \pi}{2} + o(1).\end{aligned}$$
Remarkably, this is in line with the classical BIC regularization term $\frac{1}{2}\log n$.
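The exact parametric complexity can also be computed by direct summation and compared with this approximation; a minimal sketch (hypothetical function name, logarithms in base 2):

```python
import numpy as np

def comp_nml_bernoulli(n):
    """Exact NML parametric complexity (bits) of the Bernoulli model, by direct
    summation over k of C(n,k) (k/n)^k ((n-k)/n)^(n-k)."""
    k = np.arange(n + 1)
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, n + 1)))))
    log_binom = log_fact[n] - log_fact[k] - log_fact[n - k]
    with np.errstate(divide='ignore', invalid='ignore'):
        log_lik = np.where(k == 0, 0.0, k * np.log(k / n)) + \
                  np.where(k == n, 0.0, (n - k) * np.log((n - k) / n))
    return float(np.log2(np.sum(np.exp(log_binom + log_lik))))

for n in [10, 100, 1000, 10000]:
    print(f"n = {n:6d}   exact = {comp_nml_bernoulli(n):.4f}"
          f"   approx = {0.5*np.log2(n*np.pi/2):.4f}   log(n+1) = {np.log2(n+1):.4f}")
```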
Revisiting enumerative two-part crude MDL {#secRevisitedEnumerative}
=========================================
We present the enumerative two-part crude MDL code for Bernoulli distributions, suggest a finite data sample Bayesian interpretation and show a connection with the NML approach.
Enumerative two-part crude MDL {#secEnumerativeB}
------------------------------
We present an alternative type of two-part crude MDL code for Bernoulli distributions. It has already been proposed in the past literature, under the names of *index* or *enumerative* code (see for example @Grunwald07 Example 10.1 *Coding by Giving an Index*).
First, like in Section \[secSimplistic\], we enumerate all possible $\theta = \frac{i}{n}$ parameter values given the sample size $n$. We then use $\log (n+1)$ bits to encode $\theta$. Second, given $\widehat{\theta}(x^n) = \frac{k(x^n)}{n}$, we enumerate all the ${{n}\choose{k}}$ binary sequences with $k$ ones and encode the data $x^n$ using $\log {{n}\choose{k}}$ bits. This gives a total code length of
$$L(\widehat{\theta}(x^n), x^n) = \log(n+1) + \log \frac{n!} {k! (n-k)!}.$$
Interestingly, this crude MDL approach results in the same code length as that obtained in [@HansenEtAl01b] using *Predictive Coding* or *Mixture Coding* with a uniform prior.
For the case of the Bayes mixture model with uniform prior $w(\theta) = 1,\; \theta \in [0,1]$, we have $$\begin{aligned}
P_{Bayes}(x^n) &=& \int_0^1{w(\theta) P_{\theta}(x^n)d\theta}, \\
&=& \int_0^1{\theta^{k} (1-\theta)^{n-k} d\theta},\\
&=& \frac{1}{n+1} \frac{k! (n-k)!} {n!} .\end{aligned}$$ The negative log of $P_{Bayes}(x^n)$ actually corresponds to the code length of the enumerative code.
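The equivalence with predictive coding can be checked numerically: the sketch below (hypothetical helper names, base-2 logarithms) codes a binary string sequentially with the Laplace estimator and compares the resulting length with the enumerative two-part code length; the two should coincide up to numerical precision.

```python
import numpy as np

def log2_binom(n, k):
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, n + 1)))))
    return (log_fact[n] - log_fact[k] - log_fact[n - k]) / np.log(2)

def L_enumerative(x):
    # Enumerative two-part code length (bits): log(n+1) + log C(n, k).
    n, k = len(x), int(np.sum(x))
    return np.log2(n + 1) + log2_binom(n, k)

def L_laplace_predictive(x):
    # Sequential code using the Laplace estimator (k_i + 1)/(i + 2) at step i.
    L, ones = 0.0, 0
    for i, xi in enumerate(x):
        p_one = (ones + 1) / (i + 2)
        L -= np.log2(p_one if xi == 1 else 1 - p_one)
        ones += int(xi)
    return L

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=200)
print(L_enumerative(x), L_laplace_predictive(x))  # both lengths coincide
```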
This code has also been studied by @Grunwald07 (Chapter 10, Section 10.2) under the name *Conditional Two-Part Universal Code*, which suggests that at least for the Bernoulli model, this code is strictly preferable to the ordinary two-part code.
Bayesian interpretation {#secEnumBayesian}
-----------------------
Let $\mathcal{M} = \{ P_\theta \, | \, \theta \in [0,1] \}$ be the class of all Bernoulli distributions. We propose to focus on the family of models $\mathcal{M}^{(n)} = \{P_\theta \,|\, \theta = \frac{i}{n},\; 0 \leq i \leq n\}$, which are description models for finite-size data samples. $\mathcal{M}^{(n)}$ is related to the set of all the possible maximum likelihood estimates of $\theta$ (from $\mathcal{M}$) for binary strings of size $n$. The interest of using $\mathcal{M}^{(n)}$ is that the number of model parameters is now finite instead of uncountably infinite. Using a uniform prior on the model parameters in $\mathcal{M}^{(n)}$, we get $P(\theta = \frac{i}{n}) = 1/{|\mathcal{M}^{(n)}|}$, leading to $L(\theta) = \log (n+1)$.
Given $\theta = \frac{i}{n} \in \mathcal{M}^{(n)}$, we now have to encode the data $x^n$.
If $k(x^n)/n \neq \theta$, we cannot encode the data and $P(x^n|\theta) = 0$.
If $k(x^n)/n = \theta$, the observed data is consistent with the model parameter, and we assume that all the possible observable data are uniformly distributed. The number of binary strings with $k$ ones is the binomial coefficient ${{n}\choose{k}}$. Thus the probability of observing one of them is $P(x^n|\widehat{\theta}(x^n)) = 1/{{n}\choose{k}}$. We have a discrete likelihood that concentrates the probability mass on binary strings that can be observed given the model parameter. As a result, coding lengths are defined only for strings that are consistent with the model parameter. This gives a total code length of
$$L(\widehat{\theta}(x^n), x^n) = \log(n+1) + \log \frac{n!} {k! (n-k)!},$$
defined only when $\theta = \widehat{\theta}(x^n)$.
#### Generative model for the enumerative Bernoulli distribution.
Given a sequence length $n$ and $\theta = \frac{i}{n} \in \mathcal{M}^{(n)}$, we can formulate these models as generative models of sequences with exactly $i$ ones and $n-i$ zeros. For example, from a sequence of $n$ zeros, we randomly choose $i$ times without replacement a zero in the sequence and replace it with a one. For this generative model, we have the following likelihood, as seen previously: $$P(x^n|\theta = \frac{i}{n}) = {\mathbb{1}_{\left\{ \frac{i}{n} = \frac{k(x^n)}{n}\right\}}} 1/{{n}\choose{k(x^n)}}.$$ For the case of the Bayes mixture model with uniform prior $w(\theta) = \frac{1}{n+1}, \theta = \frac{i}{n},\; 0 \leq i \leq n$, we have $$\begin{aligned}
P_{Bayes}(x^n) &=& \sum_{i=0}^n{w(\frac{i}{n}) P(x^n|\theta = \frac{i}{n})}, \\
&=& \frac{1}{n+1} \frac{k(x^n)! (n-k(x^n))!} {n!}.\end{aligned}$$ The negative log of this probability actually corresponds to the code length of the enumerative code. Interestingly, the standard Bernoulli model and the enumerative one are related to slightly different generative models, but their Bayes mixtures under the uniform prior lead to the same distribution. In Section \[NMLinterpretation\], we will see that, by contrast, their normalized maximum likelihood distributions are not the same.
#### Cardinality of models spaces.
Let us consider the union of the $\mathcal{M}^{(n)}$ models for all the sample sizes: $$\mathcal{M}^{(\mathbb{N})} = \cup_{n \in \mathbb{N}}{\mathcal{M}^{(n)}}.$$ Interestingly, $\mathcal{M}^{(\mathbb{N})}$ is very close to $\mathcal{M}$, with $\theta \in \mathbb{Q}$ rather than $\theta \in \mathbb{R}$. Thus, the number of model parameters in $\mathcal{M}^{(\mathbb{N})}$ is countably infinite rather than uncountably infinite, which provides a significant simplification.
NML interpretation {#NMLinterpretation}
------------------
Let us compute the NML parametric complexity of this enumerative code, on the basis of the discrete likelihood presented in Section \[secEnumBayesian\]. We have
$$\begin{aligned}
COMP^{(n)}(\theta) &=& \log \sum_{y^n \in X^n} {P_{\widehat{\theta}(y^n)}(y^n)}, \\
&=& \log \sum_{k=0}^n {{{n}\choose{k}} \left({1} /{{n}\choose{k}}\right)},\\
&=& \log (n+1).\end{aligned}$$
Interestingly, we find exactly the same complexity term $\log (n+1)$ as the coding length of the best hypothesis in the enumerative two-part crude MDL code presented in Section \[secEnumerativeB\]. This shows that the enumerative code is both a two-part and a one-part code. It is parametrization invariant and optimal w.r.t. the NML approach, with a minimax regret guarantee. Surprisingly, its parametric complexity is asymptotically twice that of the NML code described in Section \[secNML\]. We further investigate the comparison between the enumerative and NML codes in the next section.
Code comparison for the Bernoulli distribution {#secComparison}
==============================================
In this section, we compare the NML code (Section \[secNML\]) and enumerative two-part crude MDL codes (Section \[secRevisitedEnumerative\]) for the Bernoulli distribution.
Notation
--------
Let us use the names *simplistic*, *NML* and *enumerative* for the specific MDL codes presented in Sections \[secSimplistic\], \[secNML\] and \[secEnumerativeB\]. We also consider the *random* code as a baseline: it corresponds to a direct encoding of each binary string $x^n$ with a coding length of $n \log 2$. The likelihood of each string $x^n$ is $1/2^n$, and as $\sum_{k=0}^n {{{n}\choose{k}} 1/2^n} = 1$, we have $COMP_{random}^{(n)}(\emptyset) = 0$ and $L_{random}\left(x^n|\emptyset \right) = n \log 2$.
Table \[tableCodes\] reminds the parametric and stochastic complexity of each considered code.
Code name $COMP_{name}^{(n)}$ $L_{name}\left(x^n|\widehat{\theta}(x^n) \right)$
--------------- ------------------------------------------- ---------------------------------------------------
*enumerative* $\log (n+1)$ $\log \frac {n!} {k! (n-k)!}$
*NML* $\frac{1}{2} \log \frac{n \pi}{2} + o(1)$ $\log \frac {n^n} {k^k (n-k)^{n-k}}$
*simplistic* $\log (n+1)$ $\log \frac {n^n} {k^k (n-k)^{n-k}}$
*random* $0$ $n \log 2$
: Parametric and stochastic complexity per code.[]{data-label="tableCodes"}
As for the simplistic code, the coding length of the parameter is presented in place of the parametric complexity. The total coding length of the simplistic code has an overhead of about $\frac{1}{2} \log n$ compared to the NML code. This confirms that the simplistic code is dominated by the NML code, as expected.
Stochastic complexity term
--------------------------
The stochastic complexity term of the enumerative code is always smaller than that of the NML code for non-degenerate binary strings:
$$\label{sc_Bernoulli}
\forall n, \forall x^n \in X^n, \; 0 <k(x^n) < n, \;
L_{enum}\left(x^n|\widehat{\theta}(x^n)\right) < L_{nml}\left(x^n|\widehat{\theta}(x^n)\right).$$
An intuitive proof relies on the fact that the enumerative MDL likelihood assigns the same probability to all binary strings having the same number of ones, with a null probability for all the other strings. The NML likelihood also assigns the same probability to all binary strings having the same number of ones, but with a non-null probability for the other strings. The strings with a given number of ones thus have to share a smaller probability mass, resulting in a smaller probability per string and a strictly greater coding length.
To gain further insight, let us approximate the difference of coding lengths: $$\delta L\left(x^n|\widehat{\theta}(x^n)\right) = L_{nml}\left(x^n|\widehat{\theta}(x^n)\right) - L_{enum}\left(x^n|\widehat{\theta}(x^n)\right).$$ Using the approximation given in [@Grunwald07] (formula 4.36) with the Bernoulli parameter $\theta = \widehat{\theta}(x^n)$, we have
$$\begin{aligned}
L_{enum}\left(x^n|\theta\right) &=& \log {{n}\choose{\theta n}},\\
&=& n H(\theta) - \log \sqrt{2 \pi n \mathrm{var} (\theta)} + O(1/n),\\
&=& L_{nml}\left(x^n|\theta\right) -\frac{1}{2} \log (2 \pi n \mathrm{var}(\theta)) + O(1/n).\end{aligned}$$
We get
$$\label{deltaL}
\delta L\left(x^n|\widehat{\theta}(x^n)\right) = \frac{1}{2} \log (2 \pi n \mathrm{var}(\theta)) + O(1/n).$$
The difference of coding lengths is always nonnegative but not uniform:
- for $k(x^n) = 0$, $\delta L\left(x^n|\widehat{\theta}(x^n)\right) = 0$,
- for $k(x^n) \approx n/2$, $\delta L\left(x^n|\widehat{\theta}(x^n)\right) \approx \frac{1}{2} \log (\frac {n \pi}{2})$.
These results demonstrate that the enumerative code provides a better encoding of the data with the help of the model for all binary strings, all the more so for strings with equidistributed zeros and ones. The gain in coding length compared to the NML code grows as the logarithm of the sample size.
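A small numerical check of this approximation (hypothetical helper names, base-2 logarithms):

```python
import numpy as np

def delta_L(n, k):
    # Exact difference (bits) of stochastic complexities: n*H(k/n) - log2 C(n,k).
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, n + 1)))))
    log2_binom = (log_fact[n] - log_fact[k] - log_fact[n - k]) / np.log(2)
    th = k / n
    H = -th*np.log2(th) - (1 - th)*np.log2(1 - th)
    return n*H - log2_binom

def delta_L_approx(n, k):
    th = k / n
    return 0.5*np.log2(2*np.pi*n*th*(1 - th))

n = 1000
for k in [100, 250, 500]:
    print(f"k = {k:4d}   exact = {delta_L(n, k):.4f}   approx = {delta_L_approx(n, k):.4f}")
```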
Parametric complexity term
--------------------------
Using inequality \[sc\_Bernoulli\] and the fact that the parametric complexity of a code is the log of the sum of the maximized likelihoods over all possible strings, we get:
$$\label{pc_Bernoulli}
\forall n > 1, COMP_{enum}^{(n)} > COMP_{NML}^{(n)}.$$
Both terms are equal for $n=1$ and asymptotically, the parametric complexity of the enumerative code is twice that of the NML code (see Table \[tableCodes\]).
![Parametric complexity for the Bernoulli model.[]{data-label="fig_comp"}](CompareBernoulliCOMP.pdf){width="85.00000%"}
We now focus on the non-asymptotic behavior of the parametric complexity terms and their approximations. Figure \[fig\_comp\] shows the value of the parametric complexity of the Bernoulli model, using the enumerative code, the NML code (exact numerical computation and approximation), as well as the related BIC penalization term.
The approximation of the NML parametric complexity is very good as soon as $n$ is beyond 100, but less accurate for small sample sizes. Asymptotically, the parametric complexity of the enumerative code is twice that of the NML approach. It is always greater, but for very small sample sizes, the difference becomes smaller and smaller.
Overall code length
-------------------
Both the enumerative and NML codes exploit universal distributions on all binary strings $x^n \in X^n$. The compression of the data with the help of the model is better for the enumerative distribution, at the expense of a worse parametric complexity. Adding the parametric complexity and stochastic complexity terms, the previous sections show that the NML code is much shorter for degenerate binary strings:
$$\begin{aligned}
\mathrm{for} \; k(x^n) = 0 \;\, \mathrm{or} \;\, k(x^n) = n, \quad \quad&&\\ \nonumber
L_{enum}\left(\widehat{\theta}(x^n), x^n\right) &\approx& \log n\\ \nonumber
L_{nml}\left(\widehat{\theta}(x^n), x^n\right) &\approx& \frac{1}{2} \log n + \frac{1}{2} \log \frac{\pi}{2},\end{aligned}$$
whereas the enumerative code is slightly shorter for equidistributed binary strings (where $H(\widehat{\theta}(x^n)) \approx \log 2$), with a margin of $\log \frac{\pi}{2}$:
$$\begin{aligned}
\label{eqn_mixture}
\mathrm{for} \; k(x^n) \approx n/2, \quad \quad&&\\ \nonumber
L_{enum}\left(\widehat{\theta}(x^n), x^n\right) &\approx& \frac{1}{2} \log n - \frac{1}{2} \log \frac{\pi}{2} + n \log 2,\\ \nonumber
L_{nml}\left(\widehat{\theta}(x^n), x^n\right) &\approx& \frac{1}{2} \log n + \frac{1}{2} \log \frac{\pi}{2} + n \log 2.\end{aligned}$$
Expectation of the coding length of all binary strings
------------------------------------------------------
![Expected overhead of coding length w.r.t. random model.[]{data-label="fig_codinglength"}](CompareBernoulliCodingLength.pdf){width="85.00000%"}
Let us now estimate the expectation of the coding length for all binary strings under the uniform distribution, where $\forall x^n \in X^n, p(x^n) = 1/{2^n}$. $$\begin{aligned}
\label{eqn_codinglength}
\mathrm{E}\left(L\left(\widehat{\theta}(x^n), x^n\right)\right) &=& \frac{1} {2^n} \sum_{x^n \in X^n} {L\left(\widehat{\theta}(x^n), x^n\right)},\\ \nonumber
&=& \frac{1} {2^n} \sum_{k=0}^n {{{n}\choose{k}} L\left(\widehat{\theta}(x^n), x^n\right)}.\end{aligned}$$
We perform an exact numerical calculation using the exact value of the NML model complexity term, for all $n, 1 \leq n \leq 1000$. Figure \[fig\_codinglength\] reports the expected coding length for the enumerative and NML codes minus that of the random code ($n \log 2$). The results show that both codes have an average overhead of about $1/2 \log n$ compared to the direct encoding of the binary strings, and that, under the uniform distribution, the enumerative code always compresses the data better on average than the NML code, especially in the non-asymptotic case.
Actually, averaging over all binary strings is the same as considering exhaustively all the binary string outcomes of a Bernoulli distribution with parameter $\theta = 1/2$. By the central limit theorem, the proportion of binary strings $x^n$ where $k(x^n)/n \approx 1/2$ goes to 1 as $n$ goes to infinity, which explains why the shorter coding lengths obtained with the enumerative code for binary strings with $k(x^n)/n \approx 1/2$ provide the main contribution to the expectation. Using Formula \[eqn\_mixture\], the expected coding length of the enumerative code is asymptotically better than that of the NML code by a margin of $\log \frac{\pi}{2}$.
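The expected overheads can be reproduced by direct summation over $k$; a sketch under the same conventions as above (hypothetical function name, base-2 logarithms):

```python
import numpy as np

def expected_overheads(n):
    """Expected total code length minus n*log(2) bits, under the uniform
    distribution on binary strings, for the enumerative and NML codes."""
    k = np.arange(n + 1)
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, n + 1)))))
    log_binom = log_fact[n] - log_fact[k] - log_fact[n - k]          # natural log
    with np.errstate(divide='ignore', invalid='ignore'):
        log_lik = np.where(k == 0, 0.0, k*np.log(k/n)) + \
                  np.where(k == n, 0.0, (n - k)*np.log((n - k)/n))
    w = np.exp(log_binom - n*np.log(2))                              # C(n,k)/2^n
    L_enum = np.log2(n + 1) + log_binom/np.log(2)
    comp_nml = np.log2(np.sum(np.exp(log_binom + log_lik)))
    L_nml = comp_nml - log_lik/np.log(2)
    return float(np.sum(w*L_enum)) - n, float(np.sum(w*L_nml)) - n

for n in [10, 100, 1000]:
    e, m = expected_overheads(n)
    print(f"n = {n:5d}   enumerative overhead = {e:.4f}   NML overhead = {m:.4f}")
```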
Percentage of compressible binary strings {#percentCompressibleBernoulli}
-----------------------------------------
We now focus on the percentage $p_{compressible}$ of compressible binary strings using both the enumerative and NML codes, that is, the percentage of binary strings with coding length not exceeding $n \log 2$:
$$\begin{aligned}
\label{eqn_percentcompressible}
p_{compressible} &=& \frac{1} {2^n} \sum_{x^n \in X^n}
{\mathbb{1}_{\left\{L\left(\widehat{\theta}(x^n), x^n\right) \leq n \log 2\right\}}},\\ \nonumber
&=& \frac{1} {2^n} \sum_{k=0}^n {{{n}\choose{k}}
\mathbb{1}_{\left\{L\left(\widehat{\theta}(x^n), x^n\right) \leq n \log 2\right\}}}.\end{aligned}$$
![Percentage of compressible binary strings.[]{data-label="fig_percentcompressible"}](CompareBernoulliPercentCompressible.pdf){width="85.00000%"}
As previously, we perform an exact numerical calculation for all $n, 1 \leq n \leq 1000$. Figure \[fig\_percentcompressible\] shows that the percentage of compressible strings decreases at a rate of $O(1/\sqrt{n})$ for both codes, as expected. However, the enumerative code always compresses more binary strings than the NML code. Due to the discrete decision threshold in formula \[eqn\_percentcompressible\], the exact computed percentage values are not smooth like in Figure \[fig\_codinglength\], especially in the non-asymptotic case for small string sizes, but the overall tendency appears clearly for large sample sizes. In the asymptotic case, around $60\%$ more strings can be compressed using the enumerative code (empirical evaluation).
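A sketch of this exact computation for both codes (hypothetical function name, base-2 logarithms); it should reproduce the behavior shown in Figure \[fig\_percentcompressible\].

```python
import numpy as np

def fraction_compressible(n):
    """Fraction of binary strings of length n whose total code length does not
    exceed n bits, for the enumerative and NML codes."""
    k = np.arange(n + 1)
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, n + 1)))))
    log_binom = log_fact[n] - log_fact[k] - log_fact[n - k]
    with np.errstate(divide='ignore', invalid='ignore'):
        log_lik = np.where(k == 0, 0.0, k*np.log(k/n)) + \
                  np.where(k == n, 0.0, (n - k)*np.log((n - k)/n))
    w = np.exp(log_binom - n*np.log(2))              # fraction of strings with k ones
    L_enum = np.log2(n + 1) + log_binom/np.log(2)
    comp_nml = np.log2(np.sum(np.exp(log_binom + log_lik)))
    L_nml = comp_nml - log_lik/np.log(2)
    return float(np.sum(w[L_enum <= n])), float(np.sum(w[L_nml <= n]))

for n in [10, 100, 1000]:
    p_enum, p_nml = fraction_compressible(n)
    print(f"n = {n:5d}   enumerative: {100*p_enum:.2f}%   NML: {100*p_nml:.2f}%")
```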
Distribution of compression rates {#compressionrateBernoulli}
---------------------------------
We now focus on the distribution of the compression rates, that is the ratio of the coding length of a binary string using the Bernoulli versus the random model:
$$\label{eqn_compressionrate}
\%compression = \frac{L\left(\widehat{\theta}(x^n), x^n\right)} {n \log 2}.$$
![Inverse cumulative distribution of compression rates for strings of size 10, 100, 1000.[]{data-label="fig_compressionrate"}](CompareBernoulliCompressionRate10.pdf "fig:"){width="70.00000%"}\
![Inverse cumulative distribution of compression rates for strings of size 10, 100, 1000.[]{data-label="fig_compressionrate"}](CompareBernoulliCompressionRate100.pdf "fig:"){width="70.00000%"}\
![Inverse cumulative distribution of compression rates for strings of size 10, 100, 1000.[]{data-label="fig_compressionrate"}](CompareBernoulliCompressionRate1000.pdf "fig:"){width="70.00000%"}
Figure \[fig\_compressionrate\] shows the inverse cumulative distribution of compression rates for binary strings of size 10, 100, 1000, using the NML or enumerative code. For example, among the 1024 ($2^{10}$) strings of size 10, only $0.2\%$ (the two “pure” strings) are better compressed with the NML than with the enumerative code. All the other strings are better compressed with the enumerative code. For both codes, $11\%$ of the strings are compressible ($\%compression < 1$). For strings of size 100, only $3.0\; 10^{-15}\%$ of the strings are better compressed with the NML code. However, $2.1\%$ of the strings are compressible using the NML code, which is less than the $3.5\%$ obtained using the enumerative code. The tendency is the same for strings of size 1000. A tiny portion of the strings, those with almost only zeros or only ones, are better compressed using the NML code. All the other strings are better compressed using the enumerative code, with the difference growing for balanced strings. This results in a greater number of compressible strings using the enumerative code.
Let us evaluate the asymptotic value of the Bernoulli parameter $\theta$ for which both codes achieve the same compression rate. Using Table \[tableCodes\] and Formula \[deltaL\] in the asymptotic case, we get:
$$\begin{aligned}
\label{eqn_compareCodes}
\delta\left(COMP^{(n)} + L\left(x^n|\widehat{\theta}(x^n)\right) \right) = 0
&\Leftrightarrow& \frac{1}{2} \log \frac{n \pi}{2} - \log(n+1) +\frac{1}{2} \log (2 \pi n \mathrm{var}(\theta)) = 0 \nonumber \\
&\Leftrightarrow& \log (2 \pi n \mathrm{var}(\theta)) = \log \frac {(n+1)^2}{n \pi /2} \nonumber \\
&\Leftrightarrow& \mathrm{var}(\theta) = \frac {(n+1)^2}{n^2 \pi^2} \approx \frac {1}{\pi^2} \nonumber \\
&\Leftrightarrow& \theta(1-\theta) = \frac {1}{\pi^2}\end{aligned}$$
Equation \[eqn\_compareCodes\] has two solutions: $\theta = 1/2 (1 \pm \sqrt{1-4/{\pi^2}})$, that is $\theta \approx 0.114$ and $\theta \approx 0.886$. Thus asymptotically, the enumerative code better compresses the strings for $\theta \in [0.114, 0.886]$, that is around $77\%$ of the values of $\theta$.
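The crossover can also be located exactly at a finite sample size; the sketch below (natural logarithms, exact NML complexity) does so for $n=1000$ and should recover empirical frequencies close to the asymptotic bounds.

```python
# Minimal sketch (natural logarithms): empirical frequencies k/n at which the
# enumerative total code length beats the NML one, for a finite n = 1000.
from math import comb, log

n = 1000
comp_nml = log(sum(comb(n, j) * (j / n) ** j * ((n - j) / n) ** (n - j)
                   for j in range(n + 1)))

def enumerative_wins(k):
    l_enum = log(n + 1) + log(comb(n, k))
    l_nml = comp_nml + n * log(n) - sum(c * log(c) for c in (k, n - k) if c > 0)
    return l_enum < l_nml

frequencies = [k / n for k in range(n + 1) if enumerative_wins(k)]
print(min(frequencies), max(frequencies))  # close to the asymptotic bounds 0.114 and 0.886
```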
Overall, both the NML and enumerative codes have the same asymptotic behavior, with tiny differences in compression rates. However, far more strings are better compressed with the enumerative code, both in the non-asymptotic and asymptotic cases.
Detection of a biased coin {#biasedCoin}
--------------------------
We apply the previous Bernoulli codes to the problem of detection of a biased coin. A fair coin is a randomizing device with two states named *heads* and *tails* that are equally likely to occur. It can be modeled using a Bernoulli process with $\theta_{fair} = \frac{1}{2}$. For a biased coin, the heads and tails are not equally likely to occur, and the related Bernoulli parameter is $\theta_{bias} \neq \frac{1}{2}$.
The problem is to determine whether a coin is biased given a limited sample of Bernoulli trials. Given a sample $x^n$ of trials, we compute the coding length of this sample using either the NML or the enumerative code and decide that the coin is biased if its coding length is shorter than that of the random code ($n\log 2$). For a given size $n$ and a code (e.g. enumerative or NML), we compute the probability of detecting the biased coin by averaging the detection over all the possible samples of size $n$. Formally, for each code, we thus compute:
$$\begin{aligned}
\label{eqn_biasedCoin}
prob^D (\theta_{bias}, n)
&=& \mathrm{E}_{B(\theta_{bias})}
\left( \mathbb{1}_{ \left\{ L\left(\widehat{\theta}(x^n), x^n\right) < n \log 2 \right\} } \right) \\
&=& \sum_{k=0}^n {{{n}\choose{k}} \theta_{bias}^{k} (1-\theta_{bias})^{n-k}
\mathbb{1}_{ \left\{ L\left(\widehat{\theta}(x^n), x^n\right) < n \log 2 \right\} } }.\end{aligned}$$
![Probability of detection of a biased coin where $\theta_{bias} = 0.40$.[]{data-label="fig_BiasedCoinProbDetection"}](StudyBiasedCoin40DetectionThreshold.pdf "fig:"){width="70.00000%"}\
The issue is to be able to detect the biased coin with the minimum number of trials. Using Formula \[eqn\_biasedCoin\], we then determine the first value of $n$ where the probability of detecting the biased coin is beyond $50\%$. For example, Figure \[fig\_BiasedCoinProbDetection\] shows the probability of detection of a biased coin ($\theta_{bias} = 0.40$) for sample sizes ranging from 1 to 1000, using the enumerative and NML codes. The horizontal gray line represents a probability of $50\%$ of detecting the bias. As Formula \[eqn\_biasedCoin\] is not strictly increasing with $n$ and is unstable for very small $n$ (for reasons similar to those in Section \[percentCompressibleBernoulli\]), we collect the two following lower and upper thresholds of sample sizes:
$$\begin{aligned}
\label{eqn_thresholdBiasedCoin}
\underline{n}^{\; D} (\theta_{bias})
&=& \min_{n \geq 10} \{prob^D (\theta_{bias}, n) \geq 50\% \},\\
\overline{n}^{\; D} (\theta_{bias})
&=& \max_{n \geq 10} \{prob^D (\theta_{bias}, n) \leq 50\% \}.\end{aligned}$$
In Figure \[fig\_BiasedCoinProbDetection\] for example, we have $\underline{n}^{\; D} (\theta_{bias}) = 96$ and $\overline{n}^{\; D} (\theta_{bias}) = 115$ for the enumerative code and $\underline{n}^{\; D} (\theta_{bias}) = 126$ and $\overline{n}^{\; D} (\theta_{bias}) = 145$ for the NML code, which thus needs around $10\%$ more trials to detect the biased coin.
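The sketch below (not part of the original experiments; natural logarithms, exact NML complexity) evaluates Formula \[eqn\_biasedCoin\] and the two detection thresholds; it should print values close to those quoted above.

```python
# Minimal sketch: Formula (eqn_biasedCoin) evaluated with the exact NML
# complexity, and the 50% detection thresholds of Formula
# (eqn_thresholdBiasedCoin), for theta_bias = 0.40.
from math import comb, log

def code_lengths(n):
    """List of (enumerative, NML) total code lengths for k = 0..n."""
    comp_nml = log(sum(comb(n, j) * (j / n) ** j * ((n - j) / n) ** (n - j)
                       for j in range(n + 1)))
    lengths = []
    for k in range(n + 1):
        stoch = n * log(n) - sum(c * log(c) for c in (k, n - k) if c > 0)
        lengths.append((log(n + 1) + log(comb(n, k)), comp_nml + stoch))
    return lengths

def prob_detection(theta, n, code):          # code: 0 = enumerative, 1 = NML
    return sum(comb(n, k) * theta ** k * (1 - theta) ** (n - k)
               for k, pair in enumerate(code_lengths(n)) if pair[code] < n * log(2))

theta_bias, sizes = 0.40, range(10, 201)
for name, code in (("enumerative", 0), ("NML", 1)):
    probs = {n: prob_detection(theta_bias, n, code) for n in sizes}
    lower = min(n for n in sizes if probs[n] >= 0.5)
    upper = max(n for n in sizes if probs[n] <= 0.5)
    print(name, lower, upper)   # should be close to the thresholds quoted above
```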
![Minimum sample size to detect a biased coin with probability greater than $50\%$.[]{data-label="fig_thresholdBiasedCoin"}](StudyBiasedCoinDetectionSizeThreshold.pdf "fig:"){width="70.00000%"}\
Figure \[fig\_thresholdBiasedCoin\] shows the detection thresholds computed using the NML or enumerative codes for $\theta_{bias} \in [0.35; 0.5]$. As expected, the minimum sample size necessary to detect a biased coin increases quickly when $\theta_{bias}$ becomes close to $\frac{1}{2}$. For $\theta_{bias} \approx 0.46$, around 1,000 trials are necessary to detect the bias, and for $\theta_{bias} \approx 0.495$, around 100,000 trials are necessary. Although all the thresholds are quite close, the enumerative code always needs smaller sample sizes to detect the biased coin.
![Minimum sample size to detect a biased coin with probability greater than $50\%$.[]{data-label="fig_thresholdBiasedCoinRatio"}](StudyBiasedCoinDetectionSizeThreshold_ratio.pdf "fig:"){width="70.00000%"}\
To better compare the thresholds without being hampered by the logarithmic scale of the sample size, Figure \[fig\_thresholdBiasedCoinRatio\] shows all the detection thresholds normalized by the enumerative upper threshold. The lower and upper thresholds converge both for the enumerative and NML codes. However, the difference between codes does not vanish with the sample size. At least within the range explored in this experiment, up to 100,000 trials, the enumerative code always needs around $10\%$ fewer samples on average than the NML code to detect a biased coin.
#### False versus true positive rate.
The probability of detecting a bias when a coin is actually biased can be interpreted as a true positive rate, and when the coin is fair as a false positive rate. Given this, the enumerative code needs fewer samples than the NML code to detect a bias with a true positive rate greater than $50\%$. In the case of a fair coin, the false positive rate of both codes decreases at a rate of $O(1/\sqrt{n})$, as shown in the experiment related to the percentage of compressible strings (cf. Section \[percentCompressibleBernoulli\]: formula \[eqn\_percentcompressible\] is the same as formula \[eqn\_biasedCoin\] for $\theta_{bias} = \frac{1}{2}$). Still, the false positive rate is about $60\%$ higher for the enumerative code than for the NML code.
Overall, the enumerative code compresses most binary strings slightly better than the NML code, resulting in a better sensitivity to biased coins at the expense of more false detections in case of fair coins.
Biased versus fair coin classification {#secCoinClassification}
--------------------------------------
To further investigate the comparison between the NML and enumerative codes, we suggest a classification experiment where the objective is to predict whether a coin is fair or biased. Let $\theta_{bias} \in [0;1]$ and $n \in \mathbb{N}^*$ be fixed parameters. The instances to classify are sequences $x^n$ of $n$ trials generated with equal probability ($p_F=p_B = \frac{1}{2}$) either from a fair coin ($\theta = \frac{1}{2}$) or from a biased coin ($\theta = \theta_{bias}$). The objective is to predict whether the coin that produced each sequence was fair or biased. As in Section \[biasedCoin\], we evaluate both the NML and enumerative codes as classifiers by predicting a bias if they can encode a sequence with a coding length shorter than that of the random code ($n\log 2$), and predicting fair otherwise.
[Real $\downarrow$ Predicted $\rightarrow$]{} Bias Fair
----------------------------------------------- ------ ------
Bias TP FN
Fair FP TN
: Coin classification results.[]{data-label="coinContigencyTable"}
The result can be analyzed in terms of a contingency table, as illustrated in Table \[coinContigencyTable\]:
- true positive (TP): detecting bias correctly,
- false positive (FP): detecting bias when there is none,
- true negative (TN): detecting fair correctly,
- false negative (FN): detecting fair when the coin is biased.
In this experiment, we focus on the correct detections:
- sensitivity or true positive rate $TPR= TP/(TP+FN)$ for the correct detection of bias,
- specificity or true negative rate $TNR= TN/(TN+FP)$ for the correct detection of fair,
- accuracy $ACC = (TP+TN)/(TP+FP+FN+TN)$ for the global rate of correct detection.
For given $\theta_{bias}$ and $n$ parameters and for each code, we compute the expectation of the indicators by integrating over the distribution of all the sequences produced by the generation process.
$$\begin{aligned}
\label{eqn_TPR}
E(TPR)
&=& \mathrm{E}_{B(\theta_{bias})}
\left( \mathbb{1}_{ \left\{ L\left(\widehat{\theta}(x^n), x^n\right) < n \log 2 \right\} } \right),\\
&=& \sum_{k=0}^n {{{n}\choose{k}} \theta_{bias}^{k} (1-\theta_{bias})^{n-k}
\mathbb{1}_{ \left\{ L\left(\widehat{\theta}(x^n), x^n\right) < n \log 2 \right\} } },\\
E(TNR)
&=& \mathrm{E}_{B(1/2)}
\left( \mathbb{1}_{ \left\{ L\left(\widehat{\theta}(x^n), x^n\right) \geq n \log 2 \right\} } \right), \\
&=& \frac{1}{2^n} \sum_{k=0}^n {{{n}\choose{k}}
\mathbb{1}_{ \left\{ L\left(\widehat{\theta}(x^n), x^n\right) \geq n \log 2 \right\} } },\\
E(ACC)
&=& p_B E(TPR) + p_F E(TNR),\\
&=& \frac{E(TPR) + E(TNR)}{2}.\end{aligned}$$
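The sketch below shows how these expectations can be evaluated exactly for the enumerative code; $\theta_{bias}=0.55$ and $n=1000$ are illustrative values, and the NML variant only changes the decision rule.

```python
# Minimal sketch (natural logarithms): exact evaluation of E(TPR), E(TNR) and
# E(ACC) for the enumerative code used as a bias detector.
from math import comb, log

def expected_indicators(theta_bias, n):
    detects = [log(n + 1) + log(comb(n, k)) < n * log(2) for k in range(n + 1)]
    tpr = sum(comb(n, k) * theta_bias ** k * (1 - theta_bias) ** (n - k)
              for k in range(n + 1) if detects[k])
    tnr = sum(comb(n, k) for k in range(n + 1) if not detects[k]) / 2 ** n
    return tpr, tnr, (tpr + tnr) / 2

print(expected_indicators(0.55, 1000))   # illustrative values of theta_bias and n
```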
![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_0501.pdf "fig:"){width="45.00000%"} ![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_0501_delta.pdf "fig:"){width="45.00000%"}\
![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_051.pdf "fig:"){width="45.00000%"} ![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_051_delta.pdf "fig:"){width="45.00000%"}\
![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_055.pdf "fig:"){width="45.00000%"} ![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_055_delta.pdf "fig:"){width="45.00000%"}\
![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_060.pdf "fig:"){width="45.00000%"} ![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_060_delta.pdf "fig:"){width="45.00000%"}\
![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_075.pdf "fig:"){width="45.00000%"} ![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_075_delta.pdf "fig:"){width="45.00000%"}\
![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_090.pdf "fig:"){width="45.00000%"} ![Classification of coins using the NML and Enum codes for different biases. Accuracy (left) and difference $(Enum - NML)$ for the true positive, false negative and accuracy (right).[]{data-label="fig_CoinClassification"}](StudyCoinDetectionAccuracy_090_delta.pdf "fig:"){width="45.00000%"}
We perform the coin classification experiment for $\theta_{bias} \in \{0.501, 0.51, 0.55, 0.60, 0.75, 0.90\}$ and $n$ ranging from 1 to $10,000$ using the enumerative and the NML codes ($n$ up to $10,000,000$ for $\theta_{bias} = 0.501$). Figure \[fig\_CoinClassification\] reports the accuracy results (left) as well as the difference $(Enum - NML)$ of the three indicators.
Overall, both codes exhibit a similar behavior w.r.t. the coin classification problem, with accuracy increasing from 0.5 for small $n$ to 1 for large $n$, and a slow increase rate for small biases and a fast one for large biases. Except for tiny samples with $n \leq 20$, the difference between any of the three indicators never exceeds around 15%. However, there are some interesting differences. As noticed in Section \[biasedCoin\], the enumerative code has a better sensitivity at the expense of a worse specificity, and the aggregated accuracy result exhibits a variety of behaviors. When the bias is small ($\theta_{bias}$ close to $\frac{1}{2}$), the enumerative code is far more sensitive while being a little less specific, resulting in more accurate predictions in the non-asymptotic case. When the bias is large ($\theta_{bias}$ far from $\frac{1}{2}$), both codes reach almost the same sensitivity while the enumerative code remains less specific, resulting in slightly less accurate predictions. In all cases, the differences between both codes become tiny for large $n$, in the asymptotic case.
The case of multinomial distribution {#secMultinomial}
====================================
Let us consider the multinomial model with parameter $\theta = (\theta_1, \ldots, \theta_m), \; \sum_{j=1}^m {\theta_j}=1, \forall j, \theta_j> 0$, such that $P_{\theta}(X=j) = \theta_j$, in the case of m-ary sequences $x^n \in X^n$ of size $n$. For a given sequence $x^n$, $P_{\theta}(x^n) = \prod_{j=1}^m {\theta_j^{n_j}}$, where $n_j$ is the number of occurrences of outcome $j$ in sequence $x^n$.
Standard NML approach {#secMultinomialNML}
---------------------
As pointed out in [@RooijEtAl09],
> “The NML distribution has a number of significant practical problems. First, it is often undefined, because for many models the denominator in (\[eqnNML\]) is infinite, even for such simple models as the Poisson or geometric distributions. Second, $X^n$ is exponentially large in $n$, so calculating the NML probability exactly is only possible in special cases such as the Bernoulli model above, where the number of terms is reduced using some trick. Something similar is possible for the more general multinomial model (...), but in most cases \[it\] has to be approximated, which introduces errors that are hard to quantify.”
The parametric complexity of the NML universal model with respect to a k-parameter exponential family model is usually approximated by $\frac{k}{2} \log \frac{n}{2 \pi}$ [@Grunwald07]. In the case of the multinomial distribution with $(m-1)$ free parameters, this gives $\frac{m-1}{2} \log \frac{n}{2 \pi}$. A better approximation based on Rissanen’s asymptotic expansion [@Rissanen96] is presented in [@Kontkanen2009]: $$\label{compM_R}
COMP_{nml}^{(n)}(\theta) = \frac{m-1}{2} \log \frac{n}{2 \pi} + \log \frac{\pi^{m/2}}{\Gamma (m/2)} + o(1),$$ where $\Gamma(.)$ is the Euler gamma function. Still in [@Kontkanen2009], a sharper approximation based on Szpankowski’s approximation is presented. This last approximation, though far more complex, is very accurate w.r.t. $n$, with $o(\frac{1}{n^{3/2}})$ precision. We present below its first terms up to $o(\frac{1}{\sqrt n})$, which are actually the same as in Rissanen’s approximation: $$\label{compM_S}
COMP_{nml}^{(n)}(\theta) = \frac{m-1}{2} \log \frac{n}{2} + \log \frac{\sqrt \pi}{\Gamma (m/2)} + o(\frac{1}{\sqrt n}).$$
Finally, [@KontkanenEtAl07] propose an exact computation of the multinomial stochastic complexity, at the expense of sophisticated algorithms with quasilinear computation time.
Enumerative two-part crude MDL {#secEnumerativeM}
------------------------------
We apply the same approach as in the case of the Bernoulli model (Sections \[secEnumerativeB\] and \[secEnumBayesian\]). Given a sample size $n$, the number of tuples $(n_1, n_2, \ldots, n_m)$ such that $\sum_{j=1}^m {n_j} = n$ is ${{n+m-1}\choose{m-1}}$. We then encode the multinomial model parameter using a uniform prior $$P\left(\theta = \left(\frac{n_1}{n}, \frac{n_2}{n}, \ldots, \frac{n_m}{n}\right) \right) = 1 / {{n+m-1}\choose{m-1}},$$ leading to $L(\theta) = \log {{n+m-1}\choose{m-1}}$. Second, we have to encode the data $x^n$ as compactly as possible given the parameter $\theta$.
We suggest using a probability distribution for encoding the finite size data sample $x^n$, with the following likelihood.
For $\theta \neq \left(\frac{n_1(x^n)}{n}, \frac{n_2(x^n)}{n}, \ldots, \frac{n_m(x^n)}{n}\right)$, we cannot encode the data and $P(x^n|\theta) = 0$.
For $\theta = \widehat{\theta}(x^n) = \left(\frac{n_1(x^n)}{n}, \frac{n_2(x^n)}{n}, \ldots, \frac{n_m(x^n)}{n}\right)$, the observed data is consistent with the model parameter and we assume that all the possible observable data are uniformly distributed. The number of m-ary strings where the number of occurrences of outcome $j$ is $n_j$ is given by the multinomial coefficient $\frac{n!}{n_1! n_2! \ldots n_m!}$. Thus the probability of observing one particular m-ary string is $P(x^n|\widehat{\theta}(x^n)) = 1/\frac{n!}{n_1! n_2! \ldots n_m!}$. This gives a total code length of
$$L(\widehat{\theta}(x^n), x^n) = \log {{n+m-1}\choose{m-1}} + \log \frac{n!}{n_1! n_2! \ldots n_m!},$$
defined only when $\theta = \widehat{\theta}(x^n)$.
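As an illustration, the sketch below computes this two-part code length from a hypothetical count vector, in natural logarithms.

```python
# Minimal sketch: enumerative two-part code length of an m-ary sample computed
# from its count vector; the counts below are hypothetical.
from math import comb, lgamma, log

def enumerative_code_length(counts):
    n, m = sum(counts), len(counts)
    parameter_cost = log(comb(n + m - 1, m - 1))                    # log C(n+m-1, m-1)
    data_cost = lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)  # log n!/(n_1!...n_m!)
    return parameter_cost + data_cost

print(enumerative_code_length([40, 35, 25]))   # n = 100, m = 3
```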
NML interpretation {#nml-interpretation}
------------------
Let us compute the NML parametric complexity of this enumerative code on the basis of the discrete likelihood. We have
$$\begin{aligned}
COMP^{(n)}(\theta) &=& \log \sum_{y^n \in X^n} {P_{\widehat{\theta}(y^n)}(y^n)}, \\
&=& \log \sum_{\{n_1+\ldots+n_m=n\}} {\frac{n!}{n_1! n_2! \ldots n_m!} \left({1} / \frac{n!}{n_1! n_2! \ldots n_m!} \right)},\\
&=& \log {{n+m-1}\choose{m-1}}.\end{aligned}$$
Interestingly, we find exactly the same complexity term as the coding length of the best hypothesis in the enumerative approach, which simply relies on counting the possibilities for the model parameters. As in the Bernoulli case, this shows that the enumerative code is both a two-part and a one-part code, optimal w.r.t. the NML approach and parametrization invariant. We have an exact formula for the complexity term, which is very simple to compute. Using Stirling’s approximation $\log n! = n \log n - n + \frac{1}{2} \log {2 \pi n} + O(1/n)$, we get the following asymptotic approximation:
$$\label{compEnum_M}
COMP^{(n)}(\theta) = (m-1)(\log n - \log (m -1) +1) -\frac{1}{2} \log {2 \pi (m-1)} + o(\frac{1}{n}).$$
Once again, this asymptotic model complexity, which grows as $(m-1)\log n$, is approximately twice that of the alternative classical NML code or of the standard BIC regularization term $\frac{m-1}{2}\log n$.
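The quality of this approximation is easy to check numerically; the sketch below compares the exact value with Formula \[compEnum\_M\] for a few illustrative values of $m$ and $n$ (natural logarithms).

```python
# Minimal sketch: exact enumerative parametric complexity log C(n+m-1, m-1)
# versus the Stirling-based approximation of Formula (compEnum_M).
from math import comb, log, pi

def comp_enum_exact(n, m):
    return log(comb(n + m - 1, m - 1))

def comp_enum_approx(n, m):
    return (m - 1) * (log(n) - log(m - 1) + 1) - 0.5 * log(2 * pi * (m - 1))

for m in (5, 10):
    for n in (100, 1000, 10000):
        # the two values agree closely as soon as n >> m
        print(m, n, comp_enum_exact(n, m), comp_enum_approx(n, m))
```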
Code comparison for the multinomial distributions {#secComparisonM}
=================================================
In this section, we compare the NML code (Section \[secMultinomialNML\]) and enumerative two-part crude MDL codes (Section \[secEnumerativeM\]) for the multinomial distribution.
Notation {#secNotationM}
--------
Let us use the names *NML* and *enumerative* for the specific MDL codes presented in Sections \[secMultinomialNML\] and \[secEnumerativeM\]. We also consider the *random* code as a baseline: it corresponds to a direct encoding of each m-ary string $x^n$ with a coding length of $n \log m$. The likelihood of each string $x^n$ is $1/m^n$, and as $\sum_{\{n_1+\ldots+n_m=n\}} {\frac{n!}{n_1! n_2! \ldots n_m!} 1/m^n} = 1$, we have $COMP_{random}^{(n)}(\emptyset) = 0$ and $L_{random}\left(x^n|\emptyset \right) = n \log m$.
Table \[tableCodesM\] recalls the parametric and stochastic complexities of each considered code for the multinomial distribution.
Code name $COMP_{name}^{(n)}$ $L_{name}\left(x^n|\widehat{\theta}(x^n) \right)$
--------------- ----------------------------------------------------------------------------------------------- ---------------------------------------------------
*enumerative* $\log {{n+m-1}\choose{m-1}}$ $\log \frac {n!} {n_1! \ldots n_m!}$
*NML* $\frac{m-1}{2} \log \frac{n}{2} + \log \frac{\sqrt \pi}{\Gamma (m/2)} + o(\frac{1}{\sqrt n})$ $\log \frac {n^n} {n_1^{n_1} \ldots n_m^{n_m}}$
*random* $0$ $n \log m$
: Parametric and stochastic complexity per code.[]{data-label="tableCodesM"}
Stochastic complexity term {#secStochasticM}
--------------------------
The stochastic complexity term of the enumerative code is always smaller than that of the NML code for non-degenerate m-ary strings:
$$\label{sc_Multinomial}
\forall n, \forall x^n \in X^n \; \mbox{such that} \; (\max_{1 \leq j \leq m} n_j ) < n \; \mbox{ then} \;
L_{enum}\left(x^n|\widehat{\theta}(x^n)\right) < L_{nml}\left(x^n|\widehat{\theta}(x^n)\right).$$
An intuitive proof relies on the fact that the enumerative MDL likelihood assigns the same probability to all m-ary strings having the same number of occurrences of each outcome $j$, with a null probability for all the other strings. The NML likelihood also assigns the same probability to these m-ary strings, but with a non-null probability for the other strings. These strings then have to share a smaller probability mass, resulting in a smaller probability per string and a strictly greater coding length.
To gain further insights, let us approximate the difference of coding length for the stochastic complexity term: $$\delta L_{SC}\left(x^n|\widehat{\theta}(x^n)\right) = L_{nml}\left(x^n|\widehat{\theta}(x^n)\right) -L_{enum}\left(x^n|\widehat{\theta}(x^n)\right).$$
We assume that $\forall j, n_j > 0$ and $n_j \approx n \widehat{\theta}_j$. Using Stirling’s approximation $\log n! = n \log n - n + \frac{1}{2} \log {2 \pi n} + O(1/n)$, we get $$\begin{aligned}
L_{enum}\left(x^n|\widehat{\theta}(x^n)\right)
&=& \log \frac{n!}{n_1! n_2! \ldots n_m!} \\
&=& n \log n -n + \frac{1}{2} \log {2 \pi n} + O(1/n) \\
&& -\sum_{j=1}^m {\left(n_j \log n_j - n_j + \frac{1}{2} \log {2 \pi n_j} + O(1/n_j)\right)} \\
&=& \log \frac {n^n} {n_1^{n_1} \ldots n_m^{n_m}} - \frac{m-1}{2} \log {2 \pi n}
- \frac{1}{2} \log \prod_{j=1}^m {\widehat{\theta}_j} + O(1/n) \\\end{aligned}$$
We get
$$\label{deltaM_SC}
\delta L_{SC}\left(x^n|\widehat{\theta}(x^n)\right) = \frac{m-1}{2} \log {2 \pi n}
+ \frac{1}{2} \log \prod_{j=1}^m {\widehat{\theta}_j} + O(1/n).$$
It is noteworthy that the $\mathrm{var}(\widehat{\theta})$ term in the Bernoulli case (see Formula \[deltaL\]) generalizes to a $\prod_{j=1}^m {\widehat{\theta}_j}$ term in the multinomial case.
These results demonstrate that the enumerative code provides a better encoding of the data with the help of the model for all non-degenerate m-ary strings. The gain in coding length compared to the NML code is always positive and grows asymptotically as $(m-1)/2$ times the logarithm of the sample size.
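The sketch below checks Formula \[deltaM\_SC\] against the exact difference for one hypothetical count vector, in natural logarithms.

```python
# Minimal sketch: exact gap L_nml - L_enum in the stochastic complexity term
# versus the approximation of Formula (deltaM_SC), for one count vector.
from math import lgamma, log, pi

counts = [300, 150, 50]                      # n = 500, m = 3 (illustrative)
n, m = sum(counts), len(counts)

exact = (n * log(n) - sum(c * log(c) for c in counts)) \
        - (lgamma(n + 1) - sum(lgamma(c + 1) for c in counts))
approx = (m - 1) / 2 * log(2 * pi * n) + 0.5 * sum(log(c / n) for c in counts)
print(exact, approx)   # the two values agree up to an O(1/n) correction
```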
Parametric complexity term {#secParametricM}
--------------------------
Using inequality \[sc\_Multinomial\], and since the parametric complexity of a code is obtained by summing the maximized likelihoods over all possible strings, we get:
$$\label{pc_Multinomial}
\forall n > 1, COMP_{enum}^{(n)} > COMP_{NML}^{(n)}.$$
Both terms are equal for $n=1$ and asymptotically, the parametric complexity of the enumerative code is twice that of the NML code (see Formulas \[compM\_S\] and \[compEnum\_M\]).
![Parametric complexity for the multinomial model with 10 or 100 categories.[]{data-label="fig_compM"}](CompareMultinomialCOMP10.pdf "fig:"){width="85.00000%"}\
![Parametric complexity for the multinomial model with 10 or 100 categories.[]{data-label="fig_compM"}](CompareMultinomialCOMP100.pdf "fig:"){width="85.00000%"}
We now focus on the non-asymptotic behavior of the parametric complexity terms and their approximations. Figure \[fig\_compM\] shows the value of the parametric complexity of the multinomial model, using the enumerative code, the NML code (exact numerical computation and Rissanen or Szpankowski based approximations: see Section \[secMultinomialNML\]), as well as the related BIC penalization term.
The BIC approximation is very poor, all the more so as $m$ increases. The Rissanen approximation of the NML parametric complexity is very good as soon as $n$ is about 100 times the number $m$ of categories, but less accurate for small sample sizes. As expected, the Szpankowski based approximation is much sharper, being accurate as soon as $n$ is beyond $m$, but with bad accuracy for $n \ll m$.
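The exact NML parametric complexity used here can also be computed without enumerating all count vectors, by recursing over the categories; the recursion below follows directly from the definition of the normalizing sum (it is not the faster algorithm of [@KontkanenEtAl07]), and the values of $m$ and $n$ are illustrative.

```python
# Minimal sketch (natural logarithms): exact NML parametric complexity of the
# multinomial model via a recursion over categories, compared with the
# enumerative complexity and the BIC term.
from functools import lru_cache
from math import comb, log

@lru_cache(maxsize=None)
def nml_norm(n, m):
    """Sum over count vectors of (multinomial coefficient) * prod_j (n_j/n)^{n_j}."""
    if m == 1 or n == 0:
        return 1.0
    return sum(comb(n, k) * (k / n) ** k * ((n - k) / n) ** (n - k)
               * nml_norm(n - k, m - 1) for k in range(n + 1))

m = 10
for n in (10, 100, 500):
    comp_nml, comp_enum = log(nml_norm(n, m)), log(comb(n + m - 1, m - 1))
    bic = (m - 1) / 2 * log(n)
    # the enumerative/NML ratio slowly approaches 2 (cf. Figure fig_ratio_compM)
    print(n, comp_nml, comp_enum, comp_enum / comp_nml, bic)
```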
Let us now compute an asymptotic approximation of the difference of parametric complexity between the two codes: $$\delta L_{PC} COMP^{(n)}(\theta) = COMP_{nml}^{(n)}(\theta) - COMP_{enum}^{(n)}(\theta).$$
Using previous approximations presented in Formulas \[compM\_S\], \[compEnum\_M\] and the Stirling’s approximation of the Gamma function, we get:
$$\begin{aligned}
\label{eqn_diffPC_M}
\delta L_{PC} COMP^{(n)}(\theta)
&=& \frac{m-1}{2} \log \frac{n}{2} + \log \sqrt \pi
-(\frac{m}{2} \log \frac{m}{2} - \frac{m}{2} -\frac{1}{2} \log \frac{m}{4 \pi})\\
&& - ((m-1)(\log n - \log (m -1) +1) -\frac{1}{2} \log {2 \pi (m-1)}) \\
&& + o(\frac{1}{\sqrt n}),\\
&=& -\frac{m-1}{2} \log n -\frac{m-1}{2} \log 2 + \frac{1}{2}\log \pi \\
&&-\frac{m}{2} \log m + \frac{m}{2} \log 2 + \frac{m}{2} + \frac{1}{2} \log m - \frac{1}{2} \log \pi - \log 2\\
&& + (m-1)\log (m -1) - (m-1) \\
&& + \frac{1}{2} \log 2 + \frac{1}{2} \log \pi + \frac{1}{2} \log (m-1) + o(\frac{1}{\sqrt n}),\\
&=& -\frac{m-1}{2} \log {n m e} + (m - \frac{1}{2}) \log (m-1) + \frac{1}{2} \log {\pi e} + o(\frac{1}{\sqrt n}).\end{aligned}$$
We obtain $$\begin{aligned}
\label{deltaM_PC}
\delta L_{PC} COMP^{(n)}(\theta)
&=& -\frac{m-1}{2} \log n + \frac{m}{2}\log{\frac{m}{e}} \\ \nonumber
&& +\log (e \sqrt \pi) + (m-\frac{1}{2})\log(1-\frac{1}{m}) + o(\frac{1}{\sqrt n})\end{aligned}$$
and for $n >> m$
$$\begin{aligned}
\label{deltaM_PC2}
\delta L_{PC} COMP^{(n)}(\theta)
&=& -\frac{m-1}{2} \log n + \frac{m}{2}\log{\frac{m}{e}}
+\log \sqrt \pi + o(\frac{1}{m}) + o(\frac{1}{\sqrt n}).\end{aligned}$$
This result demonstrates that the difference in parametric complexity increases as the logarithm of the sample size, at a rate of $(m-1)/2$; for small sample sizes (typically $n \leq m$), however, the difference remains small.
![Ratio of enumerative to the NML parametric complexities for the multinomial model with up to 100,000 outcomes.[]{data-label="fig_ratio_compM"}](StudyMultinomialCOMP.pdf){width="85.00000%"}
We illustrate this behavior in the non-asymptotic case. Figure \[fig\_ratio\_compM\] focuses on the ratio of the exact parametric complexity terms for the enumerative and NML codes. This ratio always increases from 1 for $n=1$ to 2 when $n$ goes to infinity, with the speed of convergence decreasing as the number $m$ of outcomes increases.
Overall code length {#secOverallM}
-------------------
Both the enumerative and NML codes exploit universal distributions on all m-ary strings $x^n \in X^n$. The compression of the data with the help of the model is better for the enumerative distribution, at the expense of a worse parametric complexity. The overall code length is the sum of the parametric and stochastic complexities. Using previous approximations presented in Formulas \[deltaM\_SC\], \[deltaM\_PC\], we obtain the following approximation of the difference of overall code lengths between the two codes:
$$\begin{aligned}
\label{eqn_mixtureM}
\Delta L\left(\widehat{\theta}(x^n), x^n\right)
&=& \delta L_{PC} COMP^{(n)}(\theta) + \delta L_{SC}\left(x^n|\widehat{\theta}(x^n)\right),\\
&=& -\frac{m-1}{2} \log n + \frac{m}{2}\log{\frac{m}{e}}
+\log (e \sqrt \pi) + (m-\frac{1}{2})\log(1-\frac{1}{m})\\
&& + \frac{m-1}{2} \log {2 \pi n}
+ \frac{1}{2} \log \prod_{j=1}^m {\widehat{\theta}_j} + o(\frac{1}{\sqrt n}).\end{aligned}$$
We obtain
$$\begin{aligned}
\label{eqn_mixtureM2}
\Delta L\left(\widehat{\theta}(x^n), x^n\right)
&=& \frac{m}{2} \log \frac{m 2 \pi}{e} + \frac{1}{2} \log \prod_{j=1}^m {\theta_j}
+\log \frac{e}{\sqrt 2} + (m-\frac{1}{2})\log(1-\frac{1}{m})
+ o(\frac{1}{\sqrt n}),\end{aligned}$$
and for $n >> m$
$$\begin{aligned}
\label{eqn_mixtureM3}
\Delta L\left(\widehat{\theta}(x^n), x^n\right)
&=& \frac{m}{2} \log \frac{m 2 \pi}{e}
+ \frac{1}{2} \log \prod_{j=1}^m {\theta_j}
- \log \sqrt 2 + o(\frac{1}{m}) + o(\frac{1}{\sqrt n}) .\end{aligned}$$
Asymptotically, the difference in overall code length does not depend on the size $n$ of the string. Both codes differ by a margin that depends essentially on the number $m$ of outcomes and on the multinomial parameter $\theta$.
#### Case of balanced multinomial distributions.
The term $\log \prod_{j=1}^m {\theta_j}$ is maximal for equidistributed multinomial distributions ($\theta_j=1/m$). For such distributions, using Formula \[eqn\_mixtureM3\], we get $$\begin{aligned}
\label{eqn_mixtureM4}
\Delta L\left(\widehat{\theta}(x^n), x^n\right)
&=& \frac{m}{2}\log \frac{2 \pi}{e} - \log \sqrt 2 + o(\frac{1}{m}) + o(\frac{1}{\sqrt n}),\end{aligned}$$ which means that the enumerative code compresses the strings better than the NML code with a margin that increases linearly with $m$.
#### Case of degenerate multinomial distributions.
In case of multinomial distributions with one single observed outcome (e.g. $\widehat{\theta} = (1, 0, \ldots, 0)$), both the NML and enumerative codes have a null stochastic complexity and the difference between the coding lengths reduces to the difference between the parametric complexity terms (see Formula \[deltaM\_PC\]). In this extreme case, the NML code compresses the string better with a margin that grows as $\frac{m-1}{2}$ times the logarithm of the sample size.
#### Case of unbalanced multinomial distributions.
Let us study the boundary between balanced distributions and degenerate distributions, where the enumerative code dominates the NML code and conversely. We seek distributions where both codes achieve approximately the same compression. For that purpose, let us consider peaked multinomial distributions, with most of the probability mass on the first outcome ($\theta_1 = \theta_{max}$) and the rest of the probability mass equidistributed over the other outcomes ($\theta_j = \theta_{min}, 2 \leq j \leq m$, with $\theta_{min} = \frac {1 - \theta_{max}} {m-1}$). Using Formula \[eqn\_mixtureM3\], we thus try to find the peaked distribution $\theta = (\theta_{max}, \theta_{min}, \ldots, \theta_{min})$ such that $\Delta L\left(\widehat{\theta}(x^n), x^n\right) = o(1)$. The solution is obtained for
$$\label{peakThresholdMax}
\theta_{max} = 1-\frac{m-1}{m + \log m}\frac{e}{2 \pi} > 0.56,$$
$$\label{peakThresholdMin}
\theta_{min} = \frac{e}{2 \pi (m + \log m)} < 0.44\frac{1}{m},$$
leading to
$$\Delta L\left(\widehat{\theta}(x^n), x^n\right) = \alpha + o(\frac{1}{m}) + o(\frac{1}{\sqrt n})$$
with $\alpha \approx 0.37$. This peaked distribution is at the limit where the NML code compresses the data better than the enumerative code. Numerical experiments, not reported here, confirm the accuracy of Formulas \[peakThresholdMax\] and \[peakThresholdMin\] and show that the asymptotic value of the peak probability $\theta_{max}$ behaves as a lower bound of the probability obtained in the non-asymptotic case.
For Bernoulli distributions, we had $\theta_{max} \approx 0.886$ (see Formula \[eqn\_compareCodes\]), and not surprisingly, $\theta_{max}$ decreases with $m$ (see Formula \[peakThresholdMax\]). Interestingly, $\theta_{max}$ is always greater than $0.56$ whatever the value of $m$. This means that when $m$ increases, the ratio $\theta_{max}/\theta_{min}$ grows linearly with $m$ and the fraction of multinomial distributions where the NML code dominates the enumerative code becomes negligible.
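The following sketch simply evaluates Formulas \[peakThresholdMax\] and \[peakThresholdMin\] for a few values of $m$, illustrating the two observations above.

```python
# Minimal sketch: limiting peaked distributions of Formulas (peakThresholdMax)
# and (peakThresholdMin) for a few values of m.
from math import e, log, pi

for m in (2, 3, 10, 100, 1000):
    theta_min = e / (2 * pi * (m + log(m)))
    theta_max = 1 - (m - 1) * theta_min
    # theta_max stays above 0.56; theta_max/theta_min grows roughly linearly with m
    print(m, theta_max, theta_max / theta_min)
```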
#### Synthesis.
The overall difference of coding length is in favor of the enumerative code for balanced strings, with a margin that increases linearly with $m$. The NML code is better only for heavily unbalanced strings, where the most frequent outcome occurs more than half of the time, whatever $m$. Under the uniform distribution, such unbalanced strings are far less frequent than balanced strings, and the enumerative code compresses most strings better than the NML code.
Expectation of the coding length of all m-ary strings
-----------------------------------------------------
![Expected overhead of coding length w.r.t. random model for $m=5$ (left) and $m=10$ (right).[]{data-label="fig_codinglengthM"}](CompareMultinomial5CodingLength.pdf "fig:"){width="49.00000%"} ![Expected overhead of coding length w.r.t. random model for $m=5$ (left) and $m=10$ (right).[]{data-label="fig_codinglengthM"}](CompareMultinomial10CodingLength.pdf "fig:"){width="49.00000%"}
Let us estimate the expectation of the coding length for all m-ary strings under the uniform distribution, where $\forall x^n \in X^n, p(x^n) = 1/{m^n}$. $$\begin{aligned}
\label{eqn_codinglengthM}
\mathrm{E}\left(L\left(\widehat{\theta}(x^n), x^n\right)\right) &=& \frac{1} {m^n} \sum_{x^n \in X^n} {L\left(\widehat{\theta}(x^n), x^n\right)},\\ \nonumber
&=& \frac{1} {m^n} \sum_{\{n_1+\ldots+n_m=n\}} {\frac{n!}{n_1! n_2! \ldots n_m!} L\left(\widehat{\theta}(x^n), x^n\right)}.\end{aligned}$$
We perform an exact numerical calculation using the exact value of the NML parametric complexity term, for $m=5$ ($1 \leq n \leq 100$) and $m=10$ ($1 \leq n \leq 50$). Let us notice that the sum in Formula \[eqn\_codinglengthM\] involves more than ten billion terms for $m=10$ and $n=50$. Figure \[fig\_codinglengthM\] reports the expected coding length for the enumerative and NML codes minus that of the random code ($n \log m$). The results show that both codes have an asymptotic overhead that grows as $(m-1)/2 \log n$, compared to the direct encoding of the m-ary strings. Under the uniform distribution, the enumerative code always compresses the data better than the NML code, especially in the non-asymptotic case. As for the Bernoulli codes, most m-ary strings have almost equidistributed counts, and their shorter coding lengths obtained with the enumerative code provide the main contribution to the expectation of the coding length. Following Formula \[eqn\_mixtureM3\], the enumerative code compresses the m-ary strings better than the NML code with a margin that grows linearly with $m$.
Percentage of compressible m-ary strings {#percentCompressibleMultibomial}
----------------------------------------
We now focus on the percentage $p_{compressible}$ of compressible m-ary strings using both the enumerative and NML codes, that is the percentage of m-ary strings with coding length shorter than $n \log m$:
$$\begin{aligned}
\label{eqn_percentMcompressible}
p_{compressible} &=& \frac{1} {m^n} \sum_{x^n \in X^n}
{\mathbb{1}_{\left\{L\left(\widehat{\theta}(x^n), x^n\right) \leq n \log m\right\}}},\\ \nonumber
&=& \frac{1} {m^n} \sum_{\{n_1+\ldots+n_m=n\}} {\frac{n!}{n_1! n_2! \ldots n_m!}
\mathbb{1}_{\left\{L\left(\widehat{\theta}(x^n), x^n\right) \leq n \log m\right\}}}.\end{aligned}$$
![Percentage of compressible m-ary strings (left: $m=3$; right: $m=5$).[]{data-label="fig_percentMcompressible"}](CompareMultinomial3PercentCompressible.pdf "fig:"){width="49.00000%"} ![Percentage of compressible m-ary strings (left: $m=3$; right: $m=5$).[]{data-label="fig_percentMcompressible"}](CompareMultinomial5PercentCompressible.pdf "fig:"){width="49.00000%"}
As previously, we perform an exact numerical calculation for all $n, 1 \leq n \leq n_{max}$, with $n_{max} = 5000$ for $m=3$ and $n_{max} = 500$ for $m=5$. Figure \[fig\_percentMcompressible\] shows that empirically, beyond the non-asymptotic case, the percentage of compressible strings decreases at a rate of $O(1/n^{(m-1)/2})$ for both codes. However, the enumerative code always compresses more m-ary strings than the NML code.
![Ratio of compressible strings using the enumerative rather than the NML code (left: $m=2$; center: $m=3$; right: $m=5$).[]{data-label="fig_ratiocompressible"}](CompareBernoulliRatioCompressible.pdf "fig:"){width="39.00000%"} ![Ratio of compressible strings using the enumerative rather than the NML code (left: $m=2$; center: $m=3$; right: $m=5$).[]{data-label="fig_ratiocompressible"}](CompareMultinomial3RatioCompressible.pdf "fig:"){width="29.00000%"} ![Ratio of compressible strings using the enumerative rather than the NML code (left: $m=2$; center: $m=3$; right: $m=5$).[]{data-label="fig_ratiocompressible"}](CompareMultinomial5RatioCompressible.pdf "fig:"){width="26.00000%"}
To better characterize the behavior of each code, especially in the non-asymptotic case, we report in Figure \[fig\_ratiocompressible\] the ratio of the number of compressible strings of the enumerative to the NML code, for $m=2$ (Bernoulli), $m=3$ and $m=5$. Empirically, beyond the non-asymptotic case, this ratio converges to a constant that increases with $m$: around 1.6 for $m=2$, 2.5 for $m=3$ and above 5 for $m=5$. We expect that this empirical behavior generalizes to larger $m$, but empirical evaluation is not feasible for large $m$, even for small $n$. On the other hand, studying the asymptotic behavior of this ratio is a non-trivial task, beyond the scope of this paper.
Detection of a biased die {#biasedDice}
--------------------------
We apply the previous multinomial codes to the problem of detection of a biased die. A fair die is a randomizing device with $m$ outcomes that are equally likely to occur, which can be modeled using a multinomial process with equidistributed $\theta_j = \frac{1}{m}$. Among all the possibilities of bias, we choose a simple family of peaked multinomial distributions, like those presented in Section \[secOverallM\]. A peaked biased die is then determined by one single parameter $\theta_{bias} > \frac{1}{m}$, with $\theta_1 = \theta_{bias}$ and $\theta_j = \frac{1-\theta_{bias}}{m-1}, \forall j, 2 \leq j \leq m$.
The problem is to determine whether a die is biased given a limited sample of multinomial trials. Given a sample $x^n$, we compute the coding length of this sample using either the NML or the enumerative code and decide that the die is biased if its coding length is shorter than that of the random code ($n\log m$). For a given size $n$ and a code (e.g. enumerative or NML), we compute the probability of detecting the biased die by averaging the detection over all the possible samples of size $n$. Formally, we thus compute:
$$\begin{aligned}
\label{eqn_biasedDice}
prob^D (\theta_{bias}, n)
&=& \mathrm{E}_{M(\theta_{bias})}
\left( \mathbb{1}_{ \left\{ L\left(\widehat{\theta}(x^n), x^n\right) < n \log m \right\} } \right) \\
&=& \sum_{\{n_1+\ldots+n_m=n\}} {\frac{n!}{n_1! n_2! \ldots n_m!} \theta_{bias}^{n_1} (\frac{1-\theta_{bias}}{m-1})^{n-n_1}
\mathbb{1}_{ \left\{ L\left(\widehat{\theta}(x^n), x^n\right) < n \log m \right\} } }. \nonumber\end{aligned}$$
The issue is to be able to detect a biased die with the minimum sample size. Using Formula \[eqn\_biasedDice\], we then determine the first value of $n$ where the probability of detecting the biased die is beyond $50\%$:
$$\begin{aligned}
\label{eqn_thresholdBiasedDice}
\underline{n}^{\; D} (\theta_{bias})
&=& \min_{n \geq 10} \{prob^D (\theta_{bias}, n) \geq 50\% \}.\end{aligned}$$
![Minimum sample size to detect a biased die (left: $m=3$; right: $m=5$) with probability greater than $50\%$.[]{data-label="fig_thresholdBiasedDice"}](StudyBiasedDice3DetectionSizeThreshold.pdf "fig:"){width="49.00000%"} ![Minimum sample size to detect a biased die (left: $m=3$; right: $m=5$) with probability greater than $50\%$.[]{data-label="fig_thresholdBiasedDice"}](StudyBiasedDice5DetectionSizeThreshold.pdf "fig:"){width="49.00000%"}
Figure \[fig\_thresholdBiasedDice\] shows the detection thresholds computed using the NML or enumerative codes for dice with $m=3$ and $m=5$. As expected, the minimum number of trials necessary to detect a biased die increases when $\theta_{bias}$ decreases. Although all the thresholds are quite close, the enumerative code always needs smaller sample sizes to detect the biased die. For example, for $m$=5, the enumerative code needs around $40\%$ fewer samples than the NML code to detect a biased die with $\theta_{bias} \approx 0.3$. According to the experiment, the relative difference decreases as the detection threshold increases, but this could not be studied further owing to the heavy computational cost.
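The sketch below evaluates Formula \[eqn\_biasedDice\] for the enumerative decision rule by enumerating count vectors; $m=3$, $\theta_{bias}=0.5$ and the sample sizes are illustrative choices, and the NML variant would only change the decision rule (using the exact NML complexity).

```python
# Minimal sketch (natural logarithms): probability of detecting a peaked biased
# die with the enumerative decision rule, following Formula (eqn_biasedDice).
from math import comb, exp, lgamma, log

def compositions(n, m):
    """All count vectors (n_1, ..., n_m) of non-negative integers summing to n."""
    if m == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, m - 1):
            yield (first,) + rest

def prob_detection_enum(theta_bias, n, m):
    comp_enum = log(comb(n + m - 1, m - 1))
    theta_min = (1 - theta_bias) / (m - 1)
    total = 0.0
    for counts in compositions(n, m):
        log_multi = lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)
        if comp_enum + log_multi < n * log(m):        # sample declared "biased"
            total += exp(log_multi + counts[0] * log(theta_bias)
                         + (n - counts[0]) * log(theta_min))
    return total

for n in (20, 50, 100):
    print(n, prob_detection_enum(0.5, n, 3))
```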
Biased versus fair dice classification {#secDiceClassification}
--------------------------------------
As in the case of Bernoulli distributions, the enumerative code compresses most m-ary strings slightly better than the NML code, resulting in a better sensitivity to biased dice at the expense of more false detections in case of fair dice. Interestingly, the difference of behavior between the two codes increases for larger $m$.
We do not extend the coin classification experiment (see Section \[secCoinClassification\]) to dice, because there are multiple free parameters defining a biased die and because the calculation of the expectation of accuracy is too computationally intensive. Still, we expect the results to be similar, with overall the same behavior w.r.t. the detection of biased dice, but better detection for the enumerative code in the non-asymptotic case for small biases.
Conclusion {#secConclusion}
==========
In this paper, we have revisited the enumerative two-part crude MDL code for the Bernoulli model, which compares favorably with the alternative standard NML code. We have suggested a Bayesian interpretation of the enumerative code, which relies on models for finite size samples and results in a discrete definition of the likelihood of the data given the model parameter. We have shown that the coding length of the model parameter is exactly the same as the model complexity computed by applying the NML formula using the definition of the enumerative maximum likelihood. This means that the enumerative code is both a one-part and two-part code, which brings parametrization independence, optimality and simplicity. Surprisingly, the obtained parametric complexity is twice that of the alternative classical NML code or the standard BIC regularization term. The enumerative code has a direct interpretation in terms of two-part codes for finite sample data. The model parameter is encoded using a uniform prior w.r.t. the sample size and the data are also encoded using a uniform prior among all the binary strings of given size that can be generated using the model parameter. This explains why the enumerative code provides a more parsimonious encoding of the data given the parameter, which compensates for the larger model complexity term. Experimental comparisons between the enumerative and NML codes show that they are very similar, with small differences only. Under the uniform distribution, the enumerative code compresses most individual sequences slightly better, resulting in a slightly better compression on average. An application to the detection of biased coins demonstrates that the enumerative code has a better sensitivity to biased coins at the expense of more false detections in case of fair coins, but the differences are small and vanish asymptotically.
Extension to the multinomial model is also presented. Using the same approach, we obtain a very simple and interpretable analytic formula for the parametric complexity term, which once again is approximately twice that of the alternative classical NML code or the standard BIC regularization term. The resulting code, both one-part and two-part, is optimal w.r.t. the NML approach and parametrization invariant, with a much simpler parametric complexity term. It compresses most strings better than the “standard” NML code, with a margin that does not depend on the sample size, and compresses only extremely few heavily unbalanced strings worse, with a margin logarithmic in the sample size. Experimental comparisons extend the results obtained with Bernoulli distributions. Both codes are very similar, with small differences that roughly increase linearly with the number of model parameters.
Altogether, the theoretical and experimental results suggest that one might use the enumerative code rather than NML in practice, for Bernoulli and multinomial distributions.
Adriaans, P. and Vit[á]{}nyi, P. (2007). The power and perils of [MDL]{}. In [*IEEE International Symposium on Information Theory*]{}, pages 2216–2220.
Boullé, M. (2011). Data grid models for preparation and modeling in supervised learning. In Guyon, I., Cawley, G., Dror, G., and Saffari, A., editors, [ *Hands-On Pattern Recognition: Challenges in Machine Learning, volume 1*]{}, pages 99–130. Microtome Publishing.
De Rooij, S. and Grünwald, P. (2009). Luckiness and regret in minimum description length inference.
Grünwald, P. (2007). [*The Minimum Description Length Principle*]{}. Adaptive computation and machine learning. MIT Press.
Guigour[è]{}s, R., Gay, D., Boull[é]{}, M., Cl[é]{}rot, F., and Rossi, F. (2015). Country-scale exploratory analysis of call detail records through the lens of data grid models. In [*[ECML/PKDD]{}*]{}, pages 37–52.
Hansen, M. and Yu, B. (2001). Model selection and the principle of minimum description length. [*Journal of the American Statistical Association*]{}, 96:746–774.
Kontkanen, P. (2009). . Department of Computer Science, series of publications A, report, 2009-11. University of Helsinki.
Kontkanen, P. and Myllymäki, P. (2007). A linear-time algorithm for computing the multinomial stochastic complexity. [*Information Processing Letters*]{}, 103(6):227–233.
Mononen, T. and Myllymäki, P. (2007). Fast [NML]{} computation for naive bayes models. In [*10th International Conference on Discovery Science*]{}, pages 151–160.
Rissanen, J. (1978). Modeling by shortest data description. [*Automatica*]{}, 14:465–471.
Rissanen, J. (1996). Fisher information and stochastic complexity. [*IEEE Transactions on Information Theory*]{}, 42(1):40–47.
Rissanen, J. (2000). Strong optimality of the normalized ML models as universal codes. [*IEEE Transactions on Information Theory*]{}, 47:1712–1717.
Roos, T., Silander, T., Kontkanen, P., and Myllymäki, P. (2008). . In [*Information Theory and Applications Workshop*]{}, pages 272–276. IEEE.
Vit[á]{}nyi, P. and Li, M. (2000). Minimum description length induction, [B]{}ayesianism, and [K]{}olmogorov complexity. [*IEEE Transactions on Information Theory*]{}, 46:446–464.
Voisine, N., Boullé, M., and Hue, C. (2009). A bayes evaluation criterion for decision trees. , 292:21–38.
---
abstract: 'We show how lasers may create fields which couple to neutral atoms in the same way that the electromagnetic fields couple to charged particles. These fields are needed for using neutral atoms as an [*analog quantum computer*]{} for simulating the properties of many-body systems of charged particles. They allow for seemingly paradoxical geometries, such as a ring where atoms continuously reduce their potential energy while moving in a closed path. We propose neutral atom experiments which probe quantum Hall effects and the interplay between magnetic fields and periodic potentials.'
author:
- 'Erich J. Mueller'
title: 'Artificial electromagnetism for neutral atoms: Escher staircase and Laughlin liquids'
---
Recently, many researchers have expressed interest in using ultracold alkali atoms as [*analog quantum computers*]{} to simulate properties of solid state systems [@simulation]. For example, the leading model of high temperature superconductivity, the Hubbard model, can be studied by placing alkali atoms in an [*optical lattice*]{} – a periodic potential formed by interfering several laser beams. Experimental realizations of the Hubbard model could show whether it captures the phenomena of high temperature superconductivity. Similarly, cold gases provide an ideal setting for studying models of quantum Hall effects [@bosonqh] and exotic phase transitions [@exotic].
A major impediment to studying some of these models, such as those describing quantum Hall effects, is the lack of fields which couple to the neutral atoms in the same way that the electric and magnetic fields couple to charged particles. Here, we show how to create these [*artificial*]{} electromagnetic fields. Since these fields are only analogies of the real electric and magnetic field they do not obey Maxwell’s equations. One can therefore create [*unphysical*]{} and counterintuitive field configurations which lead to a set of as-yet unstudied behavior. Among our examples of these seemingly [*impossible*]{} field configurations, we describe an ‘Escher staircase’ setup where atoms can move around a closed path, continually reducing their potential energy.
The literature already contains several, somewhat limited, implementations of electric and magnetic fields for neutral atoms. Experimentalists routinely use the Earth’s gravitational field as an analog of a uniform electric field [@gravelectric]. They also study systems in non-inertial frames: uniform acceleration is equivalent to a constant electric field [@acceleratinglattices], while circular motion corresponds to a uniform magnetic field [@rotating]. Recently, Jaksch and Zoller [@zoller] described a method where an effective magnetic field can be applied to two-state atoms in an appropriately designed optical lattice in the presence of an external ‘electric field’. Our approach is an elaboration of Jaksch’s, where the two-state atoms are replaced by three-state atoms. This allows us to overcome the major deficiency of Jaksch and Zoller’s scheme: we do not need an external electric field to generate the magnetic field. This improvement comes at the cost of more complicated laser configurations.
As in these prior analogs of electromagnetism, our artificial fields contain no dynamical degrees of freedom. Therefore they neither give rise to analogs of Coulomb interactions between the neutral atoms, nor do they support analogs of light.
Subsequent to the preparation of this manuscript, another scheme for generating analogs of electromagnetic fields was suggested by Sorensen, Demler, and Lukin [@sorensen]. That work uses time dependent hopping matrix elements along with a large oscillating quadrupolar potential. Compared to our approach, Sorensen et al. use a much simpler laser configuration, however there are nontrivial technical issues involved with the stability of the oscillating potential.
[**Basic Setup:**]{} Our approach relies upon creating an optical lattice with three distinct sets of minima. Each of these minima traps a different internal state of the neutral atoms. The internal states will be labeled ’${{\rm A}}$’, ’${{\rm B}}$’, and ’${{\rm C}}$’, and the minima will be labeled by their location and by the state that is trapped at that location. For example, figure 1(a) shows a one-dimensional array labeled as $\cdots$-${{\rm A}}_1$-${{\rm B}}_2$-${{\rm C}}_3$-${{\rm A}}_4$-${{\rm B}}_5$-${{\rm C}}_6$-${{\rm A}}_7$-${{\rm B}}_8$-$\cdots$. Importantly, this setting breaks parity symmetry.
Looking at this one dimensional chain, an atom in state ${{\rm A}}$, sitting in site ${{\rm A}}_4$, is immobile. The atom cannot hop to site ${{\rm B}}_5$ or ${{\rm C}}_3$, because it would need some mechanism for changing its internal state. The probability of tunneling by three sites to ${{\rm A}}_1$ or ${{\rm A}}_7$ is astronomically small.
We turn on hopping between site ${{\rm A}}_4$ and ${{\rm B}}_5$ by introducing a laser with the following properties: (i) the laser frequency $\omega_{AB}$ is close to the energy differenced between the internal states ${{\rm A}}$ and ${{\rm B}}$ (ie. $\omega_{AB}\sim E_{{\rm A}}-E_{{\rm B}}$); (ii) the laser polarization is chosen so that the transition from internal state ${{\rm A}}$ to ${{\rm B}}$ is allowed; (iii) the laser cannot induce transitions from states ${{\rm A}}$ to ${{\rm C}}$ or from ${{\rm B}}$ to ${{\rm C}}$, either because the transition is forbidden, or because the detuning is too great. One does not have to use a single laser to drive this transition but can instead use a Raman transition, which involves multiple lasers and the virtual occupation of one or more intermediate state. In the presence of this laser field, the atom can explore a two state Hilbert space. In the rotating wave approximation, the time dependent Schroedinger equation is $$\label{matham}
\textstyle
\begin{array}{c}
\textstyle i\partial_t \left(
\begin{array}{c}
\psi_{{{\rm A}}4}\\
\psi_{{{\rm B}}5}
\end{array}
\right)
=
H(t) \left(
\begin{array}{c}
\psi_{{{\rm A}}4}\\
\psi_{{{\rm B}}5}
\end{array}
\right)\\[4mm]
H(t)=\left(\begin{array}{cc}
E_{{\rm A}}&-\Omega_{{{\rm A}}{{\rm B}}}e^{-i(\omega_{{{\rm A}}{{\rm B}}}t+\phi)}\\
-\Omega_{{{\rm A}}{{\rm B}}}e^{i(\omega_{{{\rm A}}{{\rm B}}}t+\phi)}&E_{{\rm B}}\end{array}\right).
\end{array}$$ The quantum mechanical amplitude for the particle being in state ${{\rm A}}$ (${{\rm B}}$) on site ${{\rm A}}_4$ (${{\rm B}}_5$) is $\psi_{A4}$ ($\psi_{{{\rm B}}5}$). The energy of the internal states ${{\rm A}}/{{\rm B}}$ are $E_{{{\rm A}}/{{\rm B}}}$. The Rabi frequency $\Omega_{{{\rm A}}{{\rm B}}}$ is proportional to the product of the laser amplitude and the overlap between the states trapped in ${{\rm A}}_4$ and ${{\rm B}}_5$. We take $\Omega_{{{\rm A}}{{\rm B}}}$ to be real, and introduce a phase $\phi$, which is related to the phase of the coupling laser. In particular, if we translated the entire lattice by some distance $\bf r$, the phase $\phi$ would change by $\phi\to\phi+{\bf q\cdot r}$, where $\bf q$ is the wave-vector of the coupling laser [@raman].
This, and future Hamiltonians are more compactly written in a second quantized notation, $$\begin{aligned}
H&=&E_{{\rm A}}\hat\psi_{{{\rm A}}4}^\dagger \hat\psi_{{{\rm A}}4}
+E_{{\rm B}}\hat\psi_{{{\rm B}}5}^\dagger \hat\psi_{{{\rm B}}5}\\\nonumber&&
-\Omega_{{{\rm A}}{{\rm B}}}\left(e^{-i(\omega_{{{\rm A}}{{\rm B}}}t+\phi)}
\hat\psi_{{{\rm A}}4}^\dagger \hat\psi_{{{\rm B}}5}
+e^{i(\omega_{{{\rm A}}{{\rm B}}}t+\phi)}
\hat\psi_{{{\rm B}}5}^\dagger \hat\psi_{{{\rm A}}4}\right)\end{aligned}$$ where, for example, creation and annihilation operators $\hat\psi_{{{\rm A}}4}^\dagger$ and $\hat\psi_{{{\rm A}}4}$ add and remove a particle from site ${{\rm A}}_4$ in internal state ${{\rm A}}$. In the non-interacting system, the operators $\hat \psi$ obey the same equations of motion as the wave-function $\psi$ in (\[matham\]). At the single-particle level it does not matter whether we use bosonic or fermionic commutation relations. Where no confusion will result, we may neglect the letter ${{\rm A}}$ which denotes the internal state.
We apply time-dependent canonical transformations of the form $\hat\psi_j\to e^{i f(t)}\hat\psi_j$, $\hat\psi_j^\dagger\to e^{-i f(t)}\hat\psi_j^\dagger$. As is readily verified from the equations of motion (\[matham\]), under this transformation the Hamiltonian becomes $H\to H-f^\prime(t) \hat\psi_j^\dagger \hat\psi_j$. In particular we can construct a time independent Hamiltonian by transforming into the ‘rotating frame,’ $$\begin{aligned}
\hat\psi_{{{\rm A}}4}&\to& e^{-i(E_{{\rm A}}t-\phi)}\hat\psi_{{{\rm A}}4}\\
\hat\psi_{{{\rm B}}5}&\to& e^{-i(E_{{\rm B}}t+\Delta_{{{\rm A}}{{\rm B}}})}\hat\psi_{{{\rm B}}5}\\
H&=&-\tau (\hat\psi_{4}^\dagger \psi_{5}+ \hat\psi_{5}^\dagger \psi_{4})
+\Delta \psi_5^\dagger\psi_5,\end{aligned}$$ where $\tau=\Omega_{{{\rm A}}{{\rm B}}}$ and $\Delta=\omega_{{{\rm A}}{{\rm B}}}-(E_{{\rm A}}-E_{{\rm B}})$. Introducing two more lasers, coupling states ${{\rm B}}$-${{\rm C}}$, and ${{\rm C}}$-${{\rm A}}$ with appropriately chosen intensities and detunings, this same procedure yields the Hamiltonian $$H=\textstyle\sum_j\left( j \Delta (\hat\psi_j^\dagger\hat\psi_j)-\tau
(\hat\psi_j^\dagger\hat\psi_{j+1}+\hat\psi_{j+1}^\dagger\hat\psi_j)
\right),$$ corresponding to a one-dimensional chain of sites in a uniform electric field. As is shown below, this same approach can produce electric fields in higher dimensions. In this case, momentum transfer from the lasers will generate an effective magnetic field. [**Higher Dimensions:**]{} In more complicated geometries there may not be a Canonical transformation which leads to a time independent Hamiltonian. However, the time dependence takes a simple form if one transforms $$\hat\psi_{\mu_j j} \to e^{-iE_{\mu_j }t} \hat\psi_{\mu_j j},$$ where $j$ labels the site located at $\bf r_j$, and $\mu_j={{\rm A}},{{\rm B}},{{\rm C}}$ gives the internal state which is trapped at that site. The Hamiltonian then becomes $$\label{hop}
H=-\sum_{\langle i j\rangle} \tau_{\mu_i\mu_j}\left(e^{i {\bf q}_{\mu_i\mu_j}\cdot {\bf R_{ij}} }e^{-i\Delta_{\mu_i\mu_j}t}
\psi_{\mu_ii}^\dagger \psi_{\mu_jj}+{\rm H.C.}\right).$$ The sum includes all nearest neighbor sites $\langle ij\rangle$. The internal state trapped at site $i$ is $\mu_i$. The bond position is ${\bf R_{ij}}=({\bf r_i+r_j})/2$. The hopping is $\tau_{\mu\nu}=\Omega_{\mu\nu}$ for $\mu\neq\nu$, and $\tau_{\mu\mu}=\tau_0$. The parameter $\tau_0$ is given by the overlap of the wavefunctions on neighboring sites. The wave-vector of the laser coupling state $\mu$ to $\nu$ is ${\bf q_{\mu\nu}}$ (so $\bf q_{\mu\mu}=0$). The detuning is $\Delta_{\mu\nu}=\omega_{\mu\nu}-(E_\mu-E_\nu)$ when $\mu\neq\nu$, and $\Delta_{\mu\mu}=0$. The letters ${\rm H.C.}$ denote the Hermitian conjugate of the previous term.
If all of the laser intensities are adjusted so that $\tau_{\mu\nu}=\tau_0$ for all $\mu,\nu$, then equation (\[hop\]) is formally the equation of motion of a particle with charge $e$ in a vector potential defined on the bonds by $$\label{vecpot}
\frac{e}{c}{\bf A(R_{ij})\cdot r_{ij}} ={\bf q}_{\mu_i\mu_j}\cdot {\bf R_{ij}}-\Delta_{\mu_i\mu_j} t,$$ where ${\bf r_{ij}=r_i-r_j}$.
Using this mapping to a vector potential, we can construct many interesting field configurations. For example, consider a lattice with the striped geometry shown in figure 1(b), where as one moves in the $\bf\hat x$ direction, one encounters alternating rows of sites ${{\rm A}}$, ${{\rm B}}$, and ${{\rm C}}$. With this geometry, only the x-component of the vector potential, $A_x$, will be non-zero. In the simplest case, where each of the three coupling lasers has the same wave-vector $\bf q$ and detuning $\Delta$, the vector potential is ${\bf A(r)}={\bf \hat x}(c/ed)({\bf q\cdot r}-\Delta t)$, where $d$ is the lattice spacing. This corresponds to a uniform electric field ${\bf E} = -{\bf \hat x} \Delta c/ed$ and a uniform magnetic field ${\bf B}= {\bf \hat x\times q} (c/ed)$. By changing the relative angle between $\bf q$ and the $\bf\hat x$ axis, one can control the strength of the magnetic field. Since the recoil momentum $q$ can be made comparable to the inverse lattice spacing, one should be able to construct extremely large fields where the flux through a unit cell of the lattice exceeds the magnetic flux quantum $\Phi_0=2\pi c/e$.
If $q$ is aligned with the hopping direction, then the effective magnetic field vanishes, resulting in an electric field without a magnetic field.
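The bond phases of equation (\[hop\]) can be made concrete with a small numerical sketch (ours, with an arbitrary lattice size and recoil angle, not taken from the text): building the static ($\Delta=0$) hopping matrix for the striped geometry, one can read off a flux of $q_y d/2\pi$ flux quanta per plaquette and diagonalize to obtain the corresponding Hofstadter-like single-particle spectrum.

```python
import numpy as np

Lx, Ly, d, tau = 12, 12, 1.0, 1.0
q = np.array([0.0, 2 * np.pi / 3 / d])      # recoil wave-vector, tilted away from x

def site(ix, iy):
    return ix * Ly + iy

H = np.zeros((Lx * Ly, Lx * Ly), complex)
for ix in range(Lx):
    for iy in range(Ly):
        i = site(ix, iy)
        if ix + 1 < Lx:                      # x-bond: inter-species, phase q . R_ij
            R = np.array([(ix + 0.5) * d, iy * d])
            H[site(ix + 1, iy), i] += -tau * np.exp(1j * q @ R)
        if iy + 1 < Ly:                      # y-bond: same species, no phase
            H[site(ix, iy + 1), i] += -tau
H = H + H.conj().T

# Gauge-invariant flux per plaquette, in units of the flux quantum:
print(q[1] * d / (2 * np.pi))                # -> 1/3 for this choice of q
print(np.sort(np.linalg.eigvalsh(H))[:5])    # low-lying Hofstadter-like levels
```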
[**Applications:**]{} Earlier we introduced some interesting problems which could be addressed by applying effective electric and magnetic fields to a system of particles on a lattice. Here we discuss some further possibilities.
At moderate values of the “magnetic field", experiments could explore how the periodic potential affects vortex structures in a Bose condensate [@duine]. One could also study vortex physics near “pairing transitions" where the structure of vortices changes [@pairing].
At much larger fields ($\Phi\sim\Phi_0$) Jaksch and Zoller [@zoller] recently discussed the exciting idea of using neutral atoms to study the fractal energy spectrum that Hofstadter [@hofstadter] predicted for noninteracting charged particles on a lattice in a magnetic field. The spectral gaps would be observable as plateaus in the density of noninteracting harmonically trapped fermions. It would be even more exciting to explore an interacting system in this same regime, and study fractional quantum Hall physics, and the interplay between quantum Hall effects, Mott insulating physics, and this fractal single-particle spectrum [@qhlat]. The simplest such experiment would use the geometry in figure 1(b), and allow the system to equilibrate with $\Delta=0$. All single particle observables are measurable through imaging, while photoassociation provides access to the short range pair correlation function[@bosonqh]. Some transport measurements are possible by detuning the lasers so that $\Delta\neq0$.
Several authors have shown that for filling fractions $1/2<\nu<6$, bosons with short range interactions in a strong magnetic field will form non-trivial many-body states [@bosonqh]. Fermions are more tricky, as the s-wave interactions (which dominate at low temperatures) cannot lead to fractional quantum Hall physics. However, resonantly enhanced p-wave interactions can lead to such correlated states [@regnault].
Previous proposals for creating analogs of quantum Hall states in cold atoms relied upon rotation to provide the effective vector potential. Such schemes are made difficult by the need to carefully balance the centripetal force which maintains rotation and the harmonic trapping potential. The window of rotation speeds for finding strongly-correlated physics falls off with the inverse of the number of particles. The present approach does not require this delicate balancing of forces, and therefore allows one to study these effects in a macroscopic system.
Not only are magnetic fields of interest, but so are large electric fields. For example Sachdev et al. [@sachdev] have discussed the intricate Mott-Insulator states which are found when the ‘voltage difference’ between neighboring wells is comparable to the on-site repulsion. The method presented here is a powerful tool for studying such states.
[**Unphysical Fields:**]{} We once again emphasize that although $\bf A$ couples to the neutral atoms as if it were a vector potential, it does not obey Maxwell’s equations. Consequently, one can engineer seemingly paradoxical geometries. Consider, for instance, the ring of sites illustrated in figure 1c, with all detunings set equal. According to equation (\[vecpot\]), there is a uniform ‘electric’ field pointing along the chain. Thus a particle can move around the ring, continuously moving to a lower potential energy, returning to the starting point, but (by conservation of energy) having gained a great deal of kinetic energy. One can repeat the process [*ad infinitum*]{}; the maximum velocity is limited only by Umklapp processes. That is, when the particle's de Broglie wavelength becomes comparable to the intersite distance, the matter-wave is Bragg reflected off of the lattice, and reverses direction. If the chain were not bent in a circle, this reflection would lead to the familiar Bloch oscillations. No conservation laws are violated by this continuous acceleration, as the lasers provide a source of energy and momentum.
This bizarre situation where a particle can reduce its potential energy by moving in a closed path is reminiscent of the optical illusion in MC Escher’s print “Ascending and Descending," where a staircase forms a continuously descending closed loop. The quantum mechanical properties of a particle in such a chain of $N$ sites are ascertained by noting that the Hamiltonian, $H=-\tau \sum_{j=1}^N( e^{i\delta t } \psi_j^\dagger \psi_{j+1}+e^{-i\delta t} \psi_{j+1}^\dagger \psi_{j}),$ with $\psi_{N+1}\equiv \psi_1$, is translationally invariant, and therefore extraordinarily simple in momentum space. In terms of operators $a_k=\sum_j e^{-2\pi i j k/N}\psi_j/\sqrt{N}$, the Hamiltonian is diagonal, $H=\sum_k E_k(t) a_k^\dagger a_k.$ The eigenvalues $E_k(t)=-2 \tau \cos(2\pi k/N+\delta t)$ are time dependent, reflecting the non-equilibrium nature of the system. The motion of a wave-packet is determined by the instantaneous phase velocity $$\begin{aligned}
v &=& \frac{d N}{2\pi} \frac{\partial E_k}{\partial k} = 2 \tau d \sin(2\pi k/N + \delta t),\end{aligned}$$ which oscillates as a function of time. The factor of $dN/2\pi$, where $d$ is the intersite spacing, converts the velocity into physical units. This oscillation is exactly the Bragg diffraction previously mentioned. During one period of oscillation, the particle moves around the ring approximately $2 \tau/(N\delta)$ times.
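Because $H(t)$ is diagonal in momentum space at all times, the evolution can be written down exactly; the sketch below (our own illustration — the packet width, $N$, $\tau$ and $\delta$ are arbitrary choices) uses this to track the packet as it circulates and Bragg-oscillates around the ring.

```python
import numpy as np

N, tau, d, delta = 60, 1.0, 1.0, 0.02
j = np.arange(N)
k = np.arange(N)

# Gaussian packet centred on the ring, and its momentum amplitudes a_k(0)
psi0 = np.exp(-0.5 * ((j - N / 2) / 4.0) ** 2).astype(complex)
psi0 /= np.linalg.norm(psi0)
a0 = np.fft.fft(psi0) / np.sqrt(N)

def position(t):
    # dynamical phase int_0^t E_k dt' for E_k(t') = -2 tau cos(2 pi k/N + delta t')
    intE = -(2 * tau / delta) * (np.sin(2 * np.pi * k / N + delta * t)
                                 - np.sin(2 * np.pi * k / N))
    psi = np.fft.ifft(np.exp(-1j * intE) * a0) * np.sqrt(N)
    # circular mean of the packet position (well defined on a ring)
    return (N / (2 * np.pi)) * np.angle(np.sum(np.abs(psi) ** 2
                                               * np.exp(2j * np.pi * j / N)))

for t in np.linspace(0, 2 * np.pi / delta, 9):   # one oscillation period
    print(round(t, 1), round(position(t) % N, 2))
```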
A more complicated geometry with similar paradoxical properties is illustrated in figure 1d. In this structure, a triangular lattice is formed from three interpenetrating sublattices with wells of type $A$, $B$, and $C$. Here, a constant detuning yields a very intricate ‘unphysical’ electric field configuration: arrows depict directions in which hopping reduces the potential energy. Upon traversing alternate plaquettes, a particle can continuously increase, or decrease its potential energy. To understand the behavior of a particle in this lattice, one once again relies upon translational invariance, and introduces operators $a_k=\sum_{\bf r} \psi_{\bf r} e^{-ik\cdot r}$, where $k$ lies in the first Brillouin zone (BZ) of the triangular lattice, and the sum is over all lattice sites. The Hamiltonian is then $$\begin{aligned}
H &=& -\tau \sum_r \left[e^{i\delta t} \left({\textstyle\sum_{j=1}^3 \psi_r^\dagger \psi_{r+r_j}}
\right)+{\rm H.C.}\right]\\
&=&-2\tau \int_{BZ}\frac{d^2k}{\Omega} a_k^\dagger a_k \textstyle \sum_{j=1}^3 \cos({\bf k\cdot r_j}+\delta t) .\end{aligned}$$ The lattice generators $\{{\bf r_1,r_2,r_3}\}$ connect nearest neighbor sites, and are illustrated by arrows in figure 1d. Only two of these generators are linearly independent ($\bf r_1+r_2+r_3=0$). The area of the first Brillouin zone is $\Omega=8\pi^2/\sqrt{3} d^2$, where $d$ is the lattice spacing. Again, the group velocity of a wave packet is simply the gradient of the energy $E_k=-2\tau \sum_j \cos({\bf k\cdot r_j}+\delta t)$. Of particular note is the fact that at the zone center ($k=0$) the group velocity is always zero. Thus a stationary packet remains stationary. This result is not surprising, since there is nothing in the geometry which picks out a direction in which the packet could start to move.
More surprising is the fact that the effective mass, related to the curvature of $E_k$, is oscillatory at $k=0$, spending equal amounts of time positive and negative. When the effective mass is negative, quantum diffusion acts opposite to its normal behavior, and wave packets become sharper. Thus localization occurs: the wave packet’s size oscillates periodically, rather than continually growing. Similarly, if the packet has a small momentum with $|k|\ll 2\pi/d$, then the particle does not simply propagate ballistically, but its velocity oscillates sinusoidally about $v=0$, and the particle is trapped near its initial location.
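The sign-oscillation of the effective mass can be made explicit with a short small-$k$ expansion of the dispersion quoted above (a step we add for clarity, using only $\sum_j {\bf r_j}=0$ and $\sum_j r_j^a r_j^b = \tfrac{3}{2}d^2\delta^{ab}$ for the three generators of length $d$): $$E_k=-2\tau\sum_{j=1}^3\cos({\bf k\cdot r_j}+\delta t)\simeq -6\tau\cos(\delta t)+\tau\cos(\delta t)\sum_{j=1}^3({\bf k\cdot r_j})^2
=-6\tau\cos(\delta t)+\tfrac{3}{2}\,\tau d^2\cos(\delta t)\,|{\bf k}|^2 ,$$ so the inverse effective mass $\partial^2 E_k/\partial k^2=3\tau d^2\cos(\delta t)$ indeed spends equal times positive and negative.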
[**Physical Realization:**]{} There are many ways to engineer the three-state lattices described above. The difficult task is to produce the confinement and Raman couplings with a small number of lasers in a geometry which can be easily implemented. A detailed analysis of the various configurations goes beyond the scope of this paper, and a more comprehensive article is in preparation.
A key idea is that if the internal states are related by symmetries (e.g., a spin-1 multiplet), then the various traps can be created by the same lasers, and the ($A$-$B$) and ($B$-$C$) Raman transitions can use the same drive. Driving transitions with microwave or RF fields, rather than lasers, will reduce the need for optical access [@rf].
An alternative approach is to note that one can create analogs of electromagnetic fields even if the sites $A$, $B$, and $C$, trap atoms in the same state. One can instead rely on a superlattice structure, where the energies of the three sites differ by large amounts [@superlattice]. Hopping is only possible if a Raman laser supplies the missing energy; detuning and recoil give the same effects as in the case with different internal states.
[99]{}
W. Hofstetter, J.I. Cirac, P. Zoller, E. Demler, M.D. Lukin, Phys. Rev. Lett. [**89**]{}, 220407 (2002); E. Jané, G. Vidal, W. Dür, P. Zoller, J.I. Cirac, Quant. Inf. and Comp. [**3**]{}, 15 (2003).
N.R. Cooper and N.K. Wilkin, Phys. Rev. B [**60**]{}, R16279 (1999); S. Viefers, T.H. Hansson, and S.M. Reimann, Phys. Rev. A [**62**]{}, 053604 (2000); N.K. Wilkin and J.M.F. Gunn, Phys. Rev. Lett. [**84**]{}, 6 (2000); N.R. Cooper, N.K. Wilkin, and J.M.F. Gunn, [*ibid.*]{} [**87**]{}, 120405 (2001); B. Paredes, P. Fedihev, J.I. Cirac, and P. Zoller, [*ibid.*]{} [**87**]{}, 010402 (2001); T.-L. Ho and E. J. Mueller, [*ibid.*]{} [**89**]{}, 050401 (2002); J.W. Reijnders, F.J.M. van Lankvelt, K. Schoutens, and N. Read, [*ibid.*]{} 120401 (2002); B. Paredes, P. Zoller, and J.I. Cirac, Phys. Rev. A [**66**]{}, 033609 (2002); N. Regnault, and Th. Jolicoer, Phys. Rev. Lett. [**91**]{}, 030402 (2003).
T. Senthil, L. Balents, S. Sachdev, A. Vishwanath, and M. P. A. Fisher, cond-mat/0312617.
B. P. Anderson and M. A. Kasevich, Science [**282**]{}, 1686 (1998); K. W. Madison, C. F. Bharucha, P. R. Morrow, S. R. Wilkinson, Q. Niu, B. Sundaram, and M. G. Raizen, Appl. Phys. B [**65**]{}, 693 (1997).
K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, Phys. Rev. Lett. [**84**]{}, 806 (2000) J. R. Abo-Shaeer, C. Raman, J. M. Vogels, and W. Ketterle, Science [**292**]{}, 476 (2001); P. C. Haljan, I. Coddington, P. Engels, and E. A. Cornell, Phys. Rev. Lett. [**87**]{}, 210403 (2001) E. Hodby, G. Hechenblaikner, S. A. Hopkins, O. M. Maragò, and C. J. Foot, Phys. Rev. Lett. [**88**]{}, 010405 (2002) D. Jaksch and P. Zoller, New J. Phys. [**5**]{}, 56 (2003) A. Sorensen, E. Demler, and M. Lukin, cond-mat/0405079 (2004).
If the coupling involves a multi-photon Raman transition, then the wave-vector $q$ is the appropriate sum/difference of the wave-vectors of each of the lasers, corresponding to the recoil momentum associated with the transition. Colinear Raman beams yield $q\sim0$.
J. W. Reijnders and R. A. Duine, cond-mat/0401583 (2004); H. Pu, L. O. Baksmaty, S. Yi, and N. P. Bigelow, cond-mat/0404750 (2004).
M. W. J. Romans, R. A. Duine, S. Sachdev, and H. T. C. Stoof, cond-mat/0312446; L. Radzihovsky, J. Park, and P. B. Weichman, cond-mat/0312237; A. Kuklov, N. Prokof’ev, and B. Svistunov, Phys. Rev. Lett. [**92**]{}, 050402 (2004); Phys. Rev. A [**69**]{}, 025601 (2004).
D. R. Hofstadter, Phys. Rev. B [**14**]{}, 2239 (1976); X.-G. Wen and Y.-S. Wu, Phys. Rev. Lett. [**70**]{}, 1501 (1993); D. Pfannkuche and A. H. MacDonald, Phys. Rev. B [**56**]{}, R7100 (1997).
N. Regnault, and Th. Jolicoeur, cond-mat/0404093 (2004).
S. Sachdev, K. Sengupta, and S. M. Girvin, Phys. Rev. B [**66**]{}, 075128 (2002).
To achieve large analog magnetic fields, at least one transition must be driven by Raman lasers.
S. Peil, J. V. Porto, B. L. Tolra, J. M. Obrecht, B. E. King, M. Subbotin, S. L. Rolston, and W. D. Phillips, Phys. Rev. A, [**67**]{}, 051603 (2003).
---
abstract: 'Speech-to-text translation (ST) has recently become an increasingly popular topic of research, partly due to the development of benchmark datasets. Nevertheless, current datasets cover a limited number of languages. With the aim to foster research in massive multilingual ST and ST for low resource language pairs, we release CoVoST 2, a large-scale multilingual ST corpus covering translations from 21 languages into English and from English into 15 languages. This represents the largest open dataset available to date from total volume and language coverage perspective. Data sanity checks provide evidence about the quality of the data, which is released under CC0 license. We also provide extensive speech recognition, bilingual and multilingual machine translation and ST baselines.'
author:
- |
Changhan Wang$^{\star}$, Anne Wu$^{\star}$, Juan Pino$^{\star}$\
Facebook AI\
`{changhan,annewu,juancarabina}@fb.com`\
bibliography:
- 'eacl2021.bib'
title: 'CoVoST 2: A Massively Multilingual Speech-to-Text Translation Corpus'
---
Introduction
============
The development of benchmark datasets, such as MuST-C [@di-gangi-etal-2019-must], Europarl-ST [@iranzosnchez2019europarlst] or CoVoST [@wang-etal-2020-covost], has greatly contributed to the increasing popularity of speech-to-text translation (ST) as a research topic. MuST-C provides TED talks translations from English into 8 European languages, with data amounts ranging from 385 hours to 504 hours, thereby encouraging research into end-to-end ST [@alex2016listen] as well as one-to-many multilingual ST [@gangi2019onetomany]. Europarl-ST offers translations between 6 European languages, with a total of 30 translation directions, enabling research into many-to-many multilingual ST [@inaguma2019multilingual]. The two corpora described so far involve European languages that are in general high resource from the perspective of machine translation (MT) and speech. CoVoST is a multilingual and diversified ST corpus from 11 languages into English, based on the Common Voice project [@ardila-EtAl:2020:LREC]. Unlike previous corpora, it involves low resource languages such as Mongolian and it also enables many-to-one ST research. Nevertheless, for all corpora described so far, the number of languages involved is limited.
In this paper, we describe CoVoST 2, an extension of CoVoST [@wang-etal-2020-covost] that provides translations from English (En) into 15 languages—Arabic (Ar), Catalan (Ca), Welsh (Cy), German (De), Estonian (Et), Persian (Fa), Indonesian (Id), Japanese (Ja), Latvian (Lv), Mongolian (Mn), Slovenian (Sl), Swedish (Sv), Tamil (Ta), Turkish (Tr), Chinese (Zh)—and from 21 languages into English, including the 15 target languages as well as Spanish (Es), French (Fr), Italian (It), Dutch (Nl), Portuguese (Pt), Russian (Ru). The overall speech duration is extended from 700 hours to 2880 hours. The total number of speakers is increased from 11K to 78K. We make data available at <https://github.com/facebookresearch/covost> under CC0 license.
Dataset Creation
================
Data Collection and Quality Control
-----------------------------------
Translations are collected from professional translators the same way as for CoVoST. We then conduct sanity checks based on language model perplexity, LASER [@artetxe-schwenk-2019-margin] scores and a length ratio heuristic in order to ensure the quality of the translations. Length ratio and LASER score checks are conducted as in the original version of CoVoST. For language model perplexity checks, 20M lines are sampled from the OSCAR corpus [@ortiz-suarez-etal-2020-monolingual] for each CoVoST 2 language, except for English and Russian, for which pre-trained language models [@ng-etal-2019-facebook] are utilized[^1]. 5K lines are reserved for validation and the rest for training. BPE vocabularies of size 20K are then built on the training data, with character coverage 0.9995 for Japanese and Chinese and 1.0 for other languages. A Transformer *base* model [@vaswani2017attention] is then trained for up to 800K updates. Professional translations are ranked by perplexity and the ones with the lowest perplexity are manually examined and sent for re-translation as appropriate. In the data release, we mark out the sentences that cannot be translated properly[^2].
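A minimal sketch of this kind of filtering is given below (our own illustration: the threshold values, function name and the way perplexity scores are supplied are assumptions, not the released tooling):

```python
def sanity_check(pairs, perplexities, ratio_bounds=(0.5, 2.0)):
    """Flag translation pairs with an implausible character-length ratio and
    rank the remainder by language-model perplexity, so that the extreme
    items can be manually examined and, if needed, sent for re-translation."""
    flagged, kept = [], []
    for (src, tgt), ppl in zip(pairs, perplexities):
        ratio = len(tgt) / max(len(src), 1)
        if ratio_bounds[0] <= ratio <= ratio_bounds[1]:
            kept.append((ppl, src, tgt))
        else:
            flagged.append((src, tgt))
    kept.sort()          # items at the extremes of the ranking get inspected
    return flagged, kept
```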
Dataset Splitting
-----------------
Original Common Voice (CV) dataset splits utilize only one sample per sentence, while there are potentially multiple samples (speakers) available in the raw dataset. To allow higher data utilization and speaker diversity, we add part of the discarded samples back while keeping the speaker set disjoint and the same sentence assignment across different splits. We refer to this extension as CoVoST splits. As a result, data utilization is increased from 44.2% (1273 hours) to 78.8% (2270 hours). By default, we use the CoVoST train split for model training and the CV dev (test) split for evaluation. The complementary CoVoST dev (test) split is useful in the multi-speaker evaluation [@wang-etal-2020-covost] to analyze model robustness, but the large number of repeated sentences (e.g. in English and German) may skew the overall BLEU (WER) scores.
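The splitting logic can be summarized by the following sketch (an assumed re-implementation for illustration only; identifiers and data structures are ours, not the released scripts):

```python
from collections import defaultdict

def extend_splits(cv_split_of_sentence, all_samples):
    """Every sentence keeps the split it has in the original CV release, and
    discarded recordings of that sentence are added back as long as the
    speaker sets of train/dev/test remain disjoint.  `all_samples` is an
    iterable of (speaker, sentence) pairs; `cv_split_of_sentence` maps a
    sentence to 'train' | 'dev' | 'test'."""
    speaker_split = {}              # the split a speaker is first assigned to
    splits = defaultdict(list)
    for speaker, sentence in all_samples:
        split = cv_split_of_sentence.get(sentence)
        if split is None:
            continue                # sentence not in the CV release
        owner = speaker_split.setdefault(speaker, split)
        if owner == split:          # keeps speaker sets disjoint across splits
            splits[split].append((speaker, sentence))
    return splits
```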
Statistics
----------
Basic statistics of CoVoST 2 are listed in Table \[tab:covost\_stats\], including speech duration, speaker counts as well as token counts for both transcripts and translations. As we can see, CoVoST 2 is diversified with large sets of speakers even on some of the low-resource languages (e.g. Persian, Welsh and Dutch). Moreover, they are distributed widely across 66 accent groups, 8 age groups and 3 gender groups.
Models
======
Our speech recognition (ASR) and ST models share the same BLSTM-based encoder-decoder architecture [@berard2018end], which is similar to the Listen, Attend and Spell (LAS) architecture [@chan2016listen; @chiu2017stateoftheart; @park2019specaugment]. Specifically, on the encoder side, audio features $\textbf{x}\in\mathbb{R}^{T\times d_0}$ are first fed into a two-layer DNN with $tanh$ activations and hidden sizes $d_1$ and $d_2$. Then two 2D convolutional layers with kernel size $3$x$3$ and stride $2$x$2$ are applied to reduce the sequence length to $\frac{T}{4}$. Both convolutional layers have 16 output channels and project the features to $4d_2$ dimensions after flattening. Finally, the features are passed to a stack of $l_e$ bidirectional LSTM layers of hidden size $d_3$ to form encoder output states $\textbf{h}\in\mathbb{R}^{T\times 2d_3}$. For the decoder side, a stack of $l_d$ LSTM layers with hidden size $2d_3$ and additive attention [@bahdanau2014neural] is applied, followed by a linear projection to size $d_o$. In the multilingual setting (En$\rightarrow$All and All$\rightarrow$All), we follow @inaguma2019multilingual to force decoding into a given language by using a target language ID as the first token.
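For concreteness, a minimal PyTorch sketch of the encoder side is shown below (our reading of the description, not the released Fairseq implementation; the default sizes follow the values quoted in the experimental settings, and padding is chosen so that the time axis shrinks to roughly $T/4$):

```python
import torch
import torch.nn as nn

class BLSTMSpeechEncoder(nn.Module):
    """Two-layer tanh DNN, two 3x3 stride-2 conv layers with 16 channels,
    and a stack of bidirectional LSTMs, as described in the text."""

    def __init__(self, d0=80, d1=256, d2=128, d3=512, num_lstm_layers=5):
        super().__init__()
        self.dnn = nn.Sequential(nn.Linear(d0, d1), nn.Tanh(),
                                 nn.Linear(d1, d2), nn.Tanh())
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1))
        self.blstm = nn.LSTM(input_size=4 * d2, hidden_size=d3,
                             num_layers=num_lstm_layers,
                             bidirectional=True, batch_first=True)

    def forward(self, x):                    # x: (B, T, d0) log-mel features
        h = self.dnn(x)                      # (B, T, d2)
        h = self.conv(h.unsqueeze(1))        # (B, 16, ~T/4, d2/4)
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)   # (B, ~T/4, 4*d2)
        out, _ = self.blstm(h)               # (B, ~T/4, 2*d3)
        return out

# e.g. BLSTMSpeechEncoder()(torch.randn(2, 400, 80)).shape -> (2, 100, 1024)
```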
For MT, we use a Transformer *base* architecture [@vaswani2017attention] with $l_e$ encoder layers, $l_d$ decoder layers, 0.3 dropout, and shared embeddings for encoder/decoder inputs and decoder outputs. For multilingual models, encoders and decoders are shared as preliminary experimentation showed that this approach was competitive.
Experiments
===========
We provide MT, cascaded ST and end-to-end ST baselines under bilingual settings as well as multilingual settings: All$\rightarrow$En (A2E), En$\rightarrow$All (E2A) and All$\rightarrow$All (A2A). Similarly for ASR, we provide both monolingual and multilingual baselines.
Experimental Settings
---------------------
For all texts, we normalize the punctuation and build vocabularies with SentencePiece [@kudo-richardson-2018-sentencepiece] without pre-tokenization. For ASR and ST, character vocabularies with 100% coverage are used. For bilingual MT models, BPE [@sennrich-etal-2016-neural] vocabularies of size 5k are learned jointly on both transcripts and translations. For multilingual MT models, BPE vocabularies of size 40k are created jointly on all available source and target text. For MT and language pair $a$-$b$, we also contrast using only $a$-$b$ training data and both $a$-$b$ and $b$-$a$ training data. The latter setting is referred to as +Rev subsequently.
We extract 80-channel log-mel filterbank features (windows with 25ms size and 10ms shift) using Kaldi [@povey2011kaldi], with per-utterance cepstral mean and variance normalization applied. We remove training samples having more than 3,000 frames or more than 512 characters for GPU memory efficiency.
For ASR and ST, we set $d_1=256$, $d_2=128$, $d_3=512$ and $d_o=128$. We use $l_e=3$ and $l_d=2$ for bilingual models and $l_e=5$ and $l_d=3$ for multilingual models. We adopt SpecAugment [@park2019specaugment] (LB policy without time warping) to alleviate overfitting. To accelerate model training, we pre-train all non-English ASR and all ST models with the English ASR model encoder. For MT, we set $l_e=l_d=3$ for bilingual models and $l_e=l_d=6$ for multilingual models. All models are implemented in Fairseq [@ott2019fairseq].
We use a beam size of 5 for all models and length penalty 1. We use the best checkpoint by validation loss for MT, and average the last 5 checkpoints for ASR and ST. For MT and ST, we report case-sensitive detokenized BLEU [@papineni2002bleu] using sacreBLEU [@post-2018-call] with default options, except for English-Chinese and English-Japanese where we report character-level BLEU. For ASR, we report character error rate (CER) on Japanese and Chinese (no word segmentation) and word error rate (WER) on the other languages using VizSeq [@wang2019vizseq]. Before calculating WER (CER), sentences are tokenized by sacreBLEU tokenizers, lowercased and with punctuation removed (except for apostrophes and hyphens).
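The scoring protocol can be mimicked with the sketch below (ours: the sacreBLEU "char" tokenizer is used as a stand-in for character-level BLEU, and a simple edit-distance WER replaces the VizSeq scorer; the exact settings used for the reported numbers may differ):

```python
import re
import sacrebleu

def st_bleu(hyps, refs, tgt_lang):
    """Case-sensitive detokenized BLEU; character-level for Chinese/Japanese."""
    tok = "char" if tgt_lang in {"zh", "ja"} else "13a"
    return sacrebleu.corpus_bleu(hyps, [refs], tokenize=tok).score

def _normalize(text):
    # lowercase and strip punctuation except apostrophes and hyphens
    return re.sub(r"[^\w\s'\-]", " ", text.lower()).split()

def wer(hyps, refs):
    """Word error rate via Levenshtein distance over normalized tokens."""
    errors, length = 0, 0
    for hyp, ref in zip(hyps, refs):
        h, r = _normalize(hyp), _normalize(ref)
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i
        for j in range(len(h) + 1):
            d[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                              d[i - 1][j - 1] + (r[i - 1] != h[j - 1]))
        errors += d[len(r)][len(h)]
        length += len(r)
    return errors / max(length, 1)
```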
Monolingual and Bilingual Baselines
-----------------------------------
Table \[tab:mono\_mt\_st\_results\] reports monolingual ASR baselines, as well as bilingual MT, cascaded ST (C-ST) and end-to-end ST baselines. As expected, the quality of transcriptions and translations is very dependent on the amount of training data per language pair. The poor results obtained on low resource pairs can be improved by leveraging training data from the opposite direction for MT and C-ST. These results serve as baselines for the research community to improve upon, for example via multilingual training, self-supervised pre-training or semi-supervised learning.
Multilingual Baselines
----------------------
A2E, E2A and A2A baselines are reported in Table \[tab:to\_en\_st\_results\] for language pairs into English and in Table \[tab:from\_en\_st\_results\] for language pairs out of English. Multilingual modeling is shown to be a promising direction for low-resource ST.
Conclusion
==========
We introduced CoVoST 2, the largest speech-to-text translation corpus to date for language coverage and total volume, with 21 languages into English and English into 15 languages. We also provided extensive monolingual, bilingual and multilingual baselines for ASR, MT and ST. CoVoST 2 is free to use under CC0 license and enables the research community to develop methods including, but not limited to, massive multilingual modeling, ST modeling for low resource languages, self-supervision for multilingual ST, semi-supervised modeling for multilingual ST.
[^1]: <https://github.com/pytorch/fairseq/tree/master/examples/language_model>
[^2]: They are mostly extracted from articles without context, which lack clarity for appropriate translations.
---
abstract: 'We consider the Lagrangian of gravity covariantly amended by the mass and polynomial interaction terms with arbitrary coefficients, and reinvestigate the consistency of such a theory in the decoupling limit, up to the fifth order in the nonlinearities. We calculate explicitly the self-interactions of the helicity-0 mode, as well as the nonlinear mixing between the helicity-0 and -2 modes. We show that ghost-like pathologies in these interactions disappear for special choices of the polynomial interactions, and argue that this result remains true to all orders in the decoupling limit. Moreover, we show that the linear, and some of the nonlinear mixing terms between the helicity-0 and -2 modes can be absorbed by a local change of variables, which then naturally generates the cubic, quartic, and quintic Galileon interactions, introduced in a different context. We also point out that the mixing between the helicity-0 and 2 modes can be at most quartic in the decoupling limit. Finally, we discuss the implications of our findings for the consistency of the effective field theory away from the decoupling limit, and for the Boulware-Deser problem.'
---
[NYU-TH-06/13/10]{}
0.9cm
**Generalization of the Fierz-Pauli Action**
0.7cm
Claudia de Rham$^1$ and Gregory Gabadadze$^2$
0.3cm
*$^1$Département de Physique Théorique, Université de Genève,*
*24 Quai E. Ansermet, CH-1211, Genève, Switzerland*
*$^2$Center for Cosmology and Particle Physics, Department of Physics,*
*New York University, New York, NY, 10003, USA*
1.9cm
Introduction and summary
========================
In this work we study the covariant polynomial potential of a relativistic and symmetric rank-2 tensor field living in four-dimensional flat space-time.
We start with the mass term in the potential. Poincaré symmetry in four dimensions imposes that any massive spin-2 state has to have five physical degrees of freedom – namely, the helicity-$\pm2$, helicity-$\pm1$, and helicity-$0$ modes. The quadratic potential that describes these degrees of freedom is that of Fierz and Pauli (FP), [@FP]. The latter is known to be the unique ghost-free and tachyon-free mass term for the spin-2 state [@Nieu].
No matter how small the graviton mass is in the FP theory, the helicity-0 state couples to the trace of the matter stress-tensor with the same strength as the helicity-$2$ does [@vDVZ]. This discontinuity would rule out, on simple observational grounds, the FP mass term for gravity.
As argued first by Vainshtein, the discontinuity problem can be cured by the nonlinear interactions which would become comparable to the linear terms already for very weak fields [@Arkady]. Then, the non-linearities could give rise to the screening of the helicity-0 mode at observable scales, rendering the theory compatible with the known empirical data [@Arkady; @DDGV].
However, the very same non-linearities that cure the discontinuity problem typically give rise to a ghost in massive gravity, [@BD]. This ghost, sometimes referred to as the Boulware-Deser (BD) mode, emerges as a sixth degree of freedom, that is infinitely heavy on a flat background, but becomes light on any reasonable nontrivial background ([*e.g.*]{}, on a cosmological background [@GGruzinov], or on the weak background of a lump of static matter [@AGS; @Creminelli; @DeffayetRombouts]). It is straightforward to see this ghost in the so-called decoupling limit [@AGS], in which the dynamics of the helicity-0 mode can be made manifest. Then, the sixth degree of freedom ends up being related to the nonlinear interactions of the helicity-0 mode [@AGS; @Creminelli; @DeffayetRombouts][^1].
The obvious question to ask is then whether there exists a nonlinear model that exhibits the Vainshtein mechanism, but without the ghost mode. This question was raised in Ref. [@AGS], and studied in detail in Ref. [@Creminelli]. The latter work argued that at the cubic order the ghost can be avoided by tuning the coefficients of the quadratic and cubic order terms. Recently, the cubic terms were calculated in a nonlinear massive spin-2 theory of Refs. [@GG; @Claudia], where it was shown that the necessary tuning is in fact automatic in this model, and the theory is ghost-free to that order [@cubic]!
In the present work we focus instead on addressing this question at higher orders, and in a model-independent framework. We therefore allow for arbitrary nonlinearities in the potential up to the quintic order, but restrict ourselves to considerations in the decoupling limit only.
Our result clashes with one of the conclusions of Ref. [@Creminelli] which states that the quartic interactions in the decoupling limit ineradicably lead to a ghost. Regretfully, the decoupling limit Lagrangian obtained in Ref. [@Creminelli] is not reparametrization invariant neither at the cubic nor quartic order, and gives a tensor equation that does not satisfy the Bianchi identity. The ghost found in the decoupling limit of Ref. [@Creminelli] is an artifact of these properties. Hence, we re-investigate this issue in the present work. We find a decoupling limit Lagrangian that is similar to that of Ref. [@Creminelli], but differs from it in detail, by coefficients of various tensorial structures. In particular, due to those coefficients, our Lagrangian is reparametrization invariant, and naturally leads to a tensor equation for which the Bianchi identity is automatically satisfied (as it should be since the helicity-2 mode only mixes linearly in the decoupling limit). Then, not surprisingly, we arrive to a different conclusion, that the quartic theory is also ghost-free in the decoupling limit. Moreover, we go on one step further and investigate the quintic-order theory, which we also show is ghost-free in the decoupling limit. This also allows us to understand the structure of the interactions to all orders and to argue that the decoupling limit can be at most quintic order in interactions (or quartic in the mixing between the helicity-0 and 2 modes) in the ghost-free theory.
Finally, as a corollary, we find that the decoupling limit of the most general consistent theory of massive gravity gives rise to the quadratic, cubic, quartic and quintic Galileon kinetic interactions introduced in Ref. [@Nicolis:2008in] in a different context (namely, as a generalization of the special cubic term appearing in the decoupling limit of DGP [@Dvali:2000hr] found in Ref. [@Ratt]). The Galileon interactions share the important properties of (i) being local, (ii) preserving the shift and galilean symmetry in the field space of the helicity-0 mode (in particular, in the kinetic and self-interaction terms but not in interactions with matter), (iii) giving rise to equations of motion with a well-defined Cauchy problem. Since then, the Galileons have developed their own independent and interesting life (see, [*e.g.*]{}, [@CedricGalileon; @deRham:2010eu]). We show here that the Galileons naturally arise in the decoupling limit of a general theory of massive gravity. This also helps to prove that upon appropriate choices of the coefficients in the potential, the decoupling limit of massive gravity is stable, at least up to the quintic order in interactions.
We continue this section with a discussion and summary of our main results in more technical terms, before turning to the detailed calculations in the subsequent sections.
In analogy with a massive non-Abelian (Higgs-less) spin-1 [@Khriplovich], the dynamics of the helicity-0 mode, $\pi$, can be extracted in a generic theory of gravity with a nonlinear potential by taking the decoupling limit [@AGS] m 0, , [keeping ]{} \_5 (m\^4 )\^[1/5]{} [fixed]{}. \[declim5\] Following [@AGS], in a generic case of the nonlinear potential, the corresponding Lagrangian for the helicity-0 mode reads schematically as follows: \_= [32]{} + [(\^2 )\^3\_5\^5]{}. \[Lpi5\] The cubic interaction with six derivatives gives rise to a ghost on locally nontrivial asymptotically-flat backgrounds ([*e.g.*]{} on the background of a local lump of matter). This could be seen by observing that for $\pi = \pi^{cl} +\delta \pi$, with $\pi^{cl}$ denoting the weak field of a local source, and $\delta \pi$ its fluctuation, the cubic term in (\[Lpi5\]) could generate a four-derivative quadratic term for the fluctuations. This leads to a ghost, which is infinitely heavy on Minkowski space-time, but becomes light enough to be disruptive once a reasonable local background is considered, see Refs. [@AGS; @Creminelli; @DeffayetRombouts].
To avoid pathologies such as in , the Fierz-Pauli combination in the graviton potential should be pursued further by tuning the coefficients of various higher order terms. This leads to a cancelation of all the terms for $\pi $ that are suppressed by the scales $\Lambda_5$, $\Lambda_4=(m^3 \mpl)^{1/4}$, $\Lambda_{11/3}=(m^{8}\mpl^3)^{1/11}$ etc…for any scale $\Lambda<\Lambda_3=(m^2 \mpl)^{1/3}$, such that only the terms suppressed by the scale $\Lambda_3$ survive. Then, $\Lambda_3$ is kept fixed in the decoupling limit, and the surviving terms (in addition to the linearized Einstein-Hilbert term) read as follows: =h\^$X^{(1)}\mn+\frac{1}{\Lambda_3^3}
X^{(2)}\mn+\frac{1}{\Lambda_3^6}X^{(3)}\mn$. \[s1\] Here, $h_{\mu\nu}$ denotes the canonically normalized (rescaled by $\mpl$) tensor field perturbation, while $X^{(1)}\mn,X^{(2)}\mn,$ and $X^{(3)}\mn$ are respectively, linear, quadratic and cubic in $\pi$. Importantly, they are all transverse (for instance, $X^{(1)}\mn \propto \eta_ {\mu\nu}\square\pi - \partial_\mu \partial_\nu \pi$). Not only do these interactions automatically satisfy the Bianchi identity, as they should to preserve diffeomorphism invariance, but they are also at most second order in time derivative. Hence, the interactions (\[s1\]) are linear in the helicity-2 mode, and unlike the previous results in the literature, present perfectly consistent terms, at least up to the quintic order.
Furthermore, some of the terms in (\[s1\]) can be absorbed by a local field redefinition. For instance, the quadratic term, $h^{\mu\nu} X^{(1)}\mn$, can be absorbed by a conformal transformation $h\mn \to h\mn + \eta\mn \pi$. This shift, besides removing the above mixing, generates terms of the form $ \pi X^{(2)} $ and $ \pi X^{(3)} $, which coincide, up to a total derivative, with the cubic and quartic Galileon terms [@Nicolis:2008in]. Further diagonalization of the cubic mixing term, $h^{\mu\nu} X^{(2)}\mn$, also generates the quintic Galileon, hence exhausting all the possible terms that can arise in the Galileon family at arbitrary order.
Moreover, we also point out that if the decoupling limit happens to pick up the scale $\Lambda_3$ (as opposed to another smaller scale such as $\Lambda_5$, $\Lambda_4$, etc…), the mixing between the helicity-0 and -2 modes must stop at the quartic order. Therefore, for appropriate choices of the interaction coefficients, the decoupling limit at this order is exact! It is the subsequent diagonalization of the nonlinear terms in the Lagrangian that generates the quintic Galileon.
Finally, the absence of a ghost in the decoupling limit does not prove the stability of the full theory away from the limit and the Boulware-Deser ghost is still expected to be present in general. However, it at least shows that one has a well-defined and consistent effective field theory below the scale $\Lambda_3$. Above this scale, the full theory has to be specified. We discuss related issues in section 5. Before that, our work has a two-fold motivation: (i) To establish a consistent effective field theory below $\Lambda_3$ (for the full theory to be viable its decoupling limit should be ghost-free as a necessary condition). (ii) All the known examples show that the Boulware-Deser ghost, if present in the full theory, does also show up in the decoupling limit. Therefore, it is encouraging to find no ghosts in this limit.
The paper is organized as follows: In section 2 we summarize the formalism used to study the decoupling limit of massive gravity with a general potential. We then explicitly compute the decoupling limit Lagrangian to the quartic and quintic orders in section 3. We work with a generic nonlinear completion of the FP gravity for which the scale $\Lambda_3^3=\mpl m^2$ is fixed. We argue that the $\pi$ mode does not decouple from the tensor mode, but that the interactions are free of any ghost-like pathologies. In section 4 we give a general framework for computing the Lagrangian in the decoupling limit, and argue that in theories which are consistent with the fixed scale $\Lambda_3$, at most the quartic order mixing term can be obtained, all the higher order mixing terms being zero. Moreover, we show in section 5 that upon an appropriate change of variables we recover the standard Galileon interactions. Section 6 contains some discussions on open issues and future directions addressing the consistency of massive gravity away from the decoupling limit.
Formalism
=========
Gauge invariant potential for gravity {#GI}
-------------------------------------
Below we consider in detail the decoupling limit of a general Lagrangian of a massive spin-2 field endowed with a potential on Minkowski space-time. We use the technique developed in Ref. [@AGS]. The covariant Lagrangian with the potential reads as follows: = M\^2\_[Pl]{} R - ${U}_2(g,H)+{U}_3(g,H)+{U}_4(g,H)+ {U}_5(g,H)\cdots$ , \[PF\] where $U_i$ denotes the interaction term at $i^{\rm th}$ order in $H\mn$, \[PFS\] [U]{}\_2(g,H)&=&H\^2\_-H\^2,\
[U]{}\_3(g,H)&=&c\_1 H\^3+c\_2 H H\^2+c\_3 H\^3\[L3\],\
[U]{}\_4(g,H)&=&d\_1 H\^4+d\_2 H H\^3+d\_3 H\^2H\_\^2+ d\_4 H\^2 H\^2+d\_5 H\^4\[L4\],\
[U]{}\_5(g,H)&=&f\_1 H\^5+f\_2 H H\^4+f\_3 H\^2 H\^3+ f\_4 H\_\^2 H\^3\
&+&f\_5H (H\^2)\^2+f\_6H\^3 H\^2+f\_7 H\^5 \[L5\]. Here the index contractions are performed using the inverse metric, so that $H=g^{\mu\nu}H\mn$, $H\mn^2=g^{\mu\nu}g^{\alpha\beta}H_{\mu\alpha}H_{\nu\beta}$, etc…. The coefficients $c_i$, $d_i$ and $f_i$ are a priori arbitrary, but will be determined by demanding that no ghosts are present at least up to the quintic order in the decoupling limit.
Finally, the tensor $H\mn$ is related to the metric tensor as follows: g&=&+\
&=&H+\_[ab]{}\_\^a \_\^b, where $a,b =0,1,2,3,$ $\eta_{ab}={\rm diag} (-1,1,1,1)$, and $H\mn$ is a covariant tensor as long as the four fields $\phi^a$ transform as scalars under a change of coordinates. Furthermore, expressing $\phi^a$ in terms of the coordinates $x^\alpha$, and the field $ \pi^\alpha $ as $\phi^a= (x^\alpha-\pi^\alpha)\, \delta^a_\alpha$, we obtain \[Hmn\] H=+\_\_+ \_\_- \_\_\^\_\^. In (\[Hmn\]), and in what follows, we adopt the convention that the indices on $\pi_\mu$ are raised and lowered with respect to the Minkowski metric $\eta\mn$. Crucially, the expression for the tensor $H\mn$ in differs by a minus sign in front of the last term from the analogous expression in eq. (5) used in Ref. [@Creminelli]. To emphasize the importance of this sign, we derive in \[AppCreminelli\] the decoupling limit using the opposite sign in (\[Hmn\]), recover the results of Ref. [@Creminelli], and show that the Bianchi identity is then not automatically satisfied, since the reparametrization invariance is not retained in the resulting Lagrangian.
From (\[PF\]) it is not immediately clear what is the scale of the effective field theory represented by this Lagrangian, [*i.e.*]{}, what is the energy/momentum scale by which the higher polynomial interactions would be suppressed as compared with the leading ones. This will become clear by studying the decoupling limit of the theory.
In what follows, we focus on the helicity-2 and helicity-0 modes, but ignore the vector mode. The latter enters only quadratically in the decoupling limit (since the vector does not couple to a conserved stress-tensor in the linearized order), and can be set to zero self-consistently. This does not prove that the vector sector is ghost-free, however, the findings of Ref. [@cubic] that the cubic nonlinearities for the vector are completely harmless due to the $U(1)$ gauge invariance of the resulting terms, suggest that the vector sector is not going to reintroduce the BD ghost. Therefore, we use the substitution: $
\pi_\alpha=\partial_\alpha \pi/ \Lambda_3^3,
$ so that H&=&+-\^2, where we use the same notation as in [@Creminelli], $\Pi\mn=\p_\mu\p_\nu \pi$ and $\Pi\mn^2=\eta^{\alpha\beta}\Pi_{\mu\alpha}\Pi_{\beta \nu}$. Moreover, in what follows the square brackets $[\ldots]$ will represent the trace of a tensor contracted using the Minkowski metric, [*e.g.*]{} $[\Pi^2]=\Pi^{\mu\nu}\Pi_{\mu\nu}$ and $[\Pi]^2=\Pi^{\mu}_{\mu} \Pi^{\nu}_{\nu}$.
Decoupling scale
----------------
As mentioned in the introduction, the interactions $U_2$ and $U_3$ typically lead to terms of the form $(\partial^2 \pi)^3/(\mpl m^4)$, and the decoupling limit should be taken keeping the scale $\Lambda_5^5=\mpl m^4$ fixed, while $\mpl\to \infty $ and $m\to 0$. However we will show in what follows (see also [@cubic]) that for some special values of the coefficients $c_i$, such interactions cancel (up to a total derivative), generalizing the FP term to the cubic order. This procedure can be extended further to an arbitrary order:
At a given order the leading contributions are of the form \[Ln bad\] \_n\~, then, one chooses the interactions $U_n(H)\sim H^n$ so that the above terms combine into a total derivative. At each order, there exists a unique total derivative combination $\mathcal{L}_{\rm der}^{(n)}$ that can be written as follows: \[Lder n\] \_[der]{}\^[(n)]{}=-\_[m=1]{}\^[n]{}(-1)\^m \[\^[m]{}\]\^[(n-m)]{}\_[der]{}, with $\mathcal{L}^{(0)}_{\rm der}=1$ and $\mathcal{L}^{(1)}_{\rm der}=[\Pi]$. Up to the quartic order, the total derivatives are \[L2der\] \^[(2)]{}\_[der]{}&=&\[\]\^2-\[\^2\],\
\[L3der\] \^[(3)]{}\_[der]{}&=&\[\]\^3-3 \[\]\[\^2\]+2\[\^3\],\
\[L4der\] \^[(4)]{}\_[der]{}&=&\[\]\^4-6\[\^2\]\[\]\^2+8\[\^3\] \[\]+3\[\^2\]\^2-6\[\^4\]. Moreover, at higher orders these total derivatives vanish identically, $\mathcal{L}^{(n)}_{\rm der}\equiv 0$, for any $n\ge 5$. By ensuring that all the leading terms take the form of a total derivative , all the interactions that arise at an energy scale lower than $\Lambda_3$ disappear. Keeping this in mind we will therefore consider below the following decoupling limit (firts considered in [@Ratt] in the context of the DGP model) m 0, , [keeping ]{} \_3 (m\^2 )\^[1/3]{} [fixed]{}. \[declim3\] Note that the procedure of taking the limit in the present case is well defined for fields that decay fast enough at spatial infinity. For these we introduce an infrared regulator of the theory, say a large sphere of radius $L \gg 1/m$, and take the radius to infinity, $L\to \infty $, before taking the limit (\[declim3\]). This hierarchy of scales enables us to put all the surface terms to zero before taking the decoupling limit.
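These combinations are straightforward to check numerically. The sketch below is our own verification (with the combinatorial normalization chosen so that the recursion reproduces (\[L2der\])–(\[L4der\])): it confirms the quadratic combination and shows that $\mathcal{L}^{(n)}_{\rm der}$ vanishes identically for $n\ge 5$ on random $4\times4$ matrices $\Pi^\mu_{~\nu}$.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def L_der(M, n):
    """L^(n) = -sum_{m=1}^{n} (-1)^m (n-1)!/(n-m)! [Pi^m] L^(n-m),  L^(0) = 1."""
    if n == 0:
        return 1.0
    return -sum((-1) ** m * math.factorial(n - 1) / math.factorial(n - m)
                * np.trace(np.linalg.matrix_power(M, m)) * L_der(M, n - m)
                for m in range(1, n + 1))

S = rng.normal(size=(4, 4)); S = S + S.T     # random symmetric Pi_{mu nu}
M = eta @ S                                  # Pi^mu_nu, index raised with eta

print(L_der(M, 2), np.trace(M) ** 2 - np.trace(M @ M))   # the two agree
for n in (5, 6, 7):
    print(n, L_der(M, n))                    # ~0 up to round-off for n >= 5
```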
Furthermore, as it should be becoming clear from the above discussions, the scale $\Lambda_3$ will end up being the effective field theory scale. The higher interaction terms, both written or implied in (\[PF\]), will be subdominant to the leading ones for energy/momentum scales below $\Lambda_3$.
Decoupling limit of massive gravity
===================================
Cubic order
-----------
We now explicitly compute the decoupling limit for the interactions considered in (\[PFS\]-\[L5\]), and thus generalize the Fierz-Pauli term to higher orders. In terms of the “Einstein operator" $\hat \E$ defined for an arbitrary symmetric field $Z\mn$ as \^\_Z\_=-12 $\Box Z\mn-\p_\mu \p_\alpha Z^\alpha_\nu-\p_\nu \p_\alpha Z^\alpha_\mu+
\p_\mu\p_\nu Z^\alpha_\alpha
-\eta\mn \Box Z^\alpha_\alpha+\eta\mn \p_\alpha \p_\beta Z^{\alpha\beta}$, the decoupling limit Lagrangian of massive gravity up to the cubic order reads as follows &=&-12 h\^ \^\_h\_+ h\^X\^[(1)]{}\
&&-$(8c_1-4)[\Pi^3]+(8c_2+4)[\Pi]
[\Pi^2]+8c_3[\Pi]^3$+h\^X\^[(2)]{}, with X\^[(1)]{}&=&\[\]-, \[X1\] and $X^{(2)}\mn$ quadratic in $\Pi$. Using the total derivative combination , the interactions arising at the scale $\Lambda_5$ can be removed by setting \[c1 and c2\] c\_1=2c\_3+1 2 c\_2=-3c\_3-12. As a result, we find the following expression for the tensor $X^{(2)}\mn$ \[X2\] X\^[(2)]{}=-(6c\_3-1){(\^2-\[\])-12 $[\Pi^2]-[\Pi]^2$}. Notice that both $X^{(1)}\mn$ and $X^{(2)}\mn$ are automatically conserved, as they should for the reparametrization invariance to be retained and the Bianchi identity to be satisfied.
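The transversality of these tensors can also be verified symbolically; the sketch below is our own check (with $X^{(2)}\mn$ written up to its overall normalization) that $\partial^\mu X^{(1,2)}\mn=0$ for an arbitrary field $\pi(x)$.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
eta = sp.diag(-1, 1, 1, 1)                  # Minkowski metric (mostly plus)
p = sp.Function('p')(t, x, y, z)            # the helicity-0 mode pi(x)

Pi = sp.Matrix(4, 4, lambda a, b: sp.diff(p, X[a], X[b]))   # Pi_{mu nu}
inv = eta.inv()
trPi = (inv * Pi).trace()                   # [Pi]
Pi2 = Pi * inv * Pi                         # (Pi^2)_{mu nu}
trPi2 = (inv * Pi2).trace()                 # [Pi^2]

X1 = trPi * eta - Pi
X2 = Pi2 - trPi * Pi - sp.Rational(1, 2) * (trPi2 - trPi ** 2) * eta

def div(T):   # the four components of d^mu T_{mu nu}
    return [sp.expand(sum(inv[m, a] * sp.diff(T[m, n], X[a])
                          for m in range(4) for a in range(4)))
            for n in range(4)]

print(div(X1))    # -> [0, 0, 0, 0]
print(div(X2))    # -> [0, 0, 0, 0]
```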
Moreover, it is straightforward to check that these cubic interactions bear at most two time derivatives, and are therefore free of any ghost-like pathologies. One should also check that the lapse (which coincides with $h_{00}$ in the decoupling limit) still propagates a constraint, which is indeed the case here as neither $X^{(1)}_{00}$ nor $X^{(2)}_{00}$ contain any time derivatives. Furthermore, these cubic interactions with the specific coefficient $c_3=1/4$ have already been discussed in detail in Ref. [@cubic].
We now apply the same formalism to quartic interactions for which ghost-like pathologies have been argued to arise inexorably in Ref. [@Creminelli].
Quartic order
-------------
At the quartic order, we find the following interactions in the decoupling limit: \^[(4)]{}= h\^X\^[(3)]{}+ { (3c\_1 -4d\_1-)\[\^4\]+(c\_2-4d\_3+)\[\^2\]\^2\
+(2c\_2-4d\_2)\[\]\[\^3\]+(3c\_3-4d\_4)\[\^2\]\[\]\^2-4d\_5\[\]\^4 }, with $\Lambda_4=(\mpl m^3)^{1/4}$ and $X^{(3)}\mn$ cubic in $\Pi$. Here again the pathological terms arising at the scale $\Lambda_4$ can be removed by using the total derivative combination , and by setting $c_1$ and $c_2$ as in , as well as d\_1&=&-6d\_5+(24c\_3+5),\[d1\]\
d\_2&=&8d\_5-(6c\_3+1),\
d\_3&=&3d\_5-(12c\_3+1),\
d\_4&=&-6d\_5+34 c\_3.\[d4\] Substituting these coefficients in $X^{(3)}\mn$ we obtain the mixing term between the helicity-0 and 2 modes determined by \[X3\] X\^[(3)]{}=$c_3+8d_5${6\^3-6\[\]\^2+3(\[\]\^2-\[\^2\])\
-$[\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]$}. This expression bears two expected but important features:
- It is conserved $\partial^\mu X^{(3)}\mn=0$, as it should be for the reparametrization invariance to be present and the Bianchi identity to be automatically satisfied.
- For $i,j$ space-like indices and $0$ time-like index:
&& X\^[(3)]{}\_[ij]{}\
&& X\^[(3)]{}\_[0i]{}\
&& X\^[(3)]{}\_[00]{} . These properties ensure that no ghost-like pathologies arise at the quartic level in the decoupling limit as long as the interactions come in with the generalized FP structure set by the coefficients (\[c1 and c2\]) and (\[d1\]-\[d4\]).
Quintic order
-------------
At the fifth order in the decoupling limit, we consider interactions as given in (\[L5\]). The pathological terms that scale as \_[\^5]{}\~()\^5, can be canceled with an appropriate choice of the coefficients $f_1$ to $f_6$: \[fs\]
[ccc]{} f\_1=+c\_3-6d\_5+24f\_7 , & & f\_2 = - -c\_3+6d\_5-30f\_7 ,\
f\_3=38 c\_3-5d\_5+20 f\_7 , & & f\_4=--34 c\_3+5d\_5-20f\_7 ,\
f\_5= c\_3-3d\_5+15f\_7 , & & f\_6=d\_5-10f\_7.
As a result, the quintic interactions in $\pi$ arrange themselves to form the expression for $\mathcal{L}^{(5)}_{\rm der}$, as derived from \^[(5)]{}\_[der]{}&=&24\[\^5\]-30\[\]\[\^4\]+20\[\^3\](\[\]\^2-\[\^2\])\
&& +15\[\]\[\^2\]\^2-10\[\^2\]\[\]\^3+\[\]\^50. Notice that $\mathcal{L}^{(5)}_{\rm der}$ is not simply a total derivative as for the previous orders, but instead vanishes identically. This implies in particular that any limiting Lagrangian of the form $\mathcal{L}^{(n)}\sim f(\Pi) \mathcal{L}^{(5)}_{\rm der}$, where $f$ is an analytic function, gives no dangerous $\pi$ interactions and can be used at higher orders. Beyond the quintic order the degrees of freedom in the coefficients to be tuned should therefore increase, and make it easier to remove any ghost-like interactions.
With the above choice of coefficient , the only quintic interaction in the decoupling limit then is \^[(5)]{}=h\^X\^[(4)]{}, with \[X4\] X\^[(4)]{}&\~&24( \^4-\^3)+ 12\^[(2)]{}\_[der]{}\^2-4\^[(3)]{}\_[der]{}+\^[(4)]{}\_[der]{}0, with $\mathcal{L}^{(2,3,4)}_{\rm der}$ given respectively in , and . The decoupling limit is therefore well behaved up to the quintic order, and the number of free parameters at higher orders suggests that one can always make appropriate choices to avoid any ghost mode from appearing in the entire decoupling limit. To be certain, one should however analyze a fully non-linear theory, such as the one proposed in [@GG; @Claudia].
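That $X^{(4)}\mn$ vanishes identically in four dimensions is easy to confirm numerically; the sketch below is ours (the overall normalization is fixed arbitrarily), and the vanishing is equivalent to the Cayley–Hamilton identity for $4\times4$ matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

for _ in range(3):
    S = rng.normal(size=(4, 4)); S = S + S.T     # random symmetric Pi_{mu nu}
    M = eta @ S                                  # Pi with one index raised
    P = {1: M, 2: M @ M, 3: M @ M @ M, 4: M @ M @ M @ M}
    tr = {n: np.trace(P[n]) for n in P}
    L2 = tr[1] ** 2 - tr[2]
    L3 = tr[1] ** 3 - 3 * tr[1] * tr[2] + 2 * tr[3]
    L4 = (tr[1] ** 4 - 6 * tr[2] * tr[1] ** 2 + 8 * tr[3] * tr[1]
          + 3 * tr[2] ** 2 - 6 * tr[4])
    X4 = 24 * (P[4] - tr[1] * P[3]) + 12 * L2 * P[2] - 4 * L3 * P[1] + L4 * np.eye(4)
    print(np.abs(X4).max())                      # ~1e-12: X^(4) vanishes identically
```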
Motivated by the above obtained results, we set up in the next section a general formalism for obtaining the interactions to all orders.
Before we do so, some important comments are in order. We might of course argue that the absence of the ghost up to the quintic order represents no proof of the stability of the theory even in the decoupling limit, since the ghost could be pushed to the next order in interactions. It is also not a proof of the consistency of the full theory, as was discussed in section 1, since the ghost may appear away from the decoupling limit. The arguments concerning these two points, respectively, are:
1. Beyond the quintic order, the number of free coefficients in the interactions seems sufficient to eliminate pathological contributions of the form $(\p\p \pi)^n$. Furthermore, beyond the quartic order all conserved tensors of the form $X^{(n)}\mn \sim (\p \p \pi)^n\mn$ vanish identically, and cannot lead to any ghost-like pathologies in the mixing $h^{\mu\nu}X^{(n)}\mn$ between the helicity-0 and 2 modes.
2. The ghost may exist in a given order away from the decoupling limit (say at the quartic or higher order), but disappear in the decoupling limit. If so, then, the ghost should come with a mass greater than $\Lambda_3$. Then, the theory would be acceptable as an effective theory below the $\Lambda_3$ scale. However, at scales above $\Lambda_3$, one would need to specify an infinite number of terms in the full nonlinear theory in order to conclude whether or not the ghost is removed by the resummation of these terms. This will be made more precise in the last section.
General formulation for an arbitrary order
==========================================
All our findings up to the quintic order presented in the previous section can be formulated in a unified way, which may also suggest how things could work at higher orders. For this, in the $N$th order expansion (so far $N\le 5$), we introduce the notations \[SN\] |U\_N(g,H)- \_[i=2]{}\^N U\_i(g,H) , where the tensor $H\mn$ is defined as in section 2. If the $N^{\rm th}$ order expression for the function $\bar U_{N}(g, H)$ satisfies |U\_[N]{}(g,H)|\_[h=0, A\_=0]{} = [total derivative]{}, \[cond\] (where $A_\mu$ denotes the helicity-1 field) then, the decoupling limit Lagrangian for the helicity-0 and -2 interactions, up to a total derivative, takes the form: \^[lim]{}\_[\_3]{}= -[12]{} [ h]{}\^ [E]{}\_\^ [h]{}\_+ [h]{}\^|X\^[(N)]{}(), \[conj\] with the conserved tensor $\bar X^{(N)}\mn$: |X\^[(N) ]{} ()= [|U\_[N]{}(g,H) h\_]{} |\_[h=0, A\_=0]{}. We have checked that the above Lagrangian gives rise to equations of motion with no more than two time derivatives and appropriate constraints for $ N\le 5$. It seems reasonable to conjecture that this will also be the case for $ N> 5$. Furthermore, in four dimensions $\bar X^{(N)}\mn$ can only contain a finite number of terms if it is local and conserved. It is therefore likely that this formalism leads to a finite number of interactions in the decoupling limit.
At a given order $n$ in the expansion, there should be enough freedom to set the polynomial $U_n(g,H)$ appropriately, so as to ensure that the leading interactions enter as a total derivative of the form , or as $f(\Pi)\mathcal{L}^{(m)}_{\rm der}$ for $m\ge5$ and $f$ being an arbitrary function of $\Pi\mn$. The resulting leading contribution is then of the form \^[(n)]{}=h\^X\^[(n)]{}, where $\beta$ depends on the coefficient $c$’s, $d$’s, etc. and $X^{(n)}\mn \sim \Pi^n\mn$ must be conserved as a straightforward consequence of reparametrization invariance in the decoupling limit (since higher interactions in $h$ are then suppressed). At each order $n$, there is a unique combination of $\Pi\mn^n$’s which is conserved. This combination is of the form X\^[(n)]{}. In four dimensions however, $\mathcal{L}^{(5)}_{\rm der}\equiv 0$ as pointed out earlier, and the same remains true at higher orders. This further implies that there is a limit on the number of possible interactions in the decoupling limit: $X^{(n)}\mn\equiv 0$ for any $n\ge 4$. This suggests that all theories of massive gravity (with the scale $\Lambda_3$) can only have at most quartic couplings between the helicity-0 and 2 modes in the decoupling limit.
Massive gravity and the Galileon
================================
When making the generalized FP choice for the coefficients (\[c1 and c2\]), (\[d1\]-\[d4\]), and (\[fs\]), the higher interactions in the decoupling limit only arise as a coupling between the tensor mode and the helicity-0 mode of the form $$\mathcal{L}_{\rm int}= h^{\mu\nu}\,\bar X^{(N)}\mn = h^{\mu\nu}\left(X^{(1)}\mn+\frac{1}{\Lambda_3^3}X^{(2)}\mn+\frac{1}{\Lambda_3^6}X^{(3)}\mn\right),$$ where the tensors $X^{(1)}\mn$, $X^{(2)}\mn$ and $X^{(3)}\mn$ are as given in the previous sections. Moreover, as emphasized before, $\partial^\mu X^{(i)}\mn=0$. We proceed further by noticing that $$X^{(1,2)}\mn=\mathcal{E}_{\mu\nu}^{\ \ \alpha\beta}\, Z^{(1,2)}_{\alpha\beta}\,, \qquad\text{with}\qquad
Z^{(1)}\mn=\pi\,\eta\mn\,,\qquad
Z^{(2)}\mn=\frac{(6c_3-1)}{\Lambda_3^3}\,\partial_\mu\pi\,\partial_\nu\pi\,.$$ We can therefore diagonalize the action up to the cubic order by performing a local but nonlinear change of variable $$h\mn=\bar h\mn+Z^{(1)}\mn+Z^{(2)}\mn\,,$$ such that, up to total derivatives, the Lagrangian is $$\begin{aligned}
\label{galgen}
\mathcal{L}&= -\frac12\, \bar h^{\mu\nu}\mathcal{E}_{\mu\nu}^{\ \ \alpha\beta}\bar h_{\alpha\beta} +\frac32\,\pi\Box\pi +\frac32\,\frac{(6c_3-1)}{\Lambda_3^3}\,(\partial\pi)^2[\Pi]\\
&\quad +\Big(\frac 12 (6c_3-1)^2-2(c_3+8d_5)\Big)\frac{(\partial\pi)^2}{\Lambda_3^6}\Big([\Pi^2]-[\Pi]^2\Big) +\frac{1}{\Lambda_3^6}\,\bar h^{\mu\nu}X^{(3)}\mn\\
&\quad -(6c_3-1)(c_3+8d_5)\,\frac{(\partial\pi)^2}{\Lambda_3^9}\,\Big([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]\Big)\,. \end{aligned}$$ In the first line we see the quadratic and cubic Galileon terms of [@Nicolis:2008in] (the usual kinetic term for $\pi$, as well as the interaction present in DGP). In the second line we notice the quartic Galileon interaction, and finally the quintic interaction, the last of the Galileon family, appears in the last line.
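As a quick consistency check (schematically, dropping overall factors and the coupling to $\bar h\mn$), the cubic Galileon term indeed leads to second-order field equations: varying $(\partial\pi)^2\Box\pi$ with respect to $\pi$ gives $$\frac{\delta}{\delta\pi}\int d^4x\,(\partial\pi)^2\,\Box\pi \;=\; -2\,\partial_\mu\big(\partial^\mu\pi\,\Box\pi\big)+\Box\big[(\partial\pi)^2\big] \;=\; 2\big([\Pi^2]-[\Pi]^2\big)\,,$$ which contains no more than two derivatives acting on any single field, in accordance with the general arguments of [@Nicolis:2008in].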
By setting $c_3=-8d_5$ we precisely recover the Galileon family of terms up to quartic order, and all the remaining couplings with the tensor mode disappear at the quintic order. Since there is still a lot of freedom in the coefficients at higher orders, it is only natural to expect this result to be maintained to all orders.
On the other hand, if $c_3\neq -8d_5$, then the last mixing term $h^{\mu\nu} X^{(3)}\mn$ does not seem to be removable via any [*local*]{} field redefinition. This mixing term may be crucial to address the issue of superluminality of the massive theory, as the Galileon without the mixing terms does exhibit superluminal behavior [@Nicolis:2008in].
In a more general case, as soon as the cubic Galileon is present in (\[galgen\]), we are also bound to have either the quartic Galileon and no other terms (for $c_3 =-8d_5$), or a quartic mixing and the quintic Galileon (for $4(c_3+8d_5) = (6c_3-1)^2
\neq 0$), or all of the above terms together.
If, however, the cubic Galileon is absent (for $c_3=1/6$), one is in general left with the quartic Galileon and the quartic mixing term.
Finally, notice also that for the specific choice $c_3=1/6$ and $d_5=-1/48$, all the interactions at the scale $\Lambda_3$ disappear! This may be an example of a theory for which the decoupling limit picks up a higher scale $\Lambda_\star > \Lambda_3$, if such a theory exists. Alternatively, this may also be a theory in which all the nonlinear terms disappear in the decoupling limit. This would suggest that the theory has no strongly coupled behavior (i.e., no Vainshtein mechanism), and would be ruled out observationally.
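As a quick arithmetic check of this special point, $c_3=1/6$ gives $6c_3-1=0$, which removes the cubic Galileon, while $c_3+8d_5=\tfrac16-\tfrac{8}{48}=0$ removes both the quintic Galileon and the $h^{\mu\nu}X^{(3)}\mn$ mixing; since $4(c_3+8d_5)=(6c_3-1)^2$ is then trivially satisfied, the quartic Galileon drops out as well, so that no interaction survives at the scale $\Lambda_3$.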
Outlook
=======
The previous analysis shows that for appropriate choices of interactions that generalize the Fierz-Pauli term to higher orders, one can construct a consistent and local theory of massive gravity where no ghost-like instabilities are present, at least up to the quintic order in the decoupling limit, and positive prospects can be foreseen for higher orders. In particular the connection with the Galileon generalization of the cubic term appearing in the DGP decoupling limit provides a natural framework for studying ghost-free theories of gravity [@Nicolis:2008in; @deRham:2010eu].
Furthermore, the decoupling limit considerations of this paper suggest that the higher non-linear terms in (\[PFS\]-\[L5\]) become equally important at the scale $\Lambda_3$. Since the scale $\Lambda_3=(\mpl m^2)^{1/3}$ is very low (typically $\Lambda_3\sim 10^{-9}$eV), the effective theory below $\Lambda_3$ can only be used for large scale cosmological studies[^2]. To extend the scope of applicability of massive gravity to shorter length scales, however, one would need to go above $\Lambda_3$, and, hence, the higher interactions should be taken into account. For a viable model, it will therefore be necessary to consider all the higher polynomial interactions, $U_n(g,H)$, and not only the ones up to the quintic order as presented here (even though the decoupling limit may only have a finite number of interactions).
A theory that provides such a resummation is the model of Refs. [@GG; @Claudia]. In particular, by integrating out the auxiliary dimension in that model, one gets an infinite series of interactions of the form (\[PFS\]-\[L5\]) and beyond, with certain specific coefficients. In [@cubic], it has been checked that the coefficients of the quadratic and cubic terms were equal to those used in section 3 for the specific choice $c_3=1/4$. Thus, in the decoupling limit, the theory is ghost-free up to the cubic order. Furthermore, the theory in the cubic order preserves the Hamiltonian constraint even away from the decoupling limit [@cubic], and the BD term cancels out in the exact all-order Hamiltonian [@GG]. Moreover, it was shown in Ref. [@Claudia] that the nonlinear terms giving rise to a ghost at a scale $\Lambda<\Lambda_3$ cancel out in that specific theory. These findings constitute an important evidence (but not a proof yet) that the theory of [@GG; @Claudia] may be consistent, at least classically, to all orders.
How about other possible theories of massive gravity that would yield the terms discussed here with the coefficients still consistent with the absence of the ghost, but not coinciding with the ones obtained in [@cubic]? Is there any hope for these theories away from the decoupling limit and above the scale $\Lambda_3$? Naively, the answer seems to be a negative one: As was shown in [@Creminelli], in the order-by-order expansion, and beginning with the quartic order, one cannot avoid higher powers of the lapse function in the Hamiltonian, and hence, the emergence of the sixth degree of freedom (which typically is a ghost) seems to be unavoidable in massive gravity [@Creminelli].
However, there may be a way to circumvent this problem in the full theory if its Hamiltonian, due to a resummation of perturbative terms, ends up having a very special dependence on the lapse and shift functions. Here we demonstrate this in a toy example, that is motivated by the Hamiltonian of the theory [@GG; @Claudia] discussed in [@GG].
Consider the toy Hamiltonian: $$H= N\left( R^0 + m^2 f(\gamma) \right) + N_j\left( R^j +m^2 Q^j(\gamma)\right)+ m^2 P(\gamma)\, \frac{N_j N^j}{2N}, \label{H}$$ where $N,N_j,\gamma_{ij}$, and $R^0, R^j$, are the standard ADM variables and functions respectively [@ADM]; $f(\gamma),Q_j(\gamma)$ and $P(\gamma)$ are some functions that modify the GR constraints by the mass terms. The shift function $N_j$ is not a Lagrange multiplier, but is algebraically determined, as should be the case for a massive theory with five degrees of freedom. However, the lapse function also enters the last term in a way that seems to prevent it from being a Lagrange multiplier, and if so, it would give rise to the sixth degree of freedom. This is not the case, however: one can introduce a new variable $n_j \equiv N_j/N$ in terms of which the Hamiltonian reads $$H= N\left(R^0 + m^2 f(\gamma)\right) + N\, n_j\left( R^j +m^2 Q^j(\gamma)\right)+ N\, m^2 P(\gamma)\, \frac{n_j n^j}{2}. \label{H1}$$ The shift $n_j$ still has no conjugate momentum, hence $\delta H /\delta n_j =0$. This determines the new shift variable, $n^j =- (R^j +m^2 Q^j(\gamma))/(m^2P(\gamma))$, and yields the following Hamiltonian $$H\big|_{n_j}= N\left( R^0 + m^2 f(\gamma) - \frac{ \left(R_j +m^2 Q_j(\gamma)\right)^2 }{2\, m^2 P(\gamma) } \right). \label{H3}$$ Here, the lapse does certainly appear as the Lagrange multiplier. Hence, the BD term does not arise, and the theory does not propagate the sixth degree of freedom[^3].
On the other hand, a direct perturbative expansion of the last term in (\[H\]) in powers of $\delta N = N-1$, with subsequent truncation of this series at any finite nonlinear order, necessarily yields higher powers of $\delta N$ in the Hamiltonian[^4]. Naively, this truncated theory would give rise to the potentially false impression that the lapse is not a Lagrange multiplier, and that there is a sixth degree of freedom in the model.
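To make the origin of these higher powers explicit, one can expand the last term of (\[H\]) around $N=1$: $$m^2 P(\gamma)\,\frac{N_j N^j}{2N} \;=\; m^2 P(\gamma)\,\frac{N_j N^j}{2}\left(1-\delta N+\delta N^2-\delta N^3+\cdots\right),$$ so any truncation at a finite order retains the lapse nonlinearly, whereas the very same term written in the variable $n_j=N_j/N$, namely $N\, m^2 P(\gamma)\, n_j n^j/2$, is exactly linear in $N$.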
Noticing that the higher powers of $\delta N$ at any finite nonlinear order emerge from the expansion of the theory (\[H\]) is trivial in this toy model. However, a similar, albeit more complicated, structure emerges in the Hamiltonian of the model of [@GG; @Claudia] (see [@GG]), and the fact that the terms in the expansion come from a single term in the exact Hamiltonian is not as simple to observe.
Last, but not least, in this work we discussed the classical theory. Generic quantum loop corrections are expected to renormalize and detune the coefficients of the polynomial terms needed to avoid the ghost. One way to be protected against this problem is to have a theory in which the tuned coefficients automatically emerge as a consequence of a symmetry that would be respected by the loop corrections. In this respect, the recent finding of [@cubic] that the cubic terms with the automatically tuned coefficients emerge from an expansion of the theory, which by itself is evidence for a hidden nonlinearly realized symmetry, makes us hopeful for the existence of a quantum-mechanically stable effective field theory of massive gravity.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Cedric Deffayet, Gia Dvali, Massimo Porrati, Oriol Pujolas, Andrew Tolley and Filippo Vernizzi for useful comments. The work of GG was supported by NSF grant PHY-0758032, and that of CdR by the SNF.
Decoupling limit with the opposite sign in $H\mn$ {#AppCreminelli}
=================================================
As mentioned in section \[GI\], the expression for $H\mn$ differs by a minus sign in front of the third term on the r.h.s. from its counterpart considered in Eq. (5) of [@Creminelli]: $$\label{HmnCreminelli}
H\mn = h\mn + \partial_\mu \pi_\nu+ \partial_\nu \pi_\mu+ \eta_{\alpha\beta}\,\partial_\mu \pi^{\alpha}\,\partial_\nu \pi^{\beta}\,.$$ To emphasize the importance of this sign difference, we show that we recover the results of Ref. [@Creminelli] when deriving the decoupling limit using (\[HmnCreminelli\]), but stress that the Bianchi identity is then not satisfied, as a consequence of the fact that $H\mn$ is then not a covariant tensor if $g\mn$ and $h\mn$ are conventionally defined.
Up to the cubic order, the Lagrangian in the decoupling limit is then $$\begin{aligned}
\mathcal{L}&= -\frac12\, h^{\mu\nu} \,\mathcal{E}_{\mu\nu}^{\ \ \alpha\beta}\,h_{\alpha\beta}+ h^{\mu\nu} \tilde X^{(1)}\mn\\
&\quad -\frac{1}{\Lambda_5^5}\Big[(8c_1+4)[\Pi^3]+(8c_2-4)[\Pi][\Pi^2]+8c_3[\Pi]^3\Big]+\frac{1}{\Lambda_3^3}\,h^{\mu\nu}\tilde X^{(2)}\mn,\end{aligned}$$ with $\tilde X^{(1)}\mn= X^{(1)}\mn$, since both approaches only differ at quadratic order in $\pi$, and $$\tilde X^{(2)}\mn=-\Big(3c_1-\frac 32\Big)\Pi^2\mn-2(1+c_2)[\Pi]\,\Pi\mn+\Big(\frac 12 -3c_3\Big)[\Pi]^2\,\eta\mn-c_2 [\Pi^2]\,\eta\mn.$$ Setting $c_1=2c_3-\frac 1 2$ and $c_2=-3c_3+\frac 12$ to obtain the total derivative combination, we get $$\label{X22}
\tilde X^{(2)}\mn=-6\Big(c_3-\frac12 \Big) \big(\Pi^2\mn-[\Pi]\,\Pi\mn\big)-\Big(3c_3-\frac12\Big)\big([\Pi]^2-[\Pi^2]\big)\,\eta\mn,$$ which is not conserved for any choice of $c_3$, since the reparametrization invariance is not present with this choice of $H\mn$, and the Bianchi identity has no reason to be satisfied.
Similarly at the quartic order, we would need to impose the relations between the coefficients $d_1=-6d_5-\frac{1}{16}(24c_3-5)$, $d_2=8d_5+\frac{1}{4}(6c_3-1)$, $d_3=3d_5+\frac{1}{16}(12c_3-1)$, and $d_4=-6d_5-\frac34 c_3$, to cancel the terms of the form $\Lambda_4^{-8}(\partial\partial \pi)^4$. The mixing with the helicity-2 mode will then enter with the quantity $\tilde X^{(3)}\mn$ as derived in [@Creminelli]: $$\begin{aligned}
\tilde X^{(3)}\mn&=(-1+9c_3+24d_5)\big(\Pi^3\mn-[\Pi]\,\Pi^2\mn\big) - (9c_3+24d_5)\big([\Pi^2]-[\Pi]^2\big)\Pi\mn\\
&\quad-(c_3+8d_5)\big([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]\big)\,\eta\mn.\end{aligned}$$ As noticed in [@Creminelli], not only would there then be no choice of $c_3$ and $d_5$ for which this interaction disappears, but it would always lead to higher-derivative equations of motion, suggesting a ghost-like instability. However, the fact that $\tilde X^{(3)}\mn$ is not conserved is an artifact of the sign choice in the expression for $H\mn$, which does not lead to reparametrization invariant results.
[99]{}
M. Fierz and W. Pauli, Proc. Roy. Soc. Lond. A [**173**]{}, 211 (1939).
P. van Nieuwenhuizen, Nucl. Phys. B [**60**]{} (1973) 478.
H. van Dam and M. J. G. Veltman, Nucl. Phys. B [**22**]{}, 397 (1970); V. I. Zakharov, JETP Lett. [**12**]{}, 312 (1970) \[Pisma Zh. Eksp. Teor. Fiz. [**12**]{}, 447 (1970)\].
A. I. Vainshtein, Phys. Lett. B [**39**]{}, 393 (1972). C. Deffayet, G. R. Dvali, G. Gabadadze and A. I. Vainshtein, Phys. Rev. D [**65**]{}, 044026 (2002) \[arXiv:hep-th/0106001\].
D. G. Boulware and S. Deser, Phys. Rev. D [**6**]{}, 3368 (1972).
G. Gabadadze and A. Gruzinov, Phys. Rev. D [**72**]{}, 124007 (2005) \[arXiv:hep-th/0312074\].
N. Arkani-Hamed, H. Georgi and M. D. Schwartz, Annals Phys. [**305**]{}, 96 (2003). P. Creminelli, A. Nicolis, M. Papucci and E. Trincherini, JHEP [**0509**]{}, 003 (2005). C. Deffayet and J. W. Rombouts, Phys. Rev. D [**72**]{}, 044003 (2005) \[arXiv:gr-qc/0505134\].
I. I. Kogan, S. Mouslopoulos and A. Papazoglou, Phys. Lett. B [**503**]{}, 173 (2001) \[arXiv:hep-th/0011138\].
M. Porrati, Phys. Lett. B [**498**]{}, 92 (2001) \[arXiv:hep-th/0011152\].
A. Higuchi, Nucl. Phys. B [**282**]{}, 397 (1987).
G. Gabadadze, Phys. Lett. B [**681**]{}, 89 (2009) \[arXiv:0908.1112 \[hep-th\]\].
C. de Rham, Phys. Lett. B [**688**]{}, 137 (2010) \[arXiv:0910.5474 \[hep-th\]\].
C. de Rham and G. Gabadadze, arXiv:1006.4367 \[hep-th\].
A. Nicolis, R. Rattazzi and E. Trincherini, Phys. Rev. D [**79**]{}, 064036 (2009) \[arXiv:0811.2197 \[hep-th\]\].
G. R. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B [**485**]{}, 208 (2000) \[arXiv:hep-th/0005016\]; G. R. Dvali and G. Gabadadze, Phys. Rev. D [**63**]{}, 065007 (2001) \[arXiv:hep-th/0008054\].
M. A. Luty, M. Porrati and R. Rattazzi, JHEP [**0309**]{}, 029 (2003) \[arXiv:hep-th/0303116\].
C. Deffayet, G. Esposito-Farese and A. Vikman, Phys. Rev. D [**79**]{}, 084003 (2009) \[arXiv:0901.1314 \[hep-th\]\],\
C. Deffayet, S. Deser and G. Esposito-Farese, Phys. Rev. D [**80**]{}, 064015 (2009) \[arXiv:0906.1967 \[gr-qc\]\].
C. de Rham and A. J. Tolley, JCAP [**1005**]{}, 015 (2010) \[arXiv:1003.5917 \[hep-th\]\]. A. I. Vainshtein and I. B. Khriplovich, Yad. Fiz. [**13**]{} (1971), 198 \[Sov. J. Nucl. Phys. [**13**]{} (1971), 111\].
R. L. Arnowitt, S. Deser and C. W. Misner, Phys. Rev. [**116**]{}, 1322 (1959). G. Gabadadze and L. Grisa, Phys. Lett. B [**617**]{}, 124 (2005) \[arXiv:hep-th/0412332\].
[^1]: Notice also that the discontinuity is absent when a small cosmological constant is included before sending the mass of the graviton to zero [@Kogan; @Porrati]. Doing so in de Sitter space, however, one passes through the parameter region where helicity-0 becomes a ghost [@Higuchi], while the anti de Sitter case is ghost-free [@Kogan; @Porrati].
[^2]: Once external classical sources, such as planets, stars, galaxies,.., are present, the energy scale of nonlinearities – the Vainshtein scale – depends on the mass/energy of the source and is significantly lower [@DDGV].
[^3]: In general, it could still be propagating “$5.5$” modes even if the Hamiltonian constraint is maintained. For instance, since the toy model described by (\[H\]) is not Lorentz invariant for general functions $f$, $Q_j$ and $P$, there may exist non-propagating instantaneous modes in this model. For discussions of related issues see, [@GGLuca]. In contrast, the model of Refs. [@GG; @Claudia] is 4D Lorentz-invariant and the instantaneous mode in 4D is not expected. For a rigorous proof that there are only 5 degrees of freedom, and not “$5.5$”, however, a detailed study of the algebra of the Hamiltonian constraint should be performed. The fact that the decoupling limit gives only 5 degrees of freedom is an important hint that the full theory is not likely to have the extra “$0.5$” degree of freedom.
[^4]: Note that away from the decoupling limit, and at a nonlinear order, $\delta N$ is the right variable and not $h_{00} = 1-N^2 +N_j^2$, which was used before as the lapse in the decoupling limit.
---
abstract: 'Simple boundary expressions for the $k^{th}$ power of the cotangent line class $\psi_1$ on $\overline{M}_{g,1}$ are found for $k\geq 2g$. The method is by virtual localization on the moduli space of maps to ${\mathbb{P}}^1$. As a consequence, nontrivial tautological classes in the kernel of the boundary push-forward map $$\iota_*:A^*( \overline{M}_{g,2}) \rightarrow A^*(\overline{M}_{g+1})$$ are constructed. The geometry of genus $g+1$ curves then provides universal equations in genus $g$ Gromov-Witten theory. As an application, we prove all the Gromov-Witten identities conjectured recently by K. Liu and H. Xu.'
address:
- |
Department of Mathematics\
University Of Notre Dame\
Notre Dame, IN\
USA
- |
Department of Mathematics\
Princeton University\
Princeton, NJ\
USA
author:
- Xiaobo Liu
- Rahul Pandharipande
title: New topological recursion relations
---
Introduction
============
Tautological classes
--------------------
Let ${\overline{M}_{g,n}}$ be the moduli space of stable curves of genus $g$ with $n$ marked points. Let $A^*(\overline{M}_{g,n})$ denote the Chow ring with ${\mathbb Q}$-coefficients. The system of tautological rings is defined in [@fp3] to be the set of smallest ${{\mathbb{Q}}}$-subalgebras of the Chow rings, $$R^*(\overline{M}_{g,n}) \subset A^*(\overline{M}_{g,n}),$$ satisfying the following two properties:
1. The system is closed under push-forward via all maps forgetting markings: $$\pi_*: R^*(\overline{M}_{g,n}) {\rightarrow}R^*(\overline{M}_{g,n-1}).$$
2. The system is closed under push-forward via all gluing maps: $$\iota_*: R^*(\overline{M}_{g_1,n_1\scup\{*\}})
\otimes_{{{\mathbb{Q}}}}
R^*(\overline{M}_{g_2,n_2\scup\{\bullet\}}) {\rightarrow}R^*(\overline{M}_{g_1+g_2, n_1+n_2}),$$ $$\iota_*: R^*(\overline{M}_{g, n\scup\{*,\bullet\}}) {\rightarrow}R^*(\overline{M}_{g+1, n}),$$ with attachments along the markings $*$ and $\bullet$.
Natural algebraic constructions typically yield Chow classes lying in the tautological ring. For example, the standard $\psi$, $\kappa$, and $\lambda$ classes in $A^*(\overline{M}_{g,n})$ all lie in the tautological ring. The tautological rings also possess a rich conjectural structure, see [@fp2] for a detailed discussion.
The moduli space $\overline{M}_{g,n}$ admits a stratification by topological type indexed by decorated graphs. The normalized stratum closures are simply quotients of products of simpler moduli spaces of pointed curves. A [*descendent stratum class*]{} in $R^*(\overline{M}_{g,n})$ is a push-forward from a stratum $S$ of a monomial in the cotangent line classes of the special points[^1] of $S$.
A relation in $R^*(\overline{M}_{g,n})$ among descendent stratum classes yields a universal genus $g$ equation[^2] in Gromov-Witten theory by the splitting axiom. For example, the equivalence of boundary strata in $\overline{M}_{0,4}$ implies the WDVV equation. Several other relations have since been found [@BP; @G1; @G2; @kimliu].
Let $g\geq 1$. Boundary expressions for powers $\psi_1^k\in R^*(\overline{M}_{g,1})$ of the cotangent line class are the most basic [*topological recursion relations*]{}. For $k\geq g$, boundary expressions for $\psi_1^k$ have been proved to exist [@fp3; @ionel]. While the arguments are constructive, the method in practice is very difficult. The answers for $k=g$ appear, for low $g$, to be rather complicated.[^3]
The results of the paper concern simple boundary expressions for $\psi_1^k$ for $k\geq 2g$. The relations have two interesting consequences. The first is the construction of nontrivial classes in the kernel of the boundary push-forward map $$\iota_*:A^*( \overline{M}_{g,2}) \rightarrow A^*(\overline{M}_{g+1}).$$ By the splitting axioms of Gromov-Witten theory in genus $g+1$, we obtain universal equations in genus $g$ from linear combinations of descendent stratum classes in the kernel of $\iota_*$. The possibility for such Gromov-Witten equations was anticipated earlier in discussions with Faber, but a nontrivial example was not found. The existence of such nontrivial equations now opens the door to new possibilities. Are there equations in Gromov-Witten theory in genus $g$ obtained by boundary embeddings in even higher genera? Are there new equations[^4] waiting to be found in genus 0 and 1?
The second consequence of our new topological recursion relations is a proof of the Gromov-Witten conjectures of K. Liu and H. Xu [@kliu]. The conjectures are universal relations in Gromov-Witten theory related to high powers of the cotangent line classes. We prove all the conjectures made there.
Topological recursion
---------------------
Let $g\geq 1$. Let $L_1 \rightarrow \overline{M}_{g,1}$ be the cotangent line bundle with fiber $T^*_{p_1}(C)$ at the moduli point $[C,p_1]\in \overline{M}_{g,1}$. Let $$\psi_1= c_1(L_1) \in A^1(\overline{M}_{g,1})$$ be the cotangent line class. For a genus splitting $g_1+g_2=g$, let $$\iota:
{\Delta}_{1,\emptyset}(g_1,g_2)\stackrel{\sim}{=} \overline{M}_{g_1,2}\times
\overline{M}_{g_2,1} \rightarrow \overline{M}_{g,1}$$ denote the boundary divisor parametrizing reducible curves $$C=C_1\scup C_2$$ satisfying $g(C_i)=g_i$ with a single meeting point, $$C_1\scap C_2 = p_\star,$$ and marking $p_1\in C_1$. Let $$\psi_{\star_1}, \psi_{\star_2} \in A^1\big({\Delta}_{1,\emptyset}(g_1,g_2)\big)$$ denote the cotangent line classes at the point $p_\star$. Here, $\psi_{\star_1}$ is the cotangent line along $C_1$ and $\psi_{\star_2}$ is the cotangent line along $C_2$.
\[bbt\] For $g\geq 1$ and $r\geq 0$, $$\psi_1^{2g+r} =
\sum_{g_1+g_2=g,\ g_i>0}\
\ \ \sum_{a+b=2g-1+r}
(-1)^a \ \frac{g_2}{g} \cdot
\iota_*\Big(
\psi_{\star_1}^a \psi_{\star_2}^b \scap [{\Delta}_{1,\emptyset}(g_1,g_2)] \Big)$$ in $A^{2g+r}(\overline{M}_{g,1})$.
For $r>g-2$, both sides of the above relation vanish for dimension reasons. Theorem \[bbt\] is nontrivial only if $g\geq 2$ and $0\leq r \leq g-2$. On the right side of the relation, the marking 1 carries [no cotangent line classes]{}.
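To illustrate the shape of the relation, consider the first nontrivial case $g=2$, $r=0$. The only splitting is $(g_1,g_2)=(1,1)$, and Theorem \[bbt\] specializes to $$\psi_1^{4} = \frac{1}{2}\sum_{a+b=3}(-1)^a\,\iota_*\Big(\psi_{\star_1}^a\psi_{\star_2}^b\scap[\Delta_{1,\emptyset}(1,1)]\Big) \ \in A^4(\overline{M}_{2,1}).$$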
Theorem \[bbt\] and several similar relations are proved in Sections \[xz\]-\[nnvv\] using the virtual geometry of the moduli space of stable maps $\overline{M}_{g,n}({\mathbb{P}}^1,1)$. Special intersections against the virtual class $[\overline{M}_{g,n}({\mathbb{P}}^1,1)]^{vir}$ of the moduli space, known to vanish for geometric reasons, are evaluated via virtual localization [@gp] and pushed-forward to $\overline{M}_{g,n}$ to obtain relations. The technique was first used in [@fp1].
Consequences
------------
Let $g\geq 1$ and $r\geq 1$. Consider the class $$\label{sssw}
\xi_{g,r}=\sum_{a+b=2g+r}(-1)^a
\psi_1^a \psi_2^b \in A^{2g+r}(\overline{M}_{g,2}).$$ Let $\iota: \overline{M}_{g,2} \rightarrow \overline{M}_{g+1}$ be the irreducible boundary map. As a corollary of the new topological recursion relations, we prove the following result in Section \[xxzz\].
\[vyt\] For $g\geq 1$ and $r\geq 1$, $\ \iota_*(\xi_{g,r}) = 0 \in A^{2g+r+1}(\overline{M}_{g+1}).$
For $r$ odd, the push-forward $\iota_*(\xi_{g,r})$ is easily seen to vanish by the antisymmetry of the sum . We view the class $\xi_{g,{r}}$ as an uninteresting element of the kernel of $$\iota_*: R^*(\overline{M}_{g,2}) \rightarrow R^*(\overline{M}_{g+1}).$$ The universal Gromov-Witten relation obtained from $\iota_*(\xi_{g,{r}})=0$ is trivial in the $r$ odd case.
The $r$ even case is much more subtle. Here, $\xi_{g,r}$ is a remarkable element. For $r \leq g-2$, $$\xi_{g,r} \neq 0 \in A^*(\overline{M}_{g,2})$$ since we can compute $$\int_{\overline{M}_{g,2}} \xi_{g,r}\cdot
\psi_2^{g-2-r} \cap [\Delta_{1,2}(1,g-1)] =
\int_{\overline{M}_{1,2}} \psi_1^2 \cdot
\int_{\overline{M}_{g-1,2}} \psi_2^{3g-4} =
\frac{1}{24}\cdot \frac{1}{24^{g-1}(g-1)!}.$$ The vanishing of $\iota_*(\xi_{g,r})$ is nontrivial — not a consequence of any elementary symmetry. Hence, the associated Gromov-Witten relation is also nontrivial.
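The two factors in the last evaluation can be obtained, for example, from the string equation together with the well-known value $\langle \tau_{3h-2}\rangle_h = \int_{\overline{M}_{h,1}}\psi_1^{3h-2} = \frac{1}{24^h\, h!}$: $$\int_{\overline{M}_{1,2}}\psi_1^2 = \int_{\overline{M}_{1,1}}\psi_1 = \frac{1}{24}, \qquad \int_{\overline{M}_{g-1,2}}\psi_2^{3g-4} = \int_{\overline{M}_{g-1,1}}\psi_1^{3g-5} = \frac{1}{24^{g-1}(g-1)!}.$$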
Gromov-Witten theory
--------------------
Let $X$ be a nonsingular projective variety over ${\mathbb{C}}$ of dimension $d$. Let $\{ \gamma_\ell \}$ be a basis of $H^*(X,{\mathbb{C}})$ with Poincaré dual classes $\{ \gamma^\ell \}$. The descendent Gromov-Witten invariants of $X$ are $$\big\langle \tau_{k_1}(\gamma_{\ell_1})
\ldots \tau_{k_n}(\gamma_{\ell_n})
\big\rangle_{g,\beta}^X = \int_{[\overline{M}_{g,n}(X, \beta)]^{vir}}
\psi_1^{k_1}\scup {{\text{ev}}}_1^*(\gamma_{\ell_1}) \cdots \psi_n^{k_n} \scup
{{\text{ev}}}_n^*(\gamma_{\ell_n})$$ where $\psi_i$ are the cotangent line classes and $${{\text{ev}}}_i: \overline{M}_{g,n}(X,\beta) \rightarrow X$$ are the evaluation maps associated to the markings.
Let $\{t^\ell_k \}$ be a set of variables. Let $F^X_g$ be the generating function of the genus $g$ descendent invariants, $$F^X_g = \sum_{\beta \in H_2(X,\mathbb{Z})}
q^\beta \sum_{n\ge 0} \frac{1}{n!} \sum_{\substack{\ell_1\dots \ell_n \\
k_1 \dots k_n}} t_{k_n}^{\ell_n} \dots t_{k_1}^{\ell_1}\ \langle
\tau_{k_1}(\gamma_{\ell_1}) \dots \tau_{k_n}(\gamma_{\ell_n})
\rangle_{g,\beta}^X .$$ Double brackets denote differentiation, $$\big\langle
\big
\langle \tau_{k_1}(\gamma_{\ell_1}) \dots \tau_{k_n}(\gamma_{\ell_n})
\big\rangle
\big\rangle_g^X
= \frac{\partial}{\partial t^{\ell_1}_{k_1}}
\cdots
\frac{\partial}{\partial t^{\ell_n}_{k_n}} \ F^X_g.$$
The Gromov-Witten equation obtained from Theorem \[vyt\] is the following result (trivial unless $r$ is even) conjectured by K. Liu and H. Xu.
\[gwwq\] For $g\geq 0$ and $r\geq 1$,
$$\sum_{a+b=2g+r}\ \sum_{\ell}\ (-1)^a \big\langle
\big\langle \tau_{a}(\gamma_\ell)
\tau_b(\gamma^\ell) \big\rangle \big\rangle_g^X = 0\ .$$
Theorem \[gwwq\] and several related Gromov-Witten equations conjectured by Liu-Xu are proved in Section \[xl\]. Proofs in case $g\leq 2$ or $r> g-2$ were obtained earlier in [@xliu2].
Acknowledgments
---------------
We thank C. Faber, D. Maulik, and H. Xu for conversations about tautological relations and Gromov-Witten theory. X. L. was partially supported by NSF grant DMS-0505835. R. P. was partially supported by NSF grant DMS-0500187.
Localization relations
======================
${\mathbb{C}}^*$-action
-----------------------
Let $t$ be the generator of the ${\mathbb{C}}^*$-equivariant ring of a point, $$A^*_{{\mathbb{C}}^*}(\bullet) = {\mathbb{C}}[t].$$ Let ${\mathbb{C}}^*$ act on ${\mathbb{P}}^1$ with tangent weights $t,-t$ at the fixed points $0,\infty\in {\mathbb{P}}^1$ respectively. There is an induced ${\mathbb{C}}^*$-action on the moduli space of maps $\overline{M}_{g,n}({\mathbb{P}}^1,1)$. A ${\mathbb{C}}^*$-equivariant virtual class $$[\overline{M}_{g,n}({\mathbb{P}}^1,1)]^{vir} \in
A^{{\mathbb{C}}^*}_{2g+n}(\overline{M}_{g,n}({\mathbb{P}}^1,1))$$ is obtained. The ${\mathbb{C}}^*$-equivariant evaluation maps $${{\text{ev}}}_i: \overline{M}_{g,n}({\mathbb{P}}^1,1) \rightarrow {\mathbb{P}}^1$$ determine ${\mathbb{C}}^*$-equivariant classes $${{\text{ev}}}_i^*([0]), {{\text{ev}}}_i^*([\infty]) \in
A_{{\mathbb{C}}^*}^{1}\big(\overline{M}_{g,n}({\mathbb{P}}^1,1)\big).$$
Denote the ${\mathbb{C}}^*$-equivariant universal curve and universal map by $$\pi: U \rightarrow \overline{M}_{g,n}({\mathbb{P}}^1,1), \ \
f: U \rightarrow {\mathbb{P}}^1.$$ There is a unique lifting of the ${\mathbb{C}}^*$-action to $${{\mathcal{O}}}_{{\mathbb{P}}^1}(-2)\rightarrow {\mathbb{P}}^1$$ with fiber weights to be $-t,t$ over the fixed points $0,\infty\in {\mathbb{P}}^1$ respectively. Let $$B=R^1 \pi_* f^*\big({{\mathcal{O}}}_{{\mathbb{P}}^1}(-2)\big) \rightarrow
\overline{M}_{g,n}({\mathbb{P}}^1,1).$$ The sheaf $B$ is ${\mathbb{C}}^*$-equivariant and locally free of rank $g+1$. Let $$c_g(B) \in A^g_{{\mathbb{C}}^*}\big(\overline{M}_{g,n}({\mathbb{P}}^1,1)\big)$$ be the $g^{th}$ Chern class.
A branch morphism for stable maps to ${\mathbb{P}}^1$ has been defined in [@bp], $$\text{br}: \overline{M}_{g,n}({\mathbb{P}}^1,1) \rightarrow
\text{Sym}^{2g}({\mathbb{P}}^1).$$ The branch morphism is ${\mathbb{C}}^*$-equivariant. Let $H_0\subset \text{Sym}^{2g}({\mathbb{P}}^1)$ denote the hyperplane of $2g$-tuples incident to $0\in {\mathbb{P}}^1$. Since $H_0$ is ${\mathbb{C}}^*$-invariant, $$\text{br}^*([H_0]) \in A^1_{{\mathbb{C}}^*}\big(\overline{M}_{g,n}({\mathbb{P}}^1,1)\big).$$
The total space of ${{\mathcal{O}}}_{{\mathbb{P}}^1}(-2) \rightarrow {\mathbb{P}}^1$ is well-known to be the resolution of the $A_1$ singularity ${\mathbb{C}}^2/\mathbb{Z}_2$ with respect to the action $$-(z_1,z_2) \mapsto (-z_1,-z_2).$$ A localization approach to the corresponding (reduced) Gromov-Witten theory along similar lines is developed in [@dm].
\[axx\]
Proof of Theorem \[bbt\] {#xz}
------------------------
We obtain a boundary expression for $\psi_1^{2g+r}
\in R^*(\overline{M}_{g,1})$ by localization relations on $\overline{M}_{g,1}({\mathbb{P}}^1,1)$. Let $$I_{g,r} = {{\text{ev}}}_1^*([\infty]^{2+r}) \scup c_g(B) \scup \text{br}^*([H_0])
\in A^{g+r+3}_{{\mathbb{C}}^*} \big(\overline{M}_{g,1}({\mathbb{P}}^1,1)\big).$$ Since the non-equivariant limit of $[\infty]^2$ is 0, the non-equivariant limit of $I_{g,r}$ is also 0. Let $$\epsilon: \overline{M}_{g,1}({\mathbb{P}}^1,1) \rightarrow \overline{M}_{g,1}$$ be the forgetful map. The map $\epsilon$ is ${\mathbb{C}}^*$-equivariant with respect to the trivial ${\mathbb{C}}^*$-action on $\overline{M}_{g,1}$. After push-forward, $$\label{iiir}
\epsilon_*\big(I_{g,r} \scap [\overline{M}_{g,1}({\mathbb{P}}^1,1)]^{vir}\big) \in
A_{{\mathbb{C}}^*}^{2g+r}(\overline{M}_{g,1}).$$ The virtual localization formula [@gp] gives an explicit calculation of in term of tautological classes. Setting the non-equivariant limit to 0, $$\label{vvp}
\epsilon_*\big(I_{g,r} \scap [\overline{M}_{g,1}
({\mathbb{P}}^1,1)]^{vir}\big)|_{t=0} = 0,$$ yields an equation in $R^{2g+r}(\overline{M}_{g,1})$.
The localization computation of is a sum over residue contributions of the ${\mathbb{C}}^*$-fixed loci of $\overline{M}_{g,1}({\mathbb{P}}^1,1)$. The contributing ${\mathbb{C}}^*$-fixed loci $\overline{M}^{{\mathbb{C}}^*}_{g_1,g_2}$ are indexed by genus splittings $g_1+g_2=g$. If $g_1,g_2>0$, the ${\mathbb{C}}^*$-fixed locus is $$\label{bpw}
\overline{M}^{{\mathbb{C}}^*}_{g_1,g_2} \stackrel{\sim}{=}
\overline{M}_{g_1,2} \times \overline{M}_{g_2,1}\subset
\overline{M}_{g,1}({\mathbb{P}}^1,1),$$ parametrizing maps with collapsed components of genus $g_1,g_2$ over $\infty,0 \in {\mathbb{P}}^1$ respectively and the marking over $\infty$. The restriction of $\epsilon$ to the locus is isomorphic to $$\iota: \Delta_{1,\emptyset}(g_1,g_2) \rightarrow \overline{M}_{g,1}.$$ In the degenerate cases $$(g_1,g_2)=(0,g) \ \ \text{or} \ \ (g,0),$$ the ${\mathbb{C}}^*$-fixed loci are isomorphic to $\overline{M}_{g,1}$ and $\overline{M}_{g,2}$ respectively.
By the virtual localization formula, we obtain $$\epsilon_*(I_{g,r} \scap [\overline{M}_{g,1}({\mathbb{P}}^1,1)]^{vir}) =
\sum_{g_1+g_2=g, \ g_i \geq0} \ \epsilon_*\Big(
\frac{I_{g,r}}
{e(\text{Norm}_{g_1,g_2}^{vir})} \scap [\overline{M}^{{\mathbb{C}}^*}_{g_1,g_2}]
\Big).$$ If $g_1,g_2>0$, the restriction of $B$ to $\Delta_{1,\emptyset}(g_1,g_2)$ is $$\mathbb{E}^\vee_{g_1} \otimes(+t) \oplus \mathbb{E}^\vee_{g_2} \otimes (-t) \oplus {\mathbb{C}}$$ where $\mathbb{E}$ denote the Hodge bundle. The class $\text{br}^*(H_0)$ restricts to $2g_2t$. The Euler class of the virtual normal bundle is $$\frac{1}
{e(\text{Norm}^{vir})} =
\frac{c_{g_2}(\mathbb{E}^\vee\otimes(+t)) c_{g_1}(\mathbb{E}^\vee\otimes
(-t))}{-t^2(t-\psi_{\star_2})(-t-\psi_{\star_1})}.$$ Putting all the terms together and using Mumford’s relation[^5] twice, we obtain $$\epsilon_*\Big(
\frac{I_{g,r}}
{e(\text{Norm}_{g_1,g_2}^{vir})} \scap [\overline{M}^{{\mathbb{C}}^*}_{g_1,g_2}]
\Big)|_{t=0} =
\iota_*\Big( \sum_{a+b=2g+r-1}
(-1)^g (-1)^a 2g_2 \ \psi_{\star_1}^a \psi_{\star_2}^b
\scap[\Delta_{1,\emptyset}(g_1,g_2)] \Big)$$ for $g_1,g_2 >0$. Because of the $2g_2 t$ factor, the degenerate case $(g_1,g_2)=(g,0)$ contributes 0. However, $$\epsilon_*\Big(
\frac{I_{g,r}}
{e(\text{Norm}_{0,g}^{vir})} \scap [\overline{M}^{{\mathbb{C}}^*}_{0,g}]
\Big)|_{t=0} =
(-1)^g (-1) 2g \ \psi_{1}^{2g+r}.$$ By the vanishing , we conclude $$(-1)^g (-1) 2g \ \psi_{1}^{2g+r} +
\sum_{g_1+g_2=g, \ g_i >0} \ \
\sum_{a+b=2g+r-1} \ \iota_*\Big(
(-1)^g (-1)^a 2g_2 \ \psi_{\star_1}^a \psi_{\star_2}^b
\scap[\Delta_{1,\emptyset}(g_1,g_2)]\Big)= 0$$ which is equivalent to Theorem \[bbt\].
Variations {#nnvv}
----------
Let $g\geq 0$ and $n_1,n_2 \geq 2$. Consider the moduli space $\overline{M}_{g,n_1+n_2}$. Let $N_1$ and $N_2$ denote the marking sets $$N_1=\{ 1, \ldots, n_1\}, \ \ N_2=\{n_1+1, \ldots, n_1+n_2\}.$$ For $g_1,g_2 \geq 0$, let $$\iota: {\Delta}_{N_1,N_2}(g_1,g_2)\rightarrow \overline{M}_{g,n_1+n_2}$$ denote the boundary divisor parametrizing reducible curves $$C=C_1\cup C_2$$ with markings $N_i$ on $C_i$ satisfying $g(C_i)=g_i$ and $\ C_1 \cap C_2=p_\star$. Let $$\psi_{\star_1}, \psi_{\star_2} \in A^1\big({\Delta}_{N_1,N_2}(g_1,g_2)\big)$$ denote the cotangent line classes of $p_\star$ along $C_1$ and $C_2$ as before.
\[bbbtt\] For $g\geq 0$ and $n_1,n_2\geq 2$ and $r\geq 0$, $$\sum_{g_1+g_2=g,\ g_i\geq0}
\ \ \sum_{a+b=2g+n_1+n_2-3+r}
(-1)^a \iota_*\big(\psi_{\star_1}^a \psi_{\star_2}^b \scap
[{\Delta}_{N_1,N_2}(g_1,g_2)]\big)=0$$ in $A^{2g+n_1+n_2-2+r}(\overline{M}_{g,n_1+n_2})$.
Consider the moduli space $\overline{M}_{g,n_1+n_2}({\mathbb{P}}^1,1)$ with the ${\mathbb{C}}^*$-action specified in Section \[axx\]. Let $$J_{g,r} =
{{\text{ev}}}_1^*([\infty]^{1+r})\scup
\prod_{i\in N_1} {{\text{ev}}}_i^*([\infty]) \scup
\prod_{i\in N_2} {{\text{ev}}}_i^*([0]) \scup c_g(B) \in A^{g+n_1+n_2+r+1}
\big(\overline{M}_{g,n_1+n_2}({\mathbb{P}}^1,1)\big).$$ Since the non-equivariant limit of $[\infty]^2$ is 0, the non-equivariant limit of $J_{g,r}$ is also 0. Let $$\epsilon: \overline{M}_{g,n_1+n_2}({\mathbb{P}}^1,1)
\rightarrow \overline{M}_{g,n_1+n_2}$$ be the forgetful map. After push-forward, $$\label{iiirg}
\epsilon_*\big(
J_{g,r} \scap [\overline{M}_{g,n_1+n_2}({\mathbb{P}}^1,1)]^{vir}\big) \in
A_{{\mathbb{C}}^*}^{2g+n_1+n_2-2+r}(\overline{M}_{g,n_1+n_2}).$$ Setting the non-equivariant limit to 0, $$\label{vvpc}
\epsilon_*\big(J_{g,r} \scap
[\overline{M}_{g,n_1+n_2}({\mathbb{P}}^1,1)]^{vir}\big)|_{t=0} = 0,$$ yields an equation in $R^{2g+n_1+n_2-2+r}(\overline{M}_{g,n_1+n_2})$. Evaluating the virtual localization formula as in the proof of Theorem \[bbt\] precisely yields Proposition \[bbbtt\].
Since $n_1,n_2\geq 2$ in the hypothesis of Proposition \[bbbtt\], there are no degenerate cases there; when degenerate cases do occur, there is no difficulty in handling them. We single out the following result with the same proof[^6] as Proposition \[bbbtt\].
\[fqq\] For $g\geq 1$ and $r\geq 0$, $$-\psi_1^{2g+r}+ (-1)^r \psi_2^{2g+r} +
\sum_{g_1+g_2=g,\ g_i > 0}
\ \ \sum_{a+b=2g-1+r}
(-1)^a \iota_*\big(\psi_{\star_1}^a \psi_{\star_2}^b \scap
[{\Delta}_{1,2}(g_1,g_2)]\big)=0$$ in $A^{2g+r}(\overline{M}_{g,2})$.
Proposition \[fqq\] corresponds simply to the $n_1=n_2=1$ case of Proposition \[bbbtt\]. The first two terms are the degenerate contributions.
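For example, in genus $1$ the boundary sum in Proposition \[fqq\] is empty (there is no splitting $g_1+g_2=1$ with both $g_i>0$), so the relation reduces to $$\psi_1^{2+r} = (-1)^r\,\psi_2^{2+r} \ \in A^{2+r}(\overline{M}_{1,2}),$$ which for $r=0$ is the equality $\psi_1^2=\psi_2^2$ of the two top-degree cotangent line classes.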
Proof of Theorem \[vyt\]
------------------------
We start by pushing forward the relation of Proposition \[fqq\] in genus $g+1$ to $\overline{M}_{g+1}$ for odd $r$, $$-2\kappa_{2(g+1)+r-2} -
\sum_{g_1+g_2=g+1,\ g_i > 0}
\ \ \sum_{a+b=2(g+1)-3+r}
(-1)^a \iota_*\big(\psi_{\star_1}^a \psi_{\star_2}^b \scap
[{\Delta}_{\emptyset,\emptyset}(g_1,g_2)]\big)=0,$$ using the definition of the $\kappa$ classes and the string equation. Equivalently, $$\label{vpe}
\kappa_{2g+r} +\frac{1}{2}
\sum_{g_1+g_2=g+1,\ g_i > 0}
\ \ \sum_{a+b=2g-1+r}
(-1)^a \iota_*\big(\psi_{\star_1}^a \psi_{\star_2}^b \scap
[{\Delta}_{\emptyset,\emptyset}(g_1,g_2)]\big)=0 \in
A^{2g+r}(\overline{M}_{g+1})$$ for odd $r$.
The Chern characters of the Hodge bundle $\text{ch}_{2l-1}(\mathbb{E}_{g+1})$ on $\overline{M}_{g+1}$ vanish for $l> g+1$, see [@fp1]. Hence, by Mumford’s GRR calculation, $$\begin{gathered}
\text{ch}_{2g+r}(\mathbb{E}_{g+1})
\left(\frac{B_{2g+r+1}}{(2g+r+1)!} \right)^{-1} = \\
\kappa_{2g+r} +\frac{1}{2} \iota_*(\xi_{g,r-1}) + \frac{1}{2}
\sum_{g_1+g_2=g+1,\ g_i > 0}
\ \ \sum_{a+b=2g-1+r}
(-1)^a \iota_*\big(\psi_{\star_1}^a \psi_{\star_2}^b \scap
[{\Delta}_{\emptyset,\emptyset}(g_1,g_2)]\big)=0\end{gathered}$$ for $r\geq 3$ odd. Using the vanishing , we conclude $$\iota_*(\xi_{g,r-1}) = 0 \in A^{2g+r}(\overline{M}_{g+1})$$ for $r\geq 3$ odd, which are the only nontrivial cases of Theorem \[vyt\].
\[xxzz\]
Gromov-Witten equations {#xl}
=======================
Liu-Xu conjecture
-----------------
Let $X$ be a nonsingular projective variety. We prove here the following result constraining the Gromov-Witten theory of $X$ conjectured by K. Liu and H. Xu in [@kliu].
\[conj:C\] Let $g\geq 0$ and $x_{i}, y_{j} \in H^{*}(X, \mathbb{C})$. For all $p_{i}, q_{j}, r, s\geq 0$ and $m \geq 2g-3+r+s$, $$\sum_{k \in \mathbb{Z}} \
\sum_{g_1+g_2=g, \ g_i\geq 0} \
(-1)^{k} {\left< \hspace{-2pt} \left< \, {\tau_{k}(\gamma_{\ell})} \prod_{i=1}^{r} \tau_{p_{i}}(x_{i}) \,
\right> \hspace{-2pt} \right>_{g_1}}
{\left< \hspace{-2pt} \left< \, {\tau_{m-k}(\gamma^{\ell})} \prod_{j=1}^{s} \tau_{q_{j}}(y_{j}) \,
\right> \hspace{-2pt} \right>_{g_2}} = 0.$$
Here, $k$ is allowed to be an arbitrary integer. To interpret Theorem \[conj:C\] correctly, the following convention is used[^7]: $$\label{eqn:negdesc}
{ \left< \, \tau_{-2}(\gamma_{1}) \, \right>_{0,0}} = 1 \hspace{20pt} {\rm and} \hspace{20pt}
{ \left< \, {\tau_{m}(\gamma_{\alpha})} {\tau_{-1-m}(\gamma_{\beta})} \, \right>_{0,0}} = (-1)^{{\rm max}(m, -1-m)} \eta_{\alpha \beta}$$ for $m \in \mathbb{Z}$. All other negative descendents vanish. The sum over $\ell$ in Theorem \[conj:C\] is implicit.
Since the genus 0 case of Theorem \[conj:C\] has been proved[^8] in [@xliu2], we will only consider the case $g \geq 1$. By Theorem 0.2 of [@xliu2], Theorem \[gwwq\] follows from the $r=s=0$ case of Theorem \[conj:C\].
Conventions
-----------
We will not use convention (\[eqn:negdesc\]). Instead, we set ${\tau_{n}(\gamma_{\alpha})}=0$ for $n<0$ and separate the negative terms in the summation of Theorem \[conj:C\].
The [*big phase space*]{} is the infinite dimensional vector space with coordinate $t=(t_{n}^{\alpha})$. It can be interpreted as an infinite product of the cohomology space $H^{*}(X,{\mathbb{C}})$. The Gromov-Witten potential $F_{g}^{X}$ is a function on the big phase space. We will interpret the symbol ${\tau_{n}(\gamma_{\alpha})}$ as the coordinate vector field $\frac{\partial}{\partial t_{n}^{\alpha}}$. Moreover, we also extend the meaning of ${\left< \hspace{-2pt} \left< \, {{\mathcal W}}_{1} \, \cdots \, {{\mathcal W}}_{k} \,
\right> \hspace{-2pt} \right>_{g}}$ from partial derivatives of $F_{g}^{X}$ to covariant derivatives of $F_{g}^{X}$ with respect to arbitrary vector fields ${{\mathcal W}}_{1},
\ldots, {{\mathcal W}}_{k}$ on the big phase space. Here, the covariant differentiation is with respect to the trivial connection $\nabla$ for which the coordinate vector fields ${\tau_{n}(\gamma_{\alpha})}$ are parallel. More precisely, if ${{\mathcal W}}_{i} = \sum_{n, \alpha} f_{n, \alpha}^{i} {\tau_{n}(\gamma_{\alpha})}$ where $f_{n, \alpha}^{i}$ are functions of $t=(t_{m}^{\beta})$, then we define $${\left< \hspace{-2pt} \left< \, {{\mathcal W}}_{1} \, \cdots \, {{\mathcal W}}_{k} \,
\right> \hspace{-2pt} \right>_{g}} =
\nabla^{k}_{{{\mathcal W}}_{1}, \cdots, {{\mathcal W}}_{k}} F_{g}^{X}
= \sum_{\begin{array}{c} n_{1}, \cdots, n_{k} \\ \alpha_{1}, \cdots, \alpha_{k} \end{array}}
\left( \prod_{i=1}^{k} f_{n_{i}, \alpha_{i}}^{i} \right)
{\left< \hspace{-2pt} \left< \, \tau_{n_{1}}(\alpha_{1}) \, \cdots \, \tau_{n_{k}}(\alpha_{k}) \,
\right> \hspace{-2pt} \right>_{g}}.$$
For a vector field of type ${\tau_{n}(\gamma_{\alpha})}$, the integer $n$ is called the [*level of the descendent*]{}. A vector field is [*primary*]{} if the level of the descendent is 0. The total level of descendents for a set of vector fields is defined to be the sum of the levels of descendents for all vector fields in the set. For convenience, we define the operators $\tau_{+}$ and $\tau_{-}$ on the space of vector fields by the following formulas: $$\tau_{\pm}({{\mathcal W}}) = \sum_{n, \alpha} f_{n, \alpha} {\tau_{n \pm 1}(\gamma_{\alpha})}
\hspace{20pt} {\rm if} \hspace{20pt} {{\mathcal W}}= \sum_{n, \alpha} f_{n, \alpha} {\tau_{n}(\gamma_{\alpha})}.$$ Moreover, we define $\tau_{k}({{\mathcal W}}) = \tau_{+}^{k}({{\mathcal W}})$ for any vector field ${{\mathcal W}}$.
Lower cases
-----------
We first prove a result about relations among different cases of Theorem \[conj:C\].
\[thm:reduceAB\] Let $g\geq 0$ be fixed. If Theorem \[conj:C\] holds for $r=\hat{r}$ and $s=\hat{s}$, then Theorem \[conj:C\] holds for all $r \leq \hat{r}$ and $s \leq \hat{s}$.
[**Proof**]{}: We first rewrite Theorem \[conj:C\] without using the special convention . Define $$\tilde{t}_{n}^{\alpha} = t_{n}^{\alpha} -\delta_{\alpha, 1}
\delta_{n, 1} .$$ Let ${{\mathcal W}}_{i}$, ${{\mathcal V}}_{j}$ be arbitrary coordinate vector fields on the big phase space of the form ${\tau_{n}(\gamma_{\alpha})}$. For $r, s, g, m \geq 0$, define $$\begin{gathered}
\label{eqn:Psi}
\Psi_{r,s, g, m} ({{\mathcal W}}_{1}, \cdots , {{\mathcal W}}_{r} \mid {{\mathcal V}}_{1}, \cdots, {{\mathcal V}}_{s})= \\
\sum_{k=0}^{m} \ \ \sum_{g_1+g_2=g, \ g_i\geq 0} \
(-1)^k {\left< \hspace{-2pt} \left< \, {\tau_{k}(\gamma_{\alpha})} \, {{\mathcal W}}_{1} \, \cdots \, {{\mathcal W}}_{r} \,
\right> \hspace{-2pt} \right>_{g_1}}
{\left< \hspace{-2pt} \left< \, {\tau_{m-k}(\gamma^{\alpha})} \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g_2}}
\\
- \delta_{r,0} \sum_{n, \alpha} \tilde{t}_{n}^{\alpha}
{\left< \hspace{-2pt} \left< \, {\tau_{n+m+1}(\gamma_{\alpha})} \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g}}
- \delta_{r,1} {\left< \hspace{-2pt} \left< \, \tau_{m+1}({{\mathcal W}}_{1}) \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{s}) \,
\right> \hspace{-2pt} \right>_{g}}
\\
+ \delta_{s,0} (-1)^{m+1} \sum_{n, \alpha} \tilde{t}_{n}^{\alpha}
{\left< \hspace{-2pt} \left< \, {\tau_{n+m+1}(\gamma_{\alpha})} \, {{\mathcal W}}_{1} \, \cdots \, {{\mathcal W}}_{r} \,
\right> \hspace{-2pt} \right>_{g}}
+ \delta_{s,1} (-1)^{m+1}
{\left< \hspace{-2pt} \left< \, {{\mathcal W}}_{1} \, \cdots \, {{\mathcal W}}_{r} \, \tau_{m+1}({{\mathcal V}}_{1}) \,
\right> \hspace{-2pt} \right>_{g}}. \end{gathered}$$ The function satisfies the symmetry $$\label{eqn:psir->s}
\Psi_{r,s, g, m}({{\mathcal W}}_{1} \cdots {{\mathcal W}}_{r} \mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})
= (-1)^{m} \Psi_{s,r, g, m}({{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s}
\mid {{\mathcal W}}_{1} \cdots {{\mathcal W}}_{r}).$$ Moreover, $\Psi_{0, 0, g, m}$ is identically equal to 0 if $m$ is odd.
Theorem \[conj:C\] can be restated as $$\label{eqn:conjC}
\Psi_{r, s, g, m}({{\mathcal W}}_{1} \cdots {{\mathcal W}}_{r} \mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s}) = 0$$ if $m \geq 2g+r+s-3$, see [@kliu2].
Suppose for fixed integers $r>0$ and $s \geq 0$, equation holds for all integers $m \geq 2g+r+s-3$. Then, we must prove that equation holds if $r$ is replaced by $r-1$ for all $m \geq 2g+r+s-4$. By an inverse induction on $r$, if Theorem \[conj:C\] holds for $r=\hat{r}$ and $s=\hat{s}$, then Theorem \[conj:C\] holds for $r \leq \hat{r}$ and $s=\hat{s}$. By equation , we can switch the role of $r$ and $s$. Hence, the Proposition will be proved.
Consider the [*string vector field*]{}, $${{\mathcal S}}= - \sum_{n, \alpha} \tilde{t}_{n}^{\alpha} {\tau_{n-1}(\gamma_{\alpha})} .$$ The [*string equation*]{} for Gromov-Witten invariants can be written as $${\left< \hspace{-2pt} \left< \, {{\mathcal S}}\,
\right> \hspace{-2pt} \right>_{g}} = \frac{1}{2} \delta_{g, 0} \eta_{\alpha \beta}
t_{0}^{\alpha} t_{0}^{\beta}$$ where $\eta_{\alpha \beta}= \int_{X} {\gamma_{\alpha}}\cup {\gamma_{\beta}}$ is the usual pairing. Taking derivatives of the string equation, we obtain $$\label{eqn:DerString}
{\left< \hspace{-2pt} \left< \, {{\mathcal S}}\, {{\mathcal W}}_{1} \, \cdots \, {{\mathcal W}}_{k} \,
\right> \hspace{-2pt} \right>_{g}}
= \sum_{i=1}^{k} {\left< \hspace{-2pt} \left< \, {{\mathcal W}}_{1} \, \cdots \,
\left\{ \tau_{-}({{\mathcal W}}_{i}) \right\}
\, \cdots \, {{\mathcal W}}_{k} \,
\right> \hspace{-2pt} \right>_{g}}
+ \delta_{g, 0} \nabla^{k}_{{{\mathcal W}}_{1}, \cdots, {{\mathcal W}}_{k}}
\left( \frac{1}{2} \eta_{\alpha \beta}
t_{0}^{\alpha} t_{0}^{\beta} \right).$$ Note that $$\nabla^{k}_{{{\mathcal W}}_{1}, \cdots, {{\mathcal W}}_{k}}
\left( \frac{1}{2} \eta_{\alpha \beta}
t_{0}^{\alpha} t_{0}^{\beta} \right) = 0$$ if $k>2$ or if at least one of the vector fields ${{\mathcal W}}_{1}, \cdots, {{\mathcal W}}_{k}$ has a positive descendent level.
Since equation is linear with respect to each ${{\mathcal W}}_{i}$ and ${{\mathcal V}}_{j}$, we can replace them by any vector fields on the big phase space. Assume $r>0$. We consider what happens if ${{\mathcal W}}_{r} = {{\mathcal S}}$ in $\Psi_{r,s, g, m} ({{\mathcal W}}_{1}, \cdots , {{\mathcal W}}_{r} \mid {{\mathcal V}}_{1}, \cdots, {{\mathcal V}}_{s})$.
\[lem:Sreduce\] For $r >0$, $$\begin{gathered}
\Psi_{r,s, g, m} ({{\mathcal W}}_{1} \cdots {{\mathcal W}}_{r-1} {{\mathcal S}}\mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})
\label{eqn:reducepsia3} =\\
- \Psi_{r-1,s, g, m-1} ({{\mathcal W}}_{1} \cdots {{\mathcal W}}_{r-1} \mid {{\mathcal V}}_{1}
\cdots {{\mathcal V}}_{s}) \\
+ \sum_{i=1}^{r-1} \Psi_{r-1,s, g, m} ({{\mathcal W}}_{1} \cdots \tau_{-}({{\mathcal W}}_{i})
\cdots {{\mathcal W}}_{r-1} \mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s}) .\end{gathered}$$ for all vector fields ${{\mathcal W}}_{i}$ and ${{\mathcal V}}_{j}$.
Assuming the validity of Lemma \[lem:Sreduce\], we can prove the Proposition by induction. Indeed, assume $$\Psi_{r,s, g, m} ({{\mathcal W}}_{1} \cdots {{\mathcal W}}_{r-1} {{\mathcal S}}\mid {{\mathcal V}}_{1}
\cdots {{\mathcal V}}_{s}) = 0,$$ for all vector fields ${{\mathcal W}}_{i}$, ${{\mathcal V}}_{j}$, and all integers $m \geq 2g-3+r+s$. By linearity, we may assume that all vector fields ${{\mathcal W}}_{i}$ are coordinate vector fields of type ${\tau_{n}(\gamma_{\alpha})}$. Note that $\tau_{-}({{\mathcal W}}_{i}) = 0$ if ${{\mathcal W}}_{i}$ is a primary vector field. Hence equation implies $$\label{eqn:psia-1a3}
\Psi_{r-1,s, g, m-1} ({{\mathcal W}}_{1} \cdots {{\mathcal W}}_{r-1} \mid {{\mathcal V}}_{1}
\cdots {{\mathcal V}}_{s})=0$$ for all integers $m \geq 2g-3+r+s$ if ${{\mathcal W}}_{1}, \cdots, {{\mathcal W}}_{r-1}$ are all primary vector fields.
Since the total level of descendents for vector fields in the second term on the right hand side of equation is strictly less than that in the first term, an induction on the total level of descendents for ${{\mathcal W}}_{1}, \cdots, {{\mathcal W}}_{r-1}$ shows that equation also hold for all (not necessarily primary) vector fields ${{\mathcal W}}_{1}, \cdots, {{\mathcal W}}_{r-1}$. Hence, if Theorem \[conj:C\] holds for $r>0$ and $s \geq 0$, then Theorem \[conj:C\] holds if $r$ is replaced by $r-1$. The Proposition thus follows from Lemma \[lem:Sreduce\].
Proof of Lemma \[lem:Sreduce\]
------------------------------
Using equation , the result is straightforward for $r>2$. The cases $r \leq 2$ are more subtle because of the last term in equation .
We consider the case $r=2$ first. If ${{\mathcal W}}$ is a primary vector field, then $$\nabla^{2}_{{{\mathcal W}}, {\tau_{k}(\gamma_{\alpha})}}
\left( \frac{1}{2} \eta_{\beta \mu}
t_{0}^{\beta} t_{0}^{\mu} \right) {\left< \hspace{-2pt} \left< \, {\tau_{m-k}(\gamma^{\alpha})} \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g}}
\,\,=\,\, \delta_{k, 0} {\left< \hspace{-2pt} \left< \, \tau_{m}({{\mathcal W}}) \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g}}.$$ This will produce the extra term in $\Psi_{1,s, g, m-1} ({{\mathcal W}}\mid {{\mathcal V}}_{1}, \cdots, {{\mathcal V}}_{s})$. Therefore by equation , $$\begin{aligned}
\Psi_{2,s, g, m} ({{\mathcal W}}{{\mathcal S}}\mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})
&=& - \Psi_{1,s, g, m-1} ({{\mathcal W}}\mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})
\nonumber\end{aligned}$$ when ${{\mathcal W}}$ is a primary vector field.
If ${{\mathcal W}}$ has a positive descendent level, then $$\nabla^{2}_{{{\mathcal W}}, {\tau_{k}(\gamma_{\alpha})}}
\left( \frac{1}{2} \eta_{\beta \mu}
t_{0}^{\beta} t_{0}^{\mu} \right) = 0$$ for all $k\geq 0$. Equation again implies $$\begin{gathered}
\Psi_{2,s, g, m} ({{\mathcal W}}{{\mathcal S}}\mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})
\label{eqn:psia-1a2} = \\
- \Psi_{1,s, g, m-1} ({{\mathcal W}}\mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})
+ \Psi_{1,s, g, m} (\tau_{-}({{\mathcal W}}) \mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s}) .\end{gathered}$$ When passing from $\Psi_{2,s, g, m}$ to $\Psi_{1,s, g, m}$, an extra term will emerge. The summations which we obtain from applying equation to $ \Psi_{2,s, g, m} ({{\mathcal W}}, {{\mathcal S}}\mid {{\mathcal V}}_{1}, \cdots, {{\mathcal V}}_{s}) $ have some missing terms when compared to the definition of $\Psi_{1,s, g, m-1} ({{\mathcal W}}\mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})$ and $\Psi_{1,s, g, m} (\tau_{-}({{\mathcal W}}) \mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})$. The missing term for $\Psi_{1,s, g, m-1} ({{\mathcal W}}\mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})$ is $- {\left< \hspace{-2pt} \left< \, \tau_{m}({{\mathcal W}}) \, {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g}}$ while the missing term for $\Psi_{1,s, g, m} (\tau_{-}({{\mathcal W}}) \mid {{\mathcal V}}_{1}\, \cdots\, {{\mathcal V}}_{s})$ is $- {\left< \hspace{-2pt} \left< \, \tau_{m+1}(\tau_{-}({{\mathcal W}})) \, {{\mathcal V}}_{1}\, \cdots\, {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g}}$. The missing terms cancel in when ${{\mathcal W}}$ has a positive descendent level.
Since we have checked that equation holds for all primary and descendent vector fields ${{\mathcal W}}$, Lemma \[lem:Sreduce\] is true for $r=2$.
Consider next the case $r=1$ and $s>0$. We have $$\begin{gathered}
\Psi_{1,s, g, m}({{\mathcal S}}\mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s}) = \\
- {\left< \hspace{-2pt} \left< \, \tau_{m+1}({{\mathcal S}}) \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g}}
+ \delta_{s, 1} (-1)^{m+1} {\left< \hspace{-2pt} \left< \, {{\mathcal S}}\,\, \tau_{m+1}({{\mathcal V}}_{1}) \,
\right> \hspace{-2pt} \right>_{g}} \\
+ \sum_{k=0}^{m}\ \ \sum_{g_1+g_2=g,\ g_i\geq 0}\ (-1)^{k}
{\left< \hspace{-2pt} \left< \, \tau_{k}({\gamma_{\alpha}}) \, {{\mathcal S}}\,
\right> \hspace{-2pt} \right>_{g_1}}
{\left< \hspace{-2pt} \left< \, {\tau_{m-k}(\gamma^{\alpha})} \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g_2}}.\end{gathered}$$ In the definition of ${{\mathcal S}}$, $\tilde{t}_{0}^\alpha$ is not included since $\tau_{-1}({\gamma_{\alpha}})=0$. Hence $$\label{eqn:tau+S}
{\left< \hspace{-2pt} \left< \, \tau_{m+1}({{\mathcal S}}) \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g}}
= - \sum_{n=1}^{\infty} \sum_{\alpha} \tilde{t}_{n}^{\alpha}
{\left< \hspace{-2pt} \left< \, {\tau_{n+m}(\gamma_{\alpha})} \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{s} \,
\right> \hspace{-2pt} \right>_{g}}.$$ By equation , ${\left< \hspace{-2pt} \left< \, {{\mathcal S}}\,\, \tau_{m+1}({{\mathcal V}}_{1}) \,
\right> \hspace{-2pt} \right>_{g}} = {\left< \hspace{-2pt} \left< \, \tau_{m}({{\mathcal V}}_{1}) \,
\right> \hspace{-2pt} \right>_{g}}$ and $${\left< \hspace{-2pt} \left< \, \tau_{k}({\gamma_{\alpha}}) \, {{\mathcal S}}\,
\right> \hspace{-2pt} \right>_{g_1}} = {\left< \hspace{-2pt} \left< \, \tau_{k-1}({\gamma_{\alpha}}) \,
\right> \hspace{-2pt} \right>_{g_1}} +
\delta_{g_1, 0} \delta_{k, 0} \eta_{\alpha \beta} t_{0}^{\beta}.$$ The effect of the second term on the right hand side of this equation is just to compensate for the missing case $n=0$ in the summation for $n$ in equation when computing $$\Psi_{1,s, g, m}({{\mathcal S}}\mid {{\mathcal V}}_{1}, \cdots, {{\mathcal V}}_{s}).$$ Therefore we have $$\begin{aligned}
\Psi_{1,s, g, m}({{\mathcal S}}\mid {{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s})
&=& - \Psi_{0,s, g, m-1}({{\mathcal V}}_{1} \cdots {{\mathcal V}}_{s}).\end{aligned}$$ Hence, Lemma \[lem:Sreduce\] is true for $r=1$ and $s>0$.
Now only the case $r=1$ and $s=0$ is left. By definition, $$\begin{aligned}
\Psi_{1,0, g, m}({{\mathcal S}})
&=& - {\left< \hspace{-2pt} \left< \, \tau_{m+1}({{\mathcal S}}) \,
\right> \hspace{-2pt} \right>_{g}}
+ (-1)^{m+1} \sum_{n, \alpha} \tilde{t}_{n}^{\alpha}
{\left< \hspace{-2pt} \left< \, {\tau_{n+m+1}(\gamma_{\alpha})} \,\, {{\mathcal S}}\,
\right> \hspace{-2pt} \right>_{g}} \\
&& + \sum_{k=0}^{m}\ \
\sum_{g_1+g_2=g,\ g_i\geq 0}\ (-1)^{k} {\left< \hspace{-2pt} \left< \, \tau_{k}({\gamma_{\alpha}}) \, {{\mathcal S}}\,
\right> \hspace{-2pt} \right>_{g_1}}
{\left< \hspace{-2pt} \left< \, {\tau_{m-k}(\gamma^{\alpha})} \,
\right> \hspace{-2pt} \right>_{g_2}}.\end{aligned}$$ By equation , we have $$\begin{aligned}
\Psi_{1,0, g, m}({{\mathcal S}})
&=& \left\{1+(-1)^{m+1} \right\} \sum_{n, \alpha} \tilde{t}_{n}^{\alpha}
{\left< \hspace{-2pt} \left< \, {\tau_{n+m}(\gamma_{\alpha})} \,
\right> \hspace{-2pt} \right>_{g}} \\
&& - \sum_{k=0}^{m-1}\ \ \sum_{g_1+g_2=g,\ g_i\geq 0} \
(-1)^{k} {\left< \hspace{-2pt} \left< \, \tau_{k}({\gamma_{\alpha}}) \,
\right> \hspace{-2pt} \right>_{g_1}}
{\left< \hspace{-2pt} \left< \, {\tau_{m-1-k}(\gamma^{\alpha})} \,
\right> \hspace{-2pt} \right>_{g_2}} \\
&=& -\Psi_{0,0, g, m-1}.\end{aligned}$$ The proof for Lemma \[lem:Sreduce\] is complete.
Proof of Theorem \[conj:C\]
---------------------------
Relations in $R^{*}({\overline}{M}_{g, n})$ can be translated into universal equations for Gromov-Witten invariants by the splitting axiom and cotangent line comparison equations. Define the operator $T$ on the space of vector fields by $$T({{\mathcal W}}) = \tau_{+}({{\mathcal W}}) - {\left< \hspace{-2pt} \left< \, {{\mathcal W}}\, {\gamma^{\alpha}}\,
\right> \hspace{-2pt} \right>_{0}} {\gamma_{\alpha}}$$ for any vector field ${{\mathcal W}}$. Properties of $T$ have been studied in [@xliu1]. The operator is very useful for the translation into universal equations. In the process, each marked point corresponds to a vector field, and the cotangent line class corresponds to the operator $T$. Each node is translated into a pair of primary vector fields $\gamma_\ell$ and $\gamma^\ell$. In particular, the relation of Proposition \[bbbtt\] is translated into the following universal equation $$\label{eqn:rs2T}
\sum_{k=0}^{m}\ \ \sum_{g_1+g_2=g, \ g_i\geq 0} \ (-1)^{k}
{\left< \hspace{-2pt} \left< \, {{\mathcal W}}_{1} \, \cdots \, {{\mathcal W}}_{n_{1}} \, T^{k}(\gamma_\ell) \,
\right> \hspace{-2pt} \right>_{g_1}}
{\left< \hspace{-2pt} \left< \, T^{m-k}(\gamma^\ell) \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{n_{2}} \,
\right> \hspace{-2pt} \right>_{g_2}} = 0$$ for all vector fields ${{\mathcal W}}_{i}$ and ${{\mathcal V}}_{j}$ if $n_{1}, n_{2} \geq 2$ and $m \geq 2g+n_{1}+n_{2}-3$.
Let $P$ and $Q$ be two arbitrary contravariant tensors on the big phase space. The following formula was proved in [@xliu2 Proposition 3.2]: $$\sum_{k=0}^{m} (-1)^{k} P(T^{k}(\gamma_\ell)) \, \, Q(T^{m-k}(\gamma^\ell))
= \sum_{k=0}^{m} (-1)^{k} P({\tau_{k}(\gamma_{\ell})}) \,\, Q({\tau_{m-k}(\gamma^{\ell})})$$ for $m \geq 0$. In particular, if we take $P({{\mathcal U}})= {\left< \hspace{-2pt} \left< \, {{\mathcal W}}_{1} \, \cdots \, {{\mathcal W}}_{n_{1}} \, {{\mathcal U}}\,
\right> \hspace{-2pt} \right>_{g_1}}$ and $Q({{\mathcal U}})= {\left< \hspace{-2pt} \left< \, {{\mathcal U}}\, \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{n_{2}} \,
\right> \hspace{-2pt} \right>_{g_2}}$, then the left hand side of equation is equal to $$\begin{gathered}
\sum_{k=0}^{m} \ \ \sum_{g_1+g_2=g, \ g_i\geq 0} \
(-1)^{k}
{\left< \hspace{-2pt} \left< \, {{\mathcal W}}_{1} \, \cdots \, {{\mathcal W}}_{n_{1}} \, \tau_{k}(\gamma_\ell) \,
\right> \hspace{-2pt} \right>_{g_1}}
{\left< \hspace{-2pt} \left< \, \tau_{m-k}(\gamma^\ell) \, {{\mathcal V}}_{1} \, \cdots \, {{\mathcal V}}_{n_{2}} \,
\right> \hspace{-2pt} \right>_{g_2}} =\\
\Psi_{n_{1}, n_{2}, g, m}({{\mathcal W}}_{1} \cdots \, {{\mathcal W}}_{n_{1}} \mid
{{\mathcal V}}_{1} \cdots {{\mathcal V}}_{n_{2}}).\end{gathered}$$ Therefore equation implies that Theorem \[conj:C\] is true for $r=n_{1} \geq 2$ and $s = n_{2} \geq 2$. By Proposition \[thm:reduceAB\], all other cases of Theorem \[conj:C\] follow.
[12]{}
D. Arcara and F. Sato, [*Recursive formula for $\psi^g-\lambda_1\psi^{g-1} + \cdots + (-1)^g\lambda_g$ in $\overline{M}_{g,1}$*]{}, arXiv:math/0605343.
P. Belorousski and R. Pandharipande, [*A descendent relation in genus 2*]{}, Ann. Scuola Norm. Sup. Pisa Cl. Sci.(4) [**29**]{} (2000), 171–191.
C. Faber and R. Pandharipande, [*Hodge integrals and Gromov-Witten theory*]{}, Invent. Math. 139 (2000), 173–199.
C. Faber and R. Pandharipande, [*Logarithmic series and Hodge integrals in the tautological ring*]{}. With an appendix by D. Zagier. Michigan Math. J. [**48**]{} (2000), 215–252.
C. Faber and R. Pandharipande, [*Relative maps and tautological classes*]{}, JEMS [**7**]{} (2005), 13–49.
B. Fantechi and R. Pandharipande, [*Stable maps and branch divisors*]{}, Compositio Math. [**130**]{} (2002), 345–364.
E. Getzler, [*Intersection theory on $\overline{M}_{1,4}$ and elliptic Gromov-Witten invariants*]{}, JAMS [**10**]{} (1997), 973–998.
E. Getzler, [*Topological recursion relations in genus 2*]{}, in [*Integrable systems and algebraic geometry (Kobe/Kyoto 1997)*]{}, World Scientific Publishing: River Edge, NJ 1998, 73–106.
T. Graber and R. Pandharipande, [*Localization of virtual classes*]{}, Invent. Math. [**135**]{} (1999), 487–518.
E. Ionel, [*Topological recursive relations in $H^{2g}(M_{g,n})$*]{}, Invent. Math. [**148**]{} (2002), 627–658.
S. Keel, [*Intersection theory of moduli space of $n$-pointed curves of genus 0*]{}, Trans. Amer. Math. Soc. [**330**]{} (1992), 545–574.
T. Kimura and X. Liu, [*A genus 3 topological recursion relation*]{}, Comm. Math. Phys. [**262**]{} (2006), 645–661.
K. Liu and H. Xu, [*A proof of the Faber intersection number conjecture*]{}, arXiv:0803.2204.
K. Liu and H. Xu, [*The n-point functions for intersection numbers on moduli spaces of curves*]{}, math.AG/0701319.
X. Liu, [*Quantum product on the big phase space and Virasoro conjecture*]{}, Advances in Mathematics [**169**]{} (2002), 313–375.
X. Liu, [*On certain vanishing identities for Gromov-Witten invariants*]{}, arXiv:0805.0800.
D. Maulik, [*Gromov-Witten theory of $A_n$-resolutions*]{}, arXiv:0802.2681.
[^1]: The special points correspond to the $n$ markings and the singularities of curves parametrized by the stratum.
[^2]: A genus $g$ equation is allowed to involve all genera up to $g$.
[^3]: Boundary relations in codimension $g$ for certain linear combinations of Hodge classes appear in [@AS].
[^4]: The Gromov-Witten equations obtained from relations in $R^*(\overline{M}_{0,n})$ are known by Keel’s study [@keel]. Getzler has claimed complete knowledge of relations in $R^*(\overline{M}_{1,n})$.
[^5]: Mumford’s relation here is $c_g(\mathbb{E}_g^\vee \otimes(+t)) \cdot
c_g(\mathbb{E}_g^\vee \otimes(-t)) = t^{g}(-t)^g$
[^6]: The proofs of Theorem 1 and Proposition 1 are almost identical. In fact, Theorem 1 can be derived from Proposition 1 using string and dilaton equations.
[^7]: $\gamma_{1}$ is the identity of the cohomology ring of $X$.
[^8]: The $m \geq 3g-3+r+s$ case is also proved in [@xliu2], but the result will not be used here.
---
abstract: |
It is well known that if $0.a_1a_2a_3\dots$ is the base-$b$ expansion of a number normal to base-$b$, then the numbers $0.a_ka_{m+k}a_{2m+k}\dots$ for $m\ge 2$, $k\ge 1$ are all normal to base-$b$ as well.
In contrast, given a continued fraction expansion $\langle a_1,a_2,a_3,\dots\rangle$ that is normal (now with respect to the continued fraction expansion), we show that for any integers $m\ge 2$, $k\ge 1$, the continued fraction $\langle a_k, a_{m+k},a_{2m+k},a_{3m+k},\dots\rangle$ will never be normal.
author:
- Byron Heersink and Joseph Vandehey
title: Continued fraction normality is not preserved along arithmetic progressions
---
Introduction
============
A number $x\in [0,1)$ with base $10$ expansion $x=0.a_1a_2a_3\dots$ is said to be normal (to base $10$) if for any finite string $s=[c_1,c_2,\dots,c_k]$ of digits in $\{0,\ldots,9\}$, we have that $$\lim_{n\to \infty} \frac{\#\{0\le i \le n: a_{i+j} = c_j, 1\le j \le k\}}{n} = \frac{1}{10^k}.$$ Although almost all real numbers are normal, we still do not know of a single commonly used mathematical constant, such as $\pi$, $e$, or $\sqrt{2}$, that is normal.
A classical result due to Wall [@Wall] says that if $0.a_1a_2a_3\dots$ is normal, then so is $0.a_ka_{m+k}\\ a_{2m+k}a_{3m+k}\dots$, for any positive integers $k,m$. In concise terms, sampling along an arithmetic progression of digits preserves normality for base $10$ (and more generally, base $b$) expansions. Sampling along other sequences has been studied most notably by Agafonov [@Agafonov], Kamae [@Kamae], and Kamae and Weiss [@KW]. Merkle and Reimann [@MR] studied methods of sampling that do not preserve normality.
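For a concrete illustration (ours, not from the above references): Champernowne's constant $0.123456789101112\dots$ is a classical explicit example of a number normal to base $10$, and the sketch below compares empirical string frequencies in a prefix of its expansion, and in the subsequence of every third digit, with the limiting values. The cut-off `100000` is an arbitrary choice.

```python
# Illustrative sketch: empirical digit-string frequencies in a prefix of
# Champernowne's constant 0.123456789101112..., which Champernowne proved
# to be normal to base 10.
digits = "".join(str(k) for k in range(1, 100000))   # roughly 5 * 10^5 digits

def freq(s, d):
    """Frequency of the string s over all starting positions of d."""
    n = len(d) - len(s) + 1
    return sum(d[i:i + len(s)] == s for i in range(n)) / n

print(freq("7", digits))      # roughly 1/10
print(freq("42", digits))     # roughly 1/100
# Wall's theorem: sampling every m-th digit preserves base-10 normality,
# so the subsequence below again shows a frequency close to 1/10.
print(freq("7", digits[::3]))
```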
However, these works have focused primarily on base-$b$ expansions and so equivalent questions for other expansions are mostly unknown.
In this paper, we consider continued fraction expansions given by $$x = \cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\dots}}} = \langle a_1,a_2,a_3,\dots\rangle, \quad a_i \in \mathbb{N}$$ for $x\in [0,1)$. The Gauss map $T$ is given by $Tx= x^{-1}-\lfloor x^{-1} \rfloor$ or, if $x=0$, then $Tx=0$, and it acts as a forward shift on continued fraction expansions, so that $$T\langle a_1,a_2,a_3,\dots\rangle = \langle a_2,a_3,a_4,\dots\rangle.$$ The Gauss measure $\mu$ on $[0,1)$ is given by $$\mu(A) = \int_A \frac{1}{(1+x)\log 2} \ dx.$$ Given a finite string $s=[d_1,d_2,\dots,d_k]$ of positive integers we define the cylinder set $C_s$ to be the set of points $x\in [0,1)$ such that the string $[a_1,a_2,\dots,a_k]$ of the first $k$ digits of $x$ equals $s$. (The expansions of rational numbers are finite and non-unique, but we may ignore such points throughout this paper.)
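A minimal sketch (ours) of the Gauss map generating continued fraction digits; exact rational arithmetic via `Fraction` is used only to avoid floating-point error and is not part of the paper's setup.

```python
from fractions import Fraction

def cf_digits(x, n):
    """First n continued-fraction digits of x in (0,1), generated by the Gauss map."""
    digits = []
    for _ in range(n):
        if x == 0:
            break
        a = int(1 / x)        # a_1(x) = floor(1/x) for x > 0
        digits.append(a)
        x = 1 / x - a         # Gauss map: T x = 1/x - floor(1/x)
    return digits

print(cf_digits(Fraction(16, 113), 5))   # [7, 16], since 16/113 = <7, 16>
print(cf_digits(Fraction(2, 7), 5))      # [3, 2], since 2/7 = <3, 2>
```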
We say that $x\in [0,1)$ is CF-normal if, for any finite, non-empty string $s=[d_1,d_2,\dots,d_k]$ of positive integers, we have $$\lim_{n\to \infty} \frac{\#\{0\le i \le n : T^i x \in C_s\}}{n} = \mu(C_s),$$ which is equivalent to saying that the limiting frequency of $s$ in the expansion of $x$ equals $\mu(C_s)$, since $T^i x \in C_s$ if and only if the string $[a_{i+1},a_{i+2},\dots,a_{i+k}]$ equals $s$. By the ergodicity of the Gauss map $T$ and the pointwise ergodic theorem, almost all $x\in [0,1)$ are CF-normal.
\[thm:main\] Suppose $\langle a_1, a_2,a_3,\dots\rangle$ is CF-normal. Then the number $\langle a_k, a_{m+k}, a_{2m+k},\\ a_{3m+k}, \dots\rangle$ is not CF-normal for any integers $k\ge 1$, $m\ge 2$. In fact, for any integers $k\ge 1$, $m\ge 2$, we have that $$\lim_{n\to \infty} \frac{\#\{1\le i \le n: a_{(i-1)m+k} =a_{im+k}=1\}}{n}$$ exists, but does not equal $\mu(C_{[1,1]})$, so that the CF-normality of $\langle a_k, a_{m+k}, a_{2m+k}, a_{3m+k}, \dots\rangle$ can be seen to fail just by examining the frequency of the string $[1,1]$.
One of the key techniques in proving this result is a way of augmenting the usual Gauss map $T$ to simultaneously act on a finite-state automaton. A number of recent results have made use of this blending of ergodicity and automata. It was used in Agafonov’s earlier cited result [@Agafonov]. It was used in Jager and Liardet’s proof of Moeckel’s theorem (where it was called a skew product) [@JL]. It was used to study normality from the viewpoint of compressibility [@BCH; @BH]. And it was used by Blanchard, Dumont, and Thomas to give reproofs of some classical normality equivalencies, even extending some of these results to what they call “near-normal" numbers [@BDT; @Blanchard].
We end the introduction with two questions.
First, the proof of Theorem \[thm:main\] could be extended to show that any continued fraction expansion formed by selecting along a non-trivial arithmetic progression of digits from a CF-normal number has all its $1$-digit strings appearing with the right frequency, but the 2-digit string $[1,1]$ does not. We wonder whether any string with more than one digit can appear with the correct frequency for CF-normality, or whether such strings always appear with an incorrect frequency.
Second, as stated earlier, sampling along a non-trivial arithmetic progression preserves normality for base-$b$ expansions. It can be shown, using, say, the augmented systems in this paper, that a similar result holds for any fibred system that is Bernoulli. The continued fraction expansion is a simple example of a non-Bernoulli system. Is Bernoullicity not only sufficient but necessary for selection along non-trivial arithmetic progressions to preserve normality?
An augmented system
===================
We will require a result from a previous paper of the second author [@ratmultCF].
Let $T$ be the Gauss map acting on the set $\Omega \subset[0,1)$ of irrationals. So $Tx \equiv 1/ x\pmod{1}$. We will consider cylinder sets of $\Omega$ to be the intersection of the usual cylinder sets (for the continued fraction expansion) of $[0,1)$ with $\Omega$.
We wish to extend the map $T$ to a transformation $\widetilde{T}$ on a larger domain $\widetilde{\Omega}=\Omega \times \mathcal{M}$ for some finite set $\mathcal{M}$. For any $(x,M)\in \widetilde{\Omega}$, we define $$\widetilde{T}(x,M) = (Tx,f_{a_1(x)} (M)),$$ where $a_1(x)=\lfloor x^{-1}\rfloor$ is the first continued fraction digit of $x$ and the functions $f_a: \mathcal{M}\to\mathcal{M}$, $a\in \mathbb{N}$, are bijective. Since the second coordinate of $\widetilde{T}(x, M)$ only depends on $M$ and the first digit of $x$, we see that this second coordinate is constant for all $x$ in the same rank $1$ cylinder. Given a cylinder set $C_s$ for $\Omega$, we call $C_s \times \{M\}$ (for any $M\in \mathcal{M}$) a cylinder set for $\widetilde{\Omega}$. We also have a measure $\tilde{\mu}$ on $\widetilde{\Omega}$ that is defined as being the product of the Gauss measure on $\Omega$ times the counting measure on $\mathcal{M}$, normalized by $1/|\mathcal{M}|$ to be a probability measure. By the assumed bijectivity of $f$, we have that $\widetilde{T}$ preserves $\tilde{\mu}$.
For easier readability, we will use $(E,M)$ to denote $E \times \{M\}$ for any measurable set $E\subset \Omega$, with measurability being determined by Lebesgue measure or, equivalently, the Gauss measure.
We adapt our definition of normality on this space. We will say that $(x,M)\in \widetilde{\Omega}$ is $\widetilde{T}$-normal with respect to $\tilde{\mu}$, if for any cylinder set $(C_s,M')$ we have $$\lim_{n\to \infty} \frac{\#\{0\le i < n: \widetilde{T}^i(x,M)\in (C_s,M')\}}{n} = \tilde{\mu} (C_s,M').$$
We say $\widetilde{T}$ is transitive if for any $M_1,M_2\in \mathcal{M}$, there exists a proper string $s$ of length $n$ such that $$\widetilde{T}^n( C_s,M_1) = (\Omega,M_2).$$
\[thm:traversing\] If $\widetilde{T}$ is transitive, then $\widetilde{T}$ is ergodic with respect to $\tilde{\mu}$. Moreover, if $x$ is normal, then for any $M\in \mathcal{M}$, the point $(x, M)$ is $\widetilde{T}$-normal with respect to $\tilde{\mu}$.
In [@ratmultCF], this result was proved without assuming the bijectivity of the functions $f_a$. In that setting one cannot assume that $\tilde{\mu}$ is an invariant measure, which makes the overall proof significantly more difficult.
An operator-analytic lemma
==========================
Let $A=C_{[1]}=[1/2,1)$. It can be easily calculated that $$\mu(C_{[1]})=\mu(A) = \frac{\log(4/3)}{\log 2} \quad\text{and} \quad \mu(C_{[1,1]}) = \mu(T^{-1} A\cap A) = \frac{\log (10/9)}{\log 2}.$$ Moreover, since $T$ is known to be strong mixing, we have that $$\lim_{n\to \infty}\mu(T^{-n} A \cap A) = \mu(A)^2 = \left( \frac{\log (4/3)}{\log 2}\right)^2.$$
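These values are easy to confirm numerically (our illustration, not part of the argument); the inverse-CDF sampling $x=2^u-1$ below follows from $\mu([0,x))=\log_2(1+x)$.

```python
import numpy as np

# Monte Carlo check: sample x from the Gauss measure via x = 2^u - 1 (u uniform),
# then estimate mu(C_[1]) and mu(C_[1,1]) from the first two CF digits.
rng = np.random.default_rng(0)
x = 2.0 ** rng.random(2_000_000) - 1.0
x = np.maximum(x, 1e-12)                      # guard against an exact zero

a1 = np.floor(1.0 / x)                        # first continued-fraction digit
x2 = np.maximum(1.0 / x - a1, 1e-12)          # Gauss map T x
a2 = np.floor(1.0 / x2)                       # second continued-fraction digit

print((a1 == 1).mean(), np.log(4 / 3) / np.log(2))                 # ~ 0.4150
print(((a1 == 1) & (a2 == 1)).mean(), np.log(10 / 9) / np.log(2))  # ~ 0.1520
```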
\[lem:Wirsing\] We have $$\label{eq:main}
\mu( T^{-n} A\cap A) > \mu( T^{-1} A\cap A)$$ for any integer $n\ge 2$.
We closely follow a process of Wirsing [@W] which established the spectral gap in the transfer operator of $T$, and in turn gave a very precise estimate of $$\left|\mu(T^{-n}[0,x))-\mu([0,x))\right|$$ as $n\to\infty$. Through this process, we prove the bound $$\left|\frac{\mu(A\cap T^{-n}A)}{\mu(A)} - \mu(A)\right| < \mu(A)-\frac{\log(10/9)}{\log(4/3)}=\frac{\log(4/3)}{\log2}-\frac{\log(10/9)}{\log(4/3)},\qquad(n\geq2)$$ which implies (\[eq:main\]).
To start with, define $m_n,r_n:[0,1]\to{\mathbb{R}}$ by $$m_n(x)=\frac{\mu(A\cap T^{-n}[0,x))}{\mu(A)}\quad\mbox{and}\quad r_n(x)=m_n(x)-\mu([0,x)).$$ Then $$\begin{aligned}
r_n\left(\frac{1}{2}\right)=\frac{\mu(A\cap T^{-n}[0,1/2))}{\mu(A)}-1+1-\mu([0,1/2))=\mu(A)-\frac{\mu(A\cap T^{-n}A)}{\mu(A)},\end{aligned}$$ and so we want to bound $|r_n(1/2)|$. Next, we introduce the transfer operator of $T$, which is the map $\hat{T}:L^1(\mu){\rightarrow}L^1(\mu)$ satisfying $$\int_{B}\hat{T}f\,d\mu=\int_{T^{-1}(B)}f\,d\mu\mbox{, for all Borel subsets }B\subseteq[0,1)\mbox{ and }f\in L^1(\mu),$$ and is given by the formula $$\label{transfereq}
(\hat{T}f)(x)=\sum_{k=1}^\infty\frac{1+x}{(k+x)(k+1+x)}f\left(\frac{1}{k+x}\right), \quad x\in (0,1).$$ This formula may be extended in the natural way to functions on $[0,1]$. When extended, $\hat{T}$ is also an operator from $C^1[0,1]$ to itself. Moreover, if $f=g$ Lebesgue-a.e. then $\hat{T}f=\hat{T}g$ Lebesgue-a.e. We have $$m_n(x)=\frac{1}{\mu(A)}\int_0^x(\hat{T}^n1_A)(t)\,d\mu(t)=\frac{1}{\mu(A)\log2}\int_0^x(\hat{T}^n1_A)(t)\,\frac{dt}{1+t},$$ where $1_A$ is the indicator function of $A$. Therefore, $m_n'$ exists Lebesgue-a.e. and $$(1+x)m_n'(x)=\frac{1}{\mu(A)\log2}(\hat{T}^n1_A)(x)\quad\mbox{for Lebesgue-a.e.~}x.$$ Now by , we clearly have $$(\hat{T}1_A)(x)=
\frac{1}{2+x}$$ if $x\in (0,1)$. So if we define $f_1(x)=\frac{1}{(2+x)\mu(A)\log2}$ and $f_n=\hat{T}^{n-1}f_1$, then $f_n=\frac{1}{\mu(A)\log 2}\hat{T}^n1_A$ Lebesgue-a.e. Since $\hat{T}$ preserves continuity on $[0,1]$, each $f_n$ is continuous, so we can say that $m_n'$ exists on all of $[0,1]$, and $f_n(x)=(1+x)m_n'(x)$ for all $x\in[0,1]$ and $n\in{\mathbb{N}}$.
Next, we define $g_n(x)=f_n'(x)$, noting that $f_n\in C^1[0,1]$ for all $n\in{\mathbb{N}}$. We then have $g_{n+1}(x)=-(Ug_n)(x)$, where $U$ is the operator examined by Wirsing, defined by $U(f')=-(\hat{T}f)'$, and can be shown to be given by $$(Ug)(x)=\sum_{k=1}^\infty\left(\frac{k}{(k+1+x)^2}\int_{1/(k+1+x)}^{1/(k+x)}g(y)\,dy+\frac{1+x}{(k+x)^3(k+1+x)}g\left(\frac{1}{k+x}\right)\right).$$ The operator $U$ is clearly positive so that $Ug\leq Uf$ whenever $g\leq f$.
We have $f_1(x)=1/((x+2)\log(4/3))$, and so $g_1(x)=-1/((x+2)^2\log(4/3))$. From the work of Wirsing, $U(-g_1)\leq-\frac{1}{2}g_1$. This can be shown as follows. Let $a(x)=1/(x+2)^2$, $b(x)=1/(1+2x)^2$, and $c(x)=-1/(2+4x)$ so that $a\leq b$ on $[0,1]$ and $c'=b$. For $x\in[0,1]$, we have $$\begin{aligned}
(Ua)(x)&\leq(Ub)(x)=-(\hat{T}c)'(x)=\frac{d}{dx}\sum_{k=1}^\infty\frac{1+x}{(k+x)(k+1+x)}\frac{1}{2+4/(k+x)}\\
&=\frac{1}{2}\frac{d}{dx}\sum_{k=1}^\infty\frac{1+x}{(k+1+x)(k+2+x)}=\frac{1}{2}\frac{d}{dx}\sum_{k=1}^\infty\left(\frac{1+x}{k+1+x} - \frac{1+x}{k+2+x}\right)\\
&=\frac{1}{2}\frac{d}{dx}\left(\frac{1+x}{2+x}\right)=\frac{1}{2(2+x)^2}=\frac{1}{2}a(x),\end{aligned}$$ implying that $g_2 = U(-g_1)\leq-\frac{1}{2}g_1$, and hence, by iterating this procedure and recalling that $g_{n+1}=-Ug_n$, we get that $|g_n|\leq-\frac{1}{2^{n-1}}g_1$.
Now let $\xi=\log(1+x)$ and $\varrho_n(\xi)=r_n(x)$. Then note that $$\begin{aligned}
\varrho_n''(\xi)&=\frac{d^2}{d\xi^2}r_n(e^\xi-1)=\frac{d}{d\xi}(e^\xi r_n'(e^\xi-1))=e^\xi r_n'(e^\xi-1)+e^{2\xi}r_n''(e^\xi-1)\\
&=(1+x)(r_n'(x)+(1+x)r_n''(x))\\
&=(1+x)\left(m_n'(x)-\frac{1}{(1+x)\log2}+(1+x)\left(m_n''(x)+\frac{1}{(1+x)^2\log2}\right)\right)\\
&=(1+x)(m_n'(x)+(1+x)m_n''(x))=(1+x)\frac{d}{dx}((1+x)m_n'(x))=(1+x)g_n(x).\end{aligned}$$ We have $r_n(0)=r_n(1)=0$, $\varrho_n(0)=\varrho_n(\log2)=0$, and so by the mean value theorem of divided differences, $$\varrho_n(\xi)=-\xi(\log2-\xi)\frac{\varrho_n''(\xi^*)}{2}$$ for some $\xi^*\in[0,\log2]$ depending on $\xi$. Letting $\xi=\log(3/2)$ and taking absolute values yields $$\begin{aligned}
\left|r_n\left(\frac{1}{2}\right)\right|&\leq\frac{1}{2}\left(\log\frac{3}{2}\right)\left(\log2-\log\frac{3}{2}\right)\|\varrho_n''\|_\infty=\frac{1}{2}\left(\log\frac{3}{2}\right)\left(\log\frac{4}{3}\right)\|(1+x)g_n(x)\|_\infty\\
&=\frac{1}{2^n }\left(\log\frac{3}{2}\right)\left(\log\frac{4}{3}\right)\left\|(1+x)g_1(x)\right\|_\infty\leq\frac{1}{2^n}\log\frac{3}{2}\left\|\frac{1+x}{(x+2)^2}\right\|_\infty=\frac{1}{2^{n+2}}\log\frac{3}{2}.\end{aligned}$$ If $n\geq2$, this is at most $\frac{1}{16}\log\frac{3}{2}=0.025341\ldots$, which is less than $\frac{\log(4/3)}{\log2}-\frac{\log(10/9)}{\log(4/3)}=0.048798\ldots$ This completes the proof of Lemma \[lem:Wirsing\].
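The inequality can also be checked numerically; the sketch below (ours) discretises $\hat{T}$ on a grid, truncates the series over $k$, and estimates $\mu(T^{-n}A\cap A)$ for small $n$.

```python
import numpy as np

# Numerical sketch: estimate mu(T^{-n} A ∩ A) = integral over A of (T-hat^n 1_A) d(mu)
# by iterating a discretised transfer operator on a grid over [0, 1].
LOG2 = np.log(2.0)
xs = np.linspace(0.0, 1.0, 4001)
f = np.where(xs >= 0.5, 1.0, 0.0)                     # f_0 = 1_A, A = [1/2, 1)

def transfer(f_vals, kmax=2000):
    """One application of the transfer operator, truncated at k = kmax."""
    out = np.zeros_like(xs)
    for k in range(1, kmax + 1):
        yk = 1.0 / (k + xs)
        out += (1.0 + xs) / ((k + xs) * (k + 1.0 + xs)) * np.interp(yk, xs, f_vals)
    return out

def gauss_integral_over_A(f_vals):
    """Riemann-sum approximation of the integral of f over A against the Gauss measure."""
    w = np.where(xs >= 0.5, 1.0 / ((1.0 + xs) * LOG2), 0.0)
    return np.sum(f_vals * w) * (xs[1] - xs[0])

vals = []
for n in range(1, 6):
    f = transfer(f)
    vals.append(gauss_integral_over_A(f))             # ~ mu(T^{-n} A ∩ A)

print(vals[0], np.log(10 / 9) / LOG2)                 # n = 1 reproduces mu(C_[1,1])
print(all(v > vals[0] for v in vals[1:]))             # n >= 2 values exceed the n = 1 value
```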
Proof of Theorem \[thm:main\]
=============================
Without loss of generality, it suffices to prove the theorem if $1 \le k \le m$.
Consider the augmented system $\widetilde{T}$ on $\widetilde{\Omega}$ given by $\mathcal{M} = \{1,2,\dots,m\}$ and $f_a(k) = k+1 \pmod{m}$ for all $a\in \mathbb{N}$. In particular, we always have that $$\widetilde{T}^i (x,j) = (T^i x, j+i\bmod{m}).$$ Also, it is clear that this is transitive: for any rank $n$ cylinder, we have that $\widetilde{T}^n (C_s, j_1) = (\Omega, j_1+n \pmod{m})$. Therefore Theorem \[thm:traversing\] applies.
Let $x=[a_1,a_2,a_3,\dots]$ be CF-normal, and let $y=[a_k,a_{m+k},a_{2m+k},\dots]$. Consider the string $s=[1,1]$. We want to show that the limiting frequency of $s$ in the digits of $y$ does not equal $\mu(C_s)$.
Borrowing our notation from the last section, we let $A=C_{[1]}$ and we will now denote $A\cap T^{-n} A$ by $E_n$, so that $C_s = E_1$.
We have that $T^i y \in E_1$ if and only if $T^{mi+k-1} x \in E_m$. Note that $(x,1)$ is normal with respect to $\widetilde{T}$ by Theorem \[thm:traversing\]. Thus we have that $$\begin{aligned}
\lim_{n\to \infty} \frac{\#\{0\le i \le n : T^i y \in C_s\}}{n} &= \lim_{n\to \infty} \frac{\#\{0\le i \le n : T^{mi+k-1} x \in E_m\}}{n} \\
&= \lim_{n\to \infty} \frac{\#\{ 0 \le i \le mn: \widetilde{T}^i (x,1) \in (E_m, k)\}}{n}\\
&= m\cdot \lim_{n\to \infty} \frac{\#\{ 0 \le i \le mn: \widetilde{T}^i (x,1) \in (E_m, k)\}}{mn}\\
&= m \cdot \tilde{\mu}(E_m,k) = m\cdot \frac{\mu(E_m)}{m} = \mu(E_m).\end{aligned}$$
By Lemma \[lem:Wirsing\], we have that $\mu(E_m) > \mu(C_s)$ for $m\ge 2$, which proves the theorem.
Acknowledgments
===============
The authors would like to thank Florin Boca for his suggestions.
The research of Joseph Vandehey was supported in part by the NSF grant DMS-1344994 of the RTG in Algebra, Algebraic Geometry, and Number Theory, at the University of Georgia.
[10]{}
V. N. Agafonov, *Normal sequences and finite automata*, Problemy Kibernet. No. **20** (1968), 123–129.
Ver[ó]{}nica Becher, Olivier Carton, and Pablo Ariel Heiber, *Normality and automata*, J. Comput. System Sci. **81** (2015), no. 8, 1592–1613.
Ver[ó]{}nica Becher and Pablo Ariel Heiber, *Normal numbers and finite automata*, Theoret. Comput. Sci. **477** (2013), 109–116.
F. Blanchard, J.-M. Dumont, and A. Thomas, *Generic sequences, transducers and multiplication of normal numbers*, Israel J. Math. **80** (1992), no. 3, 257–287.
Fran[ç]{}ois Blanchard, *Nonliteral transducers and some problems of normality*, J. Théor. Nombres Bordeaux **5** (1993), no. 2, 303–321.
Hendrik Jager and Pierre Liardet, *Distributions arithmétiques des dénominateurs de convergents de fractions continues*, Nederl. Akad. Wetensch. Indag. Math. **50** (1988), no. 2, 181–197.
Teturo Kamae, *Subsequences of normal sequences*, Israel J. Math. **16** (1973), 121–149.
Teturo Kamae and Benjamin Weiss, *Normal numbers and selection rules*, Israel J. Math. **21** (1975), no. 2-3, 101–110, Conference on Ergodic Theory and Topological Dynamics (Kibbutz Lavi, 1974).
Wolfgang Merkle and Jan Reimann, *Selection functions that do not preserve normality*, Theory Comput. Syst. **39** (2006), no. 5, 685–697.
Joseph Vandehey, *Non-trivial matrix actions preserve normality for continued fractions*, 2015. arXiv:1504.05121
Donald D. Wall, *Normal numbers*, ProQuest LLC, Ann Arbor, MI, 1950, Thesis (Ph.D.)–University of California, Berkeley.
Eduard Wirsing, *On the theorem of [G]{}auss-[K]{}usmin-[L]{}évy and a [F]{}robenius-type theorem for function spaces*, Acta Arith. **24** (1973/74), 507–528, Collection of articles dedicated to Carl Ludwig Siegel on the occasion of his seventy-fifth birthday, V.
---
abstract: 'Levinson and Montgomery proved that the Riemann zeta-function $\zeta(s)$ and its derivative have approximately the same number of non-real zeros left of the critical line. R. Spira showed that $\zeta''(1/2+it)=0$ implies $\zeta(1/2+it)=0$. Here we obtain that in small areas located to the left of the critical line and near it the functions $\zeta(s)$ and $\zeta''(s)$ have the same number of zeros. We prove our result for more general zeta-functions from the extended Selberg class $S$. We also consider zero trajectories of a certain family of zeta-functions from $S$.'
address: |
Ramūnas Garunkštis\
Faculty of Mathematics and Informatics, Institute of Mathematics, Vilnius University\
Naugarduko 24, 03225 Vilnius, Lithuania
author:
- Ramūnas Garunkštis
title: 'Zeros of the extended Selberg class zeta-functions and of their derivatives '
---
Introduction
============
Let $s=\sigma+it$. In this paper, $T$ always tends to plus infinity.
Speiser [@Speiser1934] showed that the Riemann hypothesis (RH) is equivalent to the absence of non-real zeros of the derivative of the Riemann zeta-function $\zeta(s)$ left of the critical line $\sigma=1/2$. Later on, Levinson and Montgomery [@Levinson1974] obtained the quantitative version of the Speiser’s result:
[**Theorem (Levinson-Montgomery)**]{}
*Let $N^-(T)$ be the number of zeros of $\zeta(s)$ in $R: 0<t<T, 0<\sigma<1/2$. Let $N^-_1(T)$ be the number of zeros of $\zeta'(s)$ in $R$. Then $N^-_1(T)=N^-(T)+O(\log T).$*
Unless $N^-(T)>T/2$ for all large $T$ there exists a sequence $\{T_j\}$, $T_j\to\infty$ as $j\to\infty$ such that $N^-_1(T_j)=N^-(T_j).$
Here we prove the following theorem.
\[dertest\] There is an absolute constant $T_0>0$ such that, for any $T>T_0$ and any $A>0.17$, there is a radius $r$,
$$\exp(-T^A)\le r\le\exp(-T^{A-0.17}),$$
such that in the region
$$\{s : |s-(1/2+iT)|\le r\ {\text{and}}\ \sigma<1/2\}$$
the functions $\zeta(s)$ and $\zeta'(s)$ have the same number of zeros.
Non-real zeros of $\zeta(s)$ lie symmetrically with respect to the critical line. In this sense, the result of Spira [@Spira69 Corollary 3] that $\zeta(1/2+it)=0$ if $\zeta'(1/2+it)=0$ can be regarded as a border case of the above theorem when $r=0$.
Note that for both $\zeta(s)$ and $\zeta'(s)$ the average gap between zeros is $2\pi/\log T$ around height $T$ (Titchmarsh [@Titchmarsh1986 Section 9.4] and Berndt [@Berndt1970]). This is much larger than the radius $r$ in Theorem \[dertest\].
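To get a feel for the scales involved, here is a rough illustration of ours using mpmath (the choice of zeros number $100$ to $110$ and of $A=1.17$ is arbitrary and not taken from the paper).

```python
import mpmath as mp

# Near height T the mean spacing of zeta zeros is of order 2*pi/log T, while the
# radius r in Theorem 1 is at most exp(-T^{A-0.17}); e.g. for A = 1.17 this upper
# bound is exp(-T), incomparably smaller than the zero spacing.
mp.mp.dps = 20
heights = [mp.im(mp.zetazero(n)) for n in range(100, 111)]   # zeros no. 100..110
T = heights[0]                                               # about 236.5
gaps = [heights[i + 1] - heights[i] for i in range(10)]
print(sum(gaps) / 10, 2 * mp.pi / mp.log(T))                 # both of order 1
print(mp.exp(-T))                                            # about 1e-103
```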
In Theorem \[dertest\], the constant $0.17$ is related to the number of zeros of $\zeta(s)$ in the strip $|t-T|\le1/T$. For details see Section \[proofs\] which contains the proof of Theorem \[dertest\]. Moreover, in Section \[proofTh4\] we consider a more general version of Theorem \[dertest\] devoted to the extended Selberg class $S$. The extended Selberg class contains most of the classical $L$-functions (Kaczorowski [@Kaczorowski06]). This class also includes zeta-functions for which RH is not true, a well-known example being the Davenport-Heilbronn zeta-function, which is defined as a suitable linear combination of two Dirichlet $L$-functions (Titchmarsh [@Titchmarsh1986 Section 10.25], see also Kaczorowski and Kulas [@Kaczorowski07]). In the next section we also investigate zero trajectories of the following family of zeta-functions from $S$:
$$\begin{aligned}
\label{fstau}
f(s,\tau) := (1 - \tau)(1 + \sqrt{5}/5^s) \zeta (s) + \tau L (s, \psi),\end{aligned}$$
where $\tau \in [0, 1]$ and $L (s, \psi)$ is the Dirichlet $L$-function with the Dirichlet character $\psi\bmod 5$, $\psi(2)=-1$.
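The family is easy to evaluate numerically. In the sketch below (ours), the character values $\psi(1),\dots,\psi(4)=1,-1,-1,1$ and the Hurwitz-zeta representation of $L(s,\psi)$ are our own spelling-out of the definition with $\psi(2)=-1$, not taken from the paper.

```python
import mpmath as mp

# Hedged numerical sketch: psi is the real character mod 5 with psi(2) = -1, and
# L(s, psi) = 5^{-s} * sum_a psi(a) * zeta(s, a/5), with zeta(s, a) the Hurwitz zeta.
mp.mp.dps = 30
PSI = {1: 1, 2: -1, 3: -1, 4: 1}

def L_psi(s):
    return mp.power(5, -s) * mp.fsum(PSI[a] * mp.zeta(s, mp.mpf(a) / 5) for a in PSI)

def f(s, tau):
    """f(s, tau) = (1 - tau)(1 + sqrt(5) 5^{-s}) zeta(s) + tau L(s, psi)."""
    return (1 - tau) * (1 + mp.sqrt(5) * mp.power(5, -s)) * mp.zeta(s) + tau * L_psi(s)

s = mp.mpf('0.5') + 30j
print(f(s, 0), f(s, 1))   # the tau = 0 and tau = 1 members of the family

# Consistency check of L_psi against the defining Dirichlet series at s = 2.
direct = mp.fsum((PSI[n % 5] if n % 5 else 0) / mp.mpf(n) ** 2 for n in range(1, 5001))
print(abs(L_psi(2) - direct))   # small (truncation error of the series)
```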
Extended Selberg class {#proofTh4}
======================
We consider Theorem \[dertest\] in the broader context of the extended Selberg class. Note that Levinson and Montgomery’s [@Levinson1974 Theorem 1] approach, which is used here, usually works for zeta-functions having nontrivial zeros distributed symmetrically with respect to the critical line. See Yıldırım [@yildirim96] for Dirichlet $L$-functions; Šleževičienė [@rasa] for the Selberg class; Luo [@Luo2005], Garunkštis [@Garunkstis2008], Minamide [@Minamide2009], [@Minamide2010], [@Minamide2013], Jorgenson and Smajlović [@js] for Selberg zeta-functions and related functions; Garunkštis and Šimėnas [@gs] for the extended Selberg class. In Garunkštis and Tamošiūnas [@garunkstistamosiunas] the Levinson and Montgomery result was generalized to the Lerch zeta-function with equal parameters. Such a function has an almost symmetrical distribution of non-trivial zeros with respect to the line $\sigma=1/2$. Insights which helped to overcome difficulties raised by “almost symmetricity" in [@garunkstistamosiunas] led to Theorem \[dertest\] of this paper, although $\zeta(s)$ has a strictly symmetrical zero-distribution.
We recall the definition of the extended Selberg class (see [@Kaczorowski06; @Kaczorowski99; @Steuding07]). A not identically vanishing Dirichlet series
$$F(s)=\sum_{n=1}^{\infty} \frac{a_n}{n^s},$$
which converges absolutely for $\sigma>1$, belongs to the [ *extended Selberg class*]{} $S$ if
1. (Meromorphic continuation) There exists $k\in\mathbb N$ such that $(s-1)^k F(s)$ is an entire function of finite order.
2. (Functional equation) $ F(s)$ satisfies the functional equation: $$\label{eq:selbergfunctional}
\Phi(s) = \omega \overline{\Phi(1 - \overline{s})},$$ where $\Phi(s) : = F(s) Q^s \prod_{j = 1}^r \Gamma(\lambda_j s +
\mu_j)$, with $Q>0$, $\lambda_j > 0$, $\Re(\mu_j) \geq 0$ and $|\omega| = 1$.
The data $Q$, $\lambda_j$, $\mu_j$ and $\omega$ of the functional equation are not uniquely determined by $F$, but the value $d_{ F} = 2 \sum_{j=1}^r \lambda_j$ is an invariant. It is called the *degree* of $F$.
If the element of $S$ also satisfies the Ramanujan hypothesis ($a_n\ll_\varepsilon n^\varepsilon$ for any $\varepsilon>0$) and has a certain Euler product, then it belongs to the Selberg class introduced by Selberg [@Selberg92].
We collect several properties of $F(s)\in S$. The functional equation gives, for $F(1/2+it)\ne0$,
$$\Re \frac{F'}{F}(1/2+it)=-\Re\sum_{j = 1}^{r} \lambda_j\frac{\Gamma'}{\Gamma}(\lambda_j (1/2+it) +
\mu_j)-\log Q.$$
Then by the formula
$$\frac{\Gamma'}{\Gamma}(s) = \log s
+ O\left(|s|^{-1}\right)\quad( \Re(s) \geq 0, \ |s|\to\infty)$$
we get, for $F(1/2+it)\ne0$ and $d_F>0$, $$\label{logr11/2}
\Re \frac{F'}{F}(1/2+it)=-\frac {d_F}2\log t -\log Q+O\left(\frac1t\right)\qquad(t\to\infty),$$ where the implied constant may depend only on $\lambda_j$, $\mu_j$, $j=1,\dots, r$.
Every $F\in S$ has a zero-free half-plane, say $\sigma>\sigma_F$. By the functional equation, $F(s)$ has no zeros for $\sigma<-\sigma_F$, apart from possible trivial zeros coming from the poles of the $\Gamma$-factors. Let $\rho=\beta+i\gamma$ denote a generic zero of $F(s)$ and
$$N_F(T)=\#\left\{\rho : F(\rho)=0, |\beta|\le\sigma_F, |\gamma|<T\right\}.$$
Then (Kaczorowski and Perelli [@Kaczorowski99 Section 2]) $$\label{VonMangoldt}
N_F(T)=\frac{d_F}{\pi}T\log T+c_FT+O(\log T)$$ with a certain constant $c_F$, for any fixed $F\in S$ with $d_F>0$.
From the Dirichlet series expression for $F$ we see that there are constants $\sigma_1=\sigma_1(F)>1$ and $c=c(\sigma_1, F)>0$ such that $$\label{gec}
|F(\sigma_1+it)|\ge c, \qquad t\in\mathbb R.$$
It is known (Garunkštis and Šimėnas [@gs formula (12)]) that there is $B=B(F,\sigma_1)>0$ such that $$\label{TB}
|F(\sigma+iT)|<T^B, \quad (T>10),$$ for $\sigma\ge-4\sigma_1$. The specific constant $-4\sigma_1$ will be useful in the proof of Theorem \[prop\] below.
In view of above, for given positive constants $\sigma_1$, $c$, $B$, $\varepsilon$, $\delta$, $\bar T$, $\lambda_j$, and complex constants $\mu_j$ ($\Re(\mu_j)>0$, $j=1,\dots, r$), we define a subclass $\bar{S}\subset S$ as the following: it consists of functions satisfying , , , with any $|\omega|=1$; we require that any function from $\bar{S}$ has no more than $$\label{1log2}
\frac{\varepsilon }{\log (2+\delta)} \log T-2$$ zeros in the area $|t-T|\le1/T$, $T>\bar T$. For each function from $S$ the Riemann-von Mangoldt type formula yields the existence of $\varepsilon$, $\delta$, and $\bar T$ such that the zero number bound is fulfilled.
Theorem \[dertest\] will be derived from the following more general statement.
\[prop\] Let $F(s)$ be an element of $\bar S$ with $d_F>0$. Then there is a constant $T_0=T_0(\bar S)>0$ for which the following statement is true.
If $A$ and $s_0=\sigma_0+iT$ satisfy the inequalities
$$\label{ae}
A>\varepsilon, \quad T>T_0, \quad 1/2-\exp(-T^A)<\sigma_0\le1/2,$$
then there is a radius $r=r(F)$,
$$\exp(-T^A)\le r\le\exp(-T^{A-\varepsilon}),$$
such that in the area
$$\label{area}
\{s : |s-s_0|\le r\ {\text{and}}\ \sigma<1/2\}$$
functions $F(s)$ and $F'(s)$ have the same number of zeros.
Note that in Theorem \[prop\] the constant $T_0$ is independent of $A$ and $\sigma_0$. This will be important in the proof of Theorem \[cor\] below.
In [@gs] zeta-functions $f(s,\tau)$ defined by were considered. By Kaczorowski and Kulas [@Kaczorowski07 Theorem 2] we have that for any $\tau$ and any interval $(a,b)\subset (1/2,1)$ the function $f(s,\tau)$ has infinitely many zeros in the half-strip $a<\sigma<b$, $t>0$. Let $\theta>0$ and let
$$\rho : (\tau_0-\theta, \tau_0+\theta) \to\mathbb C$$ be a continuous function such that $f(\rho(\tau), \tau)=0$ for $\tau\in(\tau_0-\theta, \tau_0+\theta)$. We say that $\rho(\tau)$ is a zero trajectory of the function $f(s, \tau)$. Analogously we define a zero trajectory $\tilde{\rho}(\tau)$ of the derivative $f'_s(s, \tau)$. See also the discussion below the formula (6) in [@gs]. In [@gs] several zero trajectories $\rho(\tau)$ of $ f(s,\tau)$ and $\tilde{\rho}(\tau)$ of $ f'_s(s,\tau)$ were computed. The behavior of these zero trajectories correspond well to Theorem \[prop\]. Computations in [@gs] should be considered as heuristic because the accuracy was not controlled explicitly. Next we present a rigorous statement concerning zero trajectories of $ f(s,\tau)$ and $ f'_s(s,\tau)$.
\[cor\] Let $\tau_0 \in [0, 1]$. Let $s=\rho_0$ be a second order zero of $f(s)=f(s,\tau_0)$ with $\Re(\rho_0)=1/2$ and sufficiently large $\Im(\rho_0)$. Then the following two statements are equivalent.
1. There is a zero trajectory $\rho(\tau)$, $\tau\in(\tau_0-\theta, \tau_0+\theta)$, $\theta>0$, of $f(s, \tau)$ such that
1. $\rho(\tau_0)=\rho_0$;
2. $\Re(\rho(\tau))=1/2$ if $\tau<\tau_0$;
3. $\Re(\rho(\tau))<1/2$, if $\tau>\tau_0$.
2. There is a zero trajectory $\tilde{\rho}(\tau)$, $\tau\in(\tau_0-\eta, \tau_0+\eta)$, $\eta>0$, of $f'_s(s, \tau)$ such that
1. $\tilde{\rho}(\tau_0)=\rho_0$;
2. $\Re(\tilde{\rho}(\tau))>1/2$ if $\tau<\tau_0$;
3. $\Re(\tilde{\rho}(\tau))<1/2$, if $\tau>\tau_0$.
From the proof we see that Theorem \[cor\] remains true if all inequalities $\tau<\tau_0$ and $\tau>\tau_0$ are simultaneously replaced by opposite inequalities.
According to computations of [@gs] there are 1452 zero trajectories $\rho(\tau)$ of $f(s,\tau)$ with $0<\Im \rho(0)\le 1500$, 1166 of these trajectories stay on the critical line, while the remaining 286 leave it. The points at which mentioned trajectories leave the critical line are double zeros of $f(s)=f(s,\tau)$ (see also a discussion at the end of Section 3 in Balanzario and Sánchez-Ortiz [@bs2007]). In view of this we expect that the family $f(s,\tau)$, $\tau\in[0,1]$ has infinitely many double zeros lying on the line $\sigma=1/2$. Moreover, we think that the similar statement to Theorem \[cor\] can also be proved in the case where $s=\rho$ is a higher order zero of $f(s)=f(s,\tau)$ with $\Re \rho=1/2$, however there is no evidence such zeros exist.
The next section is devoted to the proofs of Theorems \[dertest\], \[prop\], and \[cor\].
Proofs
======
Proof of Theorem \[prop\] is based on the next lemma. Recall that the subclass $\bar S$ depends on constants $\sigma_1$, $c$, $B$, $\varepsilon$, $\delta$, $\bar T$, $\lambda_j$, $\mu_j$, ($j=1,\dots, r$).
\[crho\] Let $F(s)$ be an element of $\bar S$ with $d_F>0$. Suppose that $s_0=\sigma_0+iT$ satisfies the inequality $\quad 1/2-\exp(-T^A)<\sigma_0\le1/2$, where $T>\bar T$ and $A>\varepsilon$. Then there is a radius $r=r(F)$, $$\label{expta}
\exp(-T^A)\le r\le\exp(-T^{A-\varepsilon}),$$ such that, for $|s-s_0|=r$, $\sigma\le1/2$, $$\label{Nm}
\Re \frac{F'}{F}(s)\le -\frac{d_F}2\log T -\log Q+O\left(\frac1T\right),$$ uniformly for $F(s)\in \bar S$.
We repeat the steps of the proof of Proposition 4 in [@garunkstistamosiunas]. Contrary to Proposition 4, here we do not need an upper bound for $\varepsilon$. This is because the “symmetric" functional equation leads to the convenient formula (\[logr11/2\]), while the “almost symmetric" functional equation of the Lerch zeta-function with equal parameters in [@garunkstistamosiunas] leads to a more restricted version of it (see [@garunkstistamosiunas Lemma 3]).
Let $T>\bar T$ and $r_k=\exp\left(-(2+\delta)^{-k}T^A\right)$, $k=1,\dots,[\frac{\varepsilon }{\log (2+\delta)} \log T]$. By (\[1log2\]) and Dirichlet’s box principle there is $j=j(F)\in\{2,\dots,[\frac{\varepsilon }{\log (2+\delta)} \log T]\}$ such that the region $$\label{ring}
r_{j-1}<|s-s_0|\le r_j$$ has no zeros of $F(s)$. Then the auxiliary function $$\label{FF}
g(s):=\frac{F'}{F}(s)-\sum_{\rho\, :\, |\rho-s_0|\le r_{j-1}}\frac{1}{s-\rho}$$ is analytic in the disc $|s-s_0|\le r_j$ and in this disc we have $$\label{Cauchy}
g(s)=\sum_{n=0}^\infty a_n(s-s_0)^n\quad\text{and}\quad a_n=\frac1{2\pi i}\int\limits_{|s-s_0|= r_j}\frac{g(s)ds}{(s-s_0)^{n+1}}.$$
In view of bounds (\[gec\]) and (\[TB\]), Lemma $\alpha$ from Titchmarsh [@Titchmarsh1986 Section 3.9] gives that, for $|s-s_0|\le r_j$,
$$\frac{F'}{F}(s)=\sum_{\rho\, :\, |\rho-(\sigma_1+iT)|\le 2\sigma_1}\frac{1}{s-\rho}+O(\log T).$$
Recall that $\sigma_1$ was defined before $\eqref{gec}$. Here and elsewhere in this proof the constants in big-$O$ and $\ll$ notations may only depend on the subclass $\bar S$. By the last equality, the zero free region (\[ring\]), and (\[FF\]) we get
$$g(s)=\sum_{\rho\, :\, |\rho-(\sigma_1+iT)|\le 2\sigma_1\ \text{and}\atop |\rho-s_0|> r_j}\frac{1}{s-\rho}+O(\log T).$$
Using this expression in the integral for $a_n$ we obtain that $$\label{newan}
a_n
\ll
r_j^{-n}\log T \quad (n\ge1).$$
Let us choose
r=r_j^{1+\delta/3}.$$ Clearly, the bounds (\[expta\]) are satisfied. By (\[Cauchy\]) and (\[newan\]), for $|s-s_0|=r$, we have $$g(s)=a_0
+O\left( r_j^{\delta/3} \log T\right).$$ Hence, for $|s-s_0|=r$, the expression (\[FF\]) gives $$\label{takingrealparts}
\Re \frac{F'}{F}(s)
=\Re a_0+\sum_{\rho\, :\, |\rho-s_0|\le r_{j-1}}\frac{\sigma-\beta}{|s-\rho|^2}
+O\left(r_j^{\delta/3} \log T\right).$$ For $|\rho-s_0|\le r_{j-1}$, $|s-s_0|=r$, $1/2-(\Re s_0-1/2+ r_{j-1})\le\sigma\le1/2$, and large $T$, we have that $
|\sigma-\beta|\le 4r_{j-1}$ and $
|s-\rho|^2>r_j^{2+2\delta/3}/2.
$ Then by (\[1log2\]) we get
$$\sum_{\rho\, :\, |\rho-s_0|\le r_{j-1}}\frac{\sigma-\beta}{|s-\rho|^2}\ll r_j^{\delta/3}\log T.$$
Consequently, by (\[takingrealparts\]), $$\label{isgasdino}
\Re \frac{F'}{F}(s)
=\Re a_0 +O\left(r_j^{\delta/3} \log T\right).$$
The region (\[ring\]) is zero-free. Thus $F(s)$ does not vanish on the circle $|s-s_0|=r$. By instantiating (\[logr11/2\]) and (\[isgasdino\]) to a single $s$ on the intersection of $|s-s_0|=r$ and $\sigma=1/2$ we obtain that $$\label{rea}
\Re a_0=-\frac{d_F}2 \log T-\log Q+O\left(\frac1T\right)+O\left(r_j^{\delta/3} \log T\right).$$
Hence, for $|s-s_0|=r$ and $1/2-(\Re s_0-1/2+ r_{j-1})\le\sigma\le1/2$, $$\label{final1}
\Re \frac{F'}{F}(s)
=-\frac{d_F}2 \log T -\log Q+O\left(\frac1T\right).$$
If $|s-s_0|=r$ and $\sigma<1/2-(\Re s_0-1/2+ r_{j-1})$, then
$$\sum_{\rho\, :\, |\rho-s_0|\le r_{j-1}}\frac{\sigma-\beta}{|s-\rho|^2}\le 0$$
and, in view of formulas (\[takingrealparts\]) and (\[rea\]), $$\label{final2}
\Re \frac{F'}{F}(s)
\le-\frac{d_F}2 \log T -\log Q+O\left(\frac1T\right).$$ The expressions (\[final1\]) and (\[final2\]), together with the zero free region (\[ring\]), prove Lemma \[crho\].
Let
$$R=\{s : |s-s_0|\le r\ {\text{and}}\ \sigma<1/2\},$$ where $r$ is from Lemma \[crho\]. To prove the theorem, it is enough to consider the difference in the number of zeros of $F(s)$ and $F'(s)$ in the region $R$.
We consider the change of $\arg F'/ F(s)$ along the appropriately indented boundary $R'$ of the region $R$. More precisely, the left side of $R'$ coincides with the circle segment $\{ s : |s-s_0|=r, \sigma\le 1/2\}$. To obtain the right-hand side of the contour of $R'$, we take the right-hand side boundary of $R$ and deform it to bypass the zeros of $F(1/2+it)$ by left semicircles with an arbitrarily small radius. In [@gs proof of Theorem 1.2] it is shown that on the right-hand side of $R'$ the inequality $$\label{logderf}
\Re \frac{F'}{F}(s)<0$$ is true. Then, in view of Lemma \[crho\], we have that the inequality (\[logderf\]) is valid on the whole contour $R'$. Therefore, the change of $\arg F'/ F(s)$ along the contour $R'$ is less than $\pi$. This proves Theorem \[prop\].
The Riemann zeta-function is an element of degree $1$ of the extended Selberg class (Kaczorowski [@Kaczorowski06]). By Trudgian [@Trudgian14 Corollary 1] we see that, for large $T$, the Riemann zeta-function has less than $0.225\log T$ zeros in the strip $|t-T|\le1/T$. Thus in the formula (\[1log2\]) we choose $\varepsilon=0.17$ and $\delta=0.1$. Then Theorem \[dertest\] follows from Theorem \[prop\].
We will use Theorem \[prop\]. Next we show that there is a subclass $\bar S$ such that $f(s,\tau)\in \bar S$ for all $\tau\in[0,1]$. In view of the definition of $f(s,\tau)$ we see that there are constants $c$, $B$, and $\sigma_1$ independent of $\tau$ for which the bounds and are valid. By this and Jensen’s theorem, similarly as in Titchmarsh [@Titchmarsh1986 Theorem 9.2], we get that there are constants $\varepsilon$, $\delta$, and $\bar T$ independent of $\tau$ for which the zero number bound is true. The function $f(s)=f(s,\tau)$ satisfies the functional equation ([@gs formula (3)]) $$\label{eq:compfunctional}
f(s) = 5^{-s + 1/2}2 (2 \pi)^{s - 1}\Gamma(1-s) \sin \left( \frac{\pi
s}{2} \right) f(1 - s)$$ which is independent of $\tau$, thus the constants $\lambda_j$, $\mu_j$ are also independent of $\tau$. This proves the existence of required $\bar S$. Therefore in Theorem \[prop\] with $F(s)=f(s,\tau)$ it is possible to choose $T_0$, which is independent of $\tau$. Further in this proof we assume that $\Im(\rho_0)>T_0+10$.
We consider a zero trajectory $\rho(\tau)$ of $f(s,\tau)$ which satisfies $\rho(\tau_0)=\rho_0$. The two variable function $f(s,z)$ is holomorphic in a neighborhood of any
$$(s,z)\in \mathbb C^2\setminus \{(1,z) : z\in\mathbb C\}.$$ By conditions of the theorem we have that $\rho_0\ne1$, $f(\rho_0,\tau_0)=0$, $$\label{dervatives}
\frac{\partial f(\rho_0, \tau_0)}{\partial s}=0,\quad\text{and}\quad\frac{\partial^2 f(\rho_0, \tau_0)}{\partial s^2}\ne0.$$ By (\[dervatives\]) and by the Weierstrass preparation theorem (Krantz and Parks [@KP2013 Theorem 5.1.3]) there exists a polynomial
$$p(s,\tau)=s^2+a_1(\tau)s+a_0(\tau),$$ where each $a_j(\tau)$ is a holomorphic function in a neighborhood of $\tau=\tau_0$ that vanishes at $\tau=\tau_0$, and there is a function $u(s,\tau)$ holomorphic and nonvanishing in some neighborhood $N$ of $(\rho_0,\tau_0)$ such that $$\label{up}
f(s,\tau)=u(s,\tau)p(s,\tau)$$ holds in $N$. Solving $s^2+a_1(\tau)s+a_0(\tau)=0$ we get $$\label{2sol}
s_{1,2}=s_{1,2}(\tau)=\frac{-a_1(\tau)\pm\sqrt{a_1(\tau)^2-4a_0(\tau)}}{2},$$ where for the square-root we choose the branch defined by $\sqrt{1}=1$. Note that in the neighborhood $N$ the function $f(s,\tau)$ has no other zeros except those described by (\[2sol\]).
Assume that the statement 1) of Theorem \[cor\] is true. Then in some neighborhood $U$ of $\tau=\tau_0$ the first part of trajectory $\rho(\tau)$ consists either of $\{s_1(\tau) : \tau<\tau_0, \tau\in U\}$ or of $\{s_2(\tau) : \tau<\tau_0, \tau\in U\}$. Similarly, the remaining part of trajectory $\rho(\tau)$ consists either of $\{s_1(\tau) : \tau>\tau_0, \tau\in U\}$ or of $\{s_2(\tau) : \tau>\tau_0, \tau\in U\}$.
If $\Re s_{1}(\tau)\ne1/2$ or $\Re s_{2}(\tau)\ne1/2$ for some $\tau$, then by the functional equation we see that $s_2(\tau)=1-\overline{s_1(\tau)}$. This and the condition [*(iii)*]{} give that $$\label{notequal1}
s_1(\tau)\ne s_2(\tau),\quad \text{if} \quad \tau>\tau_0,\ \tau\in U.$$ Thus $a_1(\tau)^2-4a_0(\tau)\ne0$ if $\tau>\tau_0$, $\tau\in U$. By the condition [*(i)*]{} we see that $\rho(\tau_0)=s_1(\tau_0)=s_2(\tau_0)$ is a double zero of $p(s,\tau_0)$, therefore $a_1(\tau_0)^2-4a_0(\tau_0)=0$. Hence $a_1(\tau)^2-4a_0(\tau)$ is a non-constant holomorphic function. Then there is a neighborhood of $\tau=\tau_0$, where
$$\label{notequal2}
s_1(\tau)\ne s_2(\tau),\quad \text{if} \quad \tau<\tau_0.$$
In view of formulas (\[dervatives\]), the implicit function theorem ([@KP2013 Theorem 2.4.1]) yields the existence of $\eta>0$ and of a continuous function
$$\tilde{\rho} : (\tau_0-\eta, \tau_0+\eta)\to\mathbb C,$$ such that $\tilde{\rho}(\tau_0)=\rho(\tau_0)=\rho_0$ and $f'_s(\tilde{\rho}(\tau), \tau)=0$. By this we get condition [*(a)*]{} of the second statement.
We assume that $\eta>0$ is such that the set
$$\{(\tilde{\rho}(\tau),\tau) : \tau\in (\tau_0-\eta, \tau_0]\}$$ is a subset of the neighborhood $N$ (defined by (\[up\])). We have ([@gs Proposition 1.4]) that $f'_s(1/2+it, \tau)=0$ implies $f(1/2+it, \tau)=0$. Then in view of (\[notequal2\]) we obtain that $\Re\tilde{\rho}(\tau)\ne1/2$ if $\tau\in (\tau_0-\eta, \tau_0)$. By condition [*(ii)*]{} and by the above there is a neighborhood of $(\rho_0,\tau_0)$, where $f(s,\tau)\ne0$ for $\sigma\ne1/2$ if $\tau<\tau_0$. Then condition [*(b)*]{} follows from Theorem \[prop\].
Theorem \[prop\] and condition [*(iii)*]{} lead to $\Re(\tilde{\rho}(\tau))<1/2$ if $\tau\in(\tau_0, \tau_0+\eta)$ and $\eta>0$ is sufficiently small. We get condition [*(c)*]{}. By this we proved that the statement 1) implies the statement 2).
Assume the second statement of Theorem \[cor\]. Then by applying Theorem \[prop\] and reasoning similarly as above, we see that from the trajectories defined by (\[2sol\]) we can construct a trajectory $\rho(\tau)$ which satisfies the conditions of the first statement.
[*Acknowledgement.*]{} This research is funded by the European Social Fund according to the activity ‘Improvement of researchers’ qualification by implementing world-class R&D projects’ of Measure No. 09.3.3-LMT-K-712-01-0037.
[99]{}
Balanzario, EP, Sánchez-Ortiz J. Zeros of the Davenport-Heilbronn counterexample. Mathematics of Computation 2007; 76 (260): 2045–2049.
Berndt, BC. The number of zeros for [$\zeta ^{(k)}\,(s)$]{}. Journal of the London Mathematical Society. Second Series 1970; 2: 577–580.
Garunkštis R. Note on zeros of the derivative of the Selberg zeta-function. Archiv der Mathematik 2008; 91: 238–246. Corrigendum. Archiv der Mathematik 2009; 93: 143–143.
Garunkštis R, Tamošiūnas, R. Zeros of the [L]{}erch zeta-function and of its derivative for equal parameters. To appear in Bulletin Mathématique de la Société des Sciences Mathématiques de Roumanie.
Garunkštis R, Šimėnas R. On the Speiser equivalent for the Riemann hypothesis. European Journal of Mathematics 2015; 1: 337–350.
Jorgenson J, Smajlović L. On the distribution of zeros of the derivative of Selberg’s zeta function associated to finite volume Riemann surfaces. Nagoya Mathematical Journal 2017; 228: 21–71.
Kaczorowski J. Axiomatic theory of [$L$]{}-functions: the [S]{}elberg class. In: Analytic number theory, volume 1891 of Lecture Notes in Math. Springer, Berlin, 2006, pp. 133–209.
Kaczorowski J, Kulas M. On the non-trivial zeros off the critical line for [$L$]{}-functions from the extended [S]{}elberg class. Monatshefte für Mathematik 2007; 150 (3): 217–232.
Kaczorowski J, Perelli A. The [S]{}elberg class: a survey. In: Number theory in progress, [V]{}ol. 2 ([Z]{}akopane-[K]{}ościelisko, 1997). De Gruyter, Berlin, 1999, pp. 953–992.
Krantz SG, Parks HR. The implicit function theorem. Modern Birkhäuser Classics. Birkhäuser/Springer, New York, 2013.
Levinson N, Montgomery H.L. Zeros of the derivatives of the [R]{}iemann zeta-function. Acta Mathematica 1974; 133: 49–65.
Luo W. On zeros of the derivative of the Selberg zeta function. American Journal of Mathematics 2005; 127: 1141–1151.
Minamide M. A note on zero-free regions for the derivative of Selberg zeta functions. In: Spectral analysis in geometry and number theory, vol. 484 of Contemp. Math. Amer. Math. Soc., Providence, RI, 2009, pp. 117–125.
Minamide M. The zero-free region of the derivative of Selberg zeta functions. Monatshefte für Mathematik 2010; 160 (2): 187–193.
Minamide M. On zeros of the derivative of the modified Selberg zeta function for the modular group. The Journal of the Indian Mathematical Society. New Series 2013; 80 (3-4): 275–312.
Selberg, A. Old and new conjectures and results about a class of [D]{}irichlet series. In: Proceedings of the [A]{}malfi [C]{}onference on [A]{}nalytic [N]{}umber [T]{}heory ([M]{}aiori, 1989). Univ. Salerno, Salerno, 1992, pp. 367–385.
Speiser A. Geometrisches zur Riemannschen Zetafunktion. Mathematische Annalen 1934; 110 (1): 514–521.
Spira R. On the [R]{}iemann zeta function. Journal of the London Mathematical Society. Second Series 1976; 44: 325–328.
Steuding J. Value-distribution of [$L$]{}-functions. Volume 1877 of Lecture Notes in Mathematics. Springer, Berlin, 2007.
Šleževičienė R. Speiser’s correspondence between the zeros of a function and its derivative in Selberg’s class of Dirichlet series. Fizikos ir Matematikos Fakulteto Mokslinio Seminaro Darbai. Proceedings of Scientific Seminar of the Faculty of Physics and Mathematics 2003; 6: 142-153.
Titchmarsh EC. The theory of the Riemann zeta-function. 2nd ed., rev. by D. R. Heath-Brown. Oxford Science Publications. Oxford: Clarendon Press, 1986.
Trudgian TS. An improved upper bound for the argument of the Riemann zeta-function on the critical line II. Journal of Number Theory 2014; 134: 280–292.
Yildirim CY. Zeros of derivatives of Dirichlet $L$-functions. Turkish Journal of Mathematics 1996; 20 (4): 521–534.
---
author:
- Anshul Verma
- Orazio Angelini
- Tiziana Di Matteo
bibliography:
- 'WDI\_paper.bib'
title: A new set of cluster driven composite development indicators
---
Introduction
============
Economic indicators are vital in understanding and tracking the macroeconomic state and development of a country [@Stock1989], informing government policy makers about the health of the economy and also allowing citizens to evaluate and assess any improvement in their lives [@Mugge2016]. However, with the ever expanding number of different indicators and digital records of this data, it becomes difficult to interpret the high dimensional data as a whole, spot overall trends and see how different indicators are related to each other. Often, qualitative or obscure factors are used to explain development, such as the need to have a good education and healthy citizens.
Additionally, there is no agreement on which factors affect development [@Ricardo1891; @Leontief1956; @Bowen1986; @Aghion1990; @Heckscher1991; @Kremer1993; @Krueger2001; @Egert2009; @Aghion2010], and so arbitrarily chosen indicators are often used, discarding information by excluding other indicators. In some cases, a more educated choice is made by taking only indicators of relevance, e.g. those relating to infrastructure. Even in these cases, different classes of indicators are treated separately from each other. Links between different classes of indicators, e.g. poverty and infrastructure [@United1997], are disregarded.
This is especially relevant when one combines them in some way into composite indicators [@Salzman2003], which aim to describe several development indicators with just one composite version. These can range from more general indicators such as the Human Development Index (HDI) [@Sagar1998], which is used to measure the progress in life expectancy, education and Gross National Income per capita (GNI) [@Todaro2015], to more specific indicators such as the Global Connectivity Index (GCI) [@GCI]. Composite indicators are also often used to summarise the state of a country relating to the specific objective of combining the chosen set of indicators, e.g. GCI is used to track the extent of digital infrastructure of a country, whilst HDI is used to track overall human development. In the literature they have been used, for example, to relate cancer rates to development [@Bray2012] or to produce a global rank of a country’s competitiveness. Whilst aggregating indicators into composite ones seems to be a good solution to the problem of summarising information from many different indicators, we propose that the high number of possible ways to combine them calls for the development of guiding principles on how this should be achieved. Moreover, some indicators are calculated differently for different regions [@Huggins2003], making comparisons based on them much more difficult. By knowing how indicators are inter-related to each other, we will be able to understand in a data-driven, objective way which indicators are most important in characterising a country and how they should be combined to produce economically meaningful composite indicators.
Dimensionality reduction can help here by providing a smaller but faithful version of the relationship between the vast number of available development indicators [@Maaten2009]. This paper proposes to study these relationships in an unbiased way, using these relationships as a basis to propose a new set of composite development indicators. Differently to previous work, we make no subjective restriction on the type of indicators we study, drawing from a large range of scope of indicators to study the relationship between the indicators emerging from the data itself. We test whether we can indeed separate the indicators into different pre defined groups based on the different factors proposed that affect development using PCA (Principal Component Analysis) and Random Matrix Theory (RMT) [@Bun2017]. We find that the broad topic category they are assigned, e.g. health vs economic vs infrastructure, is not necessarily the best way to aggregate them. We also employ hierarchical clustering algorithms for the first time to analyse the structure of indicators rather than countries, finding that the indicator clusters are a mixture of topics but still retain an economic interpretation. We use these results to overcome traditional problems faced in making composite indicators such as how/what indicators to aggregate to derive a new set of objective, data driven, interpretable and country comparable composite indicators. Leveraging on these composite development indicators, we observe useful observations for policy makers, such as the ability of mobile phone adoption to be able to distinguish between underdeveloped countries. Next, we provide a new application of network filtering to find subsets of highly influential indicators based on PageRank [@Page1999]. Finally, we compare the performance of our composite indicators to a random benchmark, a subset of influential indicators and PCA, concluding that our proposed composite indicators outperform the others.
In the context of this problem, dimensionality reduction has been applied in [@Cristelli2018], where Principal Component Analysis (PCA) was used on a set of restricted indicators relating to infrastructure to examine the direction of the causal relationship between infrastructure capability and economic growth. The authors of [@Lai2003] compare the pre-set weights of the HDI to those derived from a PCA. Hierarchical clustering has been applied to analyse clusters of countries such as in [@Nardo2005; @Castellacci2011]. However, in all these cases either a restricted set of indicators is used or the focus of the work is on countries, rather than on the analysis and development of new indicators themselves.
Network filtering techniques [@Mantegna1999; @Tumminello2005] and their related hierarchical clustering algorithms [@Anderberg2014; @Song2012; @Musmeci2015] have also proved to be useful when analysing data, with wide ranging applications from finance to biology [@Musmeci2015; @Song2012; @Sneath1957]. Network filtering techniques view a similarity matrix as a network, each node being a feature and each link having a weight with the respective non-zero correlation. Within this framework, removing noisy entries in the correlation matrix can be translated into finding a sparse version of the similarity network. These techniques aim to extract the backbone of the structure between generic features by enforcing sparsity in a specific way to the particular technique. The induced sparsity of the network helps make hidden structures more visible. One successful example of this is the Minimum Spanning Tree (MST) [@Graham1985; @Mantegna1999], which imposes that the correlation matrix is a tree that maximises the total weight of links, and has been applied in a diverse number of fields from electricity networks to taxonomy [@Sneath1957; @Graham1985]. A generalisation which includes the possibility of loops is the Planar Maximally Filtered Graph (PMFG) [@Aste2005; @Tumminello2005], which instead imposes a weaker constraint that the network is planar i.e. it can be embedded on a sphere without any links crossing. Hierarchical algorithms are also highly related and aim to group features with similar properties into clusters that organised in a hierarchical fashion in the form of a dendrogram. An example of this is the Directed Bubble Hierarchical Tree (DBHT) algorithm that is based on the PMFG, having been used for finance [@Musmeci2015; @Musmeci2015a] and in gene expression data [@Song2012]. In particular, the DBHT algorithm has also been shown to outperform other hierarchical clustering algorithms. This paper is organised as follows. The second section is a description of the dataset and how we amalgamate topics together. In the third section we apply a PCA analysis to our dataset, which we use to show the difference between the structure of the empirical correlation matrix and the preassigned topics. We then find the clustering using the DBHT algorithm in \[DBHT\_interpret\]. Developing the clustering results further to form a novel set of composite indicators in \[CompInds\_DBHT\], we observe some interesting features of our composite indicators in \[CountryDevelopment\]. For \[Influential\_PMFG\] we apply the PMFG to the empirical correlation matrix in order to derive some influential indicators via PageRank. In \[PerformanceComparison\] we compare the performance of our composite indicators with a random benchmark and top indicators taken from the PageRank. Finally, we discuss the dynamic stability of our results in \[DynamicalAnalysis\] and draw some conclusions in the final section.
WDI Dataset {#WDI_Data}
===========
The World Development Indicator (WDI) dataset is a vast collection of various yearly development indicators for $C=218$ countries (where $C$ is the number of countries), taken from official, internationally recognised agencies [@World2018]. Note that we have applied the imputation scheme and the distribution regularisation procedure detailed in \[Imputation\] and in \[Distribution\_regularisation\] respectively. We shall use a total of $T=19$ years, where $t=1,...,19$ represents the years from $1998$ to $2016$. The number of indicators contained within the dataset is $N=1574$, and the objectives for collecting these indicators range from well known economic data such as Gross Domestic Product (GDP), education data such as the literacy rate and population health such as infant mortality rate. Hence both the large number of indicators and the diverse range of granularity and objectives make this dataset a perfect candidate in order to study the relationships between different classes of indicators and to infer and derive conclusions that hold globally. Note that we also remove highly correlated indicators through the process detailed in \[CleaningProcedure\] since some indicators can be trivially related, e.g. the percentage of population that are males and the same but for females, which would bias the results. This reduces the number of indicators $N$ to $1448$.
Amalgamating Topics {#AmalgamatingTopics}
-------------------
The indicators are also divided into $94$ different topics that include different classes of economic indicators such as Economic Policy and Debts: National Accounts: Growth Rates, which measures growth rates of agriculture, industry, manufacturing and services sectors, and Education: Participation, which measures participation rates across gender, age groups in various levels of education. We show the distribution of all such topics using this classification in \[fig:pie\_chart\]. We can see that most of the groups of indicators make up a very small fraction of the indicators, which would mean that any averaged statistic across within each group would be subject to significant noise. To counteract this, we aggregate the topics for each classification based on their root objective e.g. Education: Participation and Education: Efficiency are both classes of indicators relating to education and hence we combine these two groups into one group Education, similarly Health: Nutrition and Health: Disease Prevention are combined into Health. Applying this procedure to the entire dataset produces $g=1,...,G=12$ different topics for the indicators which we indicate in \[fig:pie\_chart\_short\]. We can see that each topic has a larger number of indicators, which will increase the statistical reliability of any conclusions drawn from the data.
*(Figure: pie charts of the distribution of indicators across the original $94$ topics and across the $12$ amalgamated topics.)*
Data Structure {#DataStructure}
--------------
We here start exploring the correlation structure across years as a measure to quantify the relationship between indicators. We aggregate values across years in order to average out correlations that might hold only for specific periods or groups of countries. This also helps to reduce the noise in the correlation matrix, since one-year matrices would be too shallow to reliably estimate correlations. With this in mind, we organise the data matrix $\bm{X}$ as follows. It consists of $C$ matrices of size $T\times N$ stacked vertically, with each cross-sectional block representing the data for one specific country $c$ and each column reporting the data for indicator $i=1,...,1448$. For each cross section, the entries in the first row and column $i$ are the values of indicator $i$ for $t=1$, and those in the last row are the same but for $t=T$. In order to discard spurious correlations in the data, we remove trends by taking the first difference, that is, for each block of data we calculate $$\Delta \bm{X}(\tilde{t},c,i)=\bm{X}(t+1,c,i)-\bm{X}(t,c,i) \ ,$$ where $\bm{X}(t,c,i)$ is the value of indicator $i$ for country $c$ at year $t$. $\Delta \bm{X}(\tilde{t},c,i)$ represents the first difference between $\bm{X}(t+1,c,i)$ and $\bm{X}(t,c,i)$, with $\tilde{t}$ running from $1,...,T-1=18$. Every $\Delta \bm{X}(\cdot,c,\cdot)$ has $T-1$ rows and $N$ columns. Stacking each of these vertically forms the matrix $\Delta \bm{X}$ with $Y=C(T-1)=3924$ rows and $N$ columns, which now contains all the differenced values for all countries and all time steps.
To encode the relationship between the indicators we use the empirical Pearson correlation matrix $\bm{E}$, which can be calculated from the zero-mean, standardised $\Delta \bm{X}$ as $$\bm{E}=\frac{1}{C(T-1)}\left(\Delta \bm{X}\right)^{\dagger} \Delta \bm{X} \ ,$$ where $\dagger$ represents the transpose. We therefore aim to understand the multivariate dependence between development indicators by analysing the main driving factors of the structure of $\mathbf{E}$. However, using the raw correlation matrix would be unwise due to its large size ($1448$ by $1448$) and the noise present in the system, which potentially leaves a certain amount of redundant information in $\mathbf{E}$. As mentioned earlier, we can distill the information given in $\mathbf{E}$ into a smaller version using dimensionality reduction, which should also have the added benefit of making the structure of $\mathbf{E}$ easier to interpret.
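As a minimal illustration of this construction, the following Python sketch builds the differenced, standardised data matrix and the correlation matrix $\bm{E}$. The data cube `X` is a random placeholder standing in for the cleaned WDI values, so the shapes follow the text but the numbers do not.

```python
import numpy as np

# Hypothetical data cube: C countries x T years x N indicators (placeholder values).
C, T, N = 218, 19, 1448
rng = np.random.default_rng(0)
X = rng.normal(size=(C, T, N))

# First differences along the time axis, one block per country.
dX = np.diff(X, axis=1)                       # shape (C, T-1, N)

# Stack the country blocks vertically: Y = C*(T-1) rows.
dX = dX.reshape(C * (T - 1), N)

# Standardise each indicator (zero mean, unit variance) before correlating.
dX = (dX - dX.mean(axis=0)) / dX.std(axis=0)

# Empirical Pearson correlation matrix E = (1/Y) * dX^T dX.
E = dX.T @ dX / dX.shape[0]
```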
PCA analysis {#PCA}
============
Within the class of dimensionality reduction methods, PCA is a popular and easy-to-apply technique for correlation matrices [@Jolliffe2002]. It has been successfully applied in many diverse areas, ranging from finance [@Plerou2002] to molecular simulation [@Stein2006]. PCA accomplishes dimensionality reduction by taking a subset of the orthogonal eigenbasis of the correlation matrix $\mathbf{E}$ [@Jolliffe2002]. The first principal component corresponds to the eigenvector with the highest eigenvalue, providing the direction along which the data is maximally spread out, i.e. the direction explaining the most variance of the system. Each subsequent principal component has a lower eigenvalue and thus explains a lower fraction of the total variation of the system. Therefore, we can reduce the dimensionality of the correlation matrix by taking a subset of principal components, hoping to encode most of the total variance of the data. This subset can be chosen with the help of Random Matrix Theory (RMT) [@Bun2017], which studies the properties of matrices drawn from probability distributions. In our specific context of forming composite indicators in a data-driven way, one could then use the chosen subset of components as a basis for composite indicators.
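The following sketch shows the corresponding computation on the correlation matrix `E` from the earlier sketch: eigendecomposition, sorting by eigenvalue, and the cumulative fraction of variance explained. The truncation level `n_keep` is illustrative only.

```python
import numpy as np

# Eigendecomposition of the symmetric correlation matrix E (from the earlier sketch).
eigvals, eigvecs = np.linalg.eigh(E)

# Sort principal components by decreasing eigenvalue.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Cumulative fraction of total variance explained by the leading components.
explained = np.cumsum(eigvals) / eigvals.sum()
n_keep = 216          # e.g. the number of components above the Marchenko-Pastur edge
print(f"Top {n_keep} components explain {explained[n_keep - 1]:.1%} of the variance")
```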
In this section, we apply PCA to the correlation matrix $\bm{E}$ on the dataset of \[WDI\_Data\], finding the distribution of its eigenvalues, using results from RMT to help interpret it. We then analyse the contribution of each topic defined in \[AmalgamatingTopics\] to the eigenvectors corresponding to the principal components.
Eigenvalue Spectrum {#EigenvalueSpectrum}
-------------------
As is customary in Random Matrix Theory, we fitted the Marčenko-Pastur (MP) distribution [@Marchenko1967] to the eigenvalue distribution of $\bm{E}$ to discern which part of the eigenvalue spectrum is unlikely to be a product of finite-sampling noise. We found that the MP distribution does not fit our eigenvalue distribution well, which suggests that there is structure in the whole distribution, as opposed to just its right tail. We then shuffled the data to destroy all correlations between indicators, and obtained an eigenvalue distribution that fits the MP distribution nearly perfectly. These findings suggest that choosing only a subset of the principal components obtained by PCA is likely to discard relevant information. In other words, this is a clue that PCA might be unsuitable for reducing dimensionality on this dataset. For a more detailed discussion of the procedures in this subsection, we refer to \[EigenvalueSpectrumSupplementary\].
Eigenvector Interpretation {#EigenvectorInterpretation}
--------------------------
We investigate the interpretation of the eigenvectors by calculating the contribution of each of the $G$ topics from \[AmalgamatingTopics\] that divide the indicators. This reveals the structure of the principal components with respect to topics, so we can see whether they are dominated by one specific topic. The analysis is particularly relevant for the earlier principal components, which are the main contributors to the variance of the system, and it will bring to the surface any topics that contribute more significantly to development.
Specifically, we project the eigenvectors $\bm{v}_{i}$ of $\bm{E}$ onto the $G$ topics which divide the indicators that we defined in \[WDI\_Data\] using the projection matrix $\bm{P}$ with entries $$P_{ig} = \begin{cases}
1/N_{g} & \text{if $i$ is in topic $g$} \\
0 & \text{else}\ ,
\end{cases}$$ where $N_{g}$ is the number of indicators that are part of topic $g$. From this, for every eigenvector $i$ we can define $\bm{\rho}_{i}$, a $G$-dimensional vector with entries $\rho_{g,i}$, computed as $$\bm{\rho}_{i}=\gamma_{i}\mathbf{P}\mathbf{v}_{i} \ , \label{RhoG}$$ where $\gamma_{i}$ is a normalisation constant chosen so that $\sum_{g=1}^{G}\rho_{g,i}=1$. Each entry of $\bm{\rho}_{i}$ gives the contribution of the $g$-th topic to the $i$-th eigenvector. As an example, we plot $\bm{\rho}_{i}$ for the top $6$ principal components in \[fig:RhoG\_6PCs\]. In \[tab:RhoG\_PVal\], we report the one-sided p-values of $\rho_{g}$ for testing against the null hypothesis that the contribution from the topic to the principal component is random, using the procedure detailed in \[RhoG\_StatTest\]. The bolded values are those below the $5\%$ significance level, where we reject the null hypothesis. By looking at \[fig:RhoG\_6PCs\] and \[tab:RhoG\_PVal\], we see that for the first principal component, although other topics contribute to the largest eigenvalue, the statistically significant contributions come from the Health and Infrastructure topics. Similarly, for the second principal component the Environment, Health and Gender topics make a statistically significant contribution, and for the third principal component only the economics-related indicators make a significant contribution.
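A sketch of this projection, reusing the eigenvectors from the PCA sketch above. The topic assignment `topic_of` is a random placeholder for the real topic labels, and the matrix `P` is laid out so that the product `P.T @ v` yields the $G$ topic contributions (the text's indexing of $\mathbf{P}$ is assumed to correspond to this layout).

```python
import numpy as np

# Hypothetical topic assignment: topic_of[i] in {0,...,G-1} for indicator i.
G = 12
topic_of = rng.integers(0, G, size=N)

# Projection matrix with P[i, g] = 1/N_g if indicator i belongs to topic g, else 0.
P = np.zeros((N, G))
for g in range(G):
    members = np.where(topic_of == g)[0]
    P[members, g] = 1.0 / len(members)

def topic_contributions(v):
    """Contribution rho_g of each topic to eigenvector v, normalised to sum to one."""
    rho = P.T @ v
    return rho / rho.sum()

rho_first = topic_contributions(eigvecs[:, 0])   # contributions to the first PC
```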
We have also plotted the number of times two topics are simultaneously significant across all principal components in \[fig:Graymap\_Pval\_Double\], with a darker grey indicating a higher count. We use a $5\%$ p-value with a Bonferroni correction of $N$, giving an actual p-value of $3.45\times 10^{-3}$. The black diagonal terms give the number of times a single topic is significant across all principal components using the same p-value. If the indicators could be neatly divided into topics then we should see no interaction between them, so that \[fig:Graymap\_Pval\_Double\] would look almost like a diagonal matrix. In fact, we see that some of the off-diagonal elements are quite large relative to the diagonal elements, e.g. Education vs Economic and Social vs Health, which indicates that the topics are indeed interacting. We can therefore conclude from this analysis that there is no single topic that clearly contributes more than the others, and that statistically significant contributions can come from different topics combining in different ways. This has implications for composite indicators aiming to capture some particular aspect of development, such as the GCI and HDI, since it suggests that the inclusion of certain indicators focusing on other aspects of development might improve the quality of the composite indicator. Conversely, some indicators may not be representative of the aim of the composite one, which means including them would add no information with respect to that aim whilst simultaneously increasing complexity. Overall, we can conclude that whilst the principal components indicate that the correlations between indicators contain interesting structure, it is difficult to use PCA to form new composite indicators. This means we must turn to other methods to achieve both of these goals.
![Bar chart of the $\rho_{g}$ defined in \[RhoG\] for the top $6$ principal components of $\bm{E}$ using the $12$ topics of the indicators in \[AmalgamatingTopics\]. The legend corresponds to these $12$ topics.[]{data-label="fig:RhoG_6PCs"}](RhoG_6PCs.pdf){width="70.00000%"}
![Grey scale map with the off-diagonal entries giving the total number of times that the topics labelled by the corresponding row and column are simultaneously statistically significant across all principal components. A p-value of $0.05$ with a Bonferroni correction is used so that the new p-value is $0.05/N$ or $3.45\times 10^{-3}$. The diagonal entries are the number of times a single topic is significant at the same p-value across all principal components.[]{data-label="fig:Graymap_Pval_Double"}](Graymap_Pval_Double.pdf){width="70.00000%"}
Interpretation of the clustering from the DBHT {#DBHT_interpret}
==============================================
This section analyses the relationships between indicators in a data-driven way, making as few assumptions about the structure of the data as possible. In this way, we can develop an interpretation and partition of the indicators which is consistent with the data. In the previous section, we showed that this is not achievable with PCA, nor by dividing indicators based on their a priori topics given in \[AmalgamatingTopics\].
Hierarchical clustering algorithms [@Bishop2006], which group together data with similar properties in a hierarchical fashion, and their associated network filtering techniques will help in this respect, because they allow us to consider information from all indicators. Once we apply the clustering algorithm to the correlation matrix, we have a natural way of accomplishing dimensionality reduction by using one variable to describe each cluster of nodes, with the collection of clusters forming the reduced correlation matrix. The clustering algorithms associated with network filtering techniques leverage the topological properties of the filtered network.
We shall use the PMFG network filtering technique because it retains a greater amount of information about the system than the MST: it preserves a larger number of links of the original network and in fact contains the MST as a subgraph [@Tumminello2005]. This is important for us since the MST is a tree and thus contains no loops, whereas the PMFG contains $3$- and $4$-cliques, and we would like to avoid discarding relevant information about the relationship between indicators. For the PMFG, the associated clustering algorithm is the DBHT algorithm, which takes advantage of the $3$-clique structure of the PMFG. The main advantage of the DBHT algorithm is that it does not need the number of clusters as a prior input, making it preferable over other clustering algorithms since we can make a-posteriori comparisons with fewer assumptions [@Song2012; @Musmeci2015]. This is important because we want to uncover the structure of correlations between indicators while making as few assumptions as possible on this same structure. In this section, we investigate whether the indicators can be divided into their topics by applying the DBHT algorithm to $\bm{E}$ in \[DBHT\_results\]. Then, by analysing clusters individually, we look for their dominating topics and their possible interpretation in \[DBHT\_similarity\].
DBHT results and interpretation {#DBHT_results}
-------------------------------
We apply DBHT to $\bm{E}$. It identifies a total of $K=102$ clusters, which we label $k=1,...,K$, significantly more than the $G$ preassigned topics, with an average cluster size of $14.2$. In \[fig:Cluster\_Topic\_Dist\] we summarise the clustering labels obtained from DBHT and their topic composition, with the height of each bar representing the number of indicators $N_{k}$ in each cluster. Each bar is further divided by colours which represent how many indicators belong to each particular topic. \[fig:Cluster\_Topic\_Dist\] shows that cluster sizes are highly heterogeneous - the biggest cluster has $111$ indicators versus the smallest with $4$.
We can see that some clusters are dominated by certain topics - for example cluster $41$ is dominated by economic indicators, in particular indicators related to countries’ current account balance and external balance of trade. At the same time, there are also clusters which are instead a mixture of topics but still have an interpretation based on the indicators contained within them, such as cluster $72$, which contains indicators from disparate topics. A closer inspection reveals that this cluster is made of indicators about access to electricity, railway size, primary and secondary education expenditure, health-related indicators such as HIV incidence and hepatitis immunization, access to sanitation facilities, prevalence of underweight children, the number of women who justify a husband’s beatings, and the Gini index. All these measurements can be used to characterize underdeveloped countries [@Winkler2011; @Garcia2006; @Smith2000; @Ravallion1997; @Bose2007; @Gupta2008; @Montgomery2007]. Another interesting fact that we can observe from the data is that cluster $5$ contains very important economic indicators such as GDP per capita, value added contributions of agriculture, industry, manufacturing, services and trade, and also imports/exports of goods and services as a fraction of GDP. This same cluster also contains indicators directly related to measuring the innovation output of a country, such as patent, trademark and industrial design applications, suggesting that innovation is an important factor in economic development. We can interpret this by realising that innovation-led growth increases productivity through the accumulation of knowledge obtained via education, new products or better processes [@Romer1990; @Aghion2010].
Many other interesting clusters are found, such as number 11, which seems to relate to underdevelopment, with its contents relating to life expectancy, foreign aid, drinking water availability, fertility rate, and the percentage of women married before the age of 18. Cluster 21 puts together CO2 emissions, alternative and nuclear energy, combustible renewables and waste, hydroelectric sources prevalence and power distribution losses. Cluster 101 describes the distress status of a country’s debt, including indicators about how much of it has been rescheduled or forgiven.
![The cluster label $k$ versus the number of indicators in each cluster $N_{k}$. Each bar is divided into the number of indicators in cluster $k$ which belong to each topic, with each colour corresponding to each topic according to the key on the right.[]{data-label="fig:Cluster_Topic_Dist"}](Clusters_Topic_Dist_PMFG.pdf){width="70.00000%"}
Similarity of the DBHT clustering with the topics {#DBHT_similarity}
-------------------------------------------------
Once we have established that each of the clusters has an economic meaning, we quantify how similar the clustering output by the DBHT in \[fig:Cluster\_Topic\_Dist\] is to the clustering based on topics. In this way we can see, in a quantitative manner, how close the two divisions of the indicators are overall. We do so using the Adjusted Rand Index (ARI), which is $1$ if there is perfect agreement, $-1$ if there is anti-agreement and $0$ if there is no agreement [@Rand1971]; it has been successfully used in [@Musmeci2015]. Computing the ARI to compare the output of the DBHT algorithm and the topic distribution, we find a value of $0.0456$, i.e. quite close to $0$, which corroborates our previous conclusion that overall the clustering of the data is not in general linked to that based on topics.
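A minimal sketch of this comparison using scikit-learn; the two label arrays are placeholders for the real DBHT cluster labels and topic labels of the $1448$ indicators.

```python
from sklearn.metrics import adjusted_rand_score

# Placeholder label arrays: for each indicator, its DBHT cluster and its topic.
dbht_labels = [1, 1, 2, 3, 3, 3]
topic_labels = [1, 2, 2, 3, 1, 3]

ari = adjusted_rand_score(topic_labels, dbht_labels)
print(f"ARI between DBHT clustering and topic partition: {ari:.4f}")
```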
The analysis can also be made at a local level by seeing whether any topics have a significant presence in each cluster. This also allows us to see locally whether more than one topic might be present in each cluster, which is important since, even though there is little similarity at the global level, individual clusters may still align with specific topics. Practically, this is achieved by using the procedure proposed in [@Tumminello2011]. Specifically, we test statistically, using a one-sided test, the null hypothesis that the number $m$ of elements common to a cluster $k$ from the DBHT and the $g$-th topic is compatible with random assignment. Under the null hypothesis, this number follows a hypergeometric distribution. If the null hypothesis is rejected, we say that the $g$-th topic is *overexpressed* in cluster $k$. We apply this procedure to each of the DBHT clusters and topics using a p-value of $8.17\times 10^{-6}$ (which is $0.01$ with a Bonferroni correction [@Feller2008] over the $KG$ cluster-topic pairs), recording the number of overexpressed topics in each cluster. The results of this calculation are plotted in \[fig:OverExpression\]. We see that whilst a majority of clusters have one or two topics overexpressed, there are a total of $49$ clusters which have no overexpressed topics. These particular clusters of indicators still have an economic meaning. For example, cluster $32$ contains indicators relating to tertiary education, such as the pupil-to-teacher ratio in tertiary education and completed education at a tertiary level, which belong to the education topic. However, it also contains indicators such as scientific and technical journal articles, which is classed as relating to infrastructure. These indicators may be linked, e.g. because scientific articles are usually published by authors with at least a tertiary level education. This confirms our conclusion that, both at a system-wide and at a local level, the clustering of the data does not reflect the information given by the topics, suggesting that indicators do not necessarily correlate with other indicators of the same type.
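A sketch of this over-expression test using the hypergeometric survival function from SciPy; the counts in the example call are illustrative only.

```python
from scipy.stats import hypergeom

def overexpression_pvalue(N, N_g, N_k, m):
    """One-sided p-value that a cluster of size N_k, drawn from N indicators,
    contains at least m members of a topic of size N_g under random assignment."""
    # sf(m - 1) gives P[X >= m] for X ~ Hypergeom(N, N_g, N_k).
    return hypergeom.sf(m - 1, N, N_g, N_k)

# Illustrative numbers: a cluster of 20 indicators containing 8 of the 150
# indicators belonging to some topic, out of 1448 indicators in total.
p = overexpression_pvalue(1448, 150, 20, 8)
threshold = 0.01 / (102 * 12)      # Bonferroni-corrected level used in the text
print(p, p < threshold)
```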
![The number of overexpressed topics per cluster, where a topic is overexpressed in cluster $k$ when the probability, under random assignment, of observing at least as many indicators of that topic in the cluster is below the p-value of $8.17\times 10^{-6}$ (for more details see \[DBHT\_similarity\]).[]{data-label="fig:OverExpression"}](OverExpression.pdf){width="75.00000%"}
Deriving new composite development indicators from DBHT {#CompInds_DBHT}
=======================================================
In the previous section we showed that the distribution of topics amongst the indicators is not an accurate description of the data and may miss key information about the relationship between different classes of indicators. This means that composite indicators based on this premise, such as the HDI or the WEF-GCI infrastructure pillar, may not be the best way of combining indicators. We want to propose a new set of data-driven composite development indicators which encapsulate this new information, based on the results given in \[DBHT\_results\]. In doing so, we would overcome traditional problems faced when forming composite development indicators, mainly how and which indicators to aggregate. This section is dedicated to describing a way of using the results in section \[DBHT\_interpret\] to derive a novel set of cluster-driven composite development indicators.
To define each composite indicator we shall use the set of clusters from DBHT given in \[DBHT\_results\]. It provides a natural way to select the indicators to combine, since each cluster contains indicators which share similar properties and also has an economic interpretation, as highlighted in section \[DBHT\_interpret\]. Hence, aggregating information for indicators which are members of the same cluster enables us to simply and efficiently summarise the economic information contained within them. Condensing the complementary insight offered by indicators in the same cluster also overcomes the need to make ’educated’ assumptions about which indicators are to be combined, as other alternative composite indicators often do [@Sagar1998]. In this way, we can more clearly see the overall behaviour of each set of indicators in cluster $k$ by using the corresponding composite indicator value as a proxy. DBHT is also significantly advantageous in this respect since it requires no prior input of the number of clusters (and thus the number of composite indicators) needed to describe the properties of the data [@Song2012; @Musmeci2015].
Method used to calculate the composite indicators
-------------------------------------------------
Here, we define the method used to calculate the new composite development indicators based on the results of \[DBHT\_results\]. In the $k$-th composite indicator we want to capture the average behaviour of all indicators in that cluster. Therefore, we aggregate the indicators in cluster $k$ by taking the median value across all indicators within this cluster. This forms composite indicator $k$, $I_{k}$, defined as $$I_{k}=\underset{i \in \text{cluster } k}{\mathrm{median}}\ \bm{X}_{i} \ ,$$ where the notation $i \in \text{cluster } k$ indicates that we only take the columns $\bm{X}_{i}$ of indicators $i$ that are members of cluster $k$. An advantage of using the median over the arithmetic mean, or even a weighted mean, is that the median is more robust to outliers. The median is a valid measure across the different indicators because the entries of $\bm{X}$ are standardised, meaning that their scales are all the same. We highlight that we have chosen to use the median for every $k$ since this provides a consistent methodology in which the precise details of how $I_{k}$ is calculated do not change with $k$. This improves on some existing methods used in the literature, where for example the same indicator may be calculated in different ways for different regions [@Huggins2003], and means that we can make valid comparisons between indicators. We use this method to calculate the set of $I_{k}$, giving $102$ indicators in total, and call this the set of cluster-driven composite indicators (CDCIs).
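A minimal sketch of this aggregation, assuming a standardised data matrix `X_std` whose rows are country-year observations and whose columns are indicators, together with a vector of DBHT cluster labels.

```python
import numpy as np

def cluster_composite_indicators(X_std, cluster_labels):
    """Median-aggregate the columns of X_std within each DBHT cluster.

    X_std: (rows, N) standardised data matrix (rows = country-year pairs).
    cluster_labels: length-N array, cluster_labels[i] = cluster of indicator i.
    Returns a (rows, K) matrix whose k-th column is the composite indicator I_k.
    """
    cluster_labels = np.asarray(cluster_labels)
    clusters = np.unique(cluster_labels)
    return np.column_stack([
        np.median(X_std[:, cluster_labels == k], axis=1) for k in clusters
    ])
```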
Using the CDCIs to understand country development {#CountryDevelopment}
=================================================
One of the main uses of indicators is to track the development of a country in order to assess its progress. This is important since it gives an idea of what has been achieved and where to focus policy changes so as to affect development positively. One can also use indicators to compare countries, either pairwise or globally. On the last point, CDCIs can be useful because the methodology used to compute them is not reliant on subjective, country-dependent criteria, which means we can make fair comparisons between the values of the CDCIs for different countries at different times. By comparing the CDCIs with each other for all countries, we can therefore investigate whether they can be used to assess the development of a country.
In \[fig:panel\_timelapse\], we provide some examples of comparisons between different indicators. Overall, we remark that all the plots have a ’hockey-stick’ shape. For the plots in the top two panels, the vertical leg of the hockey-stick is made of developing countries, whilst the horizontal leg, indicating a saturation effect, is made of developed countries. This is interesting since it suggests a country-level transition from a group consisting of developing nations to one of developed nations. In fact, this further supports the so-called two-regime hypothesis [@Pugliese2017; @Cristelli2018], whereby countries below a barrier struggle to develop consistently, corresponding to nations in the vertical leg of the hockey-stick, whilst countries that overcome this barrier have experienced or are experiencing high growth in development, represented by the horizontal leg. This transition can clearly be observed to be consistent across time from the bottom panel comparing $I_{6}$ with $I_{72}$ and $I_{73}$ with $I_{72}$, with the years $1998$ and $2016$ overlaid.
As a consequence of the consistency of the observed hockey-stick shape, we can make interesting observations regarding the particular pairs of CDCIs being plotted. Specifically, in the top panel of \[fig:panel\_timelapse\], we plot $I_{34}$ against $I_{49}$, where the former represents access to mobile and banking services and the latter comes from primary school statistics, a key signature of development [@Keller2006]. Along the vertical leg, access to mobile phone technology can, for developing countries, characterise their development. Past a certain point however, the concavity changes, suggesting that access to mobile phones becomes less able to distinguish between countries’ development. In this region, as already remarked, we find mostly developed nations, whose mobile phone access saturates due to their higher average income. Likewise, we see in the middle panel, which corresponds to $I_{8}$ (secondary school enrollment) and $I_{72}$ (recalling from \[DBHT\_results\] that this represents underdevelopment), that secondary school enrollment can initially also be used to characterise development. However, after a certain level of development, secondary school enrollment saturates in developed countries, meaning it can no longer be used in this way.
However, not all relationships between CDCIs are hockey-stick shaped. We can see this from \[fig:panel\_2\], which plots $I_{18}$ vs $I_{72}$ for $1998$ on the left and $2016$ on the right in the top panel, and the same for $I_{20}$ vs $I_{72}$ in the bottom panel. For the top panel, $I_{18}$ is a CDCI that represents natural resource abundance, whilst again we recall from \[DBHT\_results\] that $I_{72}$ corresponds to underdevelopment. We notice from the plots in the top panel that most of the countries with a higher abundance of natural resources are underdeveloped countries. This is reminiscent of the so-called ’resource curse’ [@Ross1999], where resource-rich nations with inefficient governments are often underdeveloped.
Additionally, in the bottom panel of \[fig:panel\_2\], we plot $I_{20}$ vs $I_{72}$, where $I_{20}$ corresponds to the amount of foreign direct investment (FDI) flows. Interestingly, we can observe an intriguing relationship between underdevelopment and FDI in the plots. Underdeveloped nations tend to have low FDI, which can be interpreted as being perceived by foreign investors as having low investment potential. However, countries which are not underdeveloped may have either high or low FDI; for example Poland, Kuwait and Uzbekistan are all more highly developed countries that have a low FDI. Attractiveness to foreign investments is therefore not directly correlated with a country’s level of development.
![Comparison between different CDCIs. (Top panel) $I_{34}$ vs $I_{49}$ for $1998$ on the left and $2016$ on the right. (Middle panel) The same but with $I_{8}$ vs $I_{72}$. (Bottom panel) On the left we have $I_{6}$ vs $I_{72}$ for $1998$ in orange and $2016$ in blue. On the right is the same but for $I_{73}$ vs $I_{72}$.[]{data-label="fig:panel_timelapse"}](panel_timelapse.pdf){width="\textwidth"}
![(Top panel) $I_{18}$ vs $I_{72}$ for $1998$ on the left and $2016$ on the right. (Bottom panel) The same, but for $I_{20}$ vs $I_{72}$.[]{data-label="fig:panel_2"}](panel_2.pdf){width="\textwidth"}
Deriving influential indicators by using PMFG {#Influential_PMFG}
=============================================
One can imagine that some nodes are more important to the structure of the correlation network than others, implying that these same nodes could be highly influential in the analysis of the development of countries. This would be very interesting for our purposes since they could provide a direct way to form a reliable reduced set of development indicators that would automatically overcome the problems associated with calculating composite versions. More specifically, one could take a subset of the top most influential indicators, since these indicators have an aggregate, over-arching influence on all other indicators and thus on the structure of interactions in the system.
In this section we apply the PMFG to $\bm{E}$ and identify system-wide important indicators. For this purpose, information filtering is useful because it neatly translates the problem of identifying influential indicators into that of finding a ranking of important nodes in the network, for which several so-called network centrality measures exist. We choose PageRank [@Page1999], which has proven successful in ranking scientists and webpages [@Page1999; @Liu2005], to identify the most influential indicators system-wide. PageRank ranks the nodes of a network by importance based on the probability of a random walker landing on a particular node [@Page1999], with higher values indicating greater importance.
We find the PMFG of $\bm{E}$; the resulting network is visualised in \[PMFG\_network\]. We then apply PageRank to the PMFG of $\bm{E}$, displaying the top $9$ indicators in \[tab:PageRank\].
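A sketch of the ranking step, assuming the PMFG has already been built as a networkx graph `pmfg` whose nodes are indicator indices and whose edges carry a non-negative similarity weight derived from the correlations (networkx itself does not provide a PMFG routine, so an external implementation is assumed).

```python
import networkx as nx

def top_pagerank_indicators(pmfg, indicator_names, n_top=9):
    """Rank indicators by weighted PageRank on the filtered network."""
    scores = nx.pagerank(pmfg, weight="weight")
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(indicator_names[node], score) for node, score in ranked[:n_top]]
```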
Interpretation of the PageRank identified indicators {#PMFGAnalysis}
----------------------------------------------------
From \[tab:PageRank\] we observe that there are some indicators which we would expect to be in this ranking: for example GDP measures the value of goods and services an economy produces, and is widely used as a primary development indicator [@Lepenies2016]. Central government debt has also been linked to economic development, since high levels of debt can drag growth rates down [@Checherita2012]. However, it is interesting to see that mobile cellular subscriptions is the top-ranking indicator, especially considering our comments in \[CountryDevelopment\] that mobile banking can be used to track the development of countries. This is an interesting result since there are many papers in computational socioeconomics which use mobile data as a metric of an average citizen’s socioeconomic status due to the vast information it can encode [@Blumenstock2010; @Blumenstock2012; @Mehrotra2012; @Gutierrez2013; @Gao2019]. In fact, it has been shown for example that mobile data is correlated with household expenditure [@Blumenstock2010] and poverty [@Smith2013], and can reveal gender inequality [@Mehrotra2012]. This may be because having a mobile cellular subscription requires a number of milestones in the development of a country, e.g. a healthy enough population to make use of them, the education to know how to use them, and the relevant infrastructure, such as phone masts, that can reach all parts of the population.
We have also investigated whether particular topics are overexpressed within the top $102$ PageRank indicators (chosen because this is the number of clusters identified by the DBHT) by applying the same hypothesis test used to produce \[fig:OverExpression\]. We find that no topic is overexpressed within this subset of indicators, which again corroborates our conclusion that no single topic is more influential than the others.
PageRank   Indicator name
---------- ------------------------------------------------------------------------------------------------
0.006764 Mobile cellular subscriptions
0.004666 Share of tariff lines with specific rates, manufactured products (%)
0.003717 Children in employment, wage workers, male (% of male children in employment, ages 7-14)
0.003706 Unemployment, male (% of male labor force) (national estimate)
0.003129 Mobile cellular subscriptions (per 100 people)
0.002939 Central government debt, total (% of GDP)
0.00276 Share of youth not in education, employment or training, female (% of female youth population)
0.002686 Population ages 30-34, female (% of female population)
0.002553 GDP (current US\$)
: The names of the top $9$ influential indicators based on PageRank in the second column and their actual PageRank values in the first column.[]{data-label="tab:PageRank"}
Performance comparison {#PerformanceComparison}
======================
If we reduce all of the indicators in the dataset to the composite ones, we have boiled down the structure of the correlations between indicators to its more essential constituents. Therefore, when the set of composite indicators is taken together, it should still be a faithful representation of the original $\bm{E}$, since the composites are the main driving factors behind the structure of correlations. We can use this principle to evaluate the performance of the CDCIs against any alternatives. This section is dedicated to comparing the performance of the CDCIs derived in \[CompInds\_DBHT\] against some alternatives.
For this purpose we propose, as a first approximation, that each indicator can be written as a linear factor model [@Thompson2004] of composite indicators. The general linear model is $$\bm{X}_{i}=\sum_{k=1}^{K}\beta_{ik}\tilde{I}_{k}+\epsilon_{i} \ , \label{Indicator_FactorModel}$$ where $\bm{X}_{i}$ is the $i$-th indicator, i.e. the $i$-th column of $\bm{X}$, and $\tilde{I}_{k}$ is the $k$-th composite indicator of either the CDCIs or one of the alternative schemes of composite indicators. $\beta_{ik}$ is the loading of indicator $i$ on composite $k$, which measures the sensitivity of $\bm{X}_{i}$ to changes in $\tilde{I}_{k}$. Finally, $\epsilon_{i}$ are white noise terms. \[Indicator\_FactorModel\] is an appropriate first approximation since we are using the linear correlation matrix, which is intimately related to linear factor models. Note also that the number of composite indicators in each of the alternatives used in our comparison must be the same as the number of CDCIs, $K$, because the size of the indicator set will inevitably affect its ability to describe the correlations; a fair comparison must therefore fix the number of indicators used. We then use elastic net regression (for details see Supplementary Information \[ElasticNet\]), which is able to take into consideration the potential correlation between composite indicators, to find the loadings $\beta_{ik}$ for every $i$. The performance can then be evaluated on the basis of the error between the linear model and the real indicator values. For this, we define the squared error of the regression as $$MSE=\sum_{i=1}^{N}\left(\bm{X}_{i}^{(predict)}-\bm{X}_{i}\right)^{2} \ , \label{MSE_Regression}$$ where $\bm{X}_{i}^{(predict)}$ are the predicted values of $\bm{X}_{i}$ using the $\beta_{ik}$ from the elastic net regression. The final metric we use to evaluate the performance of the cluster-driven composite indicators is $$ERR=\frac{MSE_{CDCIs}}{MSE_{alt}} \ , \label{ERR}$$ where $ERR$ is called the error reduction ratio, $MSE_{CDCIs}$ is the $MSE$ calculated in \[MSE\_Regression\] for the CDCIs, and $MSE_{alt}$ is the same but for any of the alternative schemes of composite indicators used as a comparison. If $ERR$ is below $1$ (above $1$) then the CDCIs perform better (worse). Note also that $ERR$ is of course bounded below by $0$.
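A sketch of this comparison; `dX` is the differenced, standardised data matrix from the earlier sketch and `I_cdci`, `I_alt` are (rows, K) matrices of composite indicator values, all placeholders for the real quantities. The cross-validated elastic net follows the appendix.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

def factor_model_mse(dX, composites):
    """Total squared error when each indicator column of dX is regressed on the
    given composite indicators via cross-validated elastic net."""
    mse = 0.0
    for i in range(dX.shape[1]):
        model = ElasticNetCV(cv=10).fit(composites, dX[:, i])
        mse += np.sum((dX[:, i] - model.predict(composites)) ** 2)
    return mse

# Error reduction ratio of the CDCIs relative to an alternative scheme:
# err = factor_model_mse(dX, I_cdci) / factor_model_mse(dX, I_alt)
```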
The choice of alternative schemes of composite indicators is as follows. We take the top $K$ PageRank indicators identified in \[PMFGAnalysis\], since they offer the best feasible alternative for forming a basis of composite indicators with the most system-wide influence on correlations. A further comparison is made by randomly selecting $102$ indicators from the columns of $\mathbf{X}$, which provides a benchmark for the performance of the other composite indicator schemes, since a set of randomly selected indicators should not be able to reliably capture anything of the real correlation network. We carry out the elastic net regression and compute $ERR$ for all alternative schemes of indicators. For the random benchmark, we repeat and average the results over $100$ different random subsets of indicators. The results are shown in \[tab:error\_indicators\]. We see that in both cases $ERR$ is well below $1$, indicating that the CDCIs outperform both the random benchmark and the PageRank alternative. We can therefore conclude that the CDCIs are more effective at reducing the dimensionality of the dataset than the random benchmark and the PageRank alternative.
  Random benchmark   PageRank
  ------------------ ----------
  0.66               0.71
: In the first column, the $ERR$ calculated using \[ERR\] for the benchmark of $102$ randomly selected indicators, averaged over $100$ repetitions. The second column is the same but instead using the $102$ most influential indicators, assessed via PageRank.[]{data-label="tab:error_indicators"}
Dynamical Analysis {#DynamicalAnalysis}
==================
Since the analysis so far has used the static correlation matrix computed over the whole time period, we should also investigate the dynamic stability of the clusters. We start by splitting the whole time period into $16$ rolling time windows of length $4$ years, with a time shift of $1$ year. For each time window $w=1,...,16$, we calculate the corresponding correlation matrix $\bm{E}^{w}$ and its DBHT clustering. The similarity between each pair of time windows $w$ and $w'$ is then measured by calculating the ARI between their respective DBHT clusterings. The results are shown in the heat map of \[fig:AR\_heatmatrix\_clusters\]. We can see that overall the DBHT clusterings display a high similarity with each other, with a median ARI value of $0.376$, which is high considering that the static clustering does not reflect the topics, as argued through the ARI computed in \[DBHT\_similarity\]. We used the same procedure and parameters to investigate the dynamic stability of the relationships between CDCIs (except using the correlation matrix and DBHT clustering between the CDCIs). A heat map of the results can be seen in \[fig:AR\_heatmatrix\_CompInd\]. Again, we see that overall there is a high similarity between the clusterings of the CDCIs in each time window; in fact, the median ARI is even higher at $0.683$. Interestingly, starting from the window covering $2005$ to $2009$, which includes the financial crisis, there is a markedly higher similarity between the clusterings of the CDCIs. This could be explored further.
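A sketch of this rolling-window procedure; the data cube `X` follows the earlier sketch, and `dbht_clustering` stands for an external DBHT implementation returning cluster labels for a given correlation matrix.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def rolling_window_clusterings(X, dbht_clustering, window=4):
    """DBHT cluster labels for each rolling window of `window` consecutive years."""
    C, T, N = X.shape
    labelings = []
    for start in range(T - window + 1):
        # Difference inside the window and stack the country blocks.
        block = np.diff(X[:, start:start + window, :], axis=1).reshape(C * (window - 1), N)
        block = (block - block.mean(axis=0)) / block.std(axis=0)
        corr = block.T @ block / block.shape[0]
        labelings.append(dbht_clustering(corr))
    return labelings

# Pairwise similarity between windows w and w', e.g.
# ari = adjusted_rand_score(labelings[w], labelings[w_prime])
```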
*(Figure: heat maps of the pairwise ARI between time-window clusterings, for the indicators and for the CDCIs.)*
Conclusion
==========
In this paper, we have investigated whether the collection of development indicators given by the WDI database can be divided according to their fundamental topic description. Leveraging PCA and a novel application of information filtering and hierarchical clustering techniques, we showed that the structure of the topics does not mirror the actual structure of correlations between the indicators. This suggests that composite development indicators aggregated from restricted sets may ignore key information. Instead, we propose a new set of cluster-driven composite development indicators that overcomes these problems. They are objective, data-driven and interpretable, and allow valid comparisons between countries. We have used the composite indicators and some highly influential PageRank indicators to give new insights into the development of countries, some of which may support decisions by policy makers. Lastly, we showed that our proposed composite indicators can outperform schemes of indicators based on a random benchmark and on PageRank.
Acknowledgments {#acknowledgments .unnumbered}
===============
A.V wishes to thank EPSRC for providing funding during his PhD studies. We also wish to thank the ESRC Network Plus project ’Rebuilding macroeconomics’. We acknowledge support from the Engineering and Physical Sciences Research Council (EPSRC) grant EP/P031730/1. We are grateful to the NVIDIA corporation for supporting our research in this area with the donation of a GPU.
Author contributions statement {#author-contributions-statement .unnumbered}
==============================
A.V, O.A and T.D.M conceived the experiment(s), A.V. and O.A. conducted the experiment(s) and A.V, O.A and T.D.M analysed the results. A.V, O.A and T.D.M reviewed the manuscript.
Additional information {#additional-information .unnumbered}
======================
The authors declare no competing interests.
Cleaning procedure {#CleaningProcedure}
==================
Imputation {#Imputation}
----------
The WDI dataset suffers from high levels of missing data. We solved this problem with a combination of removal and imputation of data points. To begin with, the amount of missing data decreases over time, as can be seen in \[fig:miss\_time\]. We decided to use the last 20 years of data, which have the least amount of missing data points in the dataset, so as not to have to deal with missingness above 50%.
We considered the possible bias of the dataset due to the fact that data is not missing at random. In fact, it can be seen from \[fig:miss\_factors\] that the amount of missing data a country has is correlated, sometimes strongly, with the values of some of its indicators. It seems that the dataset is biased towards industrialized and more developed countries. While this might cause problems when one tries to make predictions out of the data, we believe the results about the existence of a correlation structure in the data are affected little by this.
![Missing data percentage per year, all years (1963-2017), all countries.[]{data-label="fig:miss_time"}](missingness_time.pdf){width="70.00000%"}
![Correlation between percentage of missing points for a country and the value of an indicator, all years (1963-2017), all countries.[]{data-label="fig:miss_factors"}](missingness_factors.pdf){width="70.00000%"}
The remaining data still has a high amount of missingness. We therefore proceeded to impute it. We tested several algorithms on the dataset, readily available from the Fancyimpute python package [@fancyimpute]. They mostly cover matrix factorization approaches to imputation: SoftImpute [@mazumder2010spectral], IterativeSVD [@troyanskaya2001missing] and MatrixFactorization [@fancyimpute] are all based on this principle. SimpleFill consists of replacing missing entries with the median, and KNN is K-Nearest Neighbours [@hastie2005elements]. In \[table:impute\_compare\] we report the Mean Absolute Error (MAE) and Mean Squared Error (MSE) for the techniques adopted (obtained by holding out 0.5% of the data to test the quality of the results). Interestingly, the best performing technique is K-Nearest Neighbours (KNN). This is in line with the result of [@tacchella2018dynamical], which predicts GDP change over time for a country by averaging the past GDP changes of similar countries, where similarity is measured as a Euclidean distance in a space defined by two macroeconomic indicators. This agreement might point to the fact that the most reliable way to model a country is by its similarity to other countries already observed. The only metaparameter for the KNN algorithm ($D$, the number of neighbours to average) has been chosen by means of grid searching on logarithmically separated values of $D$ and testing on a holdout set of size 0.5%. \[table:impute\_meta\] shows that the best value for $D$ is either 2 or 4, depending on whether one minimizes MAE or MSE. We chose the average, $D=3$, and have checked that the results do not change qualitatively if $D=2$ or $D=4$ is chosen.
D MAE MSE
----- ---------- ----------
1 0.033483 0.102603
2 0.030785 0.091696
3 0.031161 0.090172
4 0.031588 0.087745
5 0.032636 0.088496
6 0.033698 0.089346
8 0.036122 0.091546
11 0.039479 0.094866
14 0.042721 0.097449
18 0.046539 0.100678
23 0.050710 0.104088
29 0.055189 0.108290
37 0.060167 0.113266
48 0.065573 0.119146
61 0.070659 0.124860
78 0.075911 0.131326
100 0.081211 0.137930
: Mean Absolute Error (MAE) (second column) and Mean Squared Error (MSE) (last column) when varying the number of neighbours to average, $D$, in the KNN algorithm (first column), using a 0.5% holdout set.[]{data-label="table:impute_meta"}
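A sketch of this holdout-based grid search using the Fancyimpute package mentioned above; the masking of a 0.5% holdout here is an illustrative simplification of the procedure described in the text, and the package's current `fit_transform` interface is assumed.

```python
import numpy as np
from fancyimpute import KNN

def knn_holdout_error(X_missing, D, holdout_frac=0.005, seed=0):
    """Mask a small holdout of observed entries, impute with KNN(k=D), and
    return the mean absolute error on the held-out values."""
    rng = np.random.default_rng(seed)
    observed = np.argwhere(~np.isnan(X_missing))
    held = observed[rng.choice(len(observed), int(holdout_frac * len(observed)), replace=False)]

    X_masked = X_missing.copy()
    truth = X_masked[held[:, 0], held[:, 1]].copy()
    X_masked[held[:, 0], held[:, 1]] = np.nan

    X_filled = KNN(k=D).fit_transform(X_masked)
    return np.mean(np.abs(X_filled[held[:, 0], held[:, 1]] - truth))

# Grid search over (roughly) logarithmically spaced values of D, as in the table above:
# errors = {D: knn_holdout_error(X_missing, D) for D in [1, 2, 3, 4, 5, 6, 8, 11, 14, 18, 23, 29]}
```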
Distribution regularisation {#Distribution_regularisation}
---------------------------
Another characteristic of the WDI dataset is the heterogeneity of the value distributions across different indicators. For example, many indicators are percentages, and as such are bounded between the values 0 and 100. Long-tailed distributions are very common, as are some that resemble Gaussian distributions. A sample of these distributions can be seen in \[fig:transf\_examples\]. We applied mathematical transformations to some of the indicators in order to change their distributions and obtain a more homogeneous and tractable dataset.
We applied one of three possible transformations to each indicator. The first possibility is the identity function, i.e. we left the values unchanged. The second consists of taking the base-10 logarithm of the modulus of each indicator’s value. The third is the *bisymmetric log transformation* [@webber2012bi]:

$$\text{logbisymmetric}_b(x) = \text{sign}(x) \cdot \log_b(1 + \left|x\right|)$$
Given the high number of indicators and the need to avoid arbitrary decisions, the choice of which transformation to apply to each indicator has been made through an algorithm. To understand the criteria used, we first introduce the definition of the *span* of a set of numbers $X$:
$$\text{span}(X) = \text{max}_{x \in X}(\log_{10}(|x|)) - \text{min}_{x \in X}(\log_{10}(|x|))$$
In order to decide what transformation to apply to each indicator, we consider the set of all values for that indicator found in the dataset, $X$. We then define two quantities. The first we will call *in-span*, which is the span for the subset of values $x$ found in $X$ such that $-1<x<1$. The second is the *out-span*, i.e. the span for all values of X that are outside the $[-1,1]$ interval:
$$\begin{aligned}
\text{inspan}(X) &= \text{span}\left(\{x \mid x \in X \cap (-1,1)\}\right) \\
\text{outspan}(X) &= \text{span}\left(\{x \mid x \in X \setminus [-1,1]\}\right)\end{aligned}$$
Then, the algorithm for assigning the transformation is this:
Given a set of numbers $X$:

-   compute $\text{bothsigns}(X)$: whether $X$ contains both numbers $>0$ and $<0$;
-   compute $\text{haszeros}(X)$: whether $X$ contains the value $0$;
-   compute $\text{inspan}(X)$ and $\text{outspan}(X)$.
The rationale behind this algorithm is that the values in $X$ frequently span a large number of orders of magnitude, and in this case we want to transform them so that their distribution is easier to manage with linear techniques such as PCA or factor models. If the numbers are all of the same sign and there is no zero in $X$, one can directly take the logarithm; otherwise we apply the log-bisymmetric transformation, which has no singularity at zero and is defined for negative numbers. A sketch of this selection rule is given below.
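The following sketch implements the quantities above together with a possible selection rule. The exact thresholds on the spans are not spelled out in the text, so the `max_span` cut-off used here is an assumption for illustration only.

```python
import numpy as np

def span(values):
    """Orders of magnitude covered by a set of non-zero values."""
    logs = np.log10(np.abs(values))
    return logs.max() - logs.min() if len(logs) else 0.0

def choose_transform(x, max_span=3.0):
    """Pick identity / log10 / bisymmetric-log for one indicator's values.

    max_span is a hypothetical threshold: the text only states that indicators
    spanning many orders of magnitude are transformed.
    """
    x = np.asarray(x, dtype=float)
    both_signs = (x > 0).any() and (x < 0).any()
    has_zeros = (x == 0).any()
    inside = x[(x > -1) & (x < 1) & (x != 0)]     # zeros excluded to keep the log finite
    outside = x[(x <= -1) | (x >= 1)]
    if max(span(inside), span(outside)) <= max_span:
        return x                                   # identity
    if not both_signs and not has_zeros:
        return np.log10(np.abs(x))                 # plain base-10 logarithm
    return np.sign(x) * np.log10(1 + np.abs(x))    # bisymmetric log transformation
```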
After transforming the dataset with this algorithm, we z-score each indicator individually, so as to set its mean to zero and its standard deviation to one. A sample of the results of this procedure can be seen in \[fig:transf\_examples\].
![Examples of the transformations applied to indicators and how they transform their distribution.[]{data-label="fig:transf_examples"}](selection_examples.pdf){width="\textwidth"}
Eigenvalue Spectrum {#EigenvalueSpectrumSupplementary}
===================
Firstly, we should only extract components of $\bm{E}$ that describe relevant interactions between indicators. The question then arises of how many principal components to keep [@Jolliffe2002]. This directly controls the size of the reduced correlation matrix - which we would like to be as small as possible - versus the fraction of the total variance of the indicator system that the reduced matrix can explain. It would also help us identify which economic indicators are responsible for driving the indicator system, by analysing the main contributing indicators to the top eigenvalues.
The eigenvalues, however, could also be affected by noise from taking a finite sample [@Plerou2002]. We should therefore first study the empirical distribution of the eigenvalues, identifying those eigenvalues which are just noise and discarding them. To identify noisy eigenvalues, we need a null distribution, produced from a Gaussian white noise process. The answer is provided by the well-known Marčenko-Pastur (MP) distribution [@Marchenko1967], given by $$p(\lambda)=\frac{1}{2\pi q\sigma^{2}}\frac{\sqrt{(\lambda_{+}-\lambda)(\lambda-\lambda_{-})}}{\lambda} \label{MPDist} \ ,$$ where $p(\lambda)$ is the probability density of eigenvalues, with support $\lambda_{-}< \lambda < \lambda_{+}$. The edge points are $\lambda_{\pm}=\sigma^{2}\left(1\pm \sqrt{q}\right)^{2}$, with $q=N/Y$ and $\sigma$ the standard deviation over all indicators. If we compare the distribution in \[MPDist\] to the empirical eigenvalue distribution of $\mathbf{E}$, we can see how many components are indistinguishable from noise, often called the ’bulk’ eigenvalues. These are then discarded. In practice, this is achieved by fitting Eq. \[MPDist\] to the eigenvalues of $\mathbf{E}$, with $q$ and $\sigma$ acting as free parameters. The results are shown in \[fig:EigenvalueSpectrum\], which compares the empirical histogram of eigenvalues of $\mathbf{E}$ and the best-fit MP distribution in red, giving $216$ components beyond the upper limit of the MP distribution. Whilst this number appears large, it still means that we can reduce the size of the correlation matrix by $85\%$ before we start to include components which statistically can be seen as noise. Additional methods can be used to reduce the number of components further, e.g. cross validation or cumulative variance [@Jolliffe2002], and also [@Verma2019].
However, by looking at the best-fit MP distribution for our dataset (in red in \[fig:EigenvalueSpectrum\]) we see that there is a noticeable deviation of the bulk eigenvalues from the MP distribution, so we can infer that the MP distribution may not be suitable for identifying noisy eigenvalues. We also notice that the best-fit value of $q$ is noticeably different from the theoretical value of $0.35$ for this dataset, indicating a significant difference in the predicted properties of the bulk using \[MPDist\]. Indeed, the use of the MP distribution in this respect has been questioned more recently [@Guhr2003; @Livan2011; @Wilinski2018], at least for financial data.
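A sketch of this fit, reusing the eigenvalues `eigvals` from the PCA sketch in the main text; fitting the MP density to a histogram by least squares is a simplification of the actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def marchenko_pastur_pdf(lam, q, sigma2):
    """Marchenko-Pastur density with ratio q = N/Y and variance sigma2."""
    lam_min = sigma2 * (1 - np.sqrt(q)) ** 2
    lam_max = sigma2 * (1 + np.sqrt(q)) ** 2
    pdf = np.zeros_like(lam)
    inside = (lam > lam_min) & (lam < lam_max)
    pdf[inside] = np.sqrt((lam_max - lam[inside]) * (lam[inside] - lam_min)) / (
        2 * np.pi * q * sigma2 * lam[inside])
    return pdf

# Fit (q, sigma2) to the empirical eigenvalue histogram and count the
# eigenvalues lying above the fitted upper edge.
counts, edges = np.histogram(eigvals, bins=100, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
(q_fit, s2_fit), _ = curve_fit(marchenko_pastur_pdf, centres, counts, p0=[0.35, 1.0])
n_above = int((eigvals > s2_fit * (1 + np.sqrt(q_fit)) ** 2).sum())
```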
*(Figure: empirical eigenvalue histograms with the best-fit MP distribution, for the original and for the shuffled data.)*
Moreover, this would also indicate that there could be some structure hidden within the bulk eigenvalues. We test this by shuffling our differenced indicator data, recalculating the correlation matrix and again finding the best MP fit to the eigenvalue distribution of this new correlation matrix. In doing so, we destroy the correlations between indicators, therefore testing whether these are the cause of the differences seen in the bulk in \[fig:EigenvalueSpectrum\]. The results are reported in \[fig:EigenvalueSpectrum\_Shuffled\], where the histogram of the eigenvalues coming from the new correlation matrix is shown in blue bars and the best MP fit in red. We can clearly see an almost perfect MP fit in this case, with $q$ much closer to the theoretical value predicted by \[MPDist\], which suggests that the earlier bulk eigenvalues are indeed a result of non-trivial structure within $\mathbf{E}$ and are not just random fluctuations in the data. Overall, these two results together suggest that there is no natural way to select a subset of principal components without losing non-trivial information, which may make PCA an unsuitable method of dimensionality reduction for this dataset.
Nevertheless, as the inset plot in \[fig:EigenvalueSpectrum\] shows, there are some eigenvalues whose magnitude is $2$ times greater than that of some of the smaller eigenvalues e.g. the first principal component has an eigenvalue of $94$. These eigenvalues from the perspective of PCA are the most important eigenvalues since they make the biggest contributions to the overall variance of the system. They are also well separated from the bulk, which means that they are less affected by noise and will have a clearer, more discernible interpretation [@Bun2017].
Procedure for calculating the p-values of $\rho_{g}$ {#RhoG_StatTest}
====================================================
Here we detail the procedure used to calculate the p-values reported in \[tab:RhoG\_PVal\]. Under the null hypothesis that $\bm{\rho}_{i}$ is random, the entries of $\Delta \bm{X}$ are i.i.d. normally distributed with mean $0$ and standard deviation $1$. We can therefore use the exact same definition given in \[RhoG\], but with a randomly generated $\Delta \bm{X}$, to produce an instance of $\bm{\rho}_{i}$ under the null hypothesis. One can then estimate the empirical cumulative distribution function [@Van2000] of each entry $\rho_{g,i}$ by repeating this process many times and aggregating the results with the same $g$. For \[tab:RhoG\_PVal\], we repeat the process $1000$ times.
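A sketch of this Monte-Carlo null, reusing the projection matrix `P` from the earlier sketch; for brevity only the contribution to one chosen eigenvector index is sampled, and the number of repetitions is kept small because each repetition requires a full eigendecomposition.

```python
import numpy as np

def rho_null_samples(P, Y, component=0, n_rep=100, seed=0):
    """Samples of the topic contributions to the chosen principal component
    under the null of an i.i.d. standard-normal data matrix with Y rows."""
    rng = np.random.default_rng(seed)
    N, G = P.shape
    samples = []
    for _ in range(n_rep):
        dX_null = rng.standard_normal((Y, N))
        dX_null = (dX_null - dX_null.mean(axis=0)) / dX_null.std(axis=0)
        E_null = dX_null.T @ dX_null / Y
        _, vecs = np.linalg.eigh(E_null)
        v = vecs[:, -1 - component]          # eigh sorts eigenvalues in ascending order
        rho = P.T @ v
        samples.append(rho / rho.sum())
    return np.array(samples)                 # rows = repetitions, columns = topics
```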
PMFG network {#PMFG_network}
============
Here, we report the visualisation of the PMFG network computed on $\mathbf{E}$ in fig. \[fig:PMFG\_remove\]. From the PMFG, we can observe that there are a few hubs of nodes which are connected to other, less connected nodes, consistent with observations from complex networks in other contexts.
![The PMFG of $\mathbf{E}$, with the colour of each node representing cluster membership according the DBHT algorithm.[]{data-label="fig:PMFG_remove"}](PMFG_remove.pdf){width="70.00000%"}
Elastic net regression {#ElasticNet}
======================
Elastic net regression is used to find the values of $\beta_{ik}$ in \[Indicator\_FactorModel\]; further details of this method are provided in this appendix. Elastic net regression [@Zou2005] is a hybrid of ridge regularisation and lasso regression, thus providing a way of dealing with correlated explanatory variables (in our case $I_{k}(t)$ and $I_{k'}(t)$) while also performing feature selection, which takes into account non-interacting clusters $I_{k'}(t)$ that ridge regularisation would ignore. Elastic net regression solves the constrained minimisation problem $$\min_{\bm{\beta}_{i}} \frac{1}{Y}\sum_{y=1}^{Y}\left(\Delta\bm{X}(y,i)-\bm{I}^{\dagger}\bm{\beta}_{i}\right)^{2}+\lambda P_{a}(\bm{\beta}_{i}) \ ,$$ where $\bm{\beta}_{i}$ is the vector of loadings $(\beta_{i1}, \beta_{i2}, \dots,\beta_{iK})^{\dagger}$, $\bm{I}$ is the matrix consisting of the columns $(I_{1},I_{2}, \dots, I_{K})$, and $\lambda$ and $a$ are hyperparameters. $P_{a}(\bm{\beta}_{i})$ is defined as $$P_{a}(\bm{\beta}_{i})=\sum_{k=1}^{K}\left((1-a)\frac{\beta_{ik}^{2}}{2}+a |\beta_{ik}|\right) \ . \label{ElasticNetPenalty}$$ The first term in the sum of \[ElasticNetPenalty\] is the $L_{2}$ penalty of ridge regularisation and the second term is the $L_{1}$ penalty of lasso regression. Hence if $a=0$ then elastic net reduces to ridge regression and if $a=1$ it becomes lasso, with a value between the two controlling the extent to which one is preferred over the other. The hyperparameters $a$, controlling the balance between lasso and ridge, and $\lambda$, controlling the overall penalty strength, are determined using $10$ cross-validated fits [@Zou2005], picking the pair $(a,\lambda)$ that gives the minimum prediction error.
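A sketch of how this maps onto scikit-learn's `ElasticNetCV`, where `l1_ratio` plays the role of $a$ and `alpha` that of $\lambda$; the grid of `l1_ratio` values is illustrative. Here `composites` is the matrix of composite-indicator columns and `y` one differenced indicator column.

```python
from sklearn.linear_model import ElasticNetCV

def fit_loadings(composites, y):
    """Cross-validated elastic net: returns the loadings and the chosen (a, lambda)."""
    model = ElasticNetCV(l1_ratio=[0.1, 0.3, 0.5, 0.7, 0.9, 1.0], cv=10)
    model.fit(composites, y)
    return model.coef_, model.l1_ratio_, model.alpha_
```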
---
abstract: |
A new analytical solution of the set of highly nonlinear two-fluid equations is presented to explain the mechanism for the generation of “seed” magnetic field and plasma flow by assuming the density n to have a profile like an exponential in xy-plane and temperature profiles of electrons (ions) to be linear in yz-plane. [It is shown that the baroclinic vectors - $\nabla\Psi\times\nabla T_{j}$ (where $\Psi =
ln\overline{n}, \overline{n}$ is normalized density, and $T_{j}$ denote the temperatures of electrons and ions for j = e, i) can generate not only the magnetic field but the plasma flow as well.]{} It is also pointed out that the electron magnetohydrodynamics (EMHD) model has inconsistencies because it does not take into account the ion dynamics while the magnetic field is produced on slow time scale. The estimate of the magnitude of the magnetic field in a classical laser plasma using this model is in agreement with the experimental observations.
author:
- |
Hamid Saleem\
National Centre for Physics (NCP),\
Quaid-i-Azam University Campus,Islamabad,\
Pakistan.
date: '16, July 2010'
title: 'Non-equilibrium two-fluid plasmas can generate magnetic fields and flows [simultaneously]{}'
---
$\textbf{I. INTRODUCTION}$\
The presence of large scale magnetic fields in galaxies, galaxy clusters and in intergalactic space \[1\] is a mystery and several theoretical models have been presented to explain the origin of these fields \[2-5\]. Most of these works deal with the dynamo theory of single fluid magnetohydrodynamics (MHD). But the set of MHD equations assumes that some magnetic field is already present in the system. Therefore, these models actually investigate the amplification of an existing weak magnetic field and cannot explain the generation of the ’seed’ field in the true sense.\
Long ago \[6\], Biermann presented a mechanism for the generation of stellar magnetic fields which is not based on MHD. He proposed that the electrons’ faster motion compared to that of the ions can produce an electric field in a rotating star which is not curl-free, due to non-parallel density and temperature gradients, and hence a magnetic field is produced. The ions were assumed to be stationary in this work. The Biermann battery and electron diffusion processes were also investigated to explain the generation of ’seed’ magnetic fields in galaxies \[7\].\
It is very interesting that large magnetic fields of the order of kilo- and mega-Gauss were observed in classical laser-induced plasmas many decades ago \[8-9\]. These observations indicate that the dynamics of initially unmagnetized nonuniform plasmas can generate magnetic fields. The idea of magnetic field generation by plasma dynamics is very attractive and a huge amount of research work in this direction has already appeared in the literature.\
Based on the Biermann battery effect, a single-fluid plasma model called electron magnetohydrodynamics (EMHD) was presented to explain the magnetic field generation in laser plasmas \[10, 11\]. But it does not require the rotation of the system to produce magnetic fields. In EMHD, ions are assumed to be stationary and electrons are treated as inertialess. Both the fluctuating \[12, 13\] and steadily growing magnetic fields \[8, 9\] have been theoretically produced using EMHD models. The so-called magnetic electron drift vortex (MEDV) mode was discovered \[12\] using EMHD equations. The MEDV mode is believed to be a purely transverse low-frequency wave of an unmagnetized inhomogeneous plasma. But a critical analysis of the MEDV mode shows that it should contain a contribution from the electron density perturbation as well.\
Therefore, a new mode which is partially transverse and partially longitudinal has been proposed to be a normal mode of pure electron plasmas, which can exist only in a very narrow range of parameters. This mode can couple with the ion acoustic wave, which also becomes electromagnetic under certain conditions in a non-uniform plasma \[14\]. Similarly, the model equation widely used for the generation of a steadily growing magnetic field \[12, 13\] is not flawless. The field grows on the ion time scale while the ions are assumed to be stationary. Recently \[15\], the same model equation containing the electron baroclinic term has been used to estimate the magnetic field produced in a laser plasma.\
Some weaknesses and contradictions in the approximations and assumptions used in EMHD models for magnetic field generation have already been pointed out \[16\]. The advantage of the EMHD model is that it is very simple. Since it is still being used by many authors, it seems important to discuss at least the two cases of fluctuating and steadily growing magnetic fields which are believed to be generated by EMHD. The EMHD models for the generation of magnetic fields are critically discussed in the next section.\
The magnitudes of magnetic fields on galactic scale \[7\] as well as on laser-plasma scale \[8, 9\] were estimated by assuming the non-parallel electron density and temperature gradients to be constant using EMHD equations. In these models the gradients of electron temperature and density were assumed to be one-dimensional and the produced magnetic fields had only one component.\
A few years ago \[17\], a theoretical model was presented to show the generation of three-dimensional magnetic field by baroclinic vectors of electrons and ions. However, in this investigation some constant magnetic field was assumed to be present already. A stationary solution of these equations was presented by Mahajan and Yoshida \[18\] in the form of double Beltrami field.\
Later, the model presented in Ref. \[17\] was modified to explain the creation of all the three components of ’seed’ magnetic field vector $\textbf{B}$ from t=0 due to externally given forms of baroclinic vectors \[19\]. Here one does not need to assume some static magnetic field to be present in the system. However, the form of the solution was sinusoidal along one axis which is not physical in general.\
The EMHD may have some applications in other areas, but for magnetic field generation on a slow time scale the ion dynamics cannot be neglected. Therefore, it is necessary to study the ’seed’ magnetic field generation by using a two-fluid model. On the other hand, for the sake of generality, the generated magnetic field vector should not necessarily have only one component.\
Our aim is to find out an analytical solution of the set of two fluid equations such that the cross products of plasma density and temperature gradients of electrons and ions become the source terms in the electron and ion equations of motion. We assume that the plasma has been produced in a non-equilibrium state and it evolves with time generating the “seed” magnetic field and flow.\
The present investigation is very different from the previous work \[19\] because the density is assumed to have an exponential-type form in the xy-plane. In most analytical studies, the density is assumed to have an exponential form along one axis. But we want to obtain a two-dimensional solution; therefore the density is assumed to be a function of the (x,y) coordinates.\
We have chosen special profiles of the gradients of density $(\nabla n)$, electron temperature $(\nabla T_e)$ and ion temperature $(\nabla T_i)$ to obtain an analytical solution. Different profiles of density and temperatures can be considered, but then numerical simulation will be needed. In our formalism all the nonlinear terms vanish and ultimately we obtain two linear equations in which the terms $\nabla \psi \times \nabla T_j$ (j = e, i) with $\psi = \ln \bar{n}$ (where $\bar{n}$ is the normalized density) become the source terms for the magnetic field.\
The details of the model are discussed in section III. Since the present model contains a very complex system of highly nonlinear equations, one has to use some assumptions and approximations to find an analytical solution. The main focus is to justify the physical idea that a two-fluid plasma with a density gradient like an exponential function in the xy-plane and constant gradients of electron and ion temperatures along the y and z axes can generate the ’seed’ magnetic fields and flow. This model can explain the magnetic fields produced in laser-induced plasmas. In our opinion, the numerical simulation of the two-fluid equations is very important to study the ’seed’ field generation. In a simulation one can use many different profiles of density and temperatures.\
$\textbf{II. CONTRADICTORY RESULTS OF EMHD}$\
Here we briefly point out the contradictory results of EMHD models used for magnetic field generation. First we discuss a theoretical model based on EMHD for the generation of fluctuating magnetic fields, proposed several years ago \[12\]. The mode discovered through EMHD theory was named the magnetic electron drift vortex (MEDV) mode. A great deal of research work on this mode has been carried out.\
The critical analysis of MEDV mode and some new theoretical results have been published recently on the fluctuating magnetic fields \[14\]. There seems to be a need to briefly clarify here the physical situation to lay down the basis of our theoretical model presented in the next section. In the theory of MEDV mode, the ions are assumed to be stationary but the electron inertial effects are included. The linear description of the MEDV mode is presented very briefly as follows.\
Electron equation of motion is, $$m_e n_0 \partial_t
\textbf{v}_{e1}=-en_0 \textbf{E}_1 - \nabla p_{e1}\eqno{(1)}$$ where subscripts one (1) and naught (0) denote the linearly perturbed and equilibrium quantities, respectively. In the limit $\omega_{pi}<<\omega << \omega_{pe}$ (where $\omega$ is the frequency of the wave and $\omega_{pj}=\left(\frac{4\pi
n_{j0}e^2}{m_j}\right)^{1/2}$ is the plasma oscillation frequency of the jth species while j= e here), the displacement current is ignored and Maxwell’s equation yields, $$\nabla \times \textbf{B}_1=\frac{4\pi}{c}(\textbf{J}_1)=\frac{4\pi}{c}(-en_0
\textbf{v}_{e1})\eqno{(2)}$$ Since $\nabla.\textbf{J}_1=0$, therefore according to (2), the density perturbation is neglected and we find $p_{e1}=n_0 T_{e1}$. For $T_{e1}$, the electron energy equation becomes, $$\frac{3}{2}n_{0}\partial_{t}T_{e1}+\frac{3}{2}n_0
(\textbf{v}_{e1}.\nabla)T_{e0}=-p_{0}\nabla.\textbf{v}_{e1}\eqno(3)$$ Assuming, $\nabla n_0=\hat{\textbf{x}}\left|\frac{dn_0}{dx}\right|$, $\kappa_n = \left|\frac{1}{n_0}\frac{dn_0}{dx}\right|$, $\textbf{E}_1=E_1 \hat{\textbf{x}}$, $\textbf{k}=k_y
\hat{\textbf{y}}$ and $\textbf{B}_1=B_1 \hat{\textbf{z}}$ the linear dispersion relation for MEDV mode turns out to be $$\omega^{2}=\frac{2}{3}C_{0}(\frac{\kappa_{n}}{k_{y}})^{2}v^{2}_{Te}k_{y}^2\eqno(4)$$ where $C_0=\frac{\lambda_{e}^{2}k_{y}^{2}}{1+\lambda_{e}^{2}k_{y}^{2}}$, $\lambda_e=\frac{c}{\omega_{pe}}$ is the electron collision-less skin depth and $\nu_{Te}=\left(\frac{T_e}{m_e}\right)^{1/2}$ is the electron thermal speed. If temperature gradient is assumed to be anti-parallel to the density gradient in laser plasma with $\nabla
T_0=\hat{\textbf{x}}\left|\frac{dT_0}{dx}\right|$ and $\kappa_T=\left|\frac{1}{T_0}\frac{dT_0}{dx}\right|$, then (4) is modified as, $$\omega^2=C_0\frac{\kappa_n}{\kappa_y}\left[\frac{\left(\frac{2}{3}\kappa_n-\kappa_T\right)}
{k_y}\right]\nu_{Te}^{2}k_{y}^{2} \eqno{(5)}$$ and these magnetic perturbations become unstable if the condition $$\frac{2}{3}\kappa_n < \kappa_T\eqno{(6)}$$ holds. Note that the local approximation requires $\kappa_n, \kappa_T << k_y$. It has been assumed that $\nabla.\textbf{E}_1=0$ and $\nabla.\textbf{v}_{e1}\neq 0$ in the description of MEDV mode along with $\omega_{pi}<<\omega$.\
Equation (1) indicates that the term $\nabla p_{e1}=\nabla(n_0
T_{e1})=T_{e1}\nabla n_0+n_0 \nabla T_{e1}$ will produce a linear term with $\nabla$ replaced by $\textbf{k}$ and hence $\textbf{E}_1$ can have a longitudinal component with $\nabla. \textbf{E}_1\neq 0$. Thus the mode can not be a pure transverse mode.\
Moreover, the linear theory has been applied under the local approximation; therefore the term $\left(\frac{\kappa_n}{k_y}\right)^2 v_{T_e}^{2}k_{y}^{2}$ can be close to $c_{s}^{2}k_{y}^{2}$, where $c_s=(T_e/m_i)^{1/2}$ is the ion acoustic speed, while $C_0 < 1$ always holds. Using the laser plasma parameters \[9\] $T_e=100 eV$ and $n_0 \sim 10^{20} cm^{-3}$, one obtains $\omega<\omega_{pi}$, contrary to the initial assumption of stationary ions for $\omega_{pi}<<\omega$.\
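As a minimal numerical illustration of these scales (a sketch only; hydrogen ions and CGS Gaussian units are assumed here), the characteristic frequencies and the skin depth for the quoted parameters can be evaluated as follows.

```python
import numpy as np

# CGS Gaussian constants
e   = 4.803e-10      # statcoulomb
m_e = 9.109e-28      # g
m_i = 1.673e-24      # g (hydrogen assumed)
c   = 2.998e10       # cm/s
eV  = 1.602e-12      # erg

n0  = 1e20           # cm^-3, laser plasma density quoted in the text
T_e = 100 * eV       # electron temperature quoted in the text

w_pe  = np.sqrt(4 * np.pi * n0 * e**2 / m_e)   # electron plasma frequency
w_pi  = np.sqrt(4 * np.pi * n0 * e**2 / m_i)   # ion plasma frequency
v_Te  = np.sqrt(T_e / m_e)                     # electron thermal speed
lam_e = c / w_pe                               # collisionless skin depth

print(f"w_pe  ~ {w_pe:.2e} rad/s")    # ~ 5.6e14
print(f"w_pi  ~ {w_pi:.2e} rad/s")    # ~ 1.3e13, as quoted later in this section
print(f"v_Te  ~ {v_Te:.2e} cm/s")
print(f"lam_e ~ {lam_e:.2e} cm")
```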
Recently \[14\], it has been shown that if compressibility effects are also taken into account then one obtains a new partially transverse and partially longitudinal normal mode of a nonuniform pure electron plasma as, $$\omega^2=\frac{2}{3H_0}\lambda_{e}^{2}k_{y}^{2}(v_{te}^{2} \kappa_n^2)
\left(1-\frac{3}{2}\frac{\kappa_T}{\kappa_n}\right)\eqno{(7)}$$ where $H_0=\left[\left\{1+\frac{5}{3} \lambda_{De}^{2}
k_{y}^{2}\right\}a-\kappa_{n}^{2}/
k_{y}^{2}\right]$ and $a=(1+\lambda_{e}^{2}k_{y}^{2})$.\
It is important to note that ions can be assumed to be stationary in the limit $\frac{m_e}{m_i}\rightarrow 0$. Since for hydrogen plasma $\frac{m_e}{m_i}\sim 10^{-3}$, therefore (7) is valid for $m_e/m_i <
\lambda_{De}^{2} k_{y}^{2}$, $\kappa_{n}^{2}/k_{y}^{2}$ and $\omega^2 << \omega_{pe}^{2}$.\
But the important point is that the term $\nu_{Te}^{2}\kappa_{n}^{2}$ can be close to $c_{s}^{2}k_{y}^{2}$ and less than $\omega_{pi}^{2}$; therefore the dynamics of the ions should not be ignored.\
It is also interesting to mention here that ion acoustic wave (IAW) has always been treated as a low frequency electrostatic mode. The reason is that in the limit $\frac{m_e}{m_i}\rightarrow 0$, the inertia-less electrons are assumed to follow the Boltzmann density distribution in the electrostatic field $\textbf{E}=-\nabla \varphi$ as, $$\frac{n_e}{n_0}\simeq e^{-e\varphi/T_e}\eqno{(8)}$$ In an inhomogeneous plasma we may have $\frac{m_e}{m_i}<\kappa_{n}^{2}/k_{y}^{2}$ and in this case electron inertia should not be neglected. Then for $\frac{m_e}{m_i}<
\lambda_{De}^{2} k_{y}^{2}$, the IAW follows the dispersion relation \[24\], $$\omega^2=c^{2}_{s}k_{y}^{2}\frac{(a-\kappa_{n}^{2}/k_{y}^{2})}{(ab-\kappa_{n}^{2}/k_{y}^{2})}\eqno{(9)}$$ where $b=\left(1+\lambda_{De}^{2}k_{y}^{2}\right)$. Hence inhomogeneous plasmas can have a low frequency electromagnetic wave on ion time scale.\
When the electron temperature perturbation effect is taken into account, then modes described in (7) and (9) will couple to produce a partially longitudinal and partially transverse wave with the dispersion relation \[24\], $$\omega^2=\frac{5}{3H_0}[(\lambda_{e}^{2}k_{y}^{2})\nu_{Te}^{2}\kappa_{n}^{2}\left(\frac{2}{3}-
\frac{\kappa_T}{\kappa_n}\right)+\left(a-\frac{\kappa_{n}^{2}}{k_{y}^{2}}\right)$$$$c_{s}^{2}k_{y}^{2}\left\{
\frac{5}{3}-\left(\frac{k_{T}^{2}}{k_{y}^{2}}+\frac{\kappa_T
\kappa_n}{k_{y}^{2}}\right)\right\}]\eqno{(10)}$$
If $\nabla p_{e0} = 0$ is used as the steady state condition the above equation yields a basic low frequency electromagnetic wave of inhomogeneous unmagnetized plasmas with the dispersion relation, $$\omega^2 = \frac{5}{3H_0} \left[(\lambda_e^2 k_y^2) v_{te}^2
\kappa_n^2 + \left(a-\frac{\kappa_n^2}{k_y^2}\right) c_s^2
k_y^2\right]\eqno{(11)}$$ This wave has not been studied in plasmas so far. In our opinion it can play a very important role in the generation of magnetic fluctuations in unmagnetized plasmas due to several linear and nonlinear mechanisms. In a pure electron plasma where ions are assumed to be stationary, (10) reduces to $$\omega^2 = \frac{5}{3H_0} (\lambda_e^2 k_y^2) v_{te}^{2} \kappa_n^2\eqno{(12)}$$ But in our view the electron plasma wave frequency in (12) is near the ion acoustic wave frequency $c_s k_y$ and hence it couples with it.\
Now we look at EMHD theory for the generation of ’seed’ magnetic field which is steadily growing. Again ions are assumed to be stationary in the time scale $\tau << \omega_{pi}^{-1}$. In addition to this the electron inertia is also neglected assuming $\omega_{pe}^{-1} << \tau$ . Then electron equation of motion becomes, $$0 \simeq -e \textbf{E} - \frac{\nabla
p_e}{n}\eqno{(13)}$$ The Faraday law is, $$\partial_t \textbf{B}= -c
\nabla \times \textbf{E}\eqno{(14)}$$ If it is assumed that $\nabla
n_0 =\hat{\textbf{x}} \left|\frac{dn_0}{dx}\right|$ and $\nabla T_0
= \hat{\textbf{y}} \left|\frac{dT_0}{dy}\right|$, then (13) and (14) yield, $$\partial_t \textbf{B} = -\frac{c}{e} \left(\frac{T_e}{L_n
L_T}\right)\hat{\textbf{z}}\eqno{(15)}$$ where $L_n = \kappa_{n}^{-1}$ and $L_T=\kappa_T^{-1}$ are constants.\
Equation (15) is integrated from $\tau = 0$ to $\tau =
\frac{L_n}{c_s}$ to have \[9 - 11\], $$\textbf{B}=
\left\{\frac{c}{e}\left(\frac{T_e}{L_n L_T}\right)\tau\right\}
\hat{\textbf{z}}\eqno{(16)}$$ This is a well-known equation in laser-plasma literature. Assuming $T_e \sim 100 eV$, $n_0 \sim
10^{20} cm^{-3}$, $L_n \sim L_T \sim 0.005 cm$, one obtains $c_s
\sim 3 \times 10^7 cm/Sec$ and hence $|B| \sim 0.6 \times 10^6$ Gauss \[9\]. Note that $\tau =\frac{L_n}{c_s} \simeq 1.66 \times
10^{-10}$ Sec and $\omega_{pi} \sim 1.3 \times 10^{13} rad/Sec$ while the laser pulse duration is of the order of a nanosecond. Thus we have $\omega_{pi}^{-1} << \tau$, contrary to the initial assumption of stationary ions for $\tau << \omega_{pi}^{-1}$. An equation similar to (16) has also been used to estimate the ’seed’ magnetic field generated by an ionized clump of a galactic cloud \[7\]. The density gradient of the cloud has been assumed to have an exponential form.\
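The timescale inconsistency can be checked directly from the numbers quoted above; the following short sketch uses only the values as quoted in the text.

```python
# Timescale check for the EMHD estimate of eq. (16), using the quoted values.
L_n  = 0.005      # cm, density scale length
c_s  = 3.0e7      # cm/s, ion acoustic speed as quoted in the text
w_pi = 1.3e13     # rad/s, ion plasma frequency as quoted in the text

tau = L_n / c_s                          # integration time used in eq. (16)
print(f"tau        ~ {tau:.2e} s")       # ~ 1.7e-10 s
print(f"1/w_pi     ~ {1.0 / w_pi:.2e} s")
print(f"tau * w_pi ~ {tau * w_pi:.0f}")  # >> 1, so ions cannot be treated as stationary
```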
The brief overview of EMHD models shows clearly that the theoretical models for both the fluctuating and steadily growing magnetic fields suffer from serious contradictions. Since the EMHD is still being used \[15\] for estimating magnetic fields produced in laser plasmas, therefore the weaknesses and contradictions have been elaborated here again.\
Since plasmas generally have exponential density profiles, there is a need to find an exact 2-D solution of the set of two-fluid equations assuming an exponential-type density structure in a plane. It may also be mentioned here that in many tokamak plasmas the density falls almost exponentially near the walls, which gives rise to drift waves.\
$\textbf{III. EXACT SOLUTION OF 2-FLUID EQS.}$\
Our aim is to search for an exact analytical solution of the set of highly nonlinear partial differential equations of electron-ion plasma to show how the system from a non-equilibrium state can evolve in time generating the ’seed’ magnetic field.\
In our opinion the same physical mechanism is applicable at both astrophysical and laboratory scales. Biermann \[6\] gave the pioneering idea that the electron baroclinic vector $(\nabla n_e
\times \nabla T_e)$ can generate magnetic fields in rotating stars. We just modify it a little by proposing that the ’seed’ magnetic field is a macroscopic phenomenon and is generated on longer spatial and temporal scales. Hence the ion dynamics cannot be neglected. Therefore, the ion baroclinic vector $(\nabla n_i \times \nabla T_i)$ must also be considered. However, the quasi-neutrality approximation is valid on the slow time scale, therefore we use $n_i \sim n_e = n$. In thermal equilibrium, the source of magnetic field generation disappears.\
For an analytical solution of a complex set of equations, we have to use some assumptions and approximations. The exact solution presented here contains exponential type density profile in xy-plane which is the main deviation from the previous models \[17, 19\]. It is important to note that the nonlinear terms do not vanish if we assume exponential density fall or rise along both the axes x and y as has been discussed in section II.\
Therefore, we have to choose a very special exponential function in xy-plane for density which reduces the nonlinear equations into two linear equations. The detailed mathematical model is presented here.\
The electrons are assumed to be inertialess in the limit $|\partial_t|<<\omega_{pe}$, $c|\nabla|$ where $\omega_{pe}=\left(\frac{4\pi n_0 e^2}{m_e}\right)^{\frac{1}{2}}$ is the electron plasma frequency and c is the speed of light. We define four scalar fields $\varphi,u,\chi$ and h such that the ion velocity $\textbf{v}_i$ and magnetic field are defined, respectively, as \[17, 19\], $$\textbf{v}_i=(\nabla\varphi\times\mathbf{\hat{z}}+u\mathbf{\hat{z}})f(t)=(\partial_y \varphi, -\partial_x \varphi,
u)f\eqno{(17)}$$ $$\textbf{B}=(\nabla\chi\times\mathbf{\hat{z}}+h\mathbf{\hat{z}})f(t)=(\partial_y
\chi, -\partial_x \chi, h)f\eqno{(18)}$$ All these scalar fields are functions of (x, y) coordinates and f is a function of time.\
We further assume $\partial_t n_j=0$ and $\nabla.\textbf{v}_j=0$ which requires $$\nabla\psi.\textbf{v}_j=\{\varphi,
\psi\}=0\eqno{(19)}$$ where $\psi=ln \bar{n}$, $n_e\simeq n_i =n$ and $\{\varphi, \psi\}=\partial_y \varphi \partial_x \psi-\partial_x
\varphi \partial_y \psi$. Here $\bar{n}=\frac{n_{(x,y)}}{N_{0}}$ and $N_{0}$ is an arbitrary number used to normalize n.\
The displacement current is neglected and hence we obtain, $$\textbf{v}_{e}=\left(\textbf{v}_{i}-\frac{c}{4\pi e}\frac{\nabla\times\textbf{B}}{n}\right)\eqno{(20)}$$ Let $\textbf{E}=-\nabla\Phi-\frac{1}{c}\partial_t \textbf{A}$ where $\Phi$ is electrostatic potential different from $\varphi$. Since $\textbf{B}=0$ at t=0, therefore we do not normalize the equations. This point has been explained in detail in Ref. \[19\]. The curls of momentum equations of electrons and ions yield, respectively, $$\partial_t
\textbf{B}+\nabla\times\left[\textbf{B}\times\left(\textbf{v}_i-\frac{c}{4\pi
e}\frac{\nabla\times\textbf{B}}{n}\right)\right]=-\frac{c}{e}(\nabla\psi\times
\nabla T_e)\eqno{(21)}$$ and $$\partial_t(a\textbf{B}+\nabla\times\textbf{v}_i)-\nabla\times[a(\textbf{v}_i
\times \textbf{B})+\textbf{v}_i \times (\nabla\times
\textbf{v}_i)]$$$$=\frac{1}{m_i}(\nabla\psi\times \nabla T_i)\eqno{(22)}$$ where $a=\frac{e}{m_i c}$. If the conditions $$\{\varphi,\chi\}= \{\varphi, u\}= \{h, \varphi\}=0\eqno{(23)}$$ are satisfied along with $$\{\nabla^2 \varphi,
\varphi\}=0\eqno{(24)}$$ then all the nonlinear terms of (21) and (22) vanish and they reduce, respectively, to simpler equations $$\partial_t \textbf{B}=-\frac{c}{e}(\nabla \psi\times \nabla
T_e)\eqno{(25)}$$ and $$\partial_t
(a\textbf{B}+\nabla\times\textbf{v}_i)=\frac{1}{m_i}(\nabla\psi\times\nabla
T_i)\eqno{(26)}$$ where $T_e\neq T_i$ and right hand sides of (25) and (26) are the source terms for generating magnetic field and plasma flow. Let us assume that $\textbf{B}$ is related with plasma vorticity through the following equation, $$\textbf{B}=\alpha\left(\nabla\times\textbf{v}_{i}\right)\eqno(27)$$ where $\alpha$ is a constant. Then (26) becomes $$\left(a+\alpha^{-1}\right)\partial_{t}\textbf{B}=\frac{1}{m_{i}}\left(\nabla\psi\times\nabla
T_{i}\right)\eqno(28)$$ Now we discuss an important point of the present theoretical model. In the previous works it was assumed that the field $\varphi$ satisfies the Poisson equation, $$\nabla^{2}\varphi=-\lambda\varphi\eqno(29)$$ where $\lambda$ is a constant and $0<\lambda$ holds. The forms of $\psi$ and $T_{j}$ were chosen as $\psi=\psi_{0}e^{\mu_{1}x}cos \mu_{2}y$ and $T_{j}=\{T_{00j}+T^{'}_{0j}(y-z)\}f(t)$ where $\psi_{0}$, $\mu_{1}$, $\mu_{2}$, $T_{00j}$ and $T^{'}_{0j}$ were constants. We, here, want to find out a 2-D solution in the exponential form without assuming the density gradient to be constant. For this purpose the assumption (29) is modified as $$\nabla^{2}\varphi=\lambda\varphi\eqno(30)$$ and we assume $0<\lambda$. The form of $\psi_{(x,y)}$ can be chosen like, $$\psi_{(x,y)}=A_{1}e^{(\mu
x+\nu y)}+A_{2}e^{(\mu x-\nu y)}=\psi_{1}+\psi_{2} = ln
\bar{n}\eqno(31)$$ where $\overline{n}=\frac{n_{(x,y)}}{N_{0}}$ is dimensionless and $N_{0}$ is some constant density. Here $A_{1}$, $\mu$, $\nu$, $A_{2}$ are constants and $$\lambda=\mu^{2}+\nu^{2}\eqno(32)$$ We may choose $0<\mu, \nu$ for simplicity. Let the temperatures be only functions of space in yz-plane as, $$T_{0j}(y,z)=T_{00j}+T^{'}_{0j}(y-z)\eqno(33)$$ Then the baroclinic vectors in (25) and (26) become constant with respect to time. These equations can be integrated from t=0 to $\tau$ and one obtains, $$\textbf{B}=-\frac{c}{e}(\nabla \psi \times \nabla
T_e)\tau\eqno{(34)}$$ and $$\textbf{B}=\frac{1}{m_i
(a+\alpha^{-1})}(\nabla \psi \times \nabla T_i)\tau\eqno{(35)}$$ These equations relate $T^{'}_{e0}$ and $T^{'}_{i0}$ as, $$T^{'}_{e0}=\frac{a}{(a+\alpha^{-1})}T^{'}_{i0}\eqno{(36)}$$ Equations (31) and (33) yield $$\left(\nabla\psi\times\nabla
T_{j}\right)=-T^{'}_{0j}\left(\partial_{y}\psi, -\partial_{x}\psi,
-\partial_{x}\psi\right)\eqno(37)$$ where $\partial_{y}\psi=\nu\left(\psi_{1}-\psi_{2}\right)$ and $\partial_{x}\psi=\mu\psi$.\
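As a side check, the cross product in (37) can be verified symbolically for the profiles (31) and (33); the following is a minimal sketch (Python with sympy assumed), not part of the derivation itself.

```python
import sympy as sp

x, y, z, mu, nu, A1, A2 = sp.symbols('x y z mu nu A1 A2', positive=True)
T00, Tp = sp.symbols('T00 Tp')     # T_{00j} and T'_{0j}

psi = A1*sp.exp(mu*x + nu*y) + A2*sp.exp(mu*x - nu*y)   # eq. (31)
T   = T00 + Tp*(y - z)                                  # eq. (33)

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])
baroclinic = grad(psi).cross(grad(T))                   # grad(psi) x grad(T_j)

# Expected form, eq. (37): -T'_{0j} * (d_y psi, -d_x psi, -d_x psi)
expected = -Tp*sp.Matrix([sp.diff(psi, y), -sp.diff(psi, x), -sp.diff(psi, x)])
print((baroclinic - expected).applyfunc(sp.simplify))   # prints the zero vector
```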
We are free to use any of the equations, (34) or (35) to estimate $\textbf{B}$. Let us choose (34) and use (18) for f=1 to find out following relations, $$\chi=(\chi_{0}\psi)\tau\eqno(38)$$ and $$h=-(h_{0}\psi)\tau\eqno(39)$$ where $\chi_{0}=\left(\frac{cT_{0e}^{'}}{e}\right)$ and $h_{0}=\mu\chi_{0}$. Equation (17) for f=1 along with (30) gives, $$\nabla\times\textbf{v}_{i}=\left(\partial_{y}u, -\partial_{x}u, -\lambda\varphi\right)\eqno(40)$$ Then (27) yields, $$u=(u_{0}\psi)\tau\eqno(41)$$ and $$\varphi=(\varphi_{0}\psi)\tau\eqno(42)$$ where $u_{0}=\frac{\chi_{0}}{\alpha}$ and $\varphi_{0}=\left(\frac{\mu}{\alpha\lambda}\right)\chi_{0}$.\
Three dimensional magnetic field $\textbf{B}$ and $\textbf{v}_{i}$ can be expressed explicitly for $A_1 \neq A_2$ as,\
$$\textbf{B}= \left[ \begin{array}{ c } \chi_{0}\nu
\left(A_{1}e^{(\mu x+\nu y)}-A_{2}e^{(\mu x-\nu y)}\right) \\
-\mu\chi_{0}\left(A_{1}e^{(\mu x+\nu y)}+A_{2}e^{(\mu x-\nu
y)}\right) \\ -h_{0} \left(A_{1}e^{(\mu x+\nu y)}+A_{2}e^{(\mu x-\nu
y)}\right) \end{array} \right]\psi_0$$$$\eqno(43)$$ and $$\textbf{v}_{i}= \left[ \begin{array}{ c }\nu \varphi_{0}(A_{1}e^{(\mu x + \nu y)}-A_{2}e^{(\mu x-\nu
y)})\\
-\mu \varphi_{0}(A_{1}e^{(\mu x + \nu y)}+A_{2}e^{(\mu x-\nu
y)})\\-u_{0}(A_{1}e^{(\mu x + \nu y)}+A_{2}e^{(\mu x-\nu
y)})\end{array}\right]\psi_0$$$$\eqno(44)$$\
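To make the spatial structure of (43) explicit, one can rebuild $\textbf{B}=(\partial_{y}\chi, -\partial_{x}\chi, h)$ directly from (18), (38) and (39) with the $\psi$ of (31). The short symbolic sketch below (sympy assumed) does just that, with the factor $\tau$ and the overall normalisation following those definitions.

```python
import sympy as sp

x, y, mu, nu, A1, A2, chi0, tau = sp.symbols('x y mu nu A1 A2 chi0 tau', positive=True)

psi = A1*sp.exp(mu*x + nu*y) + A2*sp.exp(mu*x - nu*y)   # eq. (31)
chi = chi0*psi*tau                                      # eq. (38)
h   = -mu*chi0*psi*tau                                  # eq. (39), with h0 = mu*chi0

# eq. (18) with f = 1: B = (d_y chi, -d_x chi, h)
B = sp.Matrix([sp.diff(chi, y), -sp.diff(chi, x), h])
for name, comp in zip(("B_x", "B_y", "B_z"), B):
    print(name, "=", sp.factor(comp))
# B_x ~ chi0*nu*tau*(psi_1 - psi_2) and B_y = B_z ~ -mu*chi0*tau*psi, so B_x
# vanishes wherever psi_1 = psi_2, consistent with the corner values of section V.
```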
Hence all the scalar fields $\chi, h, u$ and $\varphi$ become functions of (x,y) through $\psi$, which is externally given. The complicated nonlinear terms of the two-fluid equations vanish and we obtain two simple and beautiful linear equations, (34) and (35). These equations show that the electron and ion baroclinic terms $(\nabla \psi \times \nabla T_j)$ (for j = e, i) become the source for the ’seed’ magnetic field and the plasma flow $\textbf{v}_i$.\
[$\textbf{IV. GENERAL APPLICATIONS}$\
It has been shown that the forms of $\psi (x,y)$ and $T_{0j} (y,z)$ given in equations (31) and (33), respectively, reduce the set of nonlinear two fluid partial differential equations into two simpler linear equations (34) and (35) under certain conditions mentioned in the previous section. This theoretical model shows that the non-parallel density and temperature gradients can create magnetic fields and flows in initially unmagnetized plasmas.\
Our aim is to apply this model to a system which has exponential-type of density profile as for example in the case of a laser plasma. But the chosen form of $\psi (x,y)$ in (31) with the definition $\psi = ln \bar{n}$ gives, $$\bar{n}=\frac{n_{(x,y)}}{N_0}=exp\left[A_0 \left\{ e^{(\mu x + \nu y)} +
e^{(\mu x - \nu y)}\right\}\right] \eqno{(45)}$$ where $A_0 = A_1 =
A_2$ has been assumed.\
It looks as if the density $n_{(x,y)}$ has a profile like a double exponential in the (x,y) plane. Such a steep density variation is not interesting for physical applications, in general. We show here that the density variation can become very similar to an exponential form in the (x,y) plane by choosing suitable values of the constants $N_0$, $\mu$ and $\nu$ along with $A_0$. Let us consider an inhomogeneous plasma rectangle in the (x,y) plane with four corner points (0,0), $(x_m, 0), (0, y_m)$ and $(x_m,y_m)$, where $x_{m}$ and $y_{m}$ are the maximum lengths of the system along the x and y axes, respectively. Then choose the constants in such a way that the density $n_{(x,y)}$ at $(x_m,y_m)$ will be almost e times (or a little larger than) the value at (0,0), while the density at $(x_m,0)$ and $(0,y_m)$ will be somewhat less than e times the density at (0,0).\
Such a density function is acceptable physically. For example, in the previous EMHD model, the density was chosen to be an exponential function only along the x-axis as $n_{(x)}=n_0 e^{\frac{x}{L_n}}$, where $L_n$ is the density scale length and $n_0$ is the magnitude of the density at x=0 \[9\]. In our case, $n$ depends upon the two coordinates x and y. Its profile depends upon the values of the constants. Therefore this theoretical model can be applicable to many inhomogeneous plasma systems.\
Note that $\psi= \ln \bar{n}$ and if $0<A_0 < 1$, $0\leq\mu x + \nu y < 1$ and $0\leq\mu x - \nu y < 1$, then $\psi$ can have values which give $e^{\psi}$ of the order of $e^{1}$, but not exactly $e^{1} \simeq 2.7$ at all points because $\psi$ changes with x and y.\
In the next section we shall apply the model to laser plasmas as an example and our point of view will become clearer. The values of the constants will be chosen to show how it works for relatively smooth density profiles in (x,y)-plane.\
It seems important to point out that any one of $\psi_1$ and $\psi_2$ in (31) should not be much smaller than the other while choosing the constants. If one of them is negligibly small then the solution becomes one-dimensional.]{}
[$\textbf{V. LASER PLASMA}$\
Here we apply our theoretical model to estimate magnetic field $\textbf{B}$ and the plasma flow $\textbf{v}_{i}$ in a non-uniform classical laser plasma. Consider a finite plasma rectangle with four corner points $(0, 0)$, $(x_{m}, 0)$, $(0, y_{m})$ and $(x_{m},
y_{m})$ in the (x, y) plane, as mentioned in the previous section. We may assume $\mu x$ and $\nu y$ to vary in this finite plasma as, $$\mu x: 0\rightarrow 0.5 = \mu x_{m}\eqno(46a)$$ and $$\nu y : 0\rightarrow 0.7 = \nu y_{m}\eqno(46b)$$ Then at (0, 0), we have $\overline{n}_{(0, 0)}=\frac{n_{(0, 0)}}{N_{0}}$, and $N_{0}$ is chosen such that $\overline{n}\neq1$ or $\overline{n} \nless 1$ because the density $n_{(x, y)}$ should be neither zero nor negative. Therefore, we choose $\frac{n_{(0, 0)}}{N_{0}}=3$, which gives $\Psi_{(0, 0)}\simeq1.1$, and due to (45) we find $A_{0}\simeq0.55$. If the density is of the order of $10^{20}cm^{-3}$, then we may assume $N_{0}=10^{20}cm^{-3}$ and hence $n_{(0, 0)} =
3\times10^{20}cm^{-3}$. Or we may assume $N_{0}=10^{19}cm^{-3}$ and hence $n_{(0, 0)}=3\times10^{19}cm^{-3}$ while $n_{(x_{m}, y_{m})}$ will turn out to be nearly $10^{20}cm^{-3}$. In laser-plasmas, $L_{n}=50\times10^{-6}$ $m=L_{T}$ was assumed in estimating $|B|$ \[8, 9\] (where $L_{n}$ and $L_{T}$ are scale lengths of density and electron temperature along x and y co-ordinates, respectively). In these studies, $\nabla n$ was along the x-axis and $\nabla T_{e}$ was along the y-axis only. Then $L_{n}\simeq L_{T}$ was also assumed. For the sake of generality we do not assume $\mu = \nu$. Instead let $\nu = 1.5 \mu$. In this case we estimate $\overline{n}$, $\Psi$ and $\textbf{B}$ at the four corner points of the plasma rectangle as follows: $$\overline{n}_{(0, 0)}=3;\Psi_{(0, 0)}\simeq1.1$$ $$\textbf{B}_{(0, 0)}=(0, -1.1, -1.1)9.9\times10^{5} Gauss\eqno(47a)$$ $$\overline{n}_{(x_{m}, 0)}=6;\Psi_{(x_{m}, 0)}\simeq1.79$$ $$\textbf{B}_{(x_{m}, 0)}=(0, -1.79, -1.79)9.9\times 10^{5}Gauss\eqno(47b)$$ $$\overline{n}_{(0, y_{m})}=3.98;\Psi_{(0, y_{m})}=1.38$$ $$\textbf{B}_{(0, y_{m})}=(1.24, -1.38, -1.38)9.9\times10^{5}Gauss\eqno(47c)$$ $$\overline{n}_{(x_{m}, y_{m})}\simeq9.48;\Psi_{(x_{m},
y_{m})}\simeq2.25$$ $$\textbf{B}_{(x_{m}, y_{m})}\simeq(2.73, -2.25,
-2.25)9.9\times10^{5}Gauss\eqno(47d)$$ These values of $|\textbf{B}|$ are almost in agreement with the observations \[9\]. Then we can express $$\textbf{v}_{i(x, y)}=6\times10^{7}(-0.46, 0.3, -1)\Psi_{(x, y)}cm/sec\eqno(48)$$ If we look at the values of the density, we notice that the density $n_{(x,
y)}$ at $(x_{m}, y_{m})$ is $$n_{(x_{m}, y_{m})}\simeq \{n_{(0,
0)}\}(3.16)$$ which is a little larger than e times the density at (0, 0). Therefore, the density does not behave as a steep double exponential function. We are mainly interested in the orders of magnitude of $|\textbf{B}|$ and $|\textbf{v}_{i}|$, which mainly depend upon our choice of constants.\
]{}
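A small numerical cross-check of the corner-point values quoted above is sketched below ($A_0 = 0.55$, $\mu x_m = 0.5$ and $\nu y_m = 0.7$ are taken from the text; small differences come from rounding $A_0$).

```python
import numpy as np

A0 = 0.55                     # from Psi(0,0) = 2*A0 ~ 1.1
mux_m, nuy_m = 0.5, 0.7       # ranges of mu*x and nu*y, eqs. (46a)-(46b)

corners = {"(0,0)": (0.0, 0.0), "(x_m,0)": (mux_m, 0.0),
           "(0,y_m)": (0.0, nuy_m), "(x_m,y_m)": (mux_m, nuy_m)}

for label, (mx, ny) in corners.items():
    psi = A0 * (np.exp(mx + ny) + np.exp(mx - ny))   # eq. (31) with A1 = A2 = A0
    nbar = np.exp(psi)                               # nbar = exp(Psi), from eq. (45)
    print(f"{label:>9}:  Psi = {psi:4.2f},  nbar = {nbar:5.2f}")
# close to the values quoted in eqs. (47a)-(47d): Psi ~ 1.10, 1.81, 1.38, 2.28
```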
$\textbf{VI. DISCUSSION}$\
It is important to note that the ’seed’ magnetic field generation can not be explained on the basis of magnetohydrodynamics (MHD). There must be a source like thermal energy which converts into magnetic energy. Biermann \[6\] proposed that the electron baroclinic vector $(\nabla n_e \times \nabla T_e)$ can generate the magnetic fields in rotating stars. Then based on this idea, a very simple model; the electron magnetohydrodynamics (EMHD) was presented to explain the generation of magnetic field in laser-induced plasmas. Later on, the magnetic electron drift vortex (MEDV) mode was discovered using EMHD. This was believed to be a low frequency pure transverse normal mode of electron plasma \[12, 13\]. The EMHD is also not a convincing theoretical model for the generation of ’seed’ magnetic field. The MEDV mode description also suffers from contradictions.\
On the other hand, the two-fluid model is too complicated. The numerical simulation of these equations is a complex problem. But the analytical 3-D solution is also not straightforward. However, a two-dimensional solution has been presented using physical and consistent assumptions and approximations.
It is important to note that the present model is different from the previous works because it shows that
1. ion dynamics play a crucial role
2. the baroclinic vectors generate not only the magnetic field but the plasma flow as well
An exact 2-D solution of the two-fluid equations was also found a few years ago \[19\], but it had a serious weakness. The density gradient was assumed to follow a sinusoidal behavior, contrary to common observations. Since it was the first effort to obtain an exact solution of the two-fluid equations, it was presented for the interest of researchers working in the field.\
The present exact 2-D solution of the two-fluid equations is in the form of an exponential function of the x and y coordinates. This structure of the density gradient is more physical. Since all fields become linear functions of $\psi$, they all have a similar spatial structure. This solution is applicable to both astrophysical and laser-induced plasmas, in our opinion.\
The present investigation suggests that instead of EMHD, the numerical simulation of two-fluid equations will be very useful for understanding the mechanism for the generation of ’seed’ magnetic field in different systems with different profiles of density and temperatures. For analytical solution, we have to choose very special forms of density and temperature gradients. This theoretical model can be very useful for further studies in astrophysical and laser plasmas.
[2]{} L. M. Widrow, Rev. Mod. Phys. $\textbf{74}$, 775 (2002). L. Mestel and K. Subramanian, Mon. Not. R. Astron. Soc. $\textbf{265}$, 649 (1993). E. N. Parker, Cosmical Magnetic Fields (Clarendon, Oxford 1979) E. G. Blackman, Astrophysical J. $\textbf{529}$, 138 (2000). A. Brandenburg and K. Subramanian, Astrophysical Magnetic Fields and Nonlinear Dynamo Theory, Physics Reports 417, 1-209 (2005). L. Biermann, Z. Naturforsch. $\textbf{5A}$, 65 (1950). A. Lazarian, Astron. Astrophys. $\textbf{264}$, 326 (1992). J. A. Stamper, K. Papadopoulos, R. N. Sudan, S. O. Dean, E. A. Mclean, and J. W. Dawson, Phys. Rev. Lett. $\textbf{26}$, 1012 (1971). K. A. Brueckner and S. Jorna, Rev. Mod. Phys. $\textbf{46}$, 325 (1974). A. A. Kingssep, K. V. Chukbar, V. V. Yan’Kov, in Reviews of Plasma Physics, edited by B. B. Kadomtsev (Cosultants Bureau, New York 1990), Vol. 16, p. 243. L. A. Bol’shov, A. M. Dykhne, N. G. Kowalski, and A. I. Yudin, in Handbook of Plasma Physics, edited by M. N. Rosenbluth, and R. Z. Sagdeev (Elsevier Science, New York 1991), Vol. 3, p.519. R. D. Jones, Phys. Rev. Lett. $\textbf{51}$, 1269 (1963). M. Y. Yu and Xiao Chijin, Phys. Fluids $\textbf{30}$, 3631 (1987). H. Saleem, Phys. Plasmas $\textbf{16}$, 082102 (2009); H. Saleem in New Developments in Nonlinear Plasma Physics, Editors B. Eliasson and P.K. Shukla, Proc. ICTP Summer College on Plasma Physics and International Symposium on Cutting Edge Plasma Physics 10-28 August 2009, Trieste, Italy. C. A. Ceccetti, M. Borghesi, J. Fuchs, G. Schurtz, S. Kar, A. Macchi, L. Romagnani, P.A. Wilson, P. Antici, R. Jung, J. Osterholtz, C.A. Pipahl, O. Willi, A. Schiavi, M. Notley and D. Neely. $\textbf{16}$, 043102 (2009). H. Saleem, Phys. Rev. $\textbf{E 54}$, 4469 (1996); H. Saleem, Phys. Rev. $\textbf{E 59}$, 6196 (1999). H. Saleem and Z. Yoshida, Phys. Plasmas $\textbf{11}$, 4865 (2004). S. M. Mahajan and Z. Yoshida, Phys. Rev. Lett. $\textbf{81}$, 4863 (1998). H. Saleem, Phys. Plasmas $\textbf{14}$, 072105 (2007).
$\textbf{Acknowledgement}$\
The author is grateful to Professor Zensho Yoshida of Tokyo University for several useful discussions on this work at Abdus Salam-International Centre for Theoretical Physics (AS-ICTP), Trieste, Italy during the Summer College on Plasma Physics 10-28 August 2009.
---
abstract: 'When angular objects in lensing are considered as linear objects, interesting phenomena start happening. Tachyonic caustics are one example. We review that the intrinsic variables of the lens equation are angular variables. We argue that the “fast glance effect” of a caustic curve that is far away from the lenses does not share the physical bearing of the well-known (apparent) superluminal motion. There is no doubt that it would be a useful exercise to study the null geodesics in the metric of, say, a rapidly rotating black hole binary. Lienard-Wiechert potentials ($A_\mu$) satisfy Maxwell’s equations in Minkowski space. The authors’ claim that swapping $eQ$ and $GM$ makes the time component ($A_0$) of the Lienard-Wiechert potentials into “the gravitational analog” that governs the behavior of the null geodesics near a relativistic binary system seems to be unfounded.'
author:
- Sun Hong Rhie
title: |
Superluminal Caustic is Just a Common Misconception: A Comment\
on astro-ph/0001199 by Zheng Zheng and Andrew Gould
---
It has been a long controversy in the smoky backroom where non-smoking jurors shred papers and throw verdicts, where the caustics are. The controversy hits home often because we are looking for Bruno’s planets. Caustics matter greatly in the field of microlensing planet searches. We wrote in a paper on the discovery of evidence of a low-mass planet [@98blg35] that “a single lens (stellar lens only) has a point caustic at the position of the lens.” Or, is the point caustic behind the position of the lens at the projected position of the lens? May we draw the caustic curve and the critical curve in the same plane as we usually do? Is it a law or a rule that caustics (onto which the critical curve in the image plane is mapped under the lens equation) lie in the source plane? Or, is it a matter of definitions and conventions? What is the image plane? What is the source plane? Does any of them (including what we wrote in the paper mentioned) have any merit or physical relevance? What is a superluminal caustic? We realize that we just found in the phantom of tachyonic caustics an important clue to the mysterious misunderstanding behind the controversy. Discoveries advance science. So does reasoning. Here we investigate the misconception of superluminal caustics [@whatsupernatural] as an attempt to straighten out the wrinkles on caustics. We only need to borrow a pinch of salt from a way of thinking in science which may have been popularized by Einstein: the gedanken experiment.
Andrew Gould writes in a recent article [@whatnatural] (G1421 from here on) that “the geometry of point-lens microlensing is so simple that students can derive all the basic results in a few hours." The abstract of G1421 starts with a paragraph, “if the standard microlensing geometry is inverted so that the Einstein ring is projected onto the observer plane rather than the source plane, ... ." What could the source plane and the observer plane referred to in G1421 be? In lensing, there are three basic objects, which we may refer to as the “lensing trio" in this article. They are a radiation emission source, a lensing object, and an observer [@schechter]. The lensing trio fall more or less on a line in the radial direction, namely, the line of sight (of the observer of the source star in the absence of the intervening lensing object). Given a geometric line, one can imagine infinitely many planes that are perpendicular to the line. One may refer to the plane that passes through the radiation emission source as the source plane and the plane that passes through the observer as the observer plane. Where is the Einstein ring? Equation (1) in G1421 indicates that the Einstein ring lies on the plane that passes through the lensing object. Zheng and Gould (2000; ZG1199 from here on) refer to the plane through the lensing object as the lens plane.
Physics lies in relations not in nomenclatures, and it will be most harmonious if the nomenclatures faithfully represent the relations. One of the governing relations in lensing is the so-called lens equation, and ZG1199 writes the lens equation in terms of the variables defined on the aforementioned source plane and lens plane [@sweiss86]: ${\etavec} = \frac{D_s}{D_1}\,{\zetavec} - D_2\,{\Thetavec}({\zetavec})$ , \[eqZG11\] where $D_1$ and $D_2$ are the distances from the lens to the observer and to the source, and $D_s = D_1 + D_2$. ZG1199 describes the transverse (or 2-d) position variables ${\etavec}$ and ${\zetavec}$ as follows: A photon comes from point ${\etavec}$ in the source plane and hits point ${\zetavec}$ in the lens plane. At this point, one may wonder if ${\etavec}$ must be the variable for the source position (since it is said to be in the source plane) and ${\zetavec}$ must be the variable for the lens position (since it is said to be in the lens plane). In order to understand the significance of the variables ${\etavec}$ and ${\zetavec}$, we consider a gedanken experiment: we reduce the mass $M$ gradually by taking away one atom at a time. A photon from point ${\etavec}$ on the plane at a distance $D_s$ from the observer must hit point ${\zetavec}$ on the lens plane such that lens equation (\[eqZG11\]) with decreasing mass $M(t)$ is satisfied. When the last atom is taken away, there is no lensing mass, and there is no lens plane. The plane at a distance $D_1$ from the observer is just one of the infinitely many planes that are perpendicular to the line of sight. On the other hand, $D_1$ retains its significance in equation (\[eqZG11\]). When there is no lensing mass, $\Thetavec$ vanishes, and the lens equation reads as follows: ${\etavec} = \frac{D_s}{D_1}\,{\zetavec}$ . \[eqNolens\] Since there is no lensing mass, the distance $D_1$ does not have any physical relevance and should not show up in equation (\[eqNolens\]) with any significance. But it does. We only know that that is where we used to have a lensing mass. Is it some sort of hysteresis? Then, it could be where we are thinking of putting a lensing mass because we can start piling atoms the very next moment. Then, of course, it could be just a plane we are thinking of for no reason, perhaps out of boredom. Or, perhaps, ${\zetavec}$ is not the most representative variable for the governing equation of the lensing behavior. What equation (\[eqNolens\]) says is that the transverse vector ${\etavec}$ on the source plane at a distance $D_s$ from the observer and the transverse vector ${\zetavec}$ on an arbitrary plane at a distance $D_1$ from the observer are parallel and scale with the distances from the observer. What should strike us by now is that the perspective of the observer is all there is in the lens equation with zero lens mass. So, we divide the transverse vectors by the distances of the planes, and it becomes very clear what the equation must mean. Let $\vec\alpha\equiv \zetavec / D_1$ and $\vec\alpha_s\equiv \etavec / D_s$; then $\vec\alpha = \vec\alpha_s$ . \[eqAlphaZeromass\] A radiation source that would be seen at an angular position $\vec\alpha_s$ by an observer in a flat space is seen by the observer at an angular position $\vec\alpha$ that has the same value as $\vec\alpha_s$ when there is nothing to change the light ray from that in Minkowski space. Then, it is clear that the pair of variables $\{\etavec, \zetavec\}$ is simply not the proper pair of variables for the lens equation.
It is not that one cannot write the equation in terms of the variables $\{\etavec, \zetavec\}$, but that they obscure the conceptual underpinning of the lens equation. It is also clear that the variable $\vec\alpha$ or $\zetavec=D_1 \vec\alpha$ is a variable for the positions of the images, not the lenses. This fact seems to have generated another misconception that images should lie on the lens plane. This misconception seems to have produced a corollary that images can be in two different lens planes (when the lensing involves double scattering), as the audience was told by an invited speaker at a recent lensing meeting.
Now, we put back in the lensing mass, say, at $D_1$. The (2-d) scattering angle $\Thetavec$ is a function of a dimensionful constant $GM$ as well as of ${\zetavec}$: $\Thetavec({\zetavec}; GM)$, and it works out to be dimensionless as it should. So, we can divide equation (\[eqZG11\]) by $D_s$ to write it in terms of the pair of angular variables $\vec\alpha$ and $\vec\alpha_s$: $\vec\alpha = \vec\alpha_s + \frac{D}{D_1}\,{\Thetavec}$ , \[eqLeqAlpha\] where $D$ is the reduced distance, $\frac{1}{D} = \frac{1}{D_1} + \frac{1}{D_2}$ . A radiation source that would be seen at an angular position $\vec\alpha_s$ by an observer in Minkowski space is seen by the observer at an angular position $\vec\alpha$ which is shifted from $\vec\alpha_s$ by a fraction of the (2-d) scattering angle $\Thetavec$. The reduced distance is no bigger than the smaller of $D_1$ and $D_2$, as is familiar from the reduced mass in mechanics. Thus, $D/D_1 \leq 1$, and the angular shift satisfies $|\vec\alpha-\vec\alpha_s| \leq |\Thetavec|$. (The equality holds when $D_1 =0$, which is not exactly a physical situation. That is because of the hidden assumption of the lens equation that the observer is supposed to be asymptotically far away from the lensing mass, as should be clear in the following section.)
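For readers who want to see the “basic results" concretely, here is a minimal numerical sketch of the familiar point-mass reduction of \[eqLeqAlpha\]: in units of the Einstein angle the equation becomes $u = y - 1/y$, whose two solutions are the image positions. This standard reduction is assumed here; it is not spelled out explicitly in the text above.

```python
import numpy as np

def point_lens_images(u):
    """Image positions of a point-mass lens in units of the Einstein angle.
    In these units the lens equation alpha_s = alpha - theta_E^2 / alpha
    (the standard point-mass reduction of [eqLeqAlpha]) reads u = y - 1/y."""
    root = np.sqrt(u**2 + 4.0)
    return 0.5*(u + root), 0.5*(u - root)   # major (+) and minor (-) images

for u in (0.1, 0.5, 1.0):
    yp, ym = point_lens_images(u)
    print(f"u = {u:3.1f}:  images at y = {yp:+.3f}, {ym:+.3f}")
```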
An emblematic analogue to this interpretational issue of the lens equation hinging on “trivial math" and “deeper physics" would be the case of the cosmological constant, which may have been the biggest blunder of the man of the 20th century (TIME) but is likely a necessity for the millennials. As a desperate effort to stop the universe from expanding without introducing negative density or pressure, Einstein modified his equations in 1917 and introduced a new fundamental constant, the so-called cosmological constant. The new equation read as follows [@einstein; @sweinberg]: $G_{\mu\nu} - \Lambda g_{\mu\nu} = -8\pi G T_{\mu\nu}$ , \[eqLambda\] where $ G_{\mu\nu}$ and $ T_{\mu\nu}$ are the Einstein and energy-momentum tensors, and $g_{\mu\nu}$ is the metric tensor. The LHS is the geometry, the RHS is the matter content, and the Einstein field equation tells us how the geometry of space time interacts with the content of the matter in the space time. What is curious about the cosmological term is that it does not vanish when the space time is flat. It is a blow to the fundamental notion of Einstein gravity one might have just convinced oneself to accept: gravitational interaction is an experience of the curvature of space time. The cosmological term is a non-curvature term of the geometry that participates in governing the gravity, as it is written in the LHS of the gravitational field equation. We may consider the $\Lambda$-term as a part of the energy momentum tensor and write the gravitational field equation as follows: $G_{\mu\nu} = -8\pi G \left(T_{\mu\nu} - \frac{\Lambda}{8\pi G} g_{\mu\nu}\right)$ \[eqLambdaVac\] The transition from equation (\[eqLambda\]) to equation (\[eqLambdaVac\]) is trivial mathematically, but it requires a profound change in the frame of physical understanding. The $\Lambda$-term as an energy momentum tensor features negative pressure, which Einstein had set out to avoid and which was one of the very reasons why he devised the $\Lambda$-term in equation (\[eqLambda\]) in the first place. It took the development of Goldstone bosons, renormalization and a brief marriage of particle physics and condensed matter physics, Grand Unification Theories (GUT), the experience of rich vacuum structures (with an endless parade of scalar fields as yet to be discovered or refuted) and phase transitions in the early universe, the monopole problem, the horizon problem, the dark matter problem, inflation theories, the structure seed problem, fine-tuning problems, topological defects, etc., for the negative pressure to find its natural position in the human intellectual domain. Now, no one doubts the physical relevance of the cosmological constant as the vacuum energy density – or the “zero point energy" of the (future) quantum gravity, even though it is a good question how big it is, or whether it is a constant. In astrophysical practice, it is simply passed as a stuff with a stiff equation of state, $p = -\rho$, that can overcome the self-gravity of ordinary matter and make the universe fly apart. In fact, high-z supernovae searches have found evidence of an accelerating universe [@SNcp; @highzSN].
So, we emphatically conclude that the intrinsic variables in the lens equations (\[eqZG11\]) and (\[eqLeqAlpha\]) are angular variables. This obvious conclusion is self-evident from the beginning if we derive the lens equation of a point mass $M$ starting from first principles (simply following a standard textbook on general relativity). An equation that relates two angular variables, that is the first thing we get. The others are all simple derived quantities, and there is no misunderstanding of what is what. If it takes a few hours for students to do so as Gould (G1421) testifies, it should take less than a few days for practitioners to derive the lens equation and less than a few minutes to read one. So, we write out the derivation in the following section as an effort to abolish the ground for the mysterious misunderstanding that seems to perpetuate even more mysterious controversies. In fact, it is cathartic to go through the derivation of the lens equation once. The Einstein field equations are non-linear and cannot be solved in general, but there are some exact solutions. And, the lens equation of a point mass is derived from one of those exact solutions, and the simplest one at that (despite the problem of no global timelike Killing vector and of nontrivial topology). It is such a great assurance to be backed by an analytic derivation from an exact solution.
Incidentally, one may realize that the source plane and the lens plane defined based on the positions of the source and the lens in the radial direction (or based on the distances of the source and the lens from the observer) have no relevance to the variables in the lens equation. The lens equation can only address the relation between transverse position variables. Then, one wonders why these objects play such a persistent role in the volitational papers and caustics.
Let’s pause for a moment and ponder angular variables. What is it that we perceive and measure as the angular position of a celestial object? Speckle imaging may offer the best food for thought. The space-time-dependent refraction index makes a photon beam from a celestial object wiggle along through the atmosphere, and the object appears to hop around as registered at the focal plane. We may refer to these snapshots as a time series of “apparent angular positions” of the object. If we remove the atmospheric turbulence or the atmosphere altogether (again in our thought experiment), assuming that this leaves only the vacuum for the photon beam to propagate through, we will find a steady image on our CCD. We may refer to it as the “true angular position” of the object. What is common for both “apparent" and “true" angular positions is that an angular position is determined by the direction of the propagation vector of the photon beam at the observer. The space-time-dependent refraction index of the atmosphere is an electromagnetic property of matter in hydrodynamic motion. On the other hand, there is nothing wrong with understanding the gravitational lensing effect of the space-time curvature in terms of an effective refraction index (a continuous function in space and time), assuming that we calculate the index faithfully to the underlying physics. That is, according to general relativity, not in terms of Fermat’s theorem [@GL] unsubstantiated for the gravitational effect on optical paths (no historians have found evidence of a margin the lack of which prevented Fermat from elaborating Riemannian geometry and the propagation of massless spin-one particles).
Let’s consider a quasar lens. There are four objects that are believed to be all from the same emission source QSO 2237+0305, and they are called Huchra’s lens, Einstein cross, or QSO 2237+0305. The emission source QSO 2237+0305 is at a cosmological distance $z = 1.695$, and it will be a long time before the lensing galaxy moves away from the line of sight of the QSO even though the (Sb) galaxy is relatively close to us at $z = 0.0394$. So, the four objects are the thing that will be recognized as QSO 2237+0305 for generations to come, but we always can remove the lensing galaxy in our thought experiment. Then, an observer will see one object, say, at an angular position $\vec\alpha_s$, which one may refer to as the “true angular position" of the quasar. In a consistent manner, one might have liked to refer to the angular positions of the four objects as the “apparent angular positions" of the quasar, but the multiplicity of the objects makes such a practice seem unfit. The four objects can represent the different facets of the lensed quasar, and they are usually referred to as the “images" of the quasar. So, the angular positions of the four objects may be referred to as the “angular positions of the images" of the quasar or simply the “image positions". This practice is in perfect harmony with our everyday experience. A client in a barber chair next to a corner with mirrored walls can see multiple “images" of oneself while being oblivious to the true object, oneself. In fact, the images show the different sides of the client. So, we are all content to call the four objects the images of the quasar. Now, we repeat our favorite thought experiment and reduce the mass of the lensing galaxy to zero. An observer will see one image of the quasar. What should we call this image? Preimage? Unlensed image? Image-sub-zero (Image$_0$: the image one sees when the lensing mass vanishes)? Instead, this particular image is usually referred to as the “source". Then, the “true angular position" of the quasar $\vec\alpha_s$ may be referred to as the “angular source position" or simply the “source position", and equation (\[eqAlphaZeromass\]) may read in English as follows: the image position of an object is the same as the source position of the object when there is no lensing mass. So, $\vec\alpha$ we introduced as the equivalent of $\zetavec/D_1$ is the variable for the image positions. When we put back in the lensing galaxy, the image positions may differ from the source position, and their relation is nothing but the lens equation (\[eqLeqAlpha\]). One of the main games in quasar lensing (or any large scale lensing) is to reconstruct $\vec\alpha_s$ and $\Thetavec$ from observational information of the images $\vec\alpha$.
In quasar lensing, the distance (actually the redshift) to the emission source is one of the better determined quantities. As a consequence, one may favor writing the lens equation in terms of linear variables [@quasar_microlensing] by multiplying equation (\[eqLeqAlpha\]) by the distance to the source $D_s$ (assumed to be determined from the measured redshift and the cosmology to be determined): $(D_s \vec\alpha_s) = (D_s \vec\alpha) - \frac{D}{D_1}\,(D_s {\Thetavec})$ \[eqLeqDs\] In the case of QSO 2237+0305, the variables ($D_s \vec\alpha_s$) and ($D_s \vec\alpha$) may be considered to be defined on the plane at $z = 1.695$ from the observer. (The equivalence between angular variables and linear variables holds in the small-scattering-angle approximation, $\Theta << 1$, which we assume to be the case in this article. The angular separations of the four objects in QSO 2237+0305 are about $1^{\prime\prime} << 1$ .) One may refer to the plane at $z = 1.695$ as the source plane, as in G1421 (and references therein), and consider ($D_s \vec\alpha_s$) and ($D_s \vec\alpha$) as the linear variables projected into the source plane [@quasar_microlensing]. Then, the variable $(D_s \vec\alpha_s)$ denotes the source position in the source plane, and the variable $(D_s \vec\alpha)$ denotes an image position in the source plane. If one feels a bit of cluttered tautology here, one may realize that the culprit is the clinging desire to recognize the quasar in full three-dimensional coordinates of the space. The source position has been assigned three coordinates: $(D_s \vec\alpha_s, D_s)$, and so has been the position of the image: $(D_s \vec\alpha, D_s)$. We note that the images lie on the source plane here. We mentioned before that some practitioners insist on putting images on the lens plane (or even lens planes).
The source plane here is tied to the value of the third (or radial) coordinates $D_s$ in $(D_s \vec\alpha_s, D_s)$ and $(D_s \vec\alpha, D_s)$. On the other hand, $\{D_s \vec\alpha_s\}$ spans a two-dimensional plane, and one may prefer to refer to the plane as the source plane because the plane is parameterized by the source position variable. The two source planes coincide and there doesn’t seem to be any conflict between the two definitions. That is, until one realizes that $\{D_s \vec\alpha \}$ also spans a two-dimensional plane, and one may prefer to refer to the plane as the image plane because the plane is parameterized by the image position variable. So, we find ourselves in the middle of a luxury of definitions that seem to be tangled in redundancy: the source plane originally defined by the distance of the emission source at $D_s$ and parameterized by the (2-d) source position variable may be preferred to be referred to as the source plane, and the source plane originally defined by the distance of the emission source at $D_s$ and parameterized by the (2-d) image position variable may be preferred to be referred to as the image plane.
The lens equation is a relation between two-dimensional variables and can accommodate only two-dimensional angular variables or the corresponding two-dimensional linear variables. Even when one carries around the third (radial) components, the degrees of freedom in the lens equation are only two-dimensional. One can choose a plane that represents the angular space faithfully, with the understanding that any plane is as good as any other plane. $$\begin{aligned}
(\vec\alpha_s, ~1) = D_s^{-1}~(D_s \vec\alpha_s, ~D_s)
= D_\xi^{-1} ~(D_\xi\vec\alpha_s, ~D_\xi) \\
(\vec\alpha, ~1) = D_s^{-1}~(D_s \vec\alpha, ~D_s)
= D_\xi^{-1}~(D_\xi\vec\alpha, ~D_\xi)\end{aligned}$$ One may choose the plane at a unit distance, at $D_s$, or at an arbitrary distance $D_\xi$ from the observer. They are all equivalent. The scattering angle is a quantity defined on an optical path – a one-dimensional object in space. Lensing is defined on the space of optical paths, which one as an observer recognizes only at one end of the paths. Once a plane is chosen, that is where all the lensing variables will be defined and compared. Thus, it is best to consider the chosen plane as the “abstract plane" which may be parameterized by the source position variable or the image position variable. We may prefer to refer to the “abstract plane" as the “abstract lens plane" or simply the “lens plane" because that is where the lens equation is defined and studied. One may refer to the “abstract lens plane" parameterized by the source position variable as the source plane and the “abstract lens plane" parameterized by the image position variable as the image plane. So, the lens equation is a mapping from the “abstract lens plane" to itself, or from the image plane to the source plane. (D\_\_s) = (D\_) - [DD\_1]{} (D\_) \[eqLeqDxi\]
When $D_\xi = D_1$, the “abstract lens plane" coincides with the lens plane defined by the radial position of the lensing object as in G1421 and ZG1199 (and references therein). As we will see in the following section, the distance from the lensing mass of the apastron of an optical path around a lensing mass is the same as the Einstein ring radius on the plane at $D_1$ (in the approximation of linear gravity, which is valid for all the observed and identified gravitational lenses). Einstein ring refers to a ring image (as well as the critical curve) in a point mass lens, and there seems to be a (wrongful) religious belief among some practitioners that the plane at $D_1$ is endowed with a privileged position as the image plane. That is not so, contrary to what one may find in G1421 and ZG1199. It is indeed baffling to hear as recently as July 1999 a claim [@petters] that images can be in two different lens planes. An observer does not see the photons in the images until they arrive at the observer, as we have discussed repeatedly. There is only one plane one may define: the “abstract lens plane", the representation of the observer’s sky – the motherboard of both the image plane and the source plane. One can put the “abstract lens plane" anywhere one finds it useful as long as the smallness of the scattering angle guarantees linearity between the plane and the sky.
The critical curve and the caustic curve pertain to the differential behavior of the lens equation. The lens equation is an explicit mapping from an image position to its source position, and there are multiple solutions for a given source position. The multiplicity can change from domain to domain of the source plane, and all this interesting behavior can be studied starting from differentiating the lens equation. $$\label{eqDiff} d\vec\alpha_s = d\vec\alpha - \frac{D}{D_1}\,\frac{d\vec\Theta}{d\vec\alpha}\, d\vec\alpha$$ When one of the eigenvalues (never both, in the microlensing we are interested in) of this linear transformation, $d\vec\alpha \rightarrow d\vec\alpha_s$, vanishes, the lens equation is said to be stationary along the eigendirection. The set of the points where the eigenvalue vanishes is called the critical curve. This is a benign or natural generalization of what is familiar from a real function, say, $y = f(x): x , y \in {\cal Re}$. Critical points are where $df/dx =0$ (or where $f(x)$ is not differentiable). Sometimes, we may hear “critical line” in relation to lensing. First of all, the set of critical points almost always forms closed smooth curves in lensing. (One can assign lensing objects at infinity and force the critical curve to have an open curve, whose physical relevance I am not certain of.) Also, “critical line" may be best left as the terminology for the locus of the zeros of the Riemann zeta function. (The zeros are believed to be on the “line" whose real part is 1/2, and this Riemann conjecture remains a conjecture despite the telegram sent by Hilbert claiming otherwise.) Thus, we prefer to call the set of critical points of the lens equation the critical curve. The critical curve may be a disjoint sum of closed curves. One may wonder what happens to the critical curve under the mapping of the lens equation. The resulting curve is called the caustic curve. The caustic curve has the same connectedness as the critical curve because the lens equation is continuous (actually smooth) in the neighborhood of the critical curve. Continuity is preserved under a continuous mapping. On the other hand, I have no idea why the caustic curve is named as it is, even though they look punky all right with spiky cusps. In the CRC Concise Encyclopedia of Mathematics, caustics are defined as involutes, and involution vaguely reminds me of the way light rays pile up on the glittering surface of a swimming pool on a bright day. The caustic curves I encounter in microlensing are all “some-form-of-oids" similar to cycloids: smooth closed curves punctuated by cusps. Cusps occur because the lens equation has stationary points along the critical curve. A household name example of cusps may be the highest points in the swinging of a pendulum. The trajectory of the pendulum in space changes the direction of the tangents (or the velocity) to the curve at the stationary points where the kinetic energy vanishes. In The Random House College Dictionary, caustic is defined as severely critical, sarcastic, or capable of burning living tissue. We find the caustic curves relatively benign or even slightly enjoyable (we can generate relatively interesting looking algebraic curves from physical necessity!), but the mythology around the caustic curve seems to have been, well, caustic.
One wonders whether the smoke in the backroom may be duly attributed to the shroud of acquired memories of Bruno burning at the stake, wondering about the neurochemistry of the minds of the inquisitors and leaving behind his philosophical conjecture on ubiquitous planets to be scientifically tested some four hundred years later.
So, where are the critical curve and the caustic curve? They are objects defined through the lens equation and all lie in the same space: angular space. Or, an equivalent linear space. We choose a lens plane, that is, an “abstract lens plane", and mark lens positions, source positions, image positions, Einstein ring, critical curve and caustic curve. And, anything else we may feel useful.
Then, how does a caustic curve fly tachyonically, or at a speed faster than the speed of light? It doesn’t. A caustic curve is not an object physically occupying a space at $D_s$ from the observer. It does not swirl on the plane at $D_s$ from the observer in unison with a pair of binary masses in a relativistic orbital motion at a distance of $D_1$ from the observer. One may suggest: apparent superluminal motion of blobs in a microquasar is a projection effect, and exactly the same phenomenon happens to the caustic curve once it is projected to a linear space such as the $D_s$-plane. Does it not? Of course not. The analogy is flawed. In the case of the blobs from the microquasar, they are the objects out there moving at certain linear velocities, and we can only measure the motions in terms of angular shifts in time. The shifts of the angular positions of the blobs from the angular position of the microquasar, which is the emission source of the blobs of particles, can correspond to a speed larger than the speed of light when multiplied by the distance to the microquasar. That is referred to as an (apparent) superluminal motion, and it offers information on the direction of the beam of the blobs. In contrast, the significance of defining a superluminal caustic is as substantial as defining superluminal eyes upon having made a sweeping glance at the Milky Way from Ayers Rock. A caustic is nothing more than a peeping hole in this regard. As the caustic curve moves, the target stars that can be sampled through the window defined by the caustic curve change. Furthermore, what we see are images, not the caustic curve. The caustic curve as an aperture does not deliver images directly. Images are delivered only after the transformation dictated by the lens equation is carried out. We will see in a following section that there is no “superluminality" to interest us even when we indulge in the phantom world of tachyonic caustics. Only the effect of a fast glance: a caustic crossing signal buried under the finite size source effect and “long" exposure.
-
The Schwarzschild metric is an exact solution to the Einstein field equations with a point mass. If the point mass is $M$, the Schwarzschild metric is given by [@sweinberg] $$\label{eqMetric} ds^2 = -\left(1-\frac{2GM}{r}\right) dt^2 + \frac{dr^2}{1 - 2GM/r} + r^2 \left(d\theta^2 + \sin^2\theta\, d\phi^2\right) ,$$ where the Schwarzschild radius $r_s = 2GM$ is $2.95$ km for a solar mass object. In microlensing, the photons’ passage is at about $10^8$ times the Schwarzschild radius. The optical path is found by solving the free fall equation with the null condition $ds^2 = 0$, and the orbit $\theta(r)$ is given as an elliptic integral that requires numerical estimation. When the closest approach $r_{\circ}$ of the photon beam to the mass $M$ is $10^8$ times $r_s$ or so, however, the weak gravity allows truncation of the integral at the linear order (in the Newtonian potential $GM/r$, which is called the Robertson expansion), and the scattering angle of the orbit takes a simple form. $$\label{eqTheta} \Theta = \frac{4 G M}{r_{\circ}} = \frac{2 r_s}{r_{\circ}}$$ In Newtonian gravity, the scattering angle is given by twice the value of the Newtonian potential at the closest approach, $2GM/r_{\circ}$, and differs from Einstein gravity by a factor of 2. This factor 2 difference was crucial in establishing Einstein's theory as the theory of gravity.
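To get a feel for the numbers, the short sketch below (our own illustration, using standard values of the physical constants) evaluates equation (\[eqTheta\]): grazing the solar limb gives the classic $1.75^{\prime\prime}$, while a passage at $10^8$ Schwarzschild radii gives a few milliarcseconds, the regime relevant to Galactic microlensing.

```python
G     = 6.674e-11        # m^3 kg^-1 s^-2
c     = 2.998e8          # m/s
M_sun = 1.989e30         # kg
R_sun = 6.957e8          # m
RAD2ARCSEC = 206264.8

r_s = 2 * G * M_sun / c**2                 # Schwarzschild radius, ~2.95 km

def theta(r0, M=M_sun):
    """Weak-field scattering angle, Theta = 4GM/(c^2 r0) = 2 r_s / r0 (radians)."""
    return 4 * G * M / (c**2 * r0)

print(theta(R_sun) * RAD2ARCSEC)           # ~1.75 arcsec at the solar limb
print(theta(1e8 * r_s) * RAD2ARCSEC * 1e3) # ~4 milliarcsec at r0 = 1e8 r_s
```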
In Newtonian gravity, an unbound orbit forms a hyperbolic curve (on the plane defined by an azimuthal angle $\phi =$ constant). If we consider the family of hyperbolic curves connecting two asymptotic points that represent an emission source and an observer, the scattering angle the hyperbolic curve represents grows with the distance from the lensing mass located somewhere between the emission source and the observer. In GR, the photon trajectories (in the Schwarzschild coordinates) are not exactly hyperbolic, but the family of photon trajectories shares the same behavior: the scattering angle grows with the distance to the lens position. On the other hand, equation (\[eqTheta\]) tells us that the scattering angle $\Theta$ is inversely proportional to $r_{\circ}$. Therefore, there are only two possible null geodesics from a given emission source to a given observer for a given azimuthal angle.
Figure \[fig-scatplane\] shows the scattering plane ($\phi =$ constant) and two null geodesics (or optical paths). A photon emitted along the tangent to an optical path at the emission source arrives at the observer with the propagation vector tangent to the optical path at the observer. Thus, the observer sees two stars, one at $(\alpha_1, \phi)$ and the other at $(\alpha_2, \phi)$ (in this unorthodox angular position coordinate system). If we remove the lensing mass $M$ (or wait for the lensing mass to move away), the observer will see one star at $(\alpha_s, \phi)$. In lensing jargon, $(\alpha_s, \phi)$ is referred to as the source position, and $(\alpha_1, \phi)$ and $(\alpha_2, \phi)$ are referred to as the image positions. The relation between the source position and the image positions is called the lens equation and is obtained easily from the diagram in figure \[fig-scatplane\]. If $\alpha$ is the variable for the image positions, and $D_1$ and $D_2$ are the distances from the lens to the observer and to the source along the line of sight, the lens equation is given by $$\label{eqAleq} \alpha - \alpha_s = \frac{D_2}{D_1 + D_2}\,\frac{4GM}{D_1 \alpha} \,.$$ This is a quadratic equation and has two solutions for $\alpha$ for each $\alpha_s$. When $\alpha_s =0$, the two solutions are reflection symmetric: $\alpha = \pm\alpha_E$. In fact, when $\alpha_s =0$, the scattering plane is not uniquely determined due to the azimuthal symmetry, and the images form along a ring of radius $\alpha_E$. This ring is the famous Einstein ring, and $\alpha_E$ is referred to as the angular Einstein ring radius. In the small scattering angle approximation we are using, the closest approach $r_\circ$ of these photon paths to the lensing mass $M$ is the same as the Einstein ring radius $R_E \equiv D_1 \alpha_E$. In terms of $D_1$ and $D_2$, $$R_E = \sqrt{4GMD}\,,$$ where $D \equiv D_1 D_2/(D_1+D_2)$ is the reduced distance of $D_1$ and $D_2$. The lens equation (\[eqAleq\]) can be written in terms of linear variables. Let $b \equiv D_1 \alpha$ and $s \equiv D_1 \alpha_s$. Then, the variables are defined on the plane that passes through the lensing mass. $$b - s = R_E^2\,\frac{1}{b}$$ In order to incorporate the variable ($\phi$) for the orientation of the scattering angle, we should write it as a (2-d) vector equation. $$\label{eqSingb} \vec b - \vec s = R_E^2\,\frac{\vec b}{b^2}$$ So far, the lensing mass has been at the origin of the coordinate system. If it is at $\vec x$, the lens equation becomes $$\vec b - \vec s = R_E^2\,\frac{\vec b - \vec x}{(\vec b - \vec x)^2} \,.$$ This can be extended to multiple particle lens systems. For a binary lens, $$\label{eqBib} \vec b - \vec s = R_E^2 \left(\frac{\epsilon_1(\vec b-\vec x_1)}{(\vec b-\vec x_1)^2} + \frac{\epsilon_2(\vec b-\vec x_2)}{(\vec b-\vec x_2)^2}\right) ,$$ where $R_E$ is the Einstein ring radius of the total mass $M$, $R_E^2 \equiv 4 G M D$, and $\epsilon_1$ and $\epsilon_2$ are the fractional masses located at $\vec x_1$ and $\vec x_2$ respectively.
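In normalized units ($R_E = 1$, with $u \equiv s/R_E$) the quadratic above has the two familiar solutions $b_{\pm} = (u \pm \sqrt{u^2+4})/2$. A minimal sketch (ours, purely for illustration) solves and verifies the single-lens equation:

```python
import numpy as np

def point_lens_images(u):
    """Image positions of a point-mass lens (R_E = 1) for source offset u >= 0."""
    root = np.sqrt(u * u + 4.0)
    return (u + root) / 2.0, (u - root) / 2.0   # outside / inside the Einstein ring

for u in (0.0, 0.3, 1.0):
    b_plus, b_minus = point_lens_images(u)
    # both roots satisfy b - u = 1/b, the normalized form of eq. (eqSingb)
    for b in (b_plus, b_minus):
        assert np.isclose(b - u, 1.0 / b)
    print(u, b_plus, b_minus)   # u = 0 gives the Einstein ring, b = +/- 1
```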
The two-dimensional vectors are most ideally handled as complex variables. Most of all, that is the only way to solve the binary equation. So, let’s complexify the variables: $\vec b, \vec s, \vec x_1, \vec x_2 \rightarrow z, \omega, x_1, x_2$. Then equations (\[eqSingb\]) and (\[eqBib\]) are rewritten as follows. $$\label{eqSing} \omega = z - \frac{R_E^2}{\bar z}$$ $$\label{eqBi} \omega = z - R_E^2 \left(\frac{\epsilon_1}{\bar z - x_1} + \frac{\epsilon_2}{\bar z - x_2} \right)$$ We can choose the coordinate system so that the lens position variables $x_1$ and $x_2$ are real. Once we introduce a complex variable on the plane on which the lensing variables are defined (so commonly referred to as the lens plane), the lens plane as a two-dimensional linear space is parameterized by the complex variable and its complex conjugate. What is convenient about complex variables is that we only need to write half the equation. For example, equation (\[eqSing\]) implies that the following is also true. $$\bar\omega = \bar z - \frac{R_E^2}{z}$$ Incidentally, we have defined two sets of variables on the lens plane: one for the image position variable, $(z, \bar z)$, and the other for the source position variable, $(\omega, \bar\omega)$. One may wonder if it is necessary to consider a projected plane to define complex variables. That is not so. We could have defined a complex plane for the angular space parameterized by $\vec \alpha (\leftarrow \alpha)$ or $\vec \alpha_s (\leftarrow \alpha_s)$. It is just that we have identified the angular space and a projected plane, which is valid because $\Theta \ll 1$. As a matter of fact, we could have chosen any projected plane as our lens plane where we define lensing variables. What is invariable is the fact that the observer sees images and recognizes the tangent of the optical paths at the observer as the angular positions of the images in the sky. In fact, we do not have to adhere to the linear scale of the projected plane, and it is customary to normalize the equation (or scale the lens plane) so that $R_E = 1$. For example, the binary lens equation can be rewritten in dimensionless variables as follows. $$\omega = z - \frac{\epsilon_1}{\bar z - x_1} - \frac{\epsilon_2}{\bar z - x_2}$$
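The complex form is also the most convenient one to compute with. The sketch below (ours, with illustrative parameters) evaluates the normalized binary lens map and checks the “we only need to write half the equation" remark: the conjugate relation comes for free.

```python
import numpy as np

def binary_map(z, eps1=0.5, eps2=0.5, x1=-0.275, x2=0.275):
    """Normalized binary lens equation (R_E = 1): image position z -> source position omega."""
    return z - eps1 / (np.conj(z) - x1) - eps2 / (np.conj(z) - x2)

z = 0.4 + 0.9j                                   # some image position on the lens plane
omega = binary_map(z)

# conjugating the lens equation gives the omega-bar relation automatically
omega_bar = np.conj(z) - 0.5 / (z + 0.275) - 0.5 / (z - 0.275)
assert np.isclose(np.conj(omega), omega_bar)
print(omega)                                     # the source position this image maps to
```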
-
Einstein ring is well known, but it is still an interesting object to think about if we think about it. If the radiation emission from the source is a uni-directional coherent beam as in a laser, the observer will be able to see one image of the source at most, let alone a ring image. However, many heavenly bodies are largely isotropic radiation emitters. So, when the lensing trio are aligned, the observer sees infinitely many rays from the source, $\{\vec\alpha~|~ |\vec\alpha| = \alpha_E\}$, instead of one, $\{\vec\alpha~|~ \vec\alpha = \vec\alpha_s \}$. If we look at the lens equation (\[eqSing\]), it is an explicit mapping from an image position to its source position. So, when the lensing trio is aligned, a continuum of image positions is mapped to one source position under the lens equation. In other words, the Einstein ring is the set of stationary points of the lens equation. The curve of stationary points of a mapping seems to be said to be critical (I have an impression that anything in mathematics that may be remotely interesting is said to be critical), hence the Einstein ring is a critical curve of a point mass lens. The stationarity is due to the azimuthal (or axial) symmetry of the lensing system, hence the tangent (azimuthal vector) to the Einstein ring vanishes under the lens equation but not the normal. The linear differential behavior of a mapping is conveniently described by the Jacobian matrix ($d\vec\alpha_s/d\vec\alpha$) written out in a 2 by 2 array, and the criticality shows up as a vanishing eigenvalue of the Jacobian matrix. We differentiate equation (\[eqSing\]). $$\label{eqDerivative} \left(\begin{array}{c} d\omega \\ d\bar\omega \end{array}\right) = {\cal J} \left(\begin{array}{c} dz \\ d\bar z \end{array}\right)$$ where the Jacobian matrix ${\cal J}$ is given as follows. $$\label{eqJac} {\cal J} = \left(\begin{array}{cc} 1 & \bar\kappa \\ \kappa & 1 \end{array}\right) \; ; \qquad \kappa \equiv \frac{\partial\bar\omega}{\partial z} = \frac{R_E^2}{z^2}$$ The eigenvalues are $$\label{eqEval} \lambda_{\pm} = 1 \pm |\kappa|$$ (The eigenvalues are real, which is to be expected from the fact that the Jacobian matrix is Hermitian: ${\cal J}^{\dagger} = {\cal J}$). When $|z| = R_E$, $|\kappa| = 1$, and $\lambda_-$ vanishes. So does the Jacobian determinant, of course, which is the product of the eigenvalues.
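A few lines of Python (our own check, not part of the original text) confirm the eigenvalue structure: with $\kappa = R_E^2/z^2$ the eigenvalues of the Hermitian Jacobian are $1 \pm |\kappa|$, and $\lambda_-$ vanishes exactly on the Einstein ring $|z| = R_E$.

```python
import numpy as np

def jacobian(z, R_E=1.0):
    """Jacobian of the point-lens mapping in the (z, zbar) basis."""
    kappa = R_E**2 / z**2
    return np.array([[1.0, np.conj(kappa)],
                     [kappa, 1.0]])

for z in (0.5 + 0.2j, 1.0j, 2.0 - 1.0j):   # inside, on, and outside the Einstein ring
    J = jacobian(z)
    lam = np.linalg.eigvalsh(J)             # Hermitian matrix, so real eigenvalues
    print(z, lam, np.linalg.det(J).real)    # det = product of eigenvalues = 1 - |kappa|^2
```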
In the case of a binary lens, the differential equations are exactly the same except that $\kappa$ is given by $$\label{eqKappa} \kappa = \frac{\epsilon_1}{(z-x_1)^2} + \frac{\epsilon_2}{(z-x_2)^2} \,.$$ We have chosen $R_E = 1$ as in the (normalized) binary equation (\[eqBi\]). On the critical curve, where $\lambda_- = 0$ and $\lambda_+ = 2$, $\kappa$ is a pure phase because $|\kappa| = 1$. So, the critical curve is the set of the solutions to the analytic equation (\[eqKappa\]) with $$\kappa = e^{2i\varphi} \; : \quad \varphi \in [0, \pi) \,.$$ When the separation $\ell \equiv |x_1 - x_2|$ between the two lens elements is smaller than $\ell_-$, the critical curve is made of three loops and so is the caustic curve. $$\ell_- = \left( \sqrt[3]{\epsilon_1} + \sqrt[3]{\epsilon_2} \right)^{-\frac{3}{4}} \; ; \quad \ell_- < 1$$ So, the caustic curve of a binary lens with $\ell \lsim 0.7$ is made of three disjointed loops irrespective of the fractional mass parameters. Figure \[fig-caustic\] shows an example of a (symmetric or equal mass) binary lens with $\ell = 0.55$. The critical curve is in blue, the caustic curve is in red, and the two crosses in black are the positions of the lenses. The line that connects the two lens elements of a binary lens is referred to as the lens axis. The caustic loop with four cusps (quadroid) always crosses the lens axis, and the two triangular caustic loops (trioids) are always off the lens axis. The small critical loops enclose the limit points which are at $z_{\ast\pm} = \pm i \ell/2$. The corresponding points (correspondence by the mapping of the lens equation) fall inside the trioids. $$\label{eqTriPosition} \omega_{\ast\pm} = \pm\, i \left(\frac{\ell}{2} - \frac{1}{\ell}\right)$$ Figure \[fig-caustic\] shows a source trajectory with one end at $\omega_{\ast +}$ in green and the corresponding image trajectories in magenta. We have chosen this half-way trajectory so that the accidental symmetry due to the equal mass would not mire the visual clarity of the behavior of the image trajectories. The union of the yellow curves and the magenta curves represents the total images of the line source trajectory with $\omega = - 1.75 i$. Readers are encouraged to be impressed by the similarity between the source trajectory and the image trajectory at the bottom of the plot and the relatively small area the image trajectories inside the large critical loop occupy. The parity of the images is positive outside the large critical loop and inside the small critical loops. Inside the large critical loop and outside the small critical loops, the images have negative parity. There are usually three images in a binary lens, and the corresponding image trajectories are the one at the bottom of the plot converging to $\infty$ (positive) and the two outside the small critical loops converging to the lens positions. Since $\omega = - 1.75 i$ is inside a caustic loop, we expect two more images while the source trajectory remains inside the caustic loop. They are the small segments that are connected at a critical point on the small critical loop in the upper half plane.
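Because the critical curve condition $\kappa = e^{2i\varphi}$ is, for each phase $\varphi$, a quartic polynomial in $z$, the whole critical curve (and, by mapping through the lens equation, the caustic curve) can be traced with a standard root finder. Below is a sketch of the procedure (ours, not from the original text), which also evaluates $\ell_-$ for the equal-mass case:

```python
import numpy as np

def critical_and_caustic(eps1, eps2, ell, n_phi=400):
    """Critical curve from eps1/(z-x1)^2 + eps2/(z-x2)^2 = e^{2 i phi}, and its caustic."""
    x1, x2 = -ell / 2.0, ell / 2.0
    p1 = np.poly1d([1.0, -x1])                # (z - x1)
    p2 = np.poly1d([1.0, -x2])                # (z - x2)
    crit = []
    for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):
        quartic = np.exp(2j * phi) * (p1 * p2)**2 - eps1 * p2**2 - eps2 * p1**2
        crit.extend(np.roots(quartic.coeffs)) # four critical points per phase
    crit = np.array(crit)
    caustic = crit - eps1 / (np.conj(crit) - x1) - eps2 / (np.conj(crit) - x2)
    return crit, caustic

ell_minus = (0.5**(1/3) + 0.5**(1/3))**(-0.75)   # ~0.707 for an equal-mass binary
crit, caustic = critical_and_caustic(0.5, 0.5, 0.55)
print(ell_minus, crit.shape, caustic.shape)
# plotting crit and caustic in the complex plane reproduces the topology of fig-caustic
```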
In the case of a symmetric lens, the two limit points and the lens positions form a square. So, the distance between the two small critical curves is about the same as the separation between the lensing masses, and the caustic crossing images are at a distance of about half the separation from the center of mass. As $\ell$ becomes small, the distances of the limit points decrease linearly with $\ell$ and so do all the images but the image near the source. This means that all the images but the image near the source become only nominal images, and that is reflected in the off-axis caustics moving away from the lens axis in inverse proportion to the separation $\ell$ (see equation (\[eqTriPosition\])). Also, the sizes of the critical loops and caustic loops shrink as $\ell$ shrinks. What it means is that the lensing elements are so close to each other that the binary lens behaves more or less as a single lens. If we assume that $\ell = 0.1$ as in ZG1199, the caustic in the lower half-plane will be at $\omega = - 9.95 i$. The microlensing amplification of a single lens for a source at a distance of 9.95 (in Einstein ring radius units) from the lens is 1.000196. So, the effect of lensing on the image near the source is a brightening by $\sim 0.02 \%$, which is practically equivalent to no lensing at all. Also, as the trioid caustic shrinks practically to a point at $\omega = - 9.95 i$, the finite size source effect washes out the singular brightening effect of the caustic crossing (the images of most of the star fall away from the critical curve, and the average amplification falls below any reasonable detection level). The sides of the trioids measure about 0.00045 in units of the Einstein ring radius. Usually, the Einstein ring radius is $\lsim 1$ mas, so the size of the trioids will be $\lsim 0.45 \mu$as. The solar radius at 8 kpc from us will be about 0.565 $\mu$as.
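The quoted brightening of $\sim 0.02\%$ can be checked with the standard point-lens amplification formula $A(u) = (u^2+2)/(u\sqrt{u^2+4})$ (a textbook result, not derived in this article); a two-line check:

```python
import math

def point_lens_amplification(u):
    """Total amplification of a point-mass lens for a source offset of u Einstein radii."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

print(point_lens_amplification(9.95))   # ~1.000196, i.e. a ~0.02% brightening
```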
Now, let the binary rotate. Let’s put the (abstract) lens plane at $D_1$. The linear speed of the trioids will be a hundred times larger than that of the binary masses. If we assume that the lens is half way to the source that is 8 kpc away from us, then the Einstein ring radius of a solar mass lens is 4 au. If the binary is face-on and the binary separation is $0.4$ au (i.e. $\ell = 0.1$), then the orbital velocity is $50$ km/sec, and the trioids move at an apparent speed of $5,000$ km/sec $= 0.0167$ (in units of the speed of light), which is hardly a relativistic speed. If the lensed star has the solar radius, then its radius projected onto the (abstract) lens plane is 1.16 light-seconds. So, it takes about 140 seconds for the trioid to sweep across the solar diameter. If the exposure time is of the order of a few minutes (with a moderate size telescope), the signal of the caustic crossing will be contained in one frame. If one arranges the apparent speed of the trioid to be bigger, the effect will be to shorten the duration of the signal. This “fast glance effect" of a far-away caustic has no physical bearing on the “superluminal effect" of a particle beam moving at an angle with respect to the line of sight.
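The numbers in this paragraph follow from straightforward arithmetic. The sketch below (ours, taking the quoted factor of one hundred between the trioid speed and the orbital speed at face value) reproduces the $\sim 4$ au Einstein radius, the $\sim 50$ km/s orbital speed, and the $\sim 140$ s sweep time.

```python
import math

G, c   = 6.674e-11, 2.998e8                 # SI units
M_sun  = 1.989e30
AU, PC = 1.496e11, 3.086e16
R_sun  = 6.957e8

D1 = D2 = 4e3 * PC                          # lens half way to a source at 8 kpc
D  = D1 * D2 / (D1 + D2)                    # reduced distance
R_E = math.sqrt(4 * G * M_sun / c**2 * D)   # Einstein ring radius on the D1 plane
print(R_E / AU)                             # ~4 au

a      = 0.4 * AU                           # face-on binary separation (l = 0.1)
v_orb  = math.sqrt(G * M_sun / a)           # ~50 km/s for a solar-mass binary
v_trio = v_orb / 0.1**2                     # trioids ~100x faster (factor quoted in the text)
print(v_orb / 1e3, v_trio / 1e3)            # in km/s

r_star_proj = R_sun * D1 / (D1 + D2)        # solar radius projected onto the D1 plane
print(2 * r_star_proj / v_trio)             # ~140 s to sweep the stellar diameter
```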
-
In a gravitational binary lens, one encounters three types of “some-form-of-OID’s": “trioid" with three cusps, “quadroid" with four cusps, and “hexoid" with six cusps borrowing the names from our own paper on line caustic crossing and limb darkening [@limbpaper]. The “-OID’s" in binary lenses are all simple loops with winding number one unlike in higher multiple point lenses. In an effort to avoid cooking up redundant nomenclatures for “some-form-of-OID’s", we looked up “CRC Concise Encyclopedia of Mathematics" edited by Chapman and Hall (CRC from hereon) with “tricuspid" suggested by an “authority" for “trioid" in mind.
“TricuspOID" seems to be a mathematical term even though “tricuspid" is not. So is “deltoid", which seems to originate from the shape of the Greek letter $\Delta$. It is also the anatomic term for a large muscle covering the shoulder joint. We get an impreesion that anatomy and geometry must have been developed hand in hand. A “nephroid" is an “-oid" with two cusps one can generate using a so-called supercritical lens. It looks somewhat “like a kidney" (again from Greek), and so its name, “nephroid". The cusps of a “nephroid" are spiky inward, hence a “nephroid" is an epicycloid. It doesn’t seem to be a taboo to refer to a “nephroid" as a “2-cusped epicycloid". An epicycloid with one cusp resembles the heart, and so a “cardioid". The parametric equation seems to be easy to recognize due to the close relation to the polar equation for an ellipsis. It is $r= a(1+\cos\theta)$ for a “cardioid" and $a= r(1+\cos\theta)$ for an ellipsis (with eccentricity 1). We have failed in finding anatomic names for epicycloids with three cusps or more. However, we have found a stellar nomenclature for a 4-cusped object. An “astroid" is a hypocycloid (spiky outward) with four cusps. It is also called a tetracuspID, cubocycloid, or paracycle according to our reference CRC. Cubocycloid must have derived from cuboid which refers to a rectangular parallelepiped and also one of the tarsal bones.
In microlensing where the lenses are gravitationally bound multiple point masses, the metric is asymptotically flat, and the image of a source far away from the lens system is an unlensed image (or the source itself) with $J = 1$. The critical curves are always closed curves, and the lens equation is always subcritical even when one includes a dispersed medium of Galactic particle dark matter. One consequence is that the caustic curves are smooth closed curves punctuated by (spiky outward) cusps, or simply, hypocycloids. There is no 1-cusped hypocycloid or 2-cusped hypocycloid. After consulting the 2000-page reference, we may still feel unsure how to extend the naming tradition of “-OID’s". Or, is it “-ID’s"? The confusion between “-oid" and “-id" can arise from the fact that a 3-cusped hypocycloid is called a “tricuspOID", and a 4-cusped hypocycloid is called a “tetracuspID" according to CRC. Considering that a 4-cusped hypocycloid is most commonly referred to as an “astroid", not a “tetracuspid", we conjecture that “tetracuspid" must have derived from the dental terminology “cuspid" and lapsed attention to the particular characteristics of the points (cusps) in cycloids. A cuspid refers to a canine tooth which has a single projection point. A bicuspid refers to a premolar tooth which has two projection points. A tricuspid refers to a tooth that has three projection points and also a tricuspid valve. The suffix “-cuspid" seems to mean “pointed". Cusps in cycloids are pointed in a particular manner where the tangent flips its sign. This explains why “tricuspid" is not a mathematical terminology for a 3-cusped hypocycloid. This explains why we referred to these objects as “some-form-of-oids" early on in this article. There doesn’t seem to be a usage of tetracuspid in dentistry or in anatomy.
Now, we discuss why we chose the pattern of “number-oid" (“quadroid") instead of “number-vertex-oid" (as in “tricuspoid") or “anatomy-oid" (as in “nephroid"). Following the “tradition" of borrowing anatomical names is excluded because of the likelihood of an arbitrary number of cusps that may define caustic curves. So, the extension of imagination that may stem from “deltoid" and “cubocycloid" (and also “astroid", which is obviously an anatomy of a star if we look at a bright star in an HST frame) meets a dead end. We read that Euler studied a deltoid in 1745 in relation to an optical problem and so did Steiner in 1856, and a deltoid is referred to as Steiner’s hypocycloid in some literature. Despite all our intention to honor them, our option becomes limited to the pattern of “number-oid" or “number-vertex-oid". Let’s consider epicycle and epicycloid to differentiate the two. An epicycle depicts a circular motion of an object around a center that moves along a larger circle, and the curve is smooth everywhere. An epicycloid depicts a circular motion of an object around the smaller circle that rolls on the larger circle, and the curve is smooth except at the cusps (the winding number is determined by the ratio of the two circles). It seems to be clear that the “-oid" in the “-oid" objects we discussed so far represents a particular resemblance to a cycle: curved and smooth like a circle but punctuated with points where the tangent flips its sign. In a binary lens, they are also simple closed curves. Thus, it is very clear that the number of cusps and the number of the smooth segments (or “sides") of a caustic loop are the same, and it is sufficient to assign a number to specify the particular “-oid". In a higher multiple point lens, a caustic loop can have winding number larger than 1. However, the winding number is a finite integer, and the number of vertices (or cusps) and the number of sides (or smooth segments) are the same. The caustic curve of a gravitational binary lens consists of one hexoid, two quadroids, or one quadroid and two trioids.
-
We have reviewed that the intrinsic variables of the lens equation are angular variables. As in figure (\[fig-caustic\]), we are free to choose a plane, set the distance scale, and mark all the variables, parameters, and objects we find fit from the lens equation. When $\ell \rightarrow 0$, the binary lens converges to a single lens, and the quadroid caustic around the center of mass contracts to a point caustic. So, “the single lens has a point caustic at the position of the lens." We may draw the critical curve and caustic curve on the same (abstract) lens plane as we did in figure (\[fig-caustic\]). The lens equation is a mapping from the chosen (abstract) lens plane to itself, or a mapping from the image plane to the source plane where the image plane refers to the (abstract) lens plane parameterized by the image position variable and the source plane refers to the (abstract) lens plane parameterized by the source position variable.
Now, is it confusing to call the $D_s$-plane (the plane defined by the radial position of the radiation emission source) the source plane? Having understood that we only need to define one space (angular space) or a plane that corresponds to the angular space, it doesn’t seem to matter whatever the plane may be called. Once we know clearly what degrees of freedom we are manipulating through the lens equation, it doesn’t seem to be a confusing practice to let the beloved term “source plane" be used in both ways: based on the radial coordinate or based on the transverse coordinates. What is invariant seems to be that physics lies in relations, not in nomenclatures. It is good to have distinguishable nomenclatures, but there is nothing holy (meaning leaving no room for scientific reasoning and fluidity) about the “source plane". We find it sufficient to exercise a bit of contextual understanding to let the “source plane" enjoy both definitions and let lensing colleagues keep their inertial tradition of describing lensing. Is it a law or a rule that the caustic curve lies on the source plane? We may choose to call the plane parameterized by the source position variable $\omega$ the $\omega$-plane, perhaps from boredom or from a courtesy to leave the term “source plane" for the practitioners who are attached to the radial coordinate. Then, the caustic curve will lie on the $\omega$-plane. So, it may constitute a proper question to ask if it is a law or a rule, and no paper should be shredded over this unsubstantiated dogma on wording.
Are newcomers to the field confused by the confusing usage of terminologies as Gould claims? Perhaps not. We find it an unsubstantiated claim. Considering the suggestion of superluminal caustics, we would think that the proper route to resolve any confusion is for students to take a bit more than a few hours to understand lensing from first principles. We have found that the origin of the controversies lies in conceptual misunderstandings.
-
Maxwell’s equations tell us how electromagnetic fields and matter interact. Lienard-Wiechert potentials satisfy the Maxwell’s equations with a moving charge. Special relativity is a property of the space time and so is the spin of a particle. When the velocity of the system becomes comparable to the speed of light, the space-like components of the fields become comparable to the time-like components. And, a novelty one may witness (as a student) from playing with the Lienard-Wiechert potentials is to see the electromagnetic wave propagate and actually carry the energy out to infinity. If we only pick out the time-component $A_0$, we will be at a loss with the discrepancies with the measurements of the electric and magnetic fields. Einstein field equations tell us how gravitational fields (or metric) and matter interact. When the velocity of the system becomes comparable to the speed of light, we expect that one should examine not only the time-time component ($g_{00}$) but also the other five components of the metric. We find it a baffling practice for ZG1199 to declare without substantiation that “retarding the Newtonian potential" (with additional factor 2 mentioned above) results in the “gravitational analog" that governs the behavior of the null geodesics as seen by an observer. We have no doubt that it will take a bit more than a few hours to examine the metric even at the post Newtonian level. However, we find it a worthy exercise to be carried out.
Acknowledgments {#acknowledgments .unnumbered}
===============
It is our pleasure to express our gratitude to Clara Bennett for the copy of “CRC Concise Encyclopedia of Mathematics" given to the author during the last winter solstice. It is a great gift to be snowed in with.
Einstein, A. 1923, “The Principle of Relativity”, Dover Publications.
Petters, A. 1999, in “Gravitational Lensing: Recent Progress and Future Goals”, Boston University, July 1999.
Gould, A. 2000, astro-ph/0001421
Gould, A., & Loeb, A. 1992, ApJ, 396, 104
, S., et al. 1997, ApJ, 483, 565
Rhie, S., et al. (The MPS and MOA Collaborations) 2000, ApJ, in press (astro-ph/9905151)
Rhie, S., & Bennett, D. 1999, astro-ph/9912050
, P. 1999, astro-ph/9909466
, B., et al. 1998, ApJ, 507, 46
Schneider, P., & Weiss, A. 1986, A&A, 164, 237
Weinberg, S. 1972, “Gravitation and Cosmology”, John Wiley & Sons, Inc.
Wyithe, J., Turner, E., & Webster, R. 2000, astro-ph/0001307
Zheng, Z., & Gould, A. 2000, astro-ph/0001199
---
author:
- 'T. Schirmer.'
- 'A. Abergel'
- 'L. Verstraete'
- 'N. Ysard'
- 'M. Juvela'
- 'A. P. Jones'
- 'E. Habart'
bibliography:
- 'Zotero.bib'
date: 'Received 12 March 2020; accepted ??'
title: Dust evolution across the Horsehead Nebula
---
[Micro-physical processes on interstellar dust surfaces are tightly connected to dust properties (i.e. dust composition, size and shape) and play a key role in numerous phenomena in the interstellar medium (ISM). The large disparity in physical conditions (i.e. density, gas temperature) in the ISM triggers an evolution of dust properties. The analysis of how dust evolves with the physical conditions is a stepping-stone towards a more thorough understanding of interstellar dust.]{} [The aim of this paper is to highlight dust evolution in the Horsehead Nebula PDR region.]{} [We use *Spitzer*/IRAC (3.6, 4.5, 5.8 and 8 [$\mu$m]{}), *Spitzer*/MIPS (24 [$\mu$m]{}) together with *Herschel*/PACS (70 and 160 [$\mu$m]{}) and *Herschel*/SPIRE (250, 350 and 500 [$\mu$m]{}) to map the spatial distribution of dust in the Horsehead over the entire emission spectral range. We model dust emission and scattering using the THEMIS interstellar dust model together with the 3D radiative transfer code SOC.]{} [We find that the nano-grains dust-to-gas ratio in the irradiated outer part of the Horsehead is 6 to 10 times lower than in the diffuse ISM. Their minimum size is 2 to 2.25 times larger than in the diffuse ISM and the power-law exponent of their size distribution, 1.1 to 1.4 times lower than in the diffuse ISM. Regarding the denser part of the Horsehead, it is necessary to use evolved grains (i.e. aggregates, with or without an ice mantle).]{} [It is not possible to explain the observations using grains from the diffuse medium. We therefore propose the following scenario to explain our results. In the outer part of the Horsehead, all the nano-grains have not yet had time to re-form completely through photo-fragmentation of aggregates and the smallest of the nano-grains that are sensitive to the radiation field are photo-destroyed. In the inner part of the Horsehead, grains most likely consist of multi-compositional, mantled aggregates. ]{}
Introduction
============
Interstellar dust plays an essential role within the interstellar medium (ISM) through different microphysical processes happening on dust surfaces that can heat the gas, such as the photoelectric effect [e.g. @bakes_photoelectric_1994; @weingartner_photoelectric_2001], or cool the gas through gas-grain collisions [@burke_gas-grain_1983]. By acting as a catalyst, allowing atoms and molecules to react on its surface, dust is strongly involved in the chemistry of the ISM. Also, dust plays a role in the redistribution of UV-visible stellar radiation into IR-mm radiation, a process that depends on the dust mass and the volume of dust grains [e.g. @draine_interstellar_2003; @compiegne_global_2011]. The efficiency of these processes strongly depends on the dust properties, such as the grain size, composition and shape. It is therefore important to constrain dust properties in order to understand the different phenomena that take place in the ISM. To this purpose, several dust models have been developed and can be classified into three categories. The first consists of models composed of silicate and graphite [e.g. @mathis_size_1977; @draine_optical_1984; @kim_size_1994], later extended to include PAHs (polycyclic aromatic hydrocarbons) [e.g. @siebenmorgen_dust_1992; @li_ultrasmall_2001; @weingartner_dust_2001]. As a result of fragmentation and coagulation processes in the ISM, dust models with grains that have a core-mantle structure [e.g. @desert_interstellar_1990; @jones_structure_1990; @li_unified_1997] and composite dust models made of silicate and carbon grain aggregates [e.g. @mathis_composite_1989; @zubko_interstellar_2004] have also been proposed. In this paper, we use the THEMIS dust model [see @jones_evolution_2013; @kohler_hidden_2014; @jones_cycling_2014; @kohler_dust_2015; @ysard_dust_2015; @jones_global_2017], developed in combination with the results of laboratory experiments and astronomical observations. The cornerstone of this model is its self-consistent view of the evolution of the dust constituents through the ISM. This view is required for understanding dust evolution in response to the local ISM conditions (i.e. density, radiation field).
Some of the first evidence of dust evolution was shown by [@fitzpatrick_analysis_1986] through the variation in the 2175 Å interstellar bump from diffuse ($R_V=3.1$) to denser regions (up to $R_V$ $\sim$ 5.5). Similarly, other studies [e.g. @cardelli_relationship_1989; @cardelli_absolute_1991; @campeggio_total_2007] found the same variations, which were first explained by [@kim_size_1994], who showed that these observations are consistent with a decrease in the carbonaceous nano-grain abundance (relative to the gas) together with an increase in larger grain abundance. It is also possible to follow dust evolution from its emission in the mid-IR (due to stochastically heated nano-grains) and in the far-IR (where large grains at thermal equilibrium emit). This has led to a wealth of studies [e.g. @boulanger_variations_1990; @laureijs_iras_1991; @abergel_comparative_1994; @bernard_pronaos_1999; @stepnik_evolution_2003; @flagey_evidence_2009] revealing that nano-grains disappear in dense regions as they coagulate onto larger grains. Dust evolution is also highlighted by the variation of its far-IR opacity with the local environment [e.g. @juvela_galactic_2011; @planck_collaboration_planck_2011-1; @martin_evidence_2012; @roy_changes_2013; @ysard_variation_2013; @kohler_dust_2015; @juvela_galactic_2015], explained by dust coagulation and the accretion of ice mantles, a scenario which is supported by numerical simulations of dust evolution in dense regions [e.g. @ossenkopf_dust_1994; @ormel_dust_2011; @kohler_dust_2015].
Photon-dominated regions (PDRs) [@hollenbach_dense_1997; @hollenbach_photodissociation_1999] correspond to the interfaces between HII regions and molecular clouds that are irradiated by energetic stars close by. In these regions, the physical conditions are strongly contrasted, hence PDRs are a unique place to study how dust, gas and the local physical conditions evolve with depth. Based on dust emission variations in the mid-IR observed with [*Spitzer*]{} in several PDRs (Ced 201, NGC 7023, $\rho$ Ophiuchi West filament), [@berne_analysis_2007] concluded that such variations can be explained by the photo-processing of carbonaceous nano-grains, a scenario later reinforced in other PDRs [@abergel_evolution_2010; @pilleri_evaporating_2012; @boersma_properties_2014; @pilleri_variations_2015]. Using far-IR observations from [*Herschel*]{}, together with the near and mid-IR observations from [*Spitzer*]{}, [@arab_evolution_2012] found that the carbonaceous nano-grain abundance decreases together with an increase in the opacity of the large grains in the Orion bar. They claimed that these variations are likely due to coagulation processes in the denser part of this region. Evidence of dust evolution has also been shown in IC 63 based on extinction mapping [@van_de_putte_evidence_2019].
In this paper, we focus on a well-known PDR, the Horsehead, that has previously been studied from the perspective of dust observations [e.g. @abergel_isocam_2003; @teyssier_carbon_2004; @compiegne_aromatic_2007; @pety_are_2005; @compiegne_dust_2008; @arab_evolution_2012-2], gas observations [e.g. @habart_density_2005; @goicoechea_low_2006; @gerin_hco_2009; @guzman_h2co_2011; @pety_iram-30_2012; @ohashi_mapping_2013; @le_gal_new_2017] and laboratory experiments [@alata_vacuum_2015]. The most important question we try to answer is how the dust properties change with physical conditions. Thus, is it possible to understand these observations with grains from the diffuse ISM? Otherwise, is there a viable dust evolution scenario that can explain the observations and is consistent with the physical conditions in the Horsehead?
The paper is organised as follows. In Sect.\[sec:PDR\], we describe the previous studies and the observations of the Horsehead. In Sect.\[sec:models\_tools\], we detail the dust model we use, THEMIS, as well as the local dust emission tool, DustEM. We also present the effects of variations in dust properties on its emission in the optically thin case with DustEM in order to disentangle variations in the dust spectrum due to changes in dust properties and those due to radiative transfer effects. In Sect.\[sec:dust\_emission\_radiative\_transfer\], we present SOC, the 3D radiative transfer code we use, as well as the effect of variations in the dust parameters on dust emission in the optically thick case, in the case of the Horsehead. In Sect.\[sec:comparison\_observations\], we compare our model with the observations and present the best parameters we obtain. In Sect.\[sec:discussion\], we discuss our results and propose a scenario of dust evolution in the Horsehead. Finally, we present in Sect.\[sec:conclusion\] our conclusions.
A prototypical PDR: the Horsehead {#sec:PDR}
=================================
As physical conditions are strongly contrasted and spatially resolved in nearby photodominated regions, they are the ideal place to study dust evolution as a function of physical conditions. First, we introduce the different studies that have been made of the Horsehead; second, we present the observations of the Horsehead obtained with [*Spitzer*]{} and [*Herschel*]{}; third, we describe the density profile that we use to perform radiative transfer across the Horsehead.
A well studied PDR
------------------
The Horsehead is an archetypal PDR situated at $\sim$ 400 pc [@anthony-twarog_h-beta_1982] that is illuminated by the binary star $\sigma$-Orionis which is an O9.5V binary system [@warren_photometric_1977] with an effective temperature of $T_{\mathrm{eff}}\sim$ 34600 K [@schaerer_combined_1997] located at a projected distance $d_{\mathrm{edge}}\sim$ 3.5 pc from the Horsehead edge. Observations of the Horsehead have been made in the visible [e.g. @de_boer_diffuse_1983; @neckel_spectroscopic_1985] and at millimeter wavelengths for $^{12}$CO and $^{13}$CO [e.g. @milman_co_1973], $^{12}$CO [e.g. @stark_co_1982], NH$_{3}$ [e.g. @sandell_young_1986], CS [e.g. @lada_unbiased_1991], C$^{+}$ [e.g. @zhou_[c_1993] and $^{13}$CO [e.g. @kramer_structure_1996].
Later, mid-IR observations [@abergel_isocam_2003] with ISOCAM highlighted that the Horsehead is likely to be seen edge-on hence offers us a unique opportunity to study dust, gas and the evolution of local physical conditions with depth into the Horsehead. This has led to many studies at millimeter wavelengths for CO [@pound_looking_2003], C$^{18}$O [@hily-blant_velocity_2005], CS, C$^{34}$S and HCS$^{+}$ [@goicoechea_low_2006], CI and CO [@philipp_submillimeter_2006], DCO$^{+}$ [@pety_deuterium_2007], HCO and H$^{13}$CO$^{+}$ [@gerin_hco_2009], H$^{13}$CO$^{+}$, DCO$^{+}$ and HCO$^{+}$ [@goicoechea_ionization_2009], H$_{2}$CO [@guzman_h2co_2011], CF$^{+}$ [@guzman_iram-30m_2012], l-C$_{3}$H$^{+}$ [@pety_iram-30_2012], CH$_{3}$CN, HC$_{3}$N and C$_{3}$N [@gratier_iram-30_2013], H$_{2}$CO and CH$_{3}$OH [@guzman_iram-30_2013], NH$_{3}$ [@ohashi_mapping_2013], HCOOH, CH$_{2}$CO, CH$_{3}$CHO and CH$_{3}$CCH [@le_gal_new_2017].
Regarding dust, [@teyssier_carbon_2004] found that small hydrocarbons are still present although they are supposed to be photo-destroyed by the intense UV field at the edge of the Horsehead. They suggest that the photo-erosion of carbonaceous nano-grains into small hydrocarbons is more efficient than the photo-destruction of small hydrocarbons at the Horsehead edge. This scenario is reinforced by the observations of [@pety_are_2005], who found hydrocarbons such as CCH, c-C$_{3}$H$_{2}$ and C$_{4}$H in the UV-irradiated outer part of the Horsehead. It is also supported by laboratory experiments on thermally processed and UV-irradiated dust grain analogues [see @smith_optical_1984; @zubko_interstellar_2004; @alata_vacuum_2014; @alata_vacuum_2015; @duley_small_2015]. Based on Spitzer observations, [@compiegne_aromatic_2007] proposed a scenario where PAHs survive in HII regions, and [@compiegne_dust_2008] concluded that spectral variations in the mid-IR cannot be explained by radiative transfer effects alone and are therefore a consequence of dust evolution across the Horsehead.
Observations with Spitzer and Herschel {#sub:sub:observations}
--------------------------------------
We use [*Spitzer*]{} and [*Herschel*]{} observations (see Appendix\[appendix:HH\_obs\]) of the Horsehead in 10 photometric bands from 3.6 [$\mu$m]{} to 500 [$\mu$m]{}, which cover nearly the entire dust emission spectrum. The processing of the [*Spitzer*]{} maps is detailed in [@bowler_infrared_2009]. Data were processed in the HIPE environment, with standard [*Herschel*]{} corrections for instrumental effects and glitches. PACS 70 [$\mu$m]{} and 160 [$\mu$m]{} maps were obtained after the superposition of two observations with a scan speed of 20$^{\prime\prime}$/s whose directions were perpendicular to one another. The overall duration of these observations is 4122 seconds and they cover 8.8$^{\prime}$ $\times$ 4.5$^{\prime}$ of the Horsehead. Concerning SPIRE 250 [$\mu$m]{}, 350 [$\mu$m]{} and 500 [$\mu$m]{}, they were obtained after the superposition of two observations with a scan speed of 30$^{\prime\prime}$/s whose directions were perpendicular to one another. The overall duration of these observations is 1341 seconds and they cover 8$^{\prime}$ $\times$ 8$^{\prime}$ of the Horsehead. Striping induced by offsets in the flux calibration from one detector to another was removed using the Scan Map Destriper module included in the HIPE environment.
We study the observed emission profiles through three different cuts across the Horsehead (see Fig.\[fig:HH\_24\]). The calibration uncertainty in the IRAC bands ([$\mathrm{IRAC}_{3.6}$]{}, [$\mathrm{IRAC}_{4.5}$]{}, [$\mathrm{IRAC}_{5.8}$]{} and [$\mathrm{IRAC}_{8.0}$]{}) is 2 $\%$ [@reach_absolute_2005], 4 $\%$ in [$\mathrm{MIPS}_{24}$]{} [@engelbracht_absolute_2007], 5 $\%$ in [$\mathrm{PACS}_{70}$]{} [@gordon_absolute_2007], 12 $\%$ in [$\mathrm{PACS}_{160}$]{} [@stansberry_absolute_2007] and 15 $\%$ in the 3 SPIRE bands [@swinyard_-flight_2010]. In this study, we consider all these errors to be independent of the wavelength to first order. Also, we consider that the emission in all of these 10 bands comes from dust, which is not completely the case in [$\mathrm{IRAC}_{3.6}$]{} and [$\mathrm{IRAC}_{4.5}$]{}. We estimate with a model of atomic and molecular gas in PDRs, the Meudon PDR Code [@le_petit_model_2006], that gas contributes less than 10$\%$ of the flux. However, this contribution does not affect the bulk of our results, hence we consider that the observed emission is dust emission. Nevertheless, one must be careful in the interpretation of the observations as gas emission can be larger than dust emission in photometric bands covering shorter wavelengths (HST or NIRCAM onboard the JWST).
Density profile across the Horsehead {#sec:sub:density_profile}
------------------------------------
In this paper, radiative transfer calculations are performed, which require information on the density profile across the Horsehead. We use the profile described in [@habart_density_2005]. As the H$_{2}$ 1-0 S(1) fluorescent emission is very sensitive to both the radiation field and the gas density, they observed this line with the SOFI instrument at the NTT. This observation was combined with previous observations of H$_{\alpha}$ and dust mid-IR emission in order to constrain the density profile at the edge of the Horsehead. [@habart_density_2005] also used CO mm observations from the IRAM 30-m telescope [@abergel_isocam_2003; @teyssier_carbon_2004] and the Plateau de Bure Interferometer [@pety_are_2005] as well as 1.2 mm dust continuum emission obtained with MAMBO at the IRAM 30-m telescope [@teyssier_carbon_2004] to constrain the density profile in the inner part. All these observations were interpreted with the Meudon PDR Code. This density profile (see Fig.\[fig:density\_profile\], upper panel) was also used in [@compiegne_dust_2008] and [@arab_evolution_2012-2] and is defined as follows :
$$\label{eq:density_profile}
n_{\mathrm{H}}(z)=\left\{
\begin{array}{l l}
n_{0} \times \left(\frac{z}{z_{0}}\right)^{\gamma} & \quad \text{if $z<z_{0}$}\\
n_{0} & \quad \text{if $z>z_{0}$}\\ \end{array} \right.$$
where : $$\label{eq:density_parameters}
n_{0} = 2 \times 10^{5}\,\mathrm{H\,cm^{-3}} \; ; \;
z_{0} = 0.06\,\mathrm{pc} \; ; \;
\gamma = 2.5 \; ; \; z = d_{\star} - d_{\mathrm{edge}}.$$ with $z$ the position from the edge of the Horsehead, $\gamma$ the power-law exponent of the gas density profile and $z_0$ the depth beyond which constant density $n_0$ is reached.
In this study they also estimated the length of the Horsehead along the line of sight, [$l_{\rm{PDR}}$]{}. They found that this parameter is constrained to be between 0.1 pc and 0.5 pc. We assume that the density profile is independent of the position along the line of sight (see Fig.\[fig:density\_profile\], bottom panel).
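For reference, the density profile of equation (\[eq:density\_profile\]) with the parameters of equation (\[eq:density\_parameters\]) translates directly into a short function; a minimal sketch (ours, for illustration only):

```python
import numpy as np

def n_H(z, n0=2e5, z0=0.06, gamma=2.5):
    """Gas density profile across the Horsehead (H cm^-3), z in pc from the edge."""
    z = np.asarray(z, dtype=float)
    return np.where(z < z0, n0 * (z / z0)**gamma, n0)

print(n_H([0.01, 0.03, 0.06, 0.2]))   # rises as a power law, then saturates at n0
```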
Dust modelling {#sec:models_tools}
==============
The interpretation of the multi-wavelength observations of the Horsehead depends on its structure, the incident radiation field and the dust model. We therefore need a dust model and modelling tools to compute dust emission based on the local physical conditions. First, we describe our adopted dust model THEMIS; second, we introduce DustEM, that is used to compute the local dust emission and we describe how dust emission evolves with its properties in the optically thin case using DustEM.
THEMIS {#sec:sec:THEMIS}
------
The Heterogeneous dust Evolution Model for Interstellar Solids, THEMIS[^1] [e.g., @jones_evolution_2013; @kohler_hidden_2014; @jones_global_2017], is based on observational constraints and laboratory measurements on interstellar dust analogues, namely amorphous hydrocarbons [a-C(:H); e.g., @jones_variations_2012-2; @jones_variations_2012-1; @jones_variations_2012] and amorphous silicates, a-Sil. This model includes dust evolution through processes such as photo-processing, fragmentation and coagulation resulting from wide variations in the ISM physical conditions.
THEMIS for the diffuse ISM [@jones_evolution_2013; @kohler_hidden_2014; @ysard_dust_2015] is composed of amorphous silicates ([a-Sil/a-C]{}) surrounded by a mantle of aromatic-rich carbon, and of amorphous hydrocarbon solids, which encompass a-C:H materials that are H-rich, hence aliphatic-rich, and a-C materials that are H-poor, hence aromatic-rich. Assuming that the typical penetration depth of a UV photon in an amorphous carbon grain is about 20 nm [see Fig.15 in @jones_variations_2012-1], carbonaceous grains that are smaller than 20 nm are entirely photo-processed, hence aromatic. Larger grains are composed of an aliphatic core surrounded by an aromatic mantle that is 20 nm thick, which prevents photo-processing of the core, hence allows the core to remain aliphatic. This view provides us with a continuous description of carbonaceous grains, from the smallest that mostly contain aromatic cycles and are stochastically heated, to the largest that are at thermal equilibrium. Details about the size distribution can be found in Table \[tab:parameters\_size\_distribution\]. As these grains are composed of either an a-C:H core or a silicate core surrounded in both cases by an aromatic carbonaceous mantle, they are called Core-Mantle grains (CM).
In the dust evolution framework assumed by THEMIS [@jones_cycling_2014], large grains can form a second mantle either through accretion of C and H atoms, available in the gas phase or through coagulation of a-C nano-grains on the larger grains surfaces. These grains are called Core-Mantle-Mantle (CMM). In denser regions, CMM grains coagulate together to form aggregates [@kohler_dust_2015] called Aggregate-Mantle-Mantle (AMM) grains. Where the shielding from energetic photons is efficient enough, a mantle of water ice can form around AMM, leading to Aggregated-Mantle-Mantle-Ice (AMMI) grains.
In the following, we use several dust mixtures [see Fig.1 in @jones_global_2017]. Parameters associated with the size distribution of these dust mixtures can be found in Table\[tab:parameters\_size\_distribution\] and the size distributions themselves in Fig.\[fig:s\_dist\] (upper panel) with the associated spectra (see Fig.\[fig:s\_dist\], bottom panel), computed with DustEM (see Sect. \[sec:sec:DustEM\]). In the near-IR (1 to 5 [$\mu$m]{}) and mid-IR (5 to 30 [$\mu$m]{}), dust emission comes mainly from the [a-C]{} grains. In the far-IR (from 50 to 500 [$\mu$m]{}), dust emission comes mainly from [a-Sil/a-C]{} and [a-C:H/a-C]{} grains.
Influence of dust properties on its emission with DustEM {#sec:sec:DustEM}
--------------------------------------------------------
DustEM[^2] [@compiegne_global_2011] is a modelling tool that computes the extinction, the emission and the polarisation of interstellar dust grains heated by photons, in the optically thin case (i.e. no radiative transfer).
In order to disentangle the effects of radiative transfer from variations in the dust properties on the emission, we study the influence of such variations with DustEM. We vary the following parameters (a sketch of the corresponding parameter grid is given after the list):
1. The [a-C]{} abundance, i.e. the [a-C]{} mass to gas ratio, [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, varying from 0.01 $\times\,10^{-2}$ to 0.20 $\times 10^{-2}$ with steps of 0.01 $\times\,10^{-2}$.
2. The [a-C]{} minimum size, [$a_{\mathrm{min,\,a-C}}$]{}, varying from 0.4 nm to 0.9 nm with steps of 0.02 nm.
3. The slope of the [a-C]{} power law size distribution, $\alpha$, varying from -6 to -4 with steps of 0.1.
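For reference, this parameter grid can be written down compactly. The short Python sketch below only enumerates the combinations explored with DustEM; it does not call DustEM itself, and the variable names are ours.

```python
import itertools
import numpy as np

# Ranges of the three a-C nano-grain parameters varied in the DustEM runs
m_aC_over_mH = np.arange(0.01e-2, 0.20e-2 + 1e-9, 0.01e-2)  # a-C mass-to-gas ratio
a_min_aC     = np.arange(0.4, 0.9 + 1e-9, 0.02)             # minimum a-C size [nm]
alpha        = np.arange(-6.0, -4.0 + 1e-9, 0.1)            # size-distribution exponent

grid = list(itertools.product(m_aC_over_mH, a_min_aC, alpha))
print(len(grid), "DustEM models")
# Each (M_a-C/M_H, a_min, alpha) triplet defines one optically thin DustEM run,
# illuminated by a 34600 K blackbody scaled so that G0 = 100.
```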
The results are shown in Fig.\[fig:test\] where the spectra in panels *d*, *e* and *f* are associated with the size distributions in panels *a*, *b* and *c*, respectively. All the spectra are obtained with a radiation field that corresponds to a blackbody at 34600 K scaled so that $G_0=100$ (i.e. the radiation field illuminating the Horsehead).
A decrease in [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} or an increase in [$a_{\mathrm{min,\,a-C}}$]{} or $\alpha$ leads to a decrease in the number of the smallest [a-C]{} grains and hence a decrease in the near-IR emission. As the total dust mass is fixed, an increase in [$a_{\mathrm{min,\,a-C}}$]{} or $\alpha$ leads to a redistribution of the dust mass from the smallest to the largest [a-C]{} grains, hence an increase in the mid-IR emission. In the far-IR, dust emission is mostly unaffected by variations in [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$, as [a-C]{} grains are barely responsible for any dust emission at these long wavelengths. However, the far-IR dust emission slightly increases with an increase in $\alpha$, because the mass of the largest [a-C]{} grains then increases significantly, which is not the case for an increase in [$a_{\mathrm{min,\,a-C}}$]{}.
Radiative transfer modelling within the Horsehead {#sec:dust_emission_radiative_transfer}
=================================================
The Horsehead is an optically thick region that requires a radiative transfer modelling to properly interpret our multi-wavelength observations. We present the 3D radiative transfer code SOC we use in this study. Performing radiative transfer is time consuming, and so we here explore the influence of the Horsehead length along the line of sight [$l_{\rm{PDR}}$]{}, and dust properties (i.e. [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$) on dust emission after radiative transfer calculations.
Radiative transfer code : SOC {#sec:sub:sub:SOC}
-----------------------------
SOC is a 3D Monte Carlo radiative transfer code, parallelised using OpenCL libraries [@juvela_soc_2019], that computes dust emission and scattering. SOC has been benchmarked with other radiative transfer codes in [@gordon_trust._2017] and used in [@juvela_dust_2018; @juvela_galactic_2018; @juvela_synthetic_2019].
The radiation field corresponds to that of a blackbody at 34600 K produced by a star, to which a dilution factor has been applied to obtain [$G_{0}$]{} = 100 at the Horsehead edge. This radiation field is sampled on a logarithmic grid of 334 frequencies that extends from $3\times 10^{9}$ Hz to $3\times 10^{16}$ Hz. As the Horsehead edge is located outside the HII region, there are no photons above 13.6 eV, hence we apply a Lyman cut to the radiation field heating the Horsehead edge. Each frequency is simulated with $10^{6}$ photons.
In SOC, clouds can be defined on regular Cartesian grids or octree grids. In this paper, we model the Horsehead using a Cartesian grid that contains $N_{\mathrm{X}} \times N_{\mathrm{Y}} \times N_{\mathrm{Z}}$ cubes of 0.0025 pc per side. $N_{\mathrm{X}}$ is equal to 77 and corresponds to the number of cubes along the Horsehead-star axis. $N_{\mathrm{Y}}$ is equal to 7 and corresponds to the number of cubes along the axis perpendicular to the Horsehead-star axis and to the line-of-sight axis (i.e. the observer-Horsehead axis). $N_{\mathrm{Z}}$ corresponds to the number of cubes in the Horsehead along the line of sight and hence depends on the value of [$l_{\rm{PDR}}$]{}: $N_{\mathrm{Z}}=$ [$l_{\rm{PDR}}$]{}/0.0025 pc. With each cube, we associate a gas density value as described in Sect.\[sec:sub:density\_profile\].
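As an illustration of how such a density cube can be filled, here is a minimal Python/NumPy sketch. The functional form of $n_{\mathrm{H}}(z)$ (a power-law rise with $\gamma=2.5$ up to a depth $z_0$, constant $n_0$ deeper in) follows the profile adopted in Sect.\[sec:sub:density\_profile\]; the numerical values of `n0` and `z0` used below are placeholders, not the fitted ones.

```python
import numpy as np

cell = 0.0025                     # pc, side of one SOC cube
N_X, N_Y = 77, 7                  # star-Horsehead axis, transverse axis
l_pdr = 0.1                       # pc, length along the line of sight (0.1-0.5 pc)
N_Z = int(round(l_pdr / cell))    # cubes along the line of sight

def n_H(z, n0=1e5, z0=0.02, gamma=2.5):
    """Gas density profile: power-law rise over a depth z0 from the edge,
    constant n0 deeper in.  n0 [cm^-3] and z0 [pc] are placeholder values."""
    z = np.asarray(z, dtype=float)
    return np.where(z < z0, n0 * (np.clip(z, 0.0, None) / z0) ** gamma, n0)

# Depth of each cell centre from the illuminated edge, along the star axis
z_cells = (np.arange(N_X) + 0.5) * cell
profile = n_H(z_cells)

# The profile is assumed independent of the position along the line of sight,
# so it is simply replicated along the Y and Z axes of the cube.
density_cube = np.broadcast_to(profile[:, None, None], (N_X, N_Y, N_Z)).copy()
```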
In our study, we compute only the dust emission because, regardless of the dust properties, dust scattering contributes less than 1 $\%$ of the total dust brightness in the near-IR photometric bands. After integration along the line of sight, the dust emission profiles across the Horsehead are integrated over the different photometric bands and convolved with the PSFs.
Influence of [$l_{\rm{PDR}}$]{} on dust emission {#sec:sub:sub:lpdr_radiative_transfer}
------------------------------------------------
In the following, we study dust emission at two positions: the near-IR peak position (NIR PP) in the Horsehead and the far-IR peak position (FIR PP). These positions correspond, respectively, to the peaks of emission in [$\mathrm{IRAC}_{3.6}$]{} and [$\mathrm{SPIRE}_{500}$]{}, shown in Fig.\[fig:density\_profile\]. To simplify the presentation of the results, we introduce $I_{{\mathrm{mod,\,max}}}(i) = \mathrm{max}\left(I_{\mathrm{mod},\,i}(z)\right)$, where $I_{\mathrm{mod},\,i}(z)$ is the modelled dust emission in the $i$-th band at the position $z$ along the cut.
Whether it is at the NIR PP or at the FIR PP, dust emission increases in all bands with [$l_{\rm{PDR}}$]{} (see Fig.\[fig:I\_emi\_profile\_1\], top and middle panels) since the dust mass increases along the line of sight as the column density[^3] increases with [$l_{\rm{PDR}}$]{}. One may also note that dust emission increases linearly with [$l_{\rm{PDR}}$]{} (see Fig.\[fig:I\_emi\_profile\_1\], bottom panel) revealing that dust self-absorption, which depends on both the column density and the wavelength, is negligible at these wavelengths in the [$l_{\rm{PDR}}$]{} range we are considering. Consequently, we can consider that the intensity increases linearly with [$l_{\rm{PDR}}$]{} in the near, mid and far-IR and does not affect the shape of the dust spectrum. In the following, we therefore consider [$l_{\rm{PDR}}$]{} as a multiplying factor on the dust spectrum.
Influence of dust properties on dust emission after radiative transfer {#sec:sub:dust_prop_rad_transfer}
----------------------------------------------------------------------
In contrast to Sect.\[sec:sec:DustEM\], where we studied the influence of dust properties on dust emission in the optically thin case, we study here the influence of these properties in the optically thick case by performing a radiative transfer calculation. The results are shown in Fig.\[fig:test\], where the spectra in panels *g*/*j*, *h*/*k* and *i*/*l* are respectively associated with the size distributions in panels *a*, *b* and *c*. Spectra in panels *g*, *h* and *i* are located at the NIR PP and those in panels *j*, *k* and *l* at the FIR PP.
Dust grains are warmer at the NIR PP (see Fig.\[fig:test\], panels *g*, *h* and *i*) than at the FIR PP (panels *j*, *k* and *l*), as the peak of the emission shifts towards longer wavelengths from the former to the latter. This effect is due to the damping of the radiation field with increasing depth into the Horsehead.
One may note that at the NIR PP, dust emission in the far-IR does not vary with [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} (see Fig.\[fig:test\], panel *g*), unlike what is seen in the inner part (panel *j*). As dust emission in the far-IR is unaffected by variations in [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} in the optically thin case (see Sect.\[sec:sec:DustEM\]), this is strictly a radiative transfer effect. As the [a-C]{} grains bear a large fraction of the total dust cross-section, an increase in [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} significantly increases the extinction. Therefore, as [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} increases, the radiation field is increasingly damped at the NIR PP and fewer photons are available at the FIR PP to heat the larger grains. Indeed, as we can see in panel *j*, the wavelength associated with the maximum of emission shifts towards longer wavelengths with an increase in [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}. Therefore, dust emission in the far-IR varies with [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, due to radiative transfer effects.
Regarding the other changes in the spectra, they are due to variations in dust properties and are explained in Sect.\[sec:sec:DustEM\].
With evolved grains {#sec:sub:evolved_grains}
-------------------
Previously, we used only CM-grains throughout the Horsehead. To study the influence of dust evolution on the emission across the Horsehead, we use CM-grains with modified size distributions (i.e. CM-grains with values of [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$ that differ from the diffuse ISM) in the outer part of the Horsehead where the dust is likely to be more diffuse ISM-like, and aggregate-grains (AMM, AMMI) above a density threshold of 7 $\times\,10^{4}$ H.cm$^{-3}$, where dust grains are assumed to be coagulated. In order to simplify our study, we define 3 different cases depending on the dust we use :
$\bullet$ **Case *a* :**
: CM-grains with modified size distributions across all the Horsehead.
$\bullet$ **Case *b* :**
: CM-grains with modified size distributions in the outer part of the Horsehead and AMM in the inner part of the Horsehead.
$\bullet$ **Case *c* :**
: CM-grains with modified size distributions in the outer part of the Horsehead and AMMI in the inner part of the Horsehead.
Dust modelled emission profiles for the three cases are shown in Fig.\[fig:I\_emi\_CM\_AMM\_AMMI\].
As the maximum of emission in the near and mid-infrared is located in the outer part of the Horsehead, there is no modification of dust emission at these wavelengths, since we always use modified CM grains there. Dust emission in the far-infrared, on the other hand, increases when coagulated (AMM, AMMI) dust grains are used because they are more emissive. One may note that AMMI are more emissive than AMM, as the dust mass in AMMI is larger than in AMM because of the ice mantle.
Comparison with observations {#sec:comparison_observations}
============================
In this section, we constrain our dust model with the observations. First, we present our results using diffuse ISM-like dust (i.e. CM-grains); second, we introduce the methodology we use in the following parts; third, we constrain the four parameters [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$ and [$l_{\rm{PDR}}$]{} for the 3 cases of evolved grains as defined in Sect.\[sec:sub:evolved\_grains\] and across the 3 cuts (see Fig.\[fig:HH\_24\]).
Diffuse case {#sec:sec:diffuse_case}
------------
The results are shown in Fig.\[fig:SOC\_diffuse\]. The 10 upper panels correspond to the modelled emission across the Horsehead using CM-grains, with [$l_{\rm{PDR}}$]{} varying from 0.1 pc to 0.5 pc, for the 10 photometric bands. The observed emission is shown for cut 2. The bottom panels show the corresponding ratios of maximum observed and modelled intensities.
Regardless of the cut considered, it is not possible to simultaneously fit the observations in all the photometric bands (see Fig.\[fig:SOC\_diffuse\], upper panel), whatever the [$l_{\rm{PDR}}$]{} value. With [$l_{\rm{PDR}}$]{} = 0.1 pc, we are able to roughly reproduce the observations in the near and mid-infrared but in the far-infrared, the modelled dust emission is too low by a factor $\sim$ 10 (see Fig.\[fig:SOC\_diffuse\], bottom panels). With [$l_{\rm{PDR}}$]{} = 0.5 pc, we are able to reproduce the observations in the far-infrared but in the near and mid-infrared, the modelled dust emission is too high by a factor of at least $\sim$ 10.
If [$l_{\rm{PDR}}$]{} is higher than 0.10 pc (see Sect.\[sec:sub:density\_profile\]), the modelled near and mid-infrared dust emission will always be too high, which implies reducing the abundance of the dust responsible for the emission at these wavelengths, hence decreasing the [a-C]{} dust-to-gas ratio, [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} (see Sect.\[sec:sub:dust\_prop\_rad\_transfer\]). On the other hand, the ratio between the modelled dust emission and the observations is not the same in the five near and mid-IR bands. The shape of the spectrum must therefore also be changed, by varying [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$ (see Sect.\[sec:sub:dust\_prop\_rad\_transfer\]).
To summarise, it is not possible to reproduce the observations across any of the three cuts in the Horsehead for any value of [$l_{\rm{PDR}}$]{}, using only diffuse ISM-like dust. We must therefore consider evolved dust.
Methodology {#sub:sub:methodology}
-----------
For the sake of reducing computation time, instead of exploring the 4D-space defined by [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$ and [$l_{\rm{PDR}}$]{}, we explore the 3D-space defined by [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$, since variations in [$l_{\rm{PDR}}$]{} do not affect the shape of the dust spectrum (see Sect.\[sec:sub:sub:lpdr\_radiative\_transfer\]), unlike variations in [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$ (see Sect.\[sec:sub:dust\_prop\_rad\_transfer\]). [$l_{\rm{PDR}}$]{} can therefore be adjusted afterwards.
Adjusting the shape of the modelled dust spectra to the observed dust spectra means that the ratio between $I_{{\mathrm{obs,\,max}}}(i)$ and $I_{{\mathrm{mod,\,max}}}(i)$ has to be roughly the same in every band. Therefore, we minimise the following parameter : $$\label{eq:chi2}
\chi^{2} = \sum_{i\,\in\,\mathrm{filters}} \left(\frac{X_{i}-\mu}{\sigma_{i}} \right)^{2} \;,$$ with $$\label{eq:chi2_1}
X_{i} = \frac{I_{\mathrm{obs,max}}(i)}{I_{\mathrm{mod,max}}(i)} \quad ; \quad
\sigma_{i} = r_{\mathrm{obs}}(i)\,X_{i} \quad ; \quad
\mu = \left< X_{i} \right>_{i\,\in\,\mathrm{filters}}$$ where $r_{\mathrm{obs}}$ is the relative error for each filter, defined in Sect.\[sub:sub:observations\], and $I_{{\mathrm{obs,\,max}}}(i) = \mathrm{max}\left(I_{\mathrm{obs},\,i}(z)\right)$ with $I_{\mathrm{obs},\,i}(z)$ the observed dust emission in the $i$-th band at the position $z$ along the cut.
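These definitions translate directly into a few lines of Python; the sketch below is ours (the function and variable names are not from the original pipeline) and also notes how the second step of the procedure described next constrains [$l_{\rm{PDR}}$]{}.

```python
import numpy as np

def chi2(I_obs_max, I_mod_max, r_obs):
    """Shape comparison between observed and modelled band maxima.
    All inputs are arrays over the selected photometric bands."""
    X = I_obs_max / I_mod_max            # X_i
    sigma = r_obs * X                    # sigma_i
    mu = X.mean()                        # mu = <X_i>
    return np.sum(((X - mu) / sigma) ** 2), mu

# Since the modelled intensity scales linearly with l_PDR, rescaling the model
# by the factor mu brings <X_i> to 1, so the adjusted length is simply
#   l_pdr_best = mu * l_pdr_fixed
# where l_pdr_fixed is the value used in the radiative transfer run.
```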
The following procedure is thus applied :
1. We constrain [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$ with a fixed [$l_{\rm{PDR}}$]{} in order to adjust the shape of the modelled dust spectrum to the observed dust spectrum by minimising [$\chi^{2}$]{}.
2. We use the dust properties ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$) associated with [$\chi^{2}_{\mathrm{min}}$]{} (i.e. the minimum value of [$\chi^{2}$]{} in the 3D-space defined by [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$) and we adjust the overall modelled dust spectrum to the observed dust spectrum by multiplying the flux in all bands by the same factor to get $\mu$ = 1, which constrains [$l_{\rm{PDR}}$]{}.
We choose to remove the [$\mathrm{IRAC}_{4.5}$]{} and [$\mathrm{PACS}_{70}$]{} bands because it is not possible to simultaneously fit the observations in the 10 bands with these 2 included. We discuss this decision further in Sect.\[sec:discrepancy\].
Constrain [$l_{\rm{PDR}}$]{} and dust properties [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$ {#sec:sec:a_min_vsg_alpha_Eg}
-----------------------------------------------------------------------------------------------------------------------------------
First, we study the [$\chi^{2}$]{} distribution in the 3D-space ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$) for each of the 3 cuts and the 3 cases, in order to obtain the best set of parameters in these 9 cases. The 3D-space is defined as follows (see also the sketch after this list):
1. [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} varying from 0.001 $\times$ $10^{-2}$ to 0.041 $\times$ $10^{-2}$ with steps of 0.002 $\times$ $10^{-2}$.
2. [$a_{\mathrm{min,\,a-C}}$]{} varying from 0.5 nm to 1.0 nm with steps of 0.025 nm.
3. $\alpha$ varying from -13 to -3 with steps of 0.5.
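The resulting grid can be enumerated in the same way as before; the sketch below only counts the radiative transfer runs required per cut and per case and indicates how the [$\chi^{2}$]{} minimisation would proceed (the `soc_band_maxima` wrapper is hypothetical).

```python
import itertools
import numpy as np

m_grid     = np.arange(0.001e-2, 0.041e-2 + 1e-9, 0.002e-2)  # M_a-C/M_H
a_min_grid = np.arange(0.5, 1.0 + 1e-9, 0.025)               # a_min [nm]
alpha_grid = np.arange(-13.0, -3.0 + 1e-9, 0.5)              # alpha

grid = list(itertools.product(m_grid, a_min_grid, alpha_grid))
print(len(grid), "SOC models per cut and per case")

# For each triplet, a SOC run at fixed l_PDR would give the band maxima
# I_mod_max(i); chi2() from the previous sketch is then minimised over the grid:
#   best = min(grid, key=lambda p: chi2(I_obs_max, soc_band_maxima(*p), r_obs)[0])
```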
Second, we study the [$\chi^{2}$]{} distribution in the 2D-spaces ([$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$), ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}) and ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, $\alpha$). Finally, we conclude with the comparison between the observed and modelled dust emission profiles for each of the 3 cuts and the 3 cases with the best sets of parameters.
For more clarity, we define [$\chi^{2}_{\mathrm{min,\,2D}}\left(M_{\mathrm{a-C}}/M_{\mathrm{H}}\right)$]{} that is the minimum value of [$\chi^{2}$]{} in the 2D-space ([$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$) for a given value of [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}. We also define [$\chi^{2}_{\mathrm{min}}$]{}, that is the minimum value of [$\chi^{2}$]{} in the 3D-space ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$), i.e. the minimum value of [$\chi^{2}_{\mathrm{min,\,2D}}\left(M_{\mathrm{a-C}}/M_{\mathrm{H}}\right)$]{}.
### [$\chi^{2}$]{} distribution in the 3D-space ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$) {#sec:sec:a_min_vsg_alpha_ab_vsg}
Figure\[fig:SOC\_chi2min\_final\] shows [$\chi^{2}_{\mathrm{min,\,2D}}\left(M_{\mathrm{a-C}}/M_{\mathrm{H}}\right)$]{} and Tab.\[tab:best\_fit\] summarises these results.
First and foremost, [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} is between 0.01 $\times 10^{-2}$ and 0.03 $\times 10^{-2}$, i.e. 6 to 10 times lower than in the diffuse ISM (0.17 $\times 10^{-2}$), regardless of the cut or the case considered. Second, [$a_{\mathrm{min,\,a-C}}$]{} is between 0.8 and 0.925 nm, i.e. 2 to 2.25 times larger than in the diffuse ISM (0.4 nm). Third, $\alpha$ is between -7 and -5.5, i.e. 1.1 to 1.4 times lower than in the diffuse ISM (-5).
One may note that, regardless of the cut, [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} increases from case *a* to case *c* (see Fig.\[fig:SOC\_chi2min\_final\]). In case *a*, we use only modified CM grains (i.e. CM grains with values of [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$ that differ from the diffuse ISM) in both the outer and the inner part of the Horsehead, whereas in case *c* we use modified CM grains in the outer part and AMMI in the inner part. As AMMI are more emissive in the far-IR than CM grains, the emission in this wavelength range must decrease to fit the observations, which is achievable by reducing [$l_{\rm{PDR}}$]{} (see Sect.\[sec:sub:sub:lpdr\_radiative\_transfer\]); hence [$l_{\rm{PDR}}$]{} decreases from case *a* to case *c*. This decrease in [$l_{\rm{PDR}}$]{} implies a decrease in the emission in the near and mid-IR, hence [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} must increase to counterbalance this variation.
  ----------------------------------------------------------- ------- ------- ------- ------- ------- ------- ------- ------- -------
                                                                       case *a*                case *b*                case *c*
                                                                cut 1   cut 2   cut 3   cut 1   cut 2   cut 3   cut 1   cut 2   cut 3
  $10^{2}$ $\times$ [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}     0.009   0.011   0.011   0.011   0.017   0.013   0.013   0.021   0.017
  [$a_{\mathrm{min,\,a-C}}$]{} \[nm\]                           0.825   0.825   0.925   0.825   0.8     0.925   0.825   0.8     0.9
  $\alpha$                                                      -7.0    -6.0    -7.5    -6.5    -5.5    -7.5    -6.5    -5.5    -6.5
  [$l_{\rm{PDR}}$]{} \[pc\]                                     0.283   0.297   0.273   0.290   0.267   0.282   0.275   0.254   0.265
  [$\chi^{2}_{\mathrm{min}}$]{}                                 49.6    45.1    36.0    51.0    33.9    36.9    41.3    30.5    30.7
  ----------------------------------------------------------- ------- ------- ------- ------- ------- ------- ------- ------- -------

  : Best-fit parameters and associated [$\chi^{2}_{\mathrm{min}}$]{} for the three cuts and the three cases defined in Sect.\[sec:sub:evolved\_grains\].[]{data-label="tab:best_fit"}
### [$\chi^{2}$]{} distribution in the 2D-spaces ([$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$), ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}) and ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, $\alpha$) {#sec:sec:a_min_vsg_alpha_ab_vsg}
We show in Fig.\[fig:SOC\_grid\_TOT\], the [$\chi^{2}$]{} distribution for cut 2 in the 2D-spaces ([$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$), ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}) and ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, $\alpha$). We choose to focus on only one cut as we are interested in the behaviour of the [$\chi^{2}$]{} distribution here, which is the same regardless of the cut.
The most important result is that, regardless of the case, there is a unique minimum in all 2D-spaces. Also, as explained in Sect.\[sec:sec:DustEM\], a decrease in [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} is, to first order, similar to an increase in [$a_{\mathrm{min,\,a-C}}$]{} and $\alpha$, regarding dust emission in the near and mid- IR. An increase in [$a_{\mathrm{min,\,a-C}}$]{} is therefore counterbalanced by a decrease in $\alpha$ to keep low values of [$\chi^{2}$]{}, and an increase in [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} is counterbalanced by an increase in [$a_{\mathrm{min,\,a-C}}$]{} and in $\alpha$ hence explaining the banana-shape of the low [$\chi^{2}$]{} values in each of the 2D-spaces.
It can also be seen that from case *a* to case *c*, the position of [$\chi^{2}$]{} minimum value moves. From case *a* to case *c*, dust emission in the far-IR increases (see Fig.\[fig:I\_emi\_CM\_AMM\_AMMI\]) hence this effect is counterbalanced by a decrease in [$l_{\rm{PDR}}$]{} that also reduces dust emission in the near and mid-IR. To compensate for this decrease in dust emission in the near and mid-IR, the value of [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{} associated with the [$\chi^{2}$]{} minimum value increases from case a to case c in the 2D-spaces ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}) and ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, $\alpha$). In the 2D-space ([$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$), this effect is counterbalanced by a decrease in $\alpha$ and an increase in [$a_{\mathrm{min,\,a-C}}$]{}.
### Comparison between dust modelled emission and dust observed emission profiles {#sec:sec:dust_emission_profiles}
Here, we use the best set of parameters ([M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, [$a_{\mathrm{min,\,a-C}}$]{}, $\alpha$ and [$l_{\rm{PDR}}$]{}) that are listed in Table.\[tab:best\_fit\] and compare the modelled emission profiles in the 10 photometric bands for the three cases with the observed emission profiles in the three cuts (see Fig.\[fig:SOC\_final\_obs\]). We focus on three aspects: the maximum of intensity in each of the 10 bands, the position of these maxima and the width of these profiles.
In the near and mid-IR, except in [$\mathrm{IRAC}_{4.5}$]{}, the maximum emission is well reproduced, regardless of the case or the cut. In [$\mathrm{PACS}_{70}$]{}, although the maximum of emission is never reproduced, the discrepancy between the maximum modelled emission and the maximum observed emission decreases from case *a* to case *c*. From [$\mathrm{SPIRE}_{250}$]{} to [$\mathrm{SPIRE}_{500}$]{}, the maximum emission is within the error bars, regardless of the case or the cut, and the discrepancy between the maximum modelled emission and the maximum observed emission decreases from case *a* to case *c*. Regarding [$\mathrm{PACS}_{160}$]{}, the maximum emission is within the error bars only for case *c* for cuts 2 and 3, but never for cut 1, regardless of the case.
Concerning the position of the maximum emission, it is well reproduced from [$\mathrm{IRAC}_{3.6}$]{} to [$\mathrm{PACS}_{70}$]{}, regardless of the cut and the case. For cut 1, there is a small offset between the positions of the modelled and observed emission maxima from [$\mathrm{PACS}_{70}$]{} to [$\mathrm{SPIRE}_{500}$]{}. For cut 2, the same offset is present in [$\mathrm{SPIRE}_{350}$]{} and [$\mathrm{SPIRE}_{500}$]{}, regardless of the case. For cut 3, all the positions are well reproduced.
Regarding the width of the profiles, they are well reproduced from [$\mathrm{IRAC}_{3.6}$]{} to [$\mathrm{PACS}_{160}$]{} but slightly different from [$\mathrm{SPIRE}_{250}$]{} to [$\mathrm{SPIRE}_{500}$]{}, which could be due to large structures in the Horsehead.
To summarise, the observed dust emission is well reproduced in the near and mid-IR, except in [$\mathrm{IRAC}_{4.5}$]{}, regardless of the case and the cut. In the far-IR, the discrepancy between observed dust emission and modelled dust emission decreases from case *a* to case *c*.
Discussion {#sec:discussion}
==========
First, we discuss the discrepancy between the modelled and observed dust emission in [$\mathrm{IRAC}_{4.5}$]{} and in [$\mathrm{PACS}_{70}$]{}; second, we summarise the results obtained; third, we propose a scenario of dust evolution in agreement with these results. We end with a discussion of dust processing timescales in support of this scenario.
Discrepancy in [$\mathrm{IRAC}_{4.5}$]{} and [$\mathrm{PACS}_{70}$]{} {#sec:discrepancy}
---------------------------------------------------------------------
In [$\mathrm{IRAC}_{4.5}$]{}, the modelled dust emission is always overestimated (see Fig.\[fig:SOC\_final\_obs\]) by a factor 2 to 4. As this filter covers the dust continuum and the wings of the IR bands from a-C:H nano-grains, this suggests that the wings of the IR bands in this region are different (i.e., weaker and/or narrower, see for instance [@bouteraon_carbonaceous_2019] for more details about the variability of the IR band widths) from those in the diffuse ISM. Indeed, we are here looking at dust that is evolving from dense cloud dust in response to interaction with UV photons.
Moreover, freshly produced a-C:H nano-grains may not yet have had time to be entirely photo-processed, and hence have a large band-to-continuum ratio because of their high fraction of aliphatic bonds, as opposed to aromatic bonds. As discussed in [@jones_evolution_2013], this requires a-C:H nano-grains with a band gap larger than 0.1 eV, the value adopted in the diffuse ISM.
However, as we do not have spectroscopic information on the dust in the Horsehead in the near-IR, we are not able to answer these questions. In the near future, JWST spectroscopic data should allow us to understand such changes in the structure of a-C:H nano-grains.
In [$\mathrm{PACS}_{70}$]{}, models always overestimate the emission (see Fig.\[fig:SOC\_final\_obs\]) by a factor 3 to 4. This suggests that large grains ([a-Sil/a-C]{} and [a-C:H/a-C]{} for case *a*, AMM for case *b* and AMMI for case *c*) are somewhat too warm and not emissive enough. This is supported by recent laboratory experiments in which the mass absorption coefficient of silicates in the far-IR is larger (up to an order of magnitude) than those currently used in THEMIS [see Fig.5 in @demyk_low_2017]. As a consequence, the large grains we use here are probably not emissive enough. The incorporation of these new laboratory results in THEMIS will most likely reduce the discrepancy in [$\mathrm{PACS}_{70}$]{}.
Main results
------------
Using the 3D radiative transfer code SOC together with the dust model THEMIS, we can reproduce the Horsehead observations in 8 of the 10 photometric bands of [*Spitzer*]{} and [*Herschel*]{}.
The main results for the outer part of the Horsehead are the following :
1. The nano-grains (i.e. [a-C]{} grains) dust-to-gas mass ratio, [M$_{\mathrm{a-C}}$/M$_{\mathrm{H}}$]{}, is 6 to 10 times lower than in the diffuse ISM.
2. The minimum size of the nano-grains, [$a_{\mathrm{min,\,a-C}}$]{}, is 2 to 2.25 times larger than in the diffuse ISM.
3. The power-law exponent of the nano-grains size distribution, $\alpha$, is 1.1 to 1.4 times lower than in the diffuse ISM, i.e. the size distribution is steeper.
The best size distributions for the three cuts and case *c* are shown in Fig.\[fig:s\_dist\_final\]. Concerning the inner part of the Horsehead, we tested 3 different kinds of dust: diffuse ISM-like dust (CM) with modified size distributions in case *a*, aggregates of grains (AMM) in case *b*, and aggregates of grains with ice mantles (AMMI) in case *c*. At long wavelengths (from [$\mathrm{PACS}_{160}$]{} to [$\mathrm{SPIRE}_{500}$]{}), the results are significantly better when using AMMI instead of CM grains. Regarding [$\mathrm{PACS}_{70}$]{}, even if we are not able to reproduce the observed emission with our model, using aggregates (AMM/AMMI) instead of diffuse ISM-like dust (CM) with modified size distributions significantly improves the fit in this band.
Finally, the length of the Horsehead along the line of sight, [$l_{\rm{PDR}}$]{}, is found to be in the range 0.26 to 0.30 pc, which is in agreement with previous gas studies [@habart_density_2005].
Dust evolution scenario {#sub:sub:scenario}
-----------------------
Our results show significant variations of the dust size distribution and in the following we outline a possible scenario of dust evolution across the Horsehead interface. Given the strong incident radiation field, we assume that the dominant process is the exposure of dust grains from the dense molecular cloud (the inner region) to the UV light of $\sigma$-Ori. This suggests two major photo-processing sequences: (i) the partial fragmentation of aggregate grains from the inner region and (ii) the destruction of the smallest a-C:H nano-grains. We discuss the significance of these sequences by comparing their timescales to the advection timescale $\tau_{a}$, i.e., the time that the incident UV light needs to heat up and dissociate the molecular gas at the cloud border.
The advection timescale is defined as $\tau_{\mathrm{a}}=L/v_{\mathrm{DF}}$ where $L\sim 0.05$ pc is the width of the outer part of the Horsehead and $v_{\mathrm{DF}}\sim 0.5$ km.s$^{-1}$ is the velocity of the dissociation front [@hollenbach_photodissociation_1999]. With these values, we find $\tau_{\mathrm{a}}\sim 10^{5}$ years.
Due to the lack of dedicated studies, we take the photo-darkening timescale, $\tau_{\mathrm{ph}}$, as a lower limit to the timescales of photo-fragmentation of aggregate grains and photo-destruction of [a-C]{} nano-grains. Indeed, photo-darkening involves the dissociation of CH-bonds, a process that is likely faster than the breaking of CC-bonds that must occur in photo-fragmentation [@jones_h_2015]. We thus express $\tau_{\mathrm{ph}}$ at the cloud edge in terms of the photo-darkening rate $\Lambda_{\mathrm{pd}}$ [@jones_cycling_2014]: $$\tau_{\mathrm{ph}} \simeq \Lambda_{\mathrm{pd}}^{-1} = {1\over {\sigma_{\mathrm{CH}}\,F^0_{\mathrm{UV}}\,Q_{\mathrm{abs}}(a)\,\epsilon(a)}},$$ where $F^0_{\mathrm{UV}}\simeq 3.8\times 10^9$ photons.s$^{-1}$.cm$^{-2}$ is the unattenuated UV field, $\epsilon(a)={\rm min}(1,{2\over a[{\rm nm}]})$ is a size-dependent photo-darkening efficiency, $\sigma_{\mathrm{CH}}\simeq 10^{-19}$ cm$^{2}$ is the CH bond photo-dissociation cross-section and $Q_{\mathrm{abs}}(a)$ is the dust absorption efficiency, which depends almost solely on the radius in the UV range. In the case of AMMI, $\tau_{\mathrm{ph}}$ is in reality longer because the ice mantle first needs to be vaporised, but we neglect this effect since the $\tau_{\mathrm{ph}}$ we estimate is already a lower limit.
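The two timescales can be evaluated with a few lines of Python. In the sketch below, $Q_{\mathrm{abs}}$ is simply set to 1 instead of the size-dependent THEMIS values used in the paper, so the numbers are order-of-magnitude estimates only.

```python
import numpy as np

PC_CM = 3.086e18          # cm per parsec
YR_S  = 3.156e7           # seconds per year

# Advection timescale: tau_a = L / v_DF
L_cm  = 0.05 * PC_CM      # width of the outer part of the Horsehead
v_DF  = 0.5e5             # dissociation-front velocity [cm/s]
tau_a = L_cm / v_DF / YR_S            # ~1e5 yr

# Photo-darkening timescale as a function of the grain radius a [nm]
sigma_CH = 1e-19          # CH-bond photo-dissociation cross-section [cm^2]
F_UV     = 3.8e9          # unattenuated UV photon flux [photons s^-1 cm^-2]

def tau_ph(a_nm, Q_abs=1.0):
    """Lower limit on the photo-processing timescale, in years.
    Q_abs = 1 is an assumption; the paper uses THEMIS Q_abs(a)."""
    eps = np.minimum(1.0, 2.0 / a_nm)  # photo-darkening efficiency epsilon(a)
    return 1.0 / (sigma_CH * F_UV * Q_abs * eps) / YR_S

print(f"tau_a          ~ {tau_a:.1e} yr")
print(f"tau_ph(1 nm)   ~ {tau_ph(1.0):.1e} yr")    # nano-grains: << tau_a
print(f"tau_ph(250 nm) ~ {tau_ph(250.0):.1e} yr")  # large aggregate grains
```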
We show $\tau_{\mathrm{ph}}(a)$ in Fig.\[fig:photodarkening\] for CM, AMM and AMMI. As discussed in [@ysard_mantle_2016], more than 50 $\%$ of the AMM(I) dust mass is contained in grains larger than 250 nm. From this figure, one can see that aggregate grains can be photo-fragmented because $\tau_{\mathrm{ph}}\sim \tau_{\mathrm{a}}$. One can also see that [a-C]{} nano-grains can be efficiently destroyed as $\tau_{\mathrm{ph}}<\tau_{\mathrm{a}}$ for [a-C]{} nano-grains. Similar results were found by [@alata_vacuum_2014], from laboratory experiments on a-C:H grain analogues, later applied to the Horsehead [@alata_vacuum_2015].
From this analysis emerges the following scenario. Within an advection timescale, the a-C nano-grains formed by fragmentation of aggregate grains are also partially destroyed by UV photons. This naturally explains the depletion of a-C:H grains around $a=10$ nm seen in Fig.\[fig:s\_dist\_final\]. We note that the size distribution of these freshly formed small grains is significantly different from the diffuse ISM case (blue curve in Fig.\[fig:s\_dist\_final\]). This evolved size distribution could reflect the photo-evaporated layer described by [@bron_photoevaporating_2018].
Conclusion {#sec:conclusion}
==========
With [*Herschel*]{} and [*Spitzer*]{} data, we studied the Horsehead using 10 photometric bands, from 3.6 [$\mu$m]{} to 500 [$\mu$m]{}, covering the entire dust spectrum. We modelled the dust emission across the Horsehead using the THEMIS dust model together with the 3D radiative transfer code SOC.
We show that it is not possible to reproduce the observations of the Horsehead using dust grains from the diffuse ISM, hence the need to modify their size distributions and compositions: dust therefore evolves across the Horsehead.
In the outer part of the Horsehead, the [a-C]{} nano-grain dust-to-gas ratio is 6 to 10 times lower, and their minimum size 2 to 2.25 times larger, than in the diffuse ISM. The power law of the size distribution is steeper than in the diffuse ISM. In the inner part of the Horsehead, we show that using aggregate grains, with or without ice mantles, significantly reduces the discrepancy between our model and the observations. The discrepancy between the observations and our model at 4.5 [$\mu$m]{} could be due to the shape of the aromatic band wings, which would explain the overestimation of the modelled dust emission. We also find that large grains appear too warm, as our modelled dust emission at 70 [$\mu$m]{} is overestimated. However, laboratory studies show that large silicate grains are more emissive, and hence cooler, than those used in dust models. These new results will soon be implemented in THEMIS.
Based on a time-scale analysis, we propose a scenario where the [a-C]{} nano-grains form by the partial photo-fragmentation of aggregate grains and are processed by the UV photons, leading to a size distribution depleted in grains of size from 5 to 10 nm. In the denser regions of the Horsehead, the dust composition is typical of dense clouds.
Spectroscopic observations of the Horsehead are required to make further progress on the structure and size distribution of [a-C]{} nano-grains. Indeed, observations with the JWST will, for the first time, spatially resolve the individual IR dust signatures across the Horsehead, offering an unprecedented look at the evolution of interstellar matter in photon-dominated regions.
We would like to thank the CNES and the P2IO LabeX for supporting Thiebaut Schirmer's PhD work.
This work was supported by the Programme National “Physique et Chimie du Milieu Interstellaire” (PCMI) of CNRS/INSU with INC/INP co-funded by CEA and CNES. HIPE is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortia. PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). SPIRE has been developed by a consortium of institutes led by Cardiff Univ. (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA)
Size distribution {#appendix:size_distribution}
=================
Size distributions of dust in THEMIS follow either a power law with an exponential cut-off, defined as follows: $$\frac{\mathrm{d}n}{\mathrm{d}a} \propto \left\{
\begin{array}{ll}
a^{\alpha} & \qquad \mathrm{if} \quad a < a_{\mathrm{t}} \\
a^{\alpha} \times \exp\left(-\left(\frac{a-a_{\mathrm{t}}}{a_{\mathrm{c}}}\right)^{3}\right) & \qquad \mathrm{if} \quad a \geq a_{\mathrm{t}}
\end{array}
\right.$$
or a log-normal law, defined as follows: $$\frac{\mathrm{d}n}{\mathrm{d}a} \propto \frac{1}{a} \times \exp\left(-\left(\frac{\log(a/a_{0})}{\sigma}\right)^{2}\right)$$ where all the parameters for each dust population are listed in Table\[tab:parameters\_size\_distribution\].
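The two laws are straightforward to implement; the (un-normalised) Python sketch below is ours. Note that the table quotes the diffuse-ISM a-C power-law exponent as 5 while the main text uses $-5$; the example call uses a negative exponent, i.e. a size distribution falling steeply with $a$.

```python
import numpy as np

def dnda_powerlaw(a, alpha, a_t, a_c):
    """Power law with an exponential cut-off beyond a_t (un-normalised)."""
    a = np.asarray(a, dtype=float)
    cutoff = np.where(a < a_t, 1.0, np.exp(-((a - a_t) / a_c) ** 3))
    return a ** alpha * cutoff

def dnda_lognormal(a, a0, sigma):
    """Log-normal law centred on a0 (un-normalised); sigma is the width."""
    a = np.asarray(a, dtype=float)
    return (1.0 / a) * np.exp(-((np.log(a / a0) / sigma) ** 2))

# Example: diffuse-ISM a-C nano-grains (a_min = 0.4 nm, a_t = 50 nm, a_c = 10 nm)
a = np.logspace(np.log10(0.4), np.log10(4900.0), 200)   # nm
dnda_aC = dnda_powerlaw(a, alpha=-5.0, a_t=50.0, a_c=10.0)
```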
[lccccccc]{} Name & size & $\alpha$ & $a_{\mathrm{min}}$ & $a_{\mathrm{max}}$ & $a_{\mathrm{c}}$ & $a_{\mathrm{t}}$ & $a_{0}$\
& & & \[nm\] & \[nm\] & \[nm\] & \[nm\] & \[nm\]\
\
[a-C]{}& p-law & 5 & 0.4 & 4900 & 10 & 50 & -\
[a-C:H/a-C]{}& log-n & - & 0.5 & 4900 & - & - & 7\
[a-Sil/a-C]{}& log-n & - & 1 & 4900 & - & - & 8\
\
AMM & log-n & - & 47.9 & 700 & - & - & 479\
\
AMMI & log-n & - & 91.2 & 700 & - & - & 610\
The Horsehead seen with [*Spitzer*]{} and [*Herschel*]{} {#appendix:HH_obs}
========================================================
[^1]: THEMIS is available here : [https://www.ias.u-psud.fr/themis/](https://www.ias.u-psud.fr/themis/index.html)
[^2]: DustEM is available here : <http://www.ias.u-psud.fr/DUSTEM>
[^3]: $N_{\mathrm{H}}(z)=n_{\mathrm{H}}(z)\,l_{\mathrm{PDR}}$
---
abstract: 'Brain tumor segmentation plays a pivotal role in medical image processing. In this work, we aim to segment brain MRI volumes. 3D convolutional neural networks (CNNs) such as 3D U-Net [@cciccek20163d] and V-Net [@vnet], which employ 3D convolutions to capture the correlation between adjacent slices, have achieved impressive segmentation results. However, these 3D CNN architectures come with high computational overheads due to multiple layers of 3D convolutions, which may make these models prohibitive for practical large-scale applications. To this end, we propose a highly efficient 3D CNN to achieve real-time dense volumetric segmentation. The network leverages the 3D multi-fiber unit, which consists of an ensemble of lightweight 3D convolutional networks, to significantly reduce the computational cost. Moreover, 3D dilated convolutions are used to build multi-scale feature representations. Extensive experimental results on the BraTS-2018 challenge dataset show that the proposed architecture greatly reduces computation cost while maintaining high accuracy for brain tumor segmentation. Our code will be released soon.'
author:
- 'Chen Chen$^{1,}$[^1], Xiaopeng Liu$^{2,\star}$, Meng Ding$^3$, Junfeng Zheng$^2$, Jiangyun Li$^{2,\dagger}$'
bibliography:
- 'reference.bib'
title: '[3D Dilated Multi-Fiber Network for Real-time Brain Tumor Segmentation in MRI]{}'
---
Introduction
============
Recent advances in the treatment of gliomas have increased the demands on using magnetic resonance imaging (MRI) techniques for the diagnosis, tumor monitoring, and patient outcome prediction. Accurate segmentation of brain tumor is critical for diagnosis and treatment planning. However, automated brain tumor segmentation in multi-modal MRI scans is a challenging task due to the heterogeneous appearance and shape of gliomas [@bakas2018].
Deep learning has triumphed over various computer vision tasks. A flurry of research has leveraged Convolution Neural Networks (CNNs) for brain tumor segmentation and achieved great success. Havaei *et al.* [@havaei2017brain] present a two-pathway CNN architecture and predict the label for each pixel by taking as input a local image patch in a sliding-window fashion. Ronneberger *et al.* [@ronneberger2015u] develop a fully convolutional network (FCN), namely U-Net, to process the entire image for dense prediction. The network follows an encoder-decoder structure and is trained end-to-end to produce a full-resolution segmentation. Although these 2D CNN-based approaches have achieved impressive segmentation performance, these models ignore crucial 3D spatial context given that most clinical imaging data are volumetric, e.g. 3D MR images. To better represent the 3D volumes of imaging data, Cicek *et al.* [@cciccek20163d] generalize the U-Net from 2D to 3D by exploring 3D operations, e.g. 3D convolution and 3D max pooling, in the FCN, leading to the 3D U-Net. Similarly, V-Net [@vnet] uses volumetric convolutions to process MRI volumes and yields more accurate segmentation than the 2D approaches.
It has been shown that an effective way of reasoning about volumetric structure is to use 3D convolutions in deep neural network architectures [@cciccek20163d; @vnet; @dou20173d]. However, using multiple layers of 3D convolutions suffers from high computational cost compared with regular 2D CNNs due to the extra dimension. A few attempts have been made to alleviate this issue by using lightweight network architectures. For example, 3D-ESPNet [@nuechterlein20183d] extends ESPNet, a fast and efficient network based on point-wise convolution for 2D semantic segmentation, to 3D medical image data. S3D-UNet [@chen2018s3d] takes advantage of the separable 3D convolution, which divides each 3D convolution into three parallel branches, in order to reduce the number of learnable network parameters. However, the performance of these efficient models is not comparable to the state-of-the-art.
**Contribution.** In this paper, to bridge the gap between model efficiency and accuracy for 3D MRI brain tumor segmentation, we propose a novel 3D dilated multi-fiber network (DMFNet). It builds upon the multi-fiber unit [@chen2018multi], which uses efficient group convolutions, and introduces a weighted 3D dilated convolution operation to obtain multi-scale image representations for segmentation. DMFNet has only 3.88M parameters. Moreover, with inference times of 0.019s on one GPU and 20.6s on one CPU for a single 3D volumetric segmentation, it achieves dice scores of 80.12%, 90.62% and 84.54% respectively for the enhancing tumor, the whole tumor and the tumor core on the 2018 BraTS challenge [@bakas2017advancing; @menze2015multimodal].
Method
======
Dilated Multi-Fiber (DMF) Unit
------------------------------
A 3D convolution kernel normally operates on all channels of the input feature maps, which makes the computational cost, measured in floating-point operations (FLOPs), grow rapidly with the number of channels. Group convolution is an effective solution for speeding up the model and has been explored for efficient network design, e.g. ShuffleNet [@shufflenet]. Although the grouping strategy reduces the number of parameters, simply replacing regular convolutions with group convolutions may impair the information exchange between channels and hurt the learning capacity. The multi-fiber (MF) unit [@chen2018multi] was proposed for video action recognition and can facilitate the information flow between groups. Inspired by it, we extend the multi-fiber unit design with an adaptive weighted dilated convolution to capture multi-scale features in brain MR images. In the following, we detail the key components of our DMF unit.
![(a) A residual unit with two regular convolution layers. (b) The multi-fiber design consisting of multiple separated residual units, called fibers. (c) The multi-fiber (MF) unit takes advantage of a *multiplexer* for information routing. (d) The proposed dilated multi-fiber (DMF) unit with an adaptive weighting scheme for different dilation rates. (e) The schematic diagram of the 3D dilated convolution operation. $d$ is the dilation rate. $d=1$ indicates the regular convolution.[]{data-label="MFunit"}](DMFunit2.pdf){width="\textwidth"}
**Channel Grouping.** The idea of channel grouping is to split the convolutional channels into multiple groups, which significantly reduces the connections between feature maps and kernels and thus saves parameters. As shown in (a) and (b), the regular residual unit is split into $g$ parallel residual units that are called fibers. We assume the kernel size is constant, e.g. $kernel=3\times 3\times 3$, and denote $param_{(a)}$ and $param_{(b)}$ as the parameter counts of (a) and (b), respectively. Thus, we have $param_{(a)}=kernel\times(c_{in}\times c_{mid}+c_{mid}\times c_{out})$, where ${c_*}$ is the number of channels. With the multi-fiber grouping strategy, the parameter count becomes $param_{(b)}=g\times kernel\times (c_{in}/{g}\times{c_{mid}}/{g}+{c_{mid}}/{g}\times{c_{out}}/{g})=param_{(a)}/{g}$, which is $g$ times smaller than $param_{(a)}$.
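The $g$-fold parameter saving can be checked with a short calculation; the channel widths below are illustrative values, not the ones used in DMFNet.

```python
def conv3d_params(c_in, c_out, k=3, groups=1):
    """Number of weights of a 3D convolution layer (bias ignored)."""
    return k ** 3 * c_in * c_out // groups

c_in = c_mid = c_out = 96        # illustrative channel widths
g = 16                           # number of fibers

regular = conv3d_params(c_in, c_mid) + conv3d_params(c_mid, c_out)
fibered = (conv3d_params(c_in, c_mid, groups=g)
           + conv3d_params(c_mid, c_out, groups=g))
print(regular / fibered)         # -> 16.0, i.e. g times fewer parameters
```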
**Multiplexer.** To facilitate the information exchange between fibers, $1\times 1\times 1$ convolutions, dubbed the multiplexer, are utilized for information routing among different fibers [@chen2018multi]. It is comprised of two $1\times 1\times 1$ convolution layers, as illustrated in . The number of input channels $c_{in}$ is first squeezed to $c_{in}/2$ and then inflated back to $c_{in}$. Employing two $1\times 1\times 1$ convolutions ($params=c_{in}\times c_{in}/2+c_{in}/2\times c_{in}=c_{in}^2/2$) halves the number of parameters compared to using a single $1\times 1\times 1$ convolution ($params=c_{in}^2$).
**Residual Shortcuts.** The first residual connection is placed outside the multiplexer and the second residual connection is placed outside the entire DMF unit, both of which allow information to pass directly from lower to higher levels, enhancing the learning capability. Moreover, the residual connections do not introduce extra network parameters.
**Dilated Fiber.** To enlarge the receptive field and capture the multi-scale 3D spatial correlations of brain tumor lesions, dilated convolutions [@dilated] are employed. As shown in , the dilated fiber is comprised of three 3D dilated convolution branches with dilation rates of $d=1$, 2 and 3, respectively. We assign learnable weights $\omega_1$, $\omega_2$ and $\omega_3$ to the dilated branches and then sum them up. This weighted-sum strategy is conducive to automatically selecting the most valuable information from different fields of view. The weight coefficients are one-initialized, which means the branches contribute equally at the beginning of the training process.
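A minimal PyTorch sketch of this weighted dilated fiber is given below. It keeps only the three parallel dilated group convolutions and the learnable one-initialised weights; the channel widths, normalisation layers and the surrounding multiplexer of the full DMF unit are omitted.

```python
import torch
import torch.nn as nn

class WeightedDilatedFiber(nn.Module):
    """Three parallel 3x3x3 group convolutions with dilation rates 1, 2, 3,
    combined by a learnable weighted sum (weights initialised to one)."""

    def __init__(self, channels, groups=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(channels, channels, kernel_size=3, padding=d,
                      dilation=d, groups=groups, bias=False)
            for d in (1, 2, 3)
        ])
        self.weights = nn.Parameter(torch.ones(3))   # omega_1, omega_2, omega_3

    def forward(self, x):
        return sum(w * branch(x)
                   for w, branch in zip(self.weights, self.branches))

x = torch.randn(1, 32, 16, 16, 16)                   # (N, C, D, H, W)
y = WeightedDilatedFiber(channels=32, groups=16)(x)  # same spatial size as x
```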
Dilated Multi-Fiber Network (DMFNet) Architecture
-------------------------------------------------
Using the MF and DMF units as building blocks, the overall encoder-decoder architecture of DMFNet is shown in . The 4-channel input corresponds to the 4-modal MRI data. The main body of the network is composed of MF/DMF units, excluding the first and last convolution layers. In the feature encoding stage, we apply the DMF unit in the first six encoding units to achieve a multi-scale representation, benefiting from the various receptive field sizes of the dilated convolutions. In the decoding stage, the high-resolution features from the encoder are concatenated with the upsampled features, similar to the U-Net. We adopt trilinear interpolation for upsampling the feature maps. Also, batch normalization and the ReLU function are performed before each convolution operation of the MF/DMF units.
![The proposed dilated multi-fiber network for 3D MRI brain tumor segmentation, where $g$ is referred to the number of groups, e.g. $g=16$ used in this work.[]{data-label="Architecture"}](Architecture1.pdf){width="\textwidth"}
There are several empirical guidelines for constructing a lightweight network. First, use 2-stride convolution layers for downsampling and avoid pooling layers to save FLOPs. Assuming $F=F_D\times F_H\times F_W$ is the size of the input feature map of the downsampling layer, the output size is $\frac{1}{8}F$. For the scheme of a 1-stride convolution layer followed by a pooling layer, the number of multiply-add operations is $FLOPs=K_D\times K_H\times K_W\times C_{in}\times C_{out}\times\frac{1}{g}\times F + F$. For the scheme of a 2-stride convolution layer, it is $FLOPs=K_D\times K_H\times K_W\times C_{in}\times C_{out}\times\frac{1}{g}\times \frac{1}{8}F$. Applying a 2-stride convolution for downsampling thus requires roughly $\frac{1}{8}$ of the computation while keeping the same number of parameters.
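This factor-of-eight saving is easy to verify numerically; the sketch below uses illustrative channel counts and feature-map sizes rather than the actual DMFNet configuration.

```python
def conv3d_flops(c_in, c_out, out_voxels, k=3, groups=1):
    """Multiply-add count of a 3D convolution producing `out_voxels` outputs."""
    return k ** 3 * c_in * c_out // groups * out_voxels

F = 32 ** 3                       # voxels of the input feature map
c_in, c_out, g = 64, 64, 16       # illustrative values

# 1-stride convolution followed by pooling: the convolution produces F voxels
flops_conv_pool = conv3d_flops(c_in, c_out, F, groups=g) + F
# 2-stride convolution: only F/8 output voxels are computed
flops_strided = conv3d_flops(c_in, c_out, F // 8, groups=g)

print(flops_conv_pool / flops_strided)   # ~8, with the same number of weights
```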
Secondly, keep the network thin at both ends. Expensive memory requirements can be avoided by lightening the convolution operations on high-resolution feature maps near the input and output, for example by reducing the number of channels or using fewer layers at these resolutions.
Thirdly, place the upsampling operation after the convolution when decoding the concatenated features, which helps save both parameters and FLOPs.
Experiments and Results
=======================
Data and evaluation metric
--------------------------
The 3D MRI data, provided by the Brain Tumor Segmentation (BraTS) 2018 challenge [@menze2015multimodal; @bakas2017advancing], consist of four kinds of MR sequences, namely native T1-weighted (T1), post-contrast T1-weighted (T1ce), T2-weighted (T2) and Fluid Attenuated Inversion Recovery (FLAIR). Each sequence has a volume of $240\times240\times155$ voxels. The labels for tumor segmentation include the background (label 0), necrotic and non-enhancing tumor (label 1), peritumoral edema (label 2) and GD-enhancing tumor (label 4). The dataset consists of 285 patient cases for training and 66 cases for validation. Although the testing set is not available currently, the performance on the validation set, assessed by the online evaluation server, is used to validate the effectiveness of the proposed method.
Formally, the effectiveness is evaluated by the computational complexity and the segmentation accuracy. The complexity is determined by the number of network parameters and FLOPs (i.e. multiply-add operations) [@shufflenet]. The segmentation accuracy is measured by dice score metrics, including [Dice\_ET]{} – the dice score of the enhancing tumor region (i.e. label 4), [Dice\_WT]{} – the dice score of the whole tumor region (i.e. labels 1, 2 and 4), and [Dice\_TC]{} – the dice score of the tumor core region (i.e. labels 1 and 4).
Implementation details
----------------------
In our experiments, we use a batch size of 12 and train the DMFNet model on 4 parallel Nvidia GeForce 1080Ti GPUs for 500 epochs. We adopt the Adam optimizer with an initial learning rate ${\alpha}_0=0.001$. To increase the training data, we use the following data augmentation techniques: (1) random cropping of the MRI data from $240\times 240\times 155$ voxels to $128\times 128\times 128$ voxels; (2) random mirror flipping across the axial, coronal and sagittal planes with a probability of 0.5; (3) random rotation with an angle between $[-10^{\circ},+10^{\circ}]$; (4) random intensity shift between $[-0.1,0.1]$ and scale between $[0.9,1.1]$. The generalized dice loss (GDL) is employed to train the network, in order to address the class-imbalance issue:
$$GDL = 1-2\times \frac{\sum\limits_{c\in C} w_c \sum\limits_{n\in N} r_{cn}p_{cn}}{\sum\limits_{c\in C}w_c \sum\limits_{n\in N} \left(r_{cn}+p_{cn}\right)}$$
where $C$ is the number of classes, $N$ is the number of voxels counted per batch, $r_{cn}$ and $p_{cn}$ are respectively the ground-truth label and the predicted probability of voxel $n$ for class $c$, and the weight of each class is $w_c=\frac{1}{{(\sum\limits_{n\in N}r_{cn})}^2}$. Moreover, we apply the L2 norm for model regularization with a weight decay rate of $10^{-5}$.
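A possible PyTorch implementation of this loss is sketched below; the small `eps` term added for numerical stability is ours and not part of the formula above.

```python
import torch
import torch.nn.functional as F

def generalized_dice_loss(probs, target_onehot, eps=1e-5):
    """probs and target_onehot have shape (N, C, D, H, W): softmax outputs
    and one-hot ground truth.  Sums run over the batch and all voxels."""
    dims = (0, 2, 3, 4)
    r_sum = target_onehot.sum(dim=dims)              # sum_n r_cn per class
    w = 1.0 / (r_sum ** 2 + eps)                     # w_c = 1 / (sum_n r_cn)^2
    intersect = (probs * target_onehot).sum(dim=dims)
    union = (probs + target_onehot).sum(dim=dims)
    return 1.0 - 2.0 * (w * intersect).sum() / ((w * union).sum() + eps)

# Example with random tensors (4 classes: background + the 3 tumor labels)
logits = torch.randn(2, 4, 8, 8, 8)
probs = torch.softmax(logits, dim=1)
labels = torch.randint(0, 4, (2, 8, 8, 8))
target = F.one_hot(labels, num_classes=4).permute(0, 4, 1, 2, 3).float()
loss = generalized_dice_loss(probs, target)
```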
Experimental results and analysis
---------------------------------
We first conduct the five-fold cross validation experiment on the BraTS 2018 training set, and report the results in . Our method yields better dice scores of the enhancing tumor region and whole tumor region as compared with S3D-UNet [@chen2018s3d].
Model Dice\_ET$\pm\sigma$ (%) Dice\_WT$\pm\sigma$ (%) Dice\_TC$\pm\sigma$ (%)
------------------------- ------------------------- ------------------------- -------------------------
DMFNet 75.06$\pm$3.49 89.02$\pm$1.24 81.96$\pm$2.53
S3D-UNet [@chen2018s3d] 73.95 88.81 84.42
: Experimental results (average $\pm$ standard deviation) of the five-fold cross validation on the BraTS 2018 training set.[]{data-label="table1"}
Model ORE TTA Dice\_ET$\pm\sigma$ (%) Dice\_WT$\pm\sigma$ (%) Dice\_TC$\pm\sigma$ (%)
-------- ---------- ---------- ------------------------- ------------------------- -------------------------
DMFNet $\times$ $\times$ 71.55$\pm$2.60 88.83$\pm$1.29 81.77$\pm$2.18
DMFNet $\surd$ $\times$ 75.59$\pm$3.83 88.83$\pm$1.29 81.77$\pm$2.18
DMFNet $\surd$ $\surd$ 75.06$\pm$3.49 89.02$\pm$1.24 81.96$\pm$2.53
: Experimental result of the five-fold cross validation of the training set.[]{data-label="table1"}
**Comparison with state-of-the-art.** We conduct experiments on the BraTS 2018 validation set and compare our method with the state-of-the-art approaches. The performance comparison is presented in . Our proposed DMFNet achieves scores of 80.12%, 90.62% and 84.54% on Dice\_ET, Dice\_WT and Dice\_TC, respectively. Compared to the best scores achieved by NVDLMED [@myronenko20183d] (single model), it can be seen that our model only has marginal performance gaps of $0.06\%$ for the whole tumor, $1.61\%$ for the enhancing tumor and $1.48\%$ for the tumor core respectively. However, our DMFNet has $10\times$ less parameters and $55\times$ less FLOPs. Therefore, our method is a much more efficient algorithm yet can achieve comparable segmentation accuracy. We also show a visual comparison of the brain tumor segmentation results of various methods including 3D\_UNet [@cciccek20163d], Kao *et al.* [@kao2018brain] and our DMFNet in . It is obvious that DMFNet is able to generate better segmentation (especially at the class boundaries) due to the multi-scale representation of dilated convolutions.
Model Params(M) FLOPs(G) Dice\_ET(%) Dice\_WT(%) Dice\_TC(%)
--------------------------------- ----------- ----------- ------------- ------------- -------------
0.75$\times$ MFNet (**ours**) **1.81** **13.36** 79.34 90.22 84.25
MFNet (**ours**) [3.19]{} [20.61]{} [79.91]{} [90.43]{} [84.61]{}
[DMFNet (**ours**)]{} [3.88]{} [27.04]{} [80.12]{} [90.62]{} [84.54]{}
3D U-Net [@cciccek20163d] 16.21 1669.53 75.96 88.53 71.77
S3D-UNet [@chen2018s3d] 3.32 75.20 74.93 89.35 83.09
3D-ESPNet [@nuechterlein20183d] 3.63 76.51 73.70 88.30 81.40
Kao et al. [@kao2018brain] 9.45 203.96 78.75 90.47 81.35
No New-Net [@isensee2018no] 10.36 202.25 81.01 **90.83** 85.44
NVDLMED [@myronenko20183d] 40.06 1495.53 **81.73** 90.68 **86.02**
: Performance comparison on the BraTS 2018 validation set.[]{data-label="table2"}
![The visual comparison of MRI brain tumor segmentation results. GT indicates the ground-truth. The regions in red represent the necrotic and non-enhancing tumor, the regions in green represent the peritumoral edema and the regions in blue represent the GD-enhancing tumor.[]{data-label="VisFig"}](sample.png){width="10cm"}
**Model efficiency.** It is also evident from that our DMFNet significantly outperforms the methods which have similar or close model complexity (\# of parameters and FLOPs), i.e. S3D-UNet and 3D-ESPNet. Without using the dilated convolution, the 3D MFNet further reduces the model complexity. Moreover, we devise a remarkably lightweight and efficient network (denoted by 0.75$\times$ MFNet in ) by reducing the number of channels in MFNet (see ) to 75%. Therefore, it has only 1.81M parameters and 13.36G FLOPs. Nevertheless, its dice scores still reveal the network has strong learning capability for 3D brain tumor segmentation. In addition, DMFNet obtains an average inference time of 0.019s on one GPU (Nvidia 1080Ti) or 20.6s on one CPU (E5-2690 v3 @ 2.60GHz) for a single 3D MR image segmentation.
**Ablation study.** The performance comparison between MFNet and DMFNet () demonstrates that the dilated convolution is able to boost the Dice scores. Since an adaptive weighting strategy is used for the convolutions with different dilation rates (Fig. \[MFunit\] (d)), its efficacy is justified in by comparing it with the equal-weight scheme ($\omega_1=\omega_2=\omega_3=1$). Due to its ability to learn and select the multi-scale context information adaptively, such a weighting strategy results in more favorable scores, in particular for Dice\_[ET]{}.
Model Weighting scheme Dice\_ET(%) Dice\_WT(%) Dice\_TC(%)
-------- -------------------------------------------- ------------- ------------- ------------- -- --
DMFNet [$\omega_1=\omega_2=\omega_3=1$]{} 78.969 90.539 84.207
DMFNet Learnable $\omega_1$,$\omega_2$,$\omega_3$ 80.12 90.62 84.54
: The effect of the weighting strategy of the dilated sub-fibers.[]{data-label="table4"}
![The evolution of the weights $\omega_1$, $\omega_2$ and $\omega_3$ during training. DMF unit 1 denotes the $1^{st}$ DMF unit, and similarly for DMF units 2$\sim$6, i.e. the green blocks in .[]{data-label="figure3"}](subfiber.pdf){width="\textwidth"}
The weights $\omega_1$, $\omega_2$ and $\omega_3$ during training are plotted in . We notice that $\omega_1$ (green line, corresponding to the small receptive field) plays a major role in the first unit, and its effect decreases in the higher layers. Meanwhile, we observe that the network favors the dilated branch weighted by $\omega_3$ (red line, corresponding to the large receptive field), which has the leading influence in DMF units 2$\sim$6. This may be because a kernel with a small receptive field is not able to capture useful semantic information in the higher layers, whose feature maps have small spatial dimensions.
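To make the weighting scheme concrete, the sketch below implements three parallel dilated 3D group convolutions whose outputs are combined through the learnable scalars $\omega_1$, $\omega_2$, $\omega_3$. This is only an illustration of the idea: the channel width, the number of groups, and the surrounding multi-fiber structure are placeholder choices, not the exact DMF unit used in our network.

```python
import torch
import torch.nn as nn

class WeightedDilatedConv3d(nn.Module):
    """Parallel 3x3x3 group convolutions with dilation rates 1, 2, 3,
    combined through learnable scalar weights (illustrative sketch)."""

    def __init__(self, channels, groups=4, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, groups=groups, bias=False)
            for d in dilations
        ])
        # one learnable weight per dilated sub-fiber (omega_1, omega_2, omega_3)
        self.omega = nn.Parameter(torch.ones(len(dilations)))

    def forward(self, x):
        return sum(w * branch(x) for w, branch in zip(self.omega, self.branches))

x = torch.randn(1, 16, 32, 32, 32)            # (batch, channels, D, H, W)
print(WeightedDilatedConv3d(16)(x).shape)     # torch.Size([1, 16, 32, 32, 32])
```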
Conclusion
==========
In this work, we have developed a lightweight and efficient Dilated Multi-Fiber network, with only 3.88M parameters and around 27G FLOPs, that can achieve real-time inference for 3D brain tumor segmentation in MRI. To significantly reduce the heavy computational burden of 3D convolution, we explored multi-fiber units in the spirit of group convolution. Meanwhile, we introduced a learnable weighted 3D dilated convolution to obtain multi-scale image representations, which is able to enhance the segmentation accuracy. The experimental results on the 2018 BraTS challenge show that our approach achieves comparable Dice scores (80.12%, 90.62% and 84.54% for ET, WT and TC, respectively) yet with 10$\times$ fewer model parameters and 50$\times$ fewer computational FLOPs, compared with the state-of-the-art algorithm NVDLMED [@myronenko20183d]. This makes our method more practical for handling large-scale 3D medical datasets.
[^1]: Chen Chen and Xiaopeng Liu contributed equally.
---
abstract: 'The next-to-minimal supersymmetric standard model (NMSSM) with non-universal Higgs masses, or the semi-constrained NMSSM (scNMSSM), extends the minimal supersymmetric standard model (MSSM) by a singlet superfield and assumes universal conditions at the GUT scale except for the Higgs sector. It not only keeps the simplicity and elegance of the fully constrained MSSM and NMSSM and relaxes the tension that they face after the discovery of the 125-GeV Higgs boson, but also predicts an exotic phenomenon: Higgs decay to a pair of light singlet-dominated scalars ($10\!\sim\! 60\;{\rm GeV}$). This situation can be classified into three scenarios according to the identities of the SM-like Higgs and the light scalar: (i) the light scalar is CP-odd, and the SM-like Higgs is $h_2$; (ii) the light scalar is CP-odd, and the SM-like Higgs is $h_1$; (iii) the light scalar is CP-even, and the SM-like Higgs is $h_2$. In this work, we compare the three scenarios, checking the interesting parameter schemes that lead to them, the mixing levels of the doublets and singlets, the tri-scalar coupling between the SM-like Higgs and a pair of light scalars, the branching ratio of Higgs decay to the light scalars, and the sensitivities in hunting for the exotic decay at the HL-LHC and future lepton colliders such as the CEPC, FCC-ee, and ILC.'
author:
- Shiquan Ma
- Kun Wang
- Jingya Zhu
bibliography:
- 'ref.bib'
title: 'Higgs decay to light scalars in the semi-constrained NMSSM'
---
Introduction {#sec:intro}
============
In 2012 a new boson of about $125\GeV$ was discovered at the LHC [@Aad:2012tfa; @Chatrchyan:2012xdj], and in later years it was verified as the SM-like Higgs boson with more and more data [@Khachatryan:2016vau; @Sirunyan:2018koj; @Aad:2019mbh; @CMS-PAS-HIG-19-005; @Sopczak:2020vrs]. But some other questions still exist, e.g., whether another scalar survives in the low mass region, and whether there is an exotic Higgs decay to light scalars. Before the LHC, due to its low integrated luminosity (IL), the LEP did not exclude a light scalar with a production rate smaller than that of the SM-like Higgs [@Barate:2003sz]. The CMS (ATLAS) collaboration searched directly for resonances in the $bj\mu\mu$ channel in the $10\!\sim\!60$ ($20\!\sim\!70$) GeV region [@Sirunyan:2018wim; @ATLAS-CONF-2019-036]. The two collaborations also searched for the exotic Higgs decay to light resonances in final states with $b\bar{b}\tau^+\tau^-$ [@Sirunyan:2018pzn], $b\bar{b}\mu^+\mu^-$ [@Aaboud:2018esj; @Sirunyan:2018mot], $\mu^+\mu^-\tau^+\tau^-$ [@Sirunyan:2020eum; @Sirunyan:2018mbx; @Sirunyan:2019gou], $4\tau$ [@Khachatryan:2017mnf; @Sirunyan:2019gou], $4\mu$ [@CMS-PAS-HIG-16-035; @CMS-PAS-HIG-18-003; @Aaboud:2018fvk], $4b$ [@Aaboud:2018iil], $\gamma\gamma gg$ [@Aaboud:2018gmx], $4\gamma$ [@ATLAS-CONF-2012-079]. But there is still ample room left for the exotic decay. For example, in the $b\bar{b}\tau^+\tau^-$ channel reported by the CMS collaboration [@Sirunyan:2018pzn], the $95\%$ exclusion limit is at least $3\%$ in the $20\sim60\GeV$ region. According to simulations, however, the future limits can reach $0.3\%$ at the High-Luminosity program of the Large Hadron Collider (HL-LHC) [@CMS-PAS-FTR-18-035], $0.04\%$ at the Circular Electron Positron Collider (CEPC), and $0.02\%$ at the Future Circular Collider in $e^+e^-$ collisions (FCC-ee) [@An:2018dwb; @Liu:2016zki].
This exotic Higgs decay to light scalars can be motivated in many theories beyond the Standard Model (BSM) [@Curtin:2013fra], e.g., the next-to minimal supersymmetric standard model (NMSSM), the simplest little Higgs model, the minimal dilaton model, the two-Higgs-doublet model, the next-to two-Higgs-doublet model, the singlet extension of the SM, etc. Several phenomenological studies on the exotic decay exist in these models [@Dermisek:2005gg; @Dermisek:2006wr; @Dermisek:2006py; @Carena:2007jk; @Cheung:2007sva; @Cao:2013gba; @Cheung:2007sva; @Han:2013ic; @Cao:2013cfa; @LiuLiJia:2019kye; @Han:2018bni; @Chun:2017yob; @Bernon:2014nxa; @Engeln:2018mbg; @Haisch:2018kqx; @Liu:2016ahc].
The NMSSM extends the MSSM by a singlet superfield $\hat{S}$, solving the $\mu$-problem of the latter, and relaxes its fine-tuning tension after the Higgs discovery in 2012 [@King:2012tr; @Benbrik:2012rm; @Cao:2012fz; @Cao:2012yn; @Kang:2012sy; @King:2012is; @Ellwanger:2011aa]. However, as supersymmetric (SUSY) models, the MSSM and NMSSM both suffer from a huge parameter space of over 100 dimensions. In most studies, some parameters are manually assumed equal at the low-energy scale, leaving only about 10 free ones, without considering the Renormalization Group Equations (RGEs) running from high scales [@King:2012tr; @Benbrik:2012rm; @Cao:2012fz; @Cao:2012yn; @Kang:2012sy; @King:2012is; @Ellwanger:2011aa]. In Ref.[@Cao:2013gba] the decay of a $125\GeV$ Higgs boson to light scalars was studied in the NMSSM with parameters set in this way. In constrained models, by contrast, congeneric parameters are assumed universal at the Grand Unified Theory (GUT) scale, leaving only four free parameters in the fully-constrained MSSM (CMSSM) and four or five in the fully-constrained NMSSM (CNMSSM) [@Ellwanger:2010es; @LopezFogliani:2009np; @Belanger:2008nt; @Djouadi:2008uj; @Ellwanger:2008ya; @Hugonie:2007vd; @Kowalska:2012gs; @Gunion:2012zd]. However, it was found that the CMSSM and CNMSSM are nearly excluded when considering the $125\GeV$ Higgs data, the high mass bounds on the gluino and the first-two-generation squarks, muon g-2, the dark matter relic density and detections [@Gunion:2012zd; @Kowalska:2012gs; @Cao:2011sn; @Ellis:2012aa; @Bechtle:2015nua; @Athron:2017qdc; @Wang:2018vrr].
The semi-constrained NMSSM (scNMSSM) relaxes the unified conditions of the Higgs sector at the GUT scale, thus it is also called the NMSSM with non-universal Higgs masses (NUHM) [@Das:2013ta; @Ellwanger:2014dfa; @Wang:2018vxp; @Nakamura:2015sya]. It not only keeps the simplicity and elegance of the CMSSM and CNMSSM, but also relaxes the tension they face after the SM-like Higgs discovery [@Wang:2020dtb], and predicts interesting light particles such as a singlino-like neutralino [@Wang:2020tap] and light Higgsino-dominated NLSPs [@Wang:2019biy; @Ellwanger:2018zxt; @Ellwanger:2016sur], etc. In this work, we study the scenarios in the scNMSSM with a light scalar of $10\sim60\GeV$, and the prospects for detecting the exotic Higgs decay to a pair of such scalars.
The outline of this paper is as follows. In , we briefly introduce the model and give some related analytic formulas. In  we present the numerical calculations and discussions in detail. Finally, we draw our conclusions in .
The model and analytic calculations {#sec:ana}
===================================
The superpotential of NMSSM, with $\mathbb{Z}_3$ symmetry, is written as [@Maniatis:2009re] $$\label{F-term}
W=W_{\rm Yuk}+\lambda \hat{S} \hat{H}_{u}\cdot\hat{H}_{d}+\frac{1}{3}\kappa \hat{S}^3\,,$$ from which the so-called F-terms of the Higgs potential can be derived as $$V_{\rm F}=|\lambda S|^2(|H_u|^2+|H_d|^2)+|\lambda H_u\cdot H_d+\kappa S^2|^2 \,.$$ The D-terms is the same as in the MSSM $$\label{D-term}
V_{\rm D} =\frac{1}{8}\left(g_1^2+g_2^2\right)\left(|H_d|^2-|H_u|^2\right)^2 +\frac{1}{2}g_2^2\left|H^{\dagger}_u H_d\right|^2 \,,$$ where $g_1$ and $g_2$ are the gauge couplings of $U(1)_Y$ and $SU(2)_L$ respectively. Without considering the SUSY-breaking mechanism, at a low-energy scale the soft-breaking terms can be imposed manually to the Lagrangian. In the Higgs sector these terms corresponding to the superpotential are $$\begin{aligned}
\label{soft-term}
V_{\rm soft}&=&M^2_{H_u}|H_u|^2+M^2_{H_d}|H_d|^2+M^2_S|S|^2 \nonumber\\
&&+\left(\lambda A_{\lambda}SH_u\cdot H_d+\frac{1}{3}\kappa A_{\kappa}S^3+h.c.\right) \,,\end{aligned}$$ where $M^2_{H_u},\, M^2_{H_d},\, M^2_{S}$ are the soft masses of the Higgs fields $H_u,\, H_d,\,S$, and $A_\lambda,\, A_\kappa$ are the trilinear couplings at the $M_{\rm SUSY}$ scale, respectively. However, in the scNMSSM the SUSY breaking is mediated by gravity, thus the soft parameters at the $M_{\rm SUSY}$ scale run naturally from the GUT scale according to the RGEs.
At electroweak symmetry breaking, $H_u$, $H_d$ and $S$ get their vacuum expectation values (VEVs) $v_u$ , $v_d$ and $v_s$ respectively, with $\tan\beta\equiv v_u/v_d$, $\sqrt{v_u^2+v_d^2}\approx173\GeV$, and $\mu_{\rm eff}\equiv \lambda v_s$. Then they can be written as $$\begin{aligned}
&&H_u=\left(
\begin{array}{c}
H_u^+ \\
v_u+\frac{\phi_1+i\varphi_1}{\sqrt{2}} \\
\end{array}
\right), \quad
H_d=\left(
\begin{array}{c}
v_d+\frac{\phi_2+i\varphi_2}{\sqrt{2}} \\
H_d^- \\
\end{array}
\right), \quad \nonumber \\
&&~~ S=v_s+\frac{\phi_3+i\varphi_3}{\sqrt{2}}.\end{aligned}$$ The Lagrangian consists of the F-terms, D-terms, and soft-breaking terms, so with the above equations one can get the tree-level mass-squared matrices of the CP-even Higgses in the basis $\{\phi_1, \phi_2, \phi_3\}$ and of the CP-odd Higgses in the basis $\{\varphi_1, \varphi_2, \varphi_3\}$ [@Maniatis:2009re]. After diagonalizing the mass-squared matrices including loop corrections [@Carena:2015moc], one can get the mass-eigenstate Higgses (three CP-even ones $h_{1,2,3}$ and two CP-odd ones $a_{1,2}$, in mass order) from the gauge-eigenstate ones ($\phi_{1,2,3}, \varphi_{1,2,3}$): $$\begin{aligned}
&& \quad h_i=S_{ik}\, \phi_k, \quad a_j=P_{jk}\, \varphi_k \,,\end{aligned}$$ where $S_{ik}, P_{jk}$ are the corresponding components of $\phi_k$ in $h_i$ and $\varphi_k$ in $a_j$ respectively, with $i,k=1,2,3$ and $j=1,2$.
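As a concrete illustration of this diagonalization step, the mixing matrix $S_{ik}$ is simply the matrix of eigenvectors of the (loop-corrected) CP-even mass-squared matrix. In the sketch below the matrix entries are placeholder numbers chosen only to mimic a spectrum with a light singlet-dominated scalar, an SM-like state near 125 GeV, and a heavy doublet state; they do not correspond to an actual scNMSSM parameter point.

```python
import numpy as np

# illustrative CP-even mass-squared matrix in the basis {phi_1, phi_2, phi_3}, in GeV^2;
# the entries are placeholders, not an actual scNMSSM point
M2 = np.array([[1.60e4, 1.0e3, 5.0e2],
               [1.00e3, 9.0e6, 2.0e3],
               [5.00e2, 2.0e3, 2.5e3]])

vals, vecs = np.linalg.eigh(M2)   # eigenvalues in ascending order
masses = np.sqrt(vals)            # m_{h_1} <= m_{h_2} <= m_{h_3}
S = vecs.T                        # rows give S_{ik}, i.e. h_i = S_{ik} phi_k

print(masses)   # roughly [50, 127, 3000]: light singlet, SM-like, heavy doublet
print(S)        # e.g. |S_{13}| ~ 1 (singlet-dominated h_1), |S_{21}| ~ 1 (up-type h_2)
```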
In the scNMSSM, the SM-like Higgs (hereafter denoted as $h$ uniformly) can be CP-even $h_1$ or $h_2$, and the light scalar (hereafter denoted as $s$ uniformly) can be CP-odd $a_1$ or CP-even $h_1$. Then the couplings between the SM-like Higgs and a pair of light scalars $C_{hss}$ can be written at tree level as [@Ellwanger:2004xm] $$\begin{aligned}
C_{h_2h_1h_1}^{\rm tree}
&\!=\!&\frac{\lambda^2}{\sqrt{2}}
\big[v_u(\Pi^{122}_{211}+\Pi^{133}_{211})
\\
&&+v_d(\Pi^{211}_{211}+\Pi^{233}_{211})
+v_s(\Pi^{311}_{211}+\Pi^{322}_{211})
\big]
\nonumber\\
&&-\frac{\lambda\kappa}{\sqrt{2}}
\bigl(v_u\Pi^{323}_{211}+v_d\Pi^{313}_{211}+2v_s\Pi^{123}_{211}\bigr)
\nonumber\\
&&+\sqrt{2}\kappa^2v_s \Pi^{333}_{211}-\frac{\lambda A_{\lambda}}{\sqrt{2}}\Pi^{123}_{211}+\frac{\kappa A_{\kappa}}{3\sqrt{2}}\Pi^{333}_{211}
\nonumber\\
&&+\frac{g^2}{2\sqrt{2}}
\left[v_u (\Pi^{111}_{211}-\Pi^{122}_{211})-v_d (\Pi^{211}_{211}-\Pi^{222}_{211})
\right] \,, \nonumber\end{aligned}$$ where $$\Pi^{ijk}_{211}=2S_{2i}S_{1j}S_{1k}+2S_{1i}S_{2j}S_{1k}+2S_{1i}S_{1j}S_{2k} \,;$$ or $$\begin{aligned}
C_{h_a a_1a_1}^{\rm tree}
&=&\frac{\lambda^2}{\sqrt{2}}
\big[v_u (\Pi^{122}_{a11}+\Pi^{133}_{a11})
\\
&&+v_d(\Pi^{211}_{a11}+\Pi^{233}_{a11})
+v_s(\Pi^{311}_{a11}+\Pi^{322}_{a11})
\big]
\nonumber\\
&&+\frac{\lambda\kappa}{\sqrt{2}}
\big[v_u(\Pi^{233}_{a11}-2\Pi^{323}_{a11})+v_d (\Pi^{133}_{a11}-2\Pi^{313}_{a11})
\nonumber\\
&&+2v_s (\Pi^{312}_{a11}-\Pi^{123}_{a11}-\Pi^{213}_{a11})
\big]
+\sqrt{2}\kappa^2 v_s\Pi^{333}_{a11}
\nonumber\\
&&+\frac{\lambda A_{\lambda}}{\sqrt{2}}(\Pi^{123}_{a11}+\Pi^{213}_{a11}+\Pi^{312}_{a11})-\frac{\kappa A_{\kappa}}{3\sqrt{2}}\Pi^{333}_{a11}
\nonumber\\
&&+\frac{g^2}{2\sqrt{2}}
\left[v_u(\Pi^{111}_{a11}-\Pi^{122}_{a11})-v_d (\Pi^{211}_{a11}-\Pi^{222}_{a11})
\right] \,, \nonumber\end{aligned}$$ where $\Pi^{ijk}_{a11}=2S_{ai}P_{1j}P_{1k}$ and $a=1,2$. Thus the width of Higgs decay to a pair of light scalars can be given by $$\Gamma(h\to s s)=\frac{1}{32\pi m_{h}}C^2_{hss}\left({1-\frac{4m^2_{s}}{m^2_h}}\right)^{1/2} \,.
\label{eq3}$$
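For orientation, Eq. (\[eq3\]) can be evaluated directly; in the sketch below the coupling value is an arbitrary placeholder of the few-GeV size found for the surviving samples later on, and the comparison with the total width of a 125 GeV SM Higgs (about 4 MeV) is only meant to indicate the order of magnitude involved.

```python
import math

def gamma_h_to_ss(c_hss, m_h=125.0, m_s=30.0):
    """Tree-level width of h -> s s (all quantities in GeV)."""
    if 2 * m_s >= m_h:
        return 0.0
    return c_hss**2 / (32 * math.pi * m_h) * math.sqrt(1 - 4 * m_s**2 / m_h**2)

# |C_hss| = 5 GeV and m_s = 30 GeV give a partial width of about 1.7 MeV,
# comparable to the ~4 MeV total width of a 125 GeV SM Higgs
print(gamma_h_to_ss(5.0))
```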
The light scalars then decay to light SM particles, such as a pair of light quarks or leptons, or to gluons or photons through loops. The widths of the light scalar decays to quarks and charged leptons at tree level are given by $$\begin{aligned}
&&\Gamma(s\to l^+l^-) = \frac{\sqrt{2}G_F}{8\pi}m_s m^2_l \left({1-\frac{4m^2_l}{m^2_s}}\right)^{p/2} \,,
\label{eq1} \\
&&\Gamma(s\to q \bar{q}) = \frac{N_c G_F}{4\sqrt{2}\pi}C^2_{s q q}m_s m^2_q \left({1-\frac{4m^2_q}{m^2_s}}\right)^{p/2} \,,
\label{eq2}\end{aligned}$$ where $p=1$ for CP-odd $s$, and $p=3$ for CP-even $s$. The couplings between the light scalar and up-type or down-type quarks are given by $$\begin{aligned}
C_{h_1t_L t^c_R}&=&\frac{m_t}{\sqrt{2}v \sin\beta}S_{11} \,,
\\
C_{h_1b_L b^c_R}&=&\frac{m_b}{\sqrt{2}v \cos\beta}S_{12} \,,
\\
C_{a_1t_L t^c_R}&=&i\frac{m_t}{\sqrt{2}v \sin\beta}P_{11} \,,
\\
C_{a_1b_L b^c_R}&=&i\frac{m_b}{\sqrt{2}v \cos\beta}P_{12} \,.\end{aligned}$$
Numerical calculations and discussions {#sec:num}
======================================
In this work, we first scan the following parameter space with <span style="font-variant:small-caps;">NMSSMTools-5.5.2</span> [@Ellwanger:2004xm; @Ellwanger:2005dv], $$\begin{aligned}
&& 0\!<\!\lambda\!<\!0.7, \qquad 0\!<\!\kappa\!<\!0.7, \qquad 1\!<\!\tan\!\beta\!<\!30,
\nonumber \\
&& 100\!<\!\mu_{\rm eff}\!<\!200\GeV, \qquad 0\!<\!M_0\!<\!500\GeV,
\\
&& 0.5\!<\!M_{1/2}\!<\!2\TeV, \qquad |A_0|,\, |A_{\lambda}|,\, |A_{\kappa}|\!<\!10\TeV \,.
\qquad \nonumber\end{aligned}$$
The constraints imposed in our scan include: (i) An SM-like Higgs of $123\!\!\sim\!\!127\GeV$, with signal strengths and couplings satisfying the current Higgs data [@Khachatryan:2016vau; @Sirunyan:2018koj; @Aad:2019mbh; @CMS-PAS-HIG-19-005; @Sopczak:2020vrs]. (ii) Search results for exotic and invisible decays of the SM-like Higgs, and for Higgs-like resonances in other mass regions, with <span style="font-variant:small-caps;">HiggsBounds-5.7.1</span> [@Bechtle:2008jh; @Bechtle:2011sb; @Bechtle:2013wla]. (iii) The muon g-2 constraint, as in Ref.[@Wang:2020tap]. (iv) Mass bounds of over $2\TeV$ on the gluino and the first-two-generation squarks, and search results for electroweakinos in multilepton channels [@Sirunyan:2018ubx]. (v) The dark matter relic density $\Omega h^2$ below $0.131$ [@Tanabashi:2018oca], and the dark matter-nucleon scattering cross section below the upper limits from direct searches [@Aprile:2018dbl; @Aprile:2019dbj]. (vi) The theoretical constraints of vacuum stability and Landau pole.
After imposing these constraints, the surviving samples can be categorized into three scenarios:
- Scenario I: $h_2$ is the SM-like Higgs, and the light scalar $a_1$ is CP-odd;
- Scenario II: $h_1$ is the SM-like Higgs, and the light scalar $a_1$ is CP-odd;
- Scenario III: $h_2$ is the SM-like Higgs, and the light scalar $h_1$ is CP-even.
In Tab. \[tab1\], we list the ranges of parameters and light particle masses in the three scenarios. From the table, one can see that the parameter ranges are nearly the same except for $\lambda$, $\kappa$, and $A_\kappa$, but the mass spectra of the light particles are totally different.
Scenario I Scenario II Scenario III
----------------------------------------- ---------------- ---------------- ----------------
$\lambda$ $0\sim0.58$ $0\sim 0.24$ $0\sim 0.57$
$\kappa$ $0\sim0.21$ $0\sim0.67$ $0\sim0.36$
$\tan\beta$ $14\sim27$ $10\sim28$ $13\sim28$
$\mu_{\rm eff}\;[\rm GeV]$ $103\sim200$ $102\sim200$ $102\sim200$
$M_0\;[\GeV]$ $0\sim500$ $0\sim500$ $0\sim500$
$M_{1/2}\;[\TeV]$ $1.06\sim1.47$ $1.04\sim1.44$ $1.05\sim1.47$
$A_0\;[\TeV]$ $-2.8\sim0.2$ $-3.2\sim-1.0$ $-2.8\sim0.6$
$A_{\lambda} (M_{\rm GUT})\;[\TeV]$ $1.3\sim9.4$ $0.1\sim10$ $1.1\sim9.8$
$A_{\kappa}(M_{\rm GUT})\;[\TeV]$ $-0.02\sim5.4$ $-0.02\sim0.9$ $-0.7\sim5.7$
$A_{\lambda} (M_{\rm SUSY})\;[\rm TeV]$ $2.0\sim10.1$ $0.8\sim10.9$ $1.6\sim10.2$
$A_{\kappa}(M_{\rm SUSY})\;[\rm GeV]$ $-51\sim42$ $-17\sim7$ $-803\sim11$
$m_{\tilde{\chi}^0_1}\;[\GeV]$ $3\sim129$ $98\sim198$ $3\sim190$
$m_{h_1}\;[\GeV]$ $4\sim123$ $123\sim127$ $4\sim60$
$m_{h_2}\;[\GeV]$ $123\sim127$ $127\sim5058$ $123\sim127$
$m_{a_1}\;[\GeV]$ $4\sim60$ $0.5\sim60$ $3\sim697$
: The ranges of parameters and light particle masses in Scenario I, II and III.[]{data-label="tab1"}
To study the different mechanisms of Higgs decay to light scalars in different scenarios, we recombine relevant parameters, and show them in Fig.\[fig:1\]. From this figure one can find that:
- For Scenarios I and III, $\lambda A_{\lambda}S_{22} \!\approx\! \lambda^2v_s$, where $0.03\!\lesssim\! S_{22}\!\lesssim\!0.07$ is of the same order as $1/\tan\!\beta$, since the mass scale of the CP-odd doublet scalar $M_A \!\thicksim\! 2\mu_{\rm eff}/\sin\!2\beta \!\thicksim\! A_{\lambda} \!\gg\! \kappa v_s$ and $\tan\!\beta\!\gg\!1$ [@Cao:2013gba]. Thus the SM-like Higgs is up-type-doublet dominated.
- For Scenario I, $\kappa A_{\kappa}$, $\kappa^2v_s$, and $\lambda\kappa v_s$ are at the same level of a few GeV; but for Scenario II, $\kappa^2 v_s$ can be as large as a few TeV for small $\lambda$ and large $\kappa$.
- In particular, for Scenario III, $\kappa A_{\kappa} \!\approx\! -4\kappa^2 v_s$, or $A_\kappa \!\approx\! -4\kappa v_s$.
According to the large amount of data on the $125\GeV$ Higgs, and the current null results in searches for non-SM Higgs bosons, the $125\GeV$ Higgs should be doublet dominated and the light scalar should be singlet dominated. Therefore, both the singlet component in the SM-like Higgs and the doublet component in the light Higgs should generally be small quantities. We show how small they can be, and their relative scale, in Fig.\[fig:2\]. From this figure, we can see the following for the three scenarios.
- Scenario I: The up-type-doublet component of the light scalar satisfies $-\!0.0015 \!\lesssim\! P_{11} \!<\!0$ and is proportional to the parameter $\lambda$; thus the total doublet component of the light scalar $P_{1D}\!\equiv\! \sqrt{P_{11}^2+P_{12}^2}\!\thickapprox\! |P_{11}|\tan\beta \!\lesssim\!0.04$, while the singlet component of the SM-like Higgs satisfies $|S_{23}|\!\lesssim\!0.3$.
- Scenario II: The up-type-doublet component of the light scalar satisfies $-\!0.0006 \!\lesssim\!P_{11}\!<\!0 $ and is proportional to the parameter $\lambda$; thus the total doublet component of the light scalar satisfies $0<P_{1D}\!\lesssim\!0.013$, while the singlet component in the SM-like Higgs satisfies $|S_{13}|\!\lesssim\!0.3$.
- Scenario III: The up-type-doublet component of the light scalar and the singlet component of the SM-like Higgs are anticorrelated, $S_{11}\!\thickapprox\!-S_{23}$, and their range is $-0.15\!\lesssim\! S_{11}\!\lesssim\! 0.2$, with the sign related to the parameter $\lambda$. This also means that the mixing in the CP-even scalar sector is mainly between the singlet and the up-type doublet, and we checked that $0.03\!\lesssim\!S_{22}\!\lesssim0.07$ and $S_{12}\!\lesssim\!0.03$. Thus the SM-like Higgs is up-type-doublet dominated, which holds in all three scenarios, with $S_{21}\!\approx\! 1$ in Scenarios I and III and $S_{11}\!\approx\!1$ in Scenario II.
Considering the values of and correlations among parameters and component coefficients, the couplings between the SM-like Higgs and a pair of light scalars can be simplified as: $$\begin{aligned}
C_{h_2a_1a_1}
&\simeq&
\sqrt{2}\lambda^2v_u+\sqrt{2}\lambda A_{\lambda}P_{11}\tan\!\beta\,,
\label{ch2a1a1}
\\
C_{h_1a_1a_1}
&\simeq&
\sqrt{2}\lambda^2v_u+\sqrt{2}\lambda A_{\lambda}P_{11}\tan\!\beta +2\sqrt{2}\kappa^2v_s S_{13} \,, \qquad
\label{ch1a1a1}
\\
C_{h_2h_1h_1}
&\simeq&
\sqrt{2}\lambda^2v_u-\sqrt{2}\lambda A_{\lambda}S_{12} +\sqrt{2}\lambda^2v_s S_{11}
\nonumber \\
&&+2\sqrt{2}\kappa^2 v_sS_{23} +\frac{3g^2}{\sqrt{2}}v_u S_{11}S_{11}
\nonumber \\
&&-2\sqrt{2}\lambda\kappa v_s S_{12} \,.
\label{ch2h1h1}\end{aligned}$$
In Fig.\[fig:3\] we show the exotic branching ratio $Br(h\!\to\!ss)$, including the one-loop correction, versus the mass of the light scalar and the tree-level coupling between the SM-like Higgs and a pair of light scalars. Since the 125 GeV Higgs is constrained to be very SM-like, its decay widths and branching ratios to SM particles cannot vary much. Thus, combined with Eq.(\[eq3\]), it is natural that the branching ratios to light scalars are proportional to the square of the tri-scalar couplings. The significant deviations for the negative-coupling samples in Scenario III are due to the one-loop correction from stop loops, $$\begin{aligned}
\Delta C_{h_2h_1h_1} &\simeq& S_{21} S_{11}^2 \frac{3\sqrt{2}m_t^4}{16\pi^2 v_u^3} \ln \left( \frac{m_{\tilde{t}_1}m_{\tilde{t}_2}}{m_t^2}\right),\end{aligned}$$ which can be as large as $5\GeV$. While for Scenario I and II, they are $$\begin{aligned}
\Delta C_{h_2a_1a_1} &\simeq& S_{21} P_{11}^2 \frac{3\sqrt{2}m_t^4}{16\pi^2 v_u^3} \ln \left( \frac{m_{\tilde{t}_1}m_{\tilde{t}_2}}{m_t^2}\right),\end{aligned}$$ $$\begin{aligned}
\Delta C_{h_1a_1a_1} &\simeq& S_{11} P_{11}^2 \frac{3\sqrt{2}m_t^4}{16\pi^2 v_u^3} \ln \left( \frac{m_{\tilde{t}_1}m_{\tilde{t}_2}}{m_t^2}\right).\end{aligned}$$ Since $P_{11}\!\ll\! S_{11}$, as seen from Fig.\[fig:2\], the loop correction in Scenarios I and II is much smaller than that in Scenario III. In the following figures and discussions, we refer to the coupling $C_{hss}$ as including the one-loop correction $\Delta C_{hss}$, unless otherwise specified.
Detections at the HL-LHC
------------------------
At the LHC, the SM-like Higgs can first be produced in gluon fusion (ggF), vector boson fusion (VBF), in association with a vector boson ($\rm Wh$, $\rm Zh$), or in association with $t\bar{t}$, where the cross section of the ggF process is much larger than that of the others. Then the SM-like Higgs can decay to a pair of light scalars, and each scalar can then decay to a pair of fermions, gluons, or photons. The ATLAS and CMS collaborations have searched for these exotic decay modes in final states with $b\bar{b}\tau^+\tau^-$ [@Sirunyan:2018pzn], $b\bar{b}\mu^+\mu^-$ [@Aaboud:2018esj; @Sirunyan:2018mot], $\mu^+\mu^-\tau^+\tau^-$ [@Sirunyan:2020eum; @Sirunyan:2018mbx; @Sirunyan:2019gou], $4\tau$ [@Khachatryan:2017mnf; @Sirunyan:2019gou], $4\mu$ [@CMS-PAS-HIG-16-035; @CMS-PAS-HIG-18-003; @Aaboud:2018fvk], $4b$ [@Aaboud:2018iil], $\gamma\gamma gg$ [@Aaboud:2018gmx], $4\gamma$ [@ATLAS-CONF-2012-079], etc. These results are included in the constraints we considered.
As we checked, the main decay mode of the light scalar is usually to $b\bar{b}$ when $m_s\gtrsim 2m_b$. However, the colored backgrounds at the LHC are very large; thus the subdominant $\rm Zh$ production process is used in detecting $h\!\!\to\!\! 2s \!\!\to\!\! 4b$, and VBF is used for $h\!\!\to\!\! 2s \!\!\to\!\! \gamma\gamma gg$. For the other decay modes, the main production process, ggF, can be used. Considering the production cross sections, the decay branching ratios, and the detection precisions, we find that the $4b$, $2b2\tau$, and $2\tau 2\mu$ channels are the important ones for the scNMSSM. The signal rates are $\mu_{\rm Zh} \!\times\! Br(h\!\to\! ss \!\to\! 4b)$, $\mu_{\rm ggF} \!\times\! Br(h\!\to\! ss \!\to\! 2b2\tau)$, and $\mu_{\rm ggF} \!\times\! Br(h\!\to\! ss \!\to\! 2\tau2\mu)$, respectively, where $\mu_{\rm ggF}$ and $\mu_{\rm Zh}$ are the ggF and $\rm Zh$ production rates normalized to their SM values.
For detections of the exotic decay at the HL-LHC, we use the simulated $95\%$ exclusion limits in Refs.[@Cao:2013gba; @CMS-PAS-FTR-18-035]. Suppose that, with an integrated luminosity $L_0$, the simulated $95\%$ exclusion limit on the branching ratio in some channel is $Br_0$; then, for a model sample with signal rate $\mu_i\!\times\! Br$ ($i$ denotes the production channel), the signal significance with integrated luminosity $L$ will be $$\begin{aligned}
ss = 2 \;\frac{\mu_i\!\times\! Br}{Br_0} \sqrt{\frac{L}{L_0}},\end{aligned}$$ and the integrated luminosity needed to exclude the sample in the channel at $95\%$ confidence level (with $ss=2$) will be $$\begin{aligned}
L_{\rm e}=L_0 \left(\frac{Br_0}{\mu_i\!\times\! Br}\right)^2,\end{aligned}$$ and the integrated luminosity needed to discover the sample in the channel (with $ss=5$) will be $$\begin{aligned}
L_{\rm d}= L_0 \left(\frac{5}{2}\right)^2 \left(\frac{Br_0}{\mu_i\!\times\! Br}\right)^2.\end{aligned}$$
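These scaling relations are straightforward to evaluate; in the sketch below, the reference limit $Br_0$, the reference luminosity $L_0$, and the sample's signal rate are placeholder numbers used only to show how the formulas are applied, not the simulated limits themselves.

```python
def significance(mu, br, br0, lumi, lumi0):
    """Signal significance from the scaling relation above."""
    return 2.0 * (mu * br / br0) * (lumi / lumi0) ** 0.5

def lumi_needed(mu, br, br0, lumi0, n_sigma=2.0):
    """Integrated luminosity for an n_sigma signal (2 = exclusion, 5 = discovery)."""
    return lumi0 * (n_sigma / 2.0) ** 2 * (br0 / (mu * br)) ** 2

# toy numbers: a simulated 95% limit Br0 = 0.3% at L0 = 3000 fb^-1,
# and a sample with mu_i = 1 and Br = 0.6% in the same channel
print(lumi_needed(1.0, 6e-3, 3e-3, 3000.0))             # 750 fb^-1 to exclude
print(lumi_needed(1.0, 6e-3, 3e-3, 3000.0, n_sigma=5))  # ~4700 fb^-1 to discover
```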
In Fig.\[fig:4\], \[fig:5\], and \[fig:6\], we show the signal rates for surviving samples in the three scenarios, and the $95\%$ exclusion limits [@Cao:2013gba; @CMS-PAS-FTR-18-035] in the $4b$, $2b2\tau$, and $2\tau2\mu$ channels respectively. From these figures one can see that
- With the light scalar heavier than $30\GeV$, the easiest way to discover the exotic decay is in the $4b$ channel, and the minimal integrated luminosity needed to discover the decay in this channel can be $650\fbm$ for Scenario II.
- With the light scalar lighter than $20\GeV$, the $2\tau2\mu$ channel can be important, especially for samples in Scenario II, and the minimal integrated luminosity needed to discover the decay in this channel can be $1000\fbm$.
- With the light scalar heavier than $2m_b$, there is a chance in all three scenarios to discover the decay in the $2b2\tau$ channel, and the minimal integrated luminosity needed in this channel can be $1500\fbm$ for Scenario II.
Detections at the future lepton colliders
-----------------------------------------
At future lepton colliders such as the CEPC, FCC-ee, and International Linear Collider (ILC), the main production process of the SM-like Higgs is $\rm Zh$, and the colored backgrounds are very small; thus these lepton colliders are powerful in detecting the exotic decay. Simulation results exist for many channels, such as $4b$, $4j$, $2b2\tau$, $4\tau$, etc. [@Liu:2016zki]. With the same method as in the last subsection, one can perform similar analyses.
In Fig.\[fig:7\], \[fig:8\], \[fig:9\], and \[fig:10\], we show the signal rates for surviving samples in the three scenarios, and the $95\%$ exclusion limits at the CEPC, FCC-ee, and ILC, and in the $4b$, $4j$, $2b2\tau$, and $4\tau$ channels respectively [@Liu:2016zki]. From these figures one can see that:
- As shown in Fig.\[fig:7\], when the light scalar is heavier than about $15\GeV$ and the tri-scalar coupling is large enough, the branching ratio of the $4b$ channel is significant. The minimal integrated luminosity needed to discover the decay in this channel can be $0.31\fbm$ for Scenarios II and III at the ILC.
- As shown in Fig.\[fig:8\], for Scenarios I and II, the exotic Higgs decay can be expected to be observed in the $4j$ channel when the light scalar is lighter than $11\GeV$, while for Scenario III the light scalar accessible at the CEPC can be as heavy as $40\GeV$. The minimal integrated luminosity needed to discover the exotic decay in this channel can be $18\fbm$ for Scenario II at the ILC.
- As shown in Figs.\[fig:9\] and \[fig:10\], the signal rates in the $2b2\tau$ and $4\tau$ channels follow similar trends. The branching ratios are tiny before the light scalar reaches the mass threshold, and the maxima of the branching ratios occur around $m_s=12\GeV$. The minimal integrated luminosity needed to discover the decay can be $3.6\fbm$ in the $2b2\tau$ channel for Scenario II at the ILC, and $0.22\fbm$ in the $4\tau$ channel for Scenario III at the ILC.
Conclusions {#sec:con}
===========
  Decay Mode                         HL-LHC            CEPC               FCC-$ee$           ILC
  ---------------------------------- ----------------- ------------------ ------------------ ------------------
  ($b\bar{b}$)($b\bar{b}$)           $650\fbm$(@II)    $0.42\fbm$(@III)   $0.41\fbm$(@III)   $0.31\fbm$(@II)
  ($jj$)($jj$)                       -                 $21\fbm$(@II)      $18\fbm$(@II)      $25\fbm$(@II)
  ($\tau^+\tau^-$)($\tau^+\tau^-$)   -                 $0.26\fbm$(@III)   $0.22\fbm$(@III)   $0.31\fbm$(@III)
  ($b\bar{b}$)($\tau^+\tau^-$)       $1500\fbm$(@II)   $4.6\fbm$(@II)     $3.6\fbm$(@II)     $4.4\fbm$(@II)
  ($\mu^+\mu^-$)($\tau^+\tau^-$)     $1000\fbm$(@II)   -                  -                  -

  : The minimal integrated luminosity needed to discover the exotic Higgs decay in each channel at the HL-LHC, CEPC, FCC-ee, and ILC; "@II" and "@III" indicate the scenario in which the minimum is reached.[]{data-label="tab2"}
In this work, we have discussed the exotic Higgs decay to a pair of light scalars in the scNMSSM, or the NMSSM with NUHM. First, we performed a general scan over the nine-dimensional parameter space of the scNMSSM, considering the theoretical constraints of vacuum stability and Landau pole, and the experimental constraints of Higgs data, non-SM Higgs searches, muon g-2, sparticle searches, relic density and direct searches for dark matter, etc. Then we found three scenarios with a light scalar of $10\!\sim\!60\GeV$: (i) the light scalar is CP-odd, and the SM-like Higgs is $h_2$; (ii) the light scalar is CP-odd, and the SM-like Higgs is $h_1$; (iii) the light scalar is CP-even, and the SM-like Higgs is $h_2$. For the three scenarios, we checked the parameter schemes that lead to them, the mixing levels of the doublets and singlets, the tri-scalar coupling between the SM-like Higgs and a pair of light scalars, the branching ratio of Higgs decay to the light scalars, and the detection prospects at the hadron colliders and future lepton colliders.
Finally, we draw the following conclusions regarding a light scalar, and the exotic Higgs decay to a pair of such scalars, in the scNMSSM:
- There are different and interesting mechanisms in the three scenarios for tuning the parameters to obtain the small tri-scalar couplings.
- The singlet component of the SM-like Higgs in the three scenarios is at the same level of $\lesssim0.3$, and is roughly one order of magnitude larger than the doublet component of the light scalar in Scenarios I and II.
- The coupling between the SM-like Higgs and a pair of light scalars at tree level is $-3\!\sim\!5$, $-1\!\sim\!6$ and $-10\!\sim\!5$ GeV for Scenario I, II, and III respectively.
- The stop-loop correction to the tri-scalar coupling in Scenario III can be a few GeV, much larger than that in Scenario I and II.
- The most effective way to discover the exotic decay at the future lepton collider is in the $4\tau$ channel; while that at the HL-LHC is $4b$ for the light scalar heavier than 30 GeV, or $2b2\tau$ and $2\tau2\mu$ for a lighter scalar.
In detail, the minimal integrated luminosities needed to discover the exotic Higgs decay at the HL-LHC, CEPC, FCC-ee, and ILC are summarized in Tab.\[tab2\], and the tuning mechanisms in the three scenarios to obtain the small tri-scalar coupling can be seen from Figs. \[fig:1\], \[fig:2\] and Eqs. (\[ch2a1a1\]), (\[ch1a1a1\]), (\[ch2h1h1\]).
#### Acknowledgements. {#acknowledgements. .unnumbered}
This work was supported by the National Natural Science Foundation of China (NNSFC) under grant No. 11605123.
---
address:
- 'Laboratoire de Physique Théorique et Hautes Energies. Universités Paris VI-Pierre et Marie Curie - Paris VII-Denis Diderot, 2 Place Jussieu, 75252 Paris Cedex 05, France.'
-
-
-
-
author:
- 'M. Tissier, B. Delamotte, D. Mouhanna'
title: 'Heisenberg frustrated magnets: a nonperturbative approach'
---
Understanding the effect of competing interactions in three dimensional classical spin systems is one of the great challenges of condensed matter physics. However, after twenty five years of investigations, the nature of the universality class for the phase transition of the simplest frustrated model, the antiferromagnetic Heisenberg model on a triangular lattice (AFHT model), is still a strongly debated question$^{\cite{kawamura10}}$. Due to frustration, the ground state of the AFHT model is given by a canted configuration – the famous 120$^{\circ}$ structure – that implies a matrix-like order parameter$^{\cite{kawamura2}}$ and thus, the possibility of a new universality class. Experiments performed on materials supposed to belong to the AFHT universality class display indeed exponents different from those of the standard $O(N)$ universality class: for VCl$_2$$^{\cite{kadowaki}}$: $\beta=0.20(2),\gamma=1.05(3),
\nu=0.62(5)$, for VBr$_2$$^{\cite{wosnitza}}$: $\alpha=0.30(5)$, for CuFUD$^{\cite{koyama}}$: $\beta=0.22(2)$ and for Fe\[S$_2$CN(C$_2$H$_5$)$_2$\]$_2$Cl$^{\cite{defotis1,defotis2,defotis3}}$: $\beta=0.24(1),
\gamma=1.16(3)$. These results however call for several comments. First, the exponents violate the scaling relations, at least by two standard deviations. Second, they differ significantly from those obtained by Monte Carlo (MC) simulations performed either directly on the AFHT model ($\nu\simeq 0.59(1),\gamma\simeq 1.17(2),\beta\simeq
0.29(1),\alpha \simeq 0.24(2)$), and on models supposed to belong to the same universality class: AFHT with rigid constraints ($\nu= 0.504(10),\gamma= 1.074(29),\beta=
0.221(9),\alpha = 0.488(30) $), dihedral (i.e. $V_{3,2}$ Stiefel) models ($\nu\simeq
0.51(1),\gamma\simeq 1.13(2),\beta\simeq 0.193(4),\alpha \simeq 0.47(3) $). See Ref.[[@loison2]]{} for a review, and references therein. Finally, the anomalous dimensions $\eta$ obtained by means of scaling relations is found to be negative in experiments as well as in MC simulations, a result forbidden by first principles for second order phase transitions$^{\cite{zinn}}$. All these results are hardly compatible with the assumption of universality. It has been proposed that the exponents are, in fact, effective exponents characterizing a very weakly first order transition, the so-called “almost second order phase transition$^{\cite{zumbach7,zumbach,zumbach4}}$”.
From the theoretical point of view the situation is also very unsatisfactory since one does not have a coherent picture of the expected critical behaviour of the AFHT model between two and four dimensions. On the one hand, the weak coupling expansion performed on the suitable Landau-Ginzburg-Wilson (LGW) model in the vicinity of $d=4$ leads to a first order phase transition due to the lack of a stable fixed point$^{\cite{bailin,garel,yosefin}}$. On the other hand, the low temperature expansion performed around two dimensions on the Non-Linear Sigma (NL$\sigma$) model predicts a second order phase transition of the standard $O(4)/O(3)$ universality class$^{\cite{aza4}}$. Since there is no indication that these perturbative results should fail in their respective domain of validity – i.e. for small $\epsilon=4-d$ and small $\epsilon=d-2$ – this situation raises two problems. First, and contrary to what happens in the non-frustrated case, one cannot safely predict the three dimensional behaviour from naïve extrapolations of the perturbative results. Although a direct computation in three dimensions, possible on the LGW model$^{\cite{antonenko3,loison1}}$, can circumvent this difficulty, such an approach misses a second fundamental problem: the incompatibility between the symmetries of the NL$\sigma$ and LGW models. Indeed, the renormalization group flow drives the NL$\sigma$ model action towards an $O(4)$ symmetric regime, more symmetric than the microscopic system, a phenomenon that [*cannot*]{} occur within all previous treatments of the LGW model (see ref. [[@aza4]]{} and below). The LGW model is therefore unable to find the $O(4)$ behaviour which has nevertheless been observed numerically in $d=2$$^{\cite{southern}}$. This raises serious doubts about the perturbative analysis of the LGW model away from $d=4$. Reciprocally, the [*perturbative*]{} analysis of the NL$\sigma$ model, based on a Goldstone mode expansion, predicts an $O(4)/O(3)$ fixed point everywhere between $d=2$ and $d=4$, as for the $N=4$ ferromagnetic model, in contradiction with the perturbative LGW results and the experimental and numerical situation in $d=3$. All this suggests that non perturbative features could play a major role and thus makes it necessary to go beyond the standard perturbative approaches.
In this letter we realize this program by using the Wilson renormalization group framework$^{\cite{wilson}}$. We obtain a coherent picture of the physics of the AFHT model everywhere between $d=2$ and $d=4$. We find that the fixed point expected from the NL$\sigma$ model approach exists indeed in the vicinity of $d=2$ but disappears below – and close to – three dimensions. The transition for AFHT in $d=3$ is thus [*weakly*]{} first order contrary to the different predictions of both a new universality class$^{\cite{kawamura2}}$ and an $O(4)/O(3)$ second order behaviour$^{\cite{aza4}}$. We get effective exponents compatible with the numerical and experimental data quoted above. For generalization to $N>4$-component spins, we find the transition in $d=3$ to be second order with exponents in good agreement with recent extensive MC simulations – contrary to those found from three loop Padé-Borel resummed series$^{\cite{loison1}}$.
Our approach relies on the concept of effective average action$^{\cite{wetterich2,wetterich3}}$, $\Gamma_k[\phi]$, which is a coarse grained free energy where only fluctuations with momenta $q\ge k$ have been integrated out. The field $\phi$ corresponds to an average order parameter at scale $k$, the analog of a magnetization at this scale. At the scale of the inverse lattice spacing $\Lambda$, $\Gamma_{k=\Lambda}$ is the continuum limit of the lattice hamiltonian obtained, for example, by means of an Hubbard-Stratonovich transformation. On the other hand, the usual free energy $\Gamma$, generating one particle-irreducible correlation functions, is recovered in the limit $k\to 0$. The $k$-dependence of $\Gamma_k$ is controlled by an exact evolution equation$^{\cite{wetterich1,morris1}}$: $${\partial \Gamma_k\over \partial t}={1\over 2} \hbox{Tr} \left\{(\Gamma_k^{(2)}+R_k)^{-1}
{\partial R_k\over \partial t}\right\}
\label{renorm}$$ where $t=\ln \displaystyle {k / \Lambda}$. The trace has to be understood as a momenta integral as well as a summation over internal indices. In Eq.(\[renorm\]), $R_k$ is the effective infrared cut-off which suppresses the propagation of modes with momenta $q<k$. A convenient cut-off is provided by$^{\cite{wetterich1,morris4}}$: $R_k(q)=Z
q^2/(\exp(q^2/k^2)-1)$, where $Z$ is the field renormalization. In Eq.(\[renorm\]), $\Gamma_k^{(2)}$ is the [*exact field-dependent*]{} inverse propagator – i.e. the second derivative of $\Gamma_k$.
The effective average action $\Gamma_k$ is a functional invariant under the symmetry group of the system and thus depends on all the invariants built from the average order parameter. In our case, it is well known that the order parameter is a set of two vectors $\vec{\phi_1}$ and $\vec{\phi_2}$ that can be gathered in a real $N\times 2$ matrix $\phi_{ab}$ for $N$-component spins$^{\cite{kawamura2}}$. The symmetry of the system is the usual spatial rotation group $O(N)$ times a $O(2)$ corresponding to the symmetry of the underlying triangular lattice$^{\cite{aza4}}$. This $O(2)$ is realized on $\phi_{ab}$ as a right $O(2)$ “rotation" that turns the $\vec{\phi_i}$ into each other. There are two independent $O(N)\otimes O(2)$ invariants built out of $\phi_{ab}$: $\rho={\hbox{Tr}}\ ^{t}\phi\phi$ and $\tau={1\over 2}{\hbox{Tr}} (^{t}\phi\phi)^2-{1\over
4}({\hbox{Tr}}\ ^{t}\phi\phi)^2$.
The exact effective average action involves all the powers of $\rho,\tau$ and of derivative terms, and so Eq.(\[renorm\]) is a nonlinear functional equation, too difficult to be solved exactly in general. We therefore need to truncate it. One possibility is to keep in $\Gamma_k$ only the momentum (i.e. derivative)-independent part, an approximation called the Local Potential Approximation (LPA). In the case of frustrated magnets, this has been considered by Zumbach$^{\cite{zumbach7,zumbach,zumbach4}}$. This approximation however misses the field-renormalization and worse, as described below, the phenomenon of enlarged symmetry around $d=2$ found perturbatively in the NL$\sigma$ model$^{\cite{aza4}}$. This does not mean that this approximation is not useful: it is simply, in essence, unable to answer the question of the matching of the different perturbative approaches. Another truncation is however possible which preserves this possibility: it consists in an expansion of $\Gamma_k$ around its minimum in order to keep a finite number of monomials in the invariants $\rho$ and $\tau$ while including the derivative terms allowing to recover the different perturbative results. We choose the simplest such truncation: $$\begin{array}{l}
\Gamma_k= \displaystyle \int d^dx \left\{{Z\over 2} \nabla\phi_{ab}\nabla \phi_{ab}+ {
\omega\over 4}\ (\epsilon_{ab}\phi_{ca} \nabla \phi_{cb})^2 \right. \nonumber
\\
\\
\left. \displaystyle\hskip1.7cm+{\lambda\over 4}\left({\rho\over
2}-\kappa\right)^2+{\mu\over 4}\tau\right\}
\label{action}
\end{array}$$ where $\left\{\omega, \lambda, \kappa, \mu,Z\right\}$ are the coupling constants which parametrize the model. All terms but one - the “current term” $(\epsilon_{ab}\phi_{ca}
\nabla \phi_{cb})^2$ - are very natural and correspond to those appearing in the usual LGW action that realizes the symmetry breaking scheme of frustrated magnets. Indeed for $\lambda$ and $\mu \ge 0$, the minimum of the action is realized by a configuration of the form $\phi_{ab}^{min}=\sqrt{\kappa} R_{ab}$, where $R_{ab}$ is a matrix built with two orthonormal $N$-component vectors. The symmetry of this minimum is a product of a diagonal $O(2)$ group and a residual $O(N-2)$ group. The symmetry breaking scheme is thus $O(N)\otimes O(2)\to O(N-2)\otimes O(2)_{diag}$$^{\cite{aza4}}$. Note that for $\phi_{ab}=\phi_{ab}^{min}$ one has: $\rho=2\kappa$ and $\tau=0$ so that Eq.(\[action\]) corresponds indeed to a quartic expansion around the minimum. The spectrum in the low temperature phase consists in $2N-3$ Goldstone modes and three massive modes: one singlet of mass $m_1=\kappa \lambda$ and one doublet of mass $m_2=\kappa\mu$ which correspond to fluctuations of the relative angle and of the norms of the two vectors $\vec{\phi_1}$ and $\vec{\phi_2}$.
Without the current term, the truncation Eq.(\[action\]) is however not sufficient in our case. This term plays a crucial role since, for $N=3$, it allows the model to enlarge its symmetry from $O(3)\otimes O(2)$ to $O(3)\otimes O(3)\sim O(4)$ at the fixed point around $d=2$, leading to the well known $O(4)/O(3)$ behaviour$^{\cite{aza4}}$. The current term is systematically discarded in the perturbative treatment of the LGW model around four dimensions, for the - correct - reason that it is power-counting irrelevant. Here we can include it in our [*ansatz*]{} since it is anyway present in the full effective action $\Gamma_k$ and, in fact, [*must*]{} include it since it becomes relevant somewhere between two and four dimensions. The formalism we use is in charge to decide where it is important.
Let us emphasize that the effective average action method leads to non trivial and/or new results even within a quartic truncation of $\Gamma_k$. One can mention the Kosterlitz-Thouless phase transition$^{\cite{grater}}$, low energy QCD$^{\cite{jungnickel1}}$, the abelian Higgs model and superconductivity$^{\cite{bergerhoff1,bergerhoff2}}$, etc. The accuracy of the results thus obtained depends on two main features: i) the smallness of the anomalous dimension $\eta$ and ii) the fact that the thermodynamics of the system is controlled by a unique minimum of $\Gamma_k$. Note finally that this technique has been successfully employed in the case of the principal chiral model to solve a conflict between perturbative approaches$^{\cite{tissier1}}$, similar to what is studied here. However we stress that in the principal chiral case, there was no conflict between the symmetries of the LGW and NL$\sigma$ models.
The flow equations for the different coupling constants $\kappa$, $\lambda$, $\mu$, $\omega$ and $Z$ are derived by using Eq.(\[renorm\]) and Eq.(\[action\]) along the same lines as in [@jungnickel1]. The explicit recursion equations are too long to display and not particularly illuminating (see [[@site]]{}). Moreover, they require a numerical analysis, except in $d=2+\epsilon$ and in $d=4-\epsilon$ where, as we now see, they become analytically tractable.
[*The physics around two dimensions*]{}. Around two dimensions, one expects that the perturbative “Goldstone mode" expansion of the NL$\sigma$ model works well. In the Goldstone regime, the fluctuations of the modulus of ${\vec\phi}_1$ and ${\vec\phi}_2$ and of their relative angle are frozen. This corresponds to the large mass limit $m_{1r}$, $m_{2r}\to \infty$. In this limit, our equations greatly simplify since the coupling constants divide in two sets $\lbrace \kappa, \omega, Z\rbrace$ and $\lbrace \lambda, \mu
\rbrace$ that do not mix. We only quote here the flow equations for the renormalized coupling constants of the first set: $$\left\{
\begin{array}{l}
\displaystyle{d\kappa_r\over dt}=-(d-2+\eta)\kappa_r +{N-2\over 2\pi} +{1\over 4\pi (1+
\kappa_r \omega_r)}
\\
\displaystyle{{d\omega_r\over dt}=(-2 + d + 2 \eta) \omega_r +}
\\
\ \ \ \ \displaystyle{1+\kappa_r \omega_r +
(N-1)\kappa_r^2 \omega_r^2 +(N-2)\kappa_r^3 \omega_r^3\over 2 \kappa_r^2 \pi(1+ \kappa_r
\omega_r)}\ \\
\\
\displaystyle\eta=-{d \ln Z \over dt}={3 + 4\kappa_r \omega_r + 2\kappa_r^2 \omega_r^2
\over 4\kappa_r\pi(1 + \kappa_r \omega_r)}
\end{array}
\right.
\label{perturb2d}$$ These equations admit a fixed point for any $N>2$ of coordinates $\kappa_r \simeq
1/\epsilon$, $\omega_r \simeq \epsilon$, while $\lambda_r,\mu_r \simeq$ cst. The masses $m_{1r}^*$, $m_{2r}^*$ are thus very large, proving the consistency of the limit. In fact, modulo the change of variables: $\eta_1=\kappa_r$ and $\eta_2= 2 \kappa_r (1+ \kappa_r \omega_r)$ the equations for $\kappa_r$ and $\omega_r$ are exactly those obtained at one-loop in the perturbative analysis of the NL$\sigma$ model$^{\cite{aza4}}$. For $N=3$, they admit a fixed point for which the model is $O(4)$-symmetric.
Let us now recall how this phenomenon of enlarged symmetry for $N=3$ can be understood directly on the partition function. At the fixed point, the potential gets infinitely deep so that one recovers the hard constraints of the NL$\sigma$ model: ${\vec\phi_{1}}\perp{\vec\phi_{2}}$, and ${\vec\phi_{1}}^2 = {\vec\phi_{2}}^2=
\kappa^*_r$. For $N=3$, this allows us to rewrite the current term as the kinetic term of a third vector, the cross product of the two others: $(\epsilon_{ab}\phi_{ca} \nabla
\phi_{cb})^2\propto(\nabla {\vec\phi}_3)^2$ with ${\vec\phi}_3={\vec\phi}_1\wedge
{\vec\phi}_2$. The order parameter of the system is then a trihedral of orthogonal vectors $({\vec\phi}_1,{\vec\phi}_2,{\vec\phi}_3)$. Thus contrary to what could be expected from a naïve expansion in powers of the fields, the current term plays a role as important as the usual kinetic terms. At the fixed point, $\omega_r$ takes a value such that the three vectors play a symmetric role and the symmetry breaking scheme is $O(3)\otimes O(3)/ O(3)\sim O(4)/O(3)$ instead of $O(3)\otimes O(2)/ O(2)$. Such a result is of course missed within the LPA$^{\cite{zumbach7,zumbach,zumbach4}}$. Therefore, the presence of the current term does not only improve the accuracy of the calculation, it is necessary for its consistency.
[*The physics around four dimensions.*]{} Around four dimensions, we have expanded our equations at leading order in the coupling constants $\lambda_r$ and $\mu_r$. At this order the current term decouples and we are left with the following equations for the quartic coupling constants: $$\left\{
\begin{array}{l}
\displaystyle{d\lambda_r\over dt}= (-4 + d)\lambda_r + {1\over 16\pi^2}(4 \lambda_r \mu_r
+ 4\mu_r^2 + \lambda_r^2 (4 + N))\\
\\
\displaystyle{d\mu_r\over dt}=(-4 + d )\mu_r + {1\over 16 \pi^2}(6\lambda_r\mu_r + N
\mu_r^2).
\end{array}
\right.$$ They are those obtained at one loop from the LGW approach$^{\cite{bailin}}$. These flow equations admit a stable fixed point for $N>N_c\simeq21.8$, attesting that the phase transition is second order. For $N<N_c$ the transition is first order since no fixed point is found.
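The critical value $N_c\simeq21.8$ can be made explicit: writing $x=\lambda_r/\mu_r$ and setting the right-hand sides to zero, the fixed-point conditions with $\mu_r\neq0$ reduce to the quadratic $(N-2)x^2-(N-4)x+4=0$, whose discriminant vanishes at $N=12+4\sqrt{6}\simeq21.8$ (the larger root). This reduction is our own rewriting of the displayed equations; the sketch below simply evaluates it numerically.

```python
import numpy as np

def fixed_points_mu_nonzero(N, eps, a=1.0 / (16 * np.pi**2)):
    """Real fixed points of the one-loop flow in d = 4 - eps with mu_r != 0.

    With x = lambda_r / mu_r, the conditions reduce to
    (N - 2) x^2 - (N - 4) x + 4 = 0 and mu_r = eps / (a (6 x + N)).
    """
    disc = (N - 4)**2 - 16 * (N - 2)
    if disc < 0:
        return []                                   # no such fixed point
    points = []
    for sign in (+1.0, -1.0):
        x = ((N - 4) + sign * np.sqrt(disc)) / (2 * (N - 2))
        mu = eps / (a * (6 * x + N))
        points.append((x * mu, mu))                 # (lambda_r*, mu_r*)
    return points

print(12 + 4 * np.sqrt(6))                  # ~21.8, the one-loop N_c
print(fixed_points_mu_nonzero(3, 1.0))      # []  -> no fixed point, first-order transition
print(fixed_points_mu_nonzero(25, 1.0))     # two real fixed points with positive couplings
```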
To higher orders, $N_c$ depends on the dimension. In $d=3$, three loop calculations resummed [*à la*]{} Padé-Borel predict $N_c(d=3)=3.91$$^{\cite{antonenko3}}$. Note however that this calculation exhibits unusual behaviours compared to the $O(N)$ case: the coefficients of the series do not decrease monotonically and the series themselves are not alternate$^{\cite{loison1}}$. These features reveal the poor summability of the series. Finally, in the $N=6$ case, for which the transition is second order, the predictions based on a Padé-Borel resummation, which provides $\nu=0.575$ and $\gamma=1.121$$^{\cite{loison1}}$, are in clear disagreement with recent numerical simulations, for which $\nu=0.700(11)$ and $\gamma=1.383(36)$$^{\cite{loison1}}$.
From this point of view our approach has several advantages: first, since it matches with the one loop perturbative results in $d=2$ and $d=4$ it is likely that the error does not vary much with the dimension – a fact that has been confirmed in the $O(N)$ case for which the precision for a given truncation is almost uniform with $d$. Second, it does not rely on a Padé-Borel resummation and therefore is free of the above mentioned problems of convergence. Of course, our results will change upon improving the [*ansatz*]{} Eq.([\[action\]]{}) by incorporating terms of higher order in fields and derivatives. However, all cases already treated within the average action method suggest that the lowest order approximation gives fairly good results, even with this crude approximation. For example, in the ferromagnetic $O(3)$ model, one finds $\nu=0.703$$^{\cite{tetradis1}}$ which has to be compared to the six loop resummed perturbation series in three dimensions which provide $\nu=0.705$$^{\cite{zinn}}$.
[*The physics between two and four dimensions.*]{} Let us first study the fate of the fixed point found analytically in $d=2+\epsilon$ for $N=3$. By numerically integrating the flow equations, we find that this stable $O(4)/O(3)$ fixed point describes a smooth trajectory in the coupling constant space while $d$ is increased. Our flow equations actually admit another – but unstable – fixed point, which moves toward the stable fixed point as the dimension is increased. At a critical dimension $d_c\simeq 2.87$, the two fixed points collapse and disappear. Above $d_c$, no other stable fixed point is found and we conclude that the transition is first order in $d=3$. We thus show that the $O(4)/O(3)$ fixed point obtained from the NL$\sigma$ model plays no role in the three dimensional physics of frustrated magnets, as conjectured for example by Jolicœur and David$^{\cite{jolicoeur4}}$ and Dobry and Diep$^{\cite{dobry}}$. We also discard the possibility of a new universality class conjectured on the basis of a naïve extrapolation of the $\epsilon=4-d$ calculation$^{\cite{kawamura10,kawamura2}}$. The proximity of $d_c$ with $d=3$ however let open the possibility of a very weakly first order phase transition with effective critical exponents. This behaviour manifests itself in our equations by the existence of a minimum around which the RG flow slows down. This characterizes a very large, although finite correlation length $\xi$. A rough estimate of this correlation length – a few hundred lattice spacings – indicates that a pseudo-scaling behaviour can be observed although $\xi$ is not large enough to ensure a true universality. This could explain the broad spectrum of effective critical exponents found in experiments and numerical simulations. Although the flow equations do not have a fixed point, we are able to compute effective exponents by linearizing the flow equations around the minimum. We recover here the phenomenon of “almost second order phase transition” first introduced by Zumbach$^{\cite{zumbach7,zumbach,zumbach4}}$ within the LPA. To get accurate results we have to take into account the $\phi^6$-like terms in our [*ansatz*]{}. We find: $\nu=0.53$, $\gamma=1.03$ and $\beta=0.28$, which lie in between the various sets of exponents found experimentally and numerically (see above). For comparison Zumbach found $\nu\simeq0.63$ in the LPA$^{\cite{zumbach7,zumbach,zumbach4}}$, the difference being mainly due to the anomalous dimension.
Finally, we find a true fixed point in $d=3$ for $N$ larger than a critical value $N_c(d=3)\simeq 4$. For $N=6$, we get $\nu=0.74$ and $\gamma=1.45$ which compare well with the Monte Carlo data $\nu=0.700(11)$ and $\gamma=1.383(36)$$^{\cite{loison1}}$. They are close to the LPA results, where $\nu=0.76$$^{\cite{zumbach}}$, and much better than those obtained by a three-loop calculation in $d=3$$^{\cite{loison1}}$ (see above). We have checked that our exponents do not vary significantly when monomials of order six in the fields are included in the [*ansatz*]{} Eq.($\ref{action}$).
To conclude, using a non perturbative method, we have reached a global understanding of frustrated Heisenberg magnets, including a matching between the previous perturbative predictions and a good agreement with experimental and numerical data. It remains to understand the very origin of the disappearance of the NL$\sigma$ model fixed point. The role of non trivial topological configurations can be invoked. One can hope for a complete understanding of this point through the average action method which successfully describes the Kosterlitz-Thouless phase transition$^{\cite{grater}}$.
We thank J. Vidal for a careful reading of the manuscript.
LPTHE is a laboratoire associé au CNRS UMR 7589. e-mail: tissier,delamotte,[email protected]
[10]{}
H. Kawamura, , 4707, (1998).
H. Kawamura, , 4916, (1988).
H. Kadowaki, K. Ubukoshi, K. Hirakawa, J.L. Martinez, and G. Shirane, , 4027, (1987).
J. Wosnitza, R. Deutschmann, H.v. L[ö]{}hneysen, and R.K. Kremer, , 8045 , (1994).
K. Koyama and M. Matsuura, , 4085, (1985).
G.C. DeFotis, F. Palacio, and R.L. Carlin, , 380, (1978).
G.C. DeFotis and S.A. Pugh, , 6497, (1981).
G.C. DeFotis and J.R. Laughlin, , 713, (1986).
D. Loison and K.D. Schotte, .
J. Zinn-Justin, , (Oxford University Press, New York, 1989).
G. Zumbach, , 2421, (1993).
G. Zumbach, , 771, (1994).
G. Zumbach, , 225, (1994).
D. Bailin, A. Love, and M.A. Moore, , 1159, (1977).
T. Garel and P. Pfeuty, , 245, (1976).
M. Yosefin and E. Domany, , 1778, (1985).
P. Azaria, B. Delamotte, F. Delduc, and Th. Jolicoeur, , 485, (1993).
S.A. Antonenko and A.I. Sokolov, , 15901, (1994).
D. Loison, A.I. Sokolov, B. Delamotte, S.A. Antonenko, K.D. Schotte, and H.T. Diep, .
B.W. Southern and A.P. Young, , 13170, (1993).
K.G. Wilson and J. Kogut, , 75, (1974).
C. Wetterich, , 529, (1991).
C. Wetterich, , 451, (1993).
C. Wetterich, , 90, (1993).
T.R. Morris, , 2411, (1994).
T.R. Morris and J. F. Tighe, , 007 (1999).
M. Grater and C. Wetterich, , 378, (1995).
D.-U. Jungnickel and C. Wetterich, , 5142, (1996).
B. Bergerhoff, F. Frere, D. Litim, S. Lola, and C. Wetterich, , 5734, (1996).
B. Bergerhoff, D. Litim, S. Lola, and C. Wetterich, , 4273, (1996).
M. Tissier, D. Mouhanna, and B. Delamotte, .
M. Tissier, B. Delamotte, and D. Mouhanna, [in preparation. Equations available at]{}
[http://www.lpthe.jussieu.fr/${\tilde{\ }}$tissier.]{}
N. Tetradis and C. Wetterich, 197, (1992).
Th. Jolicœur and F. David, , 3148, (1996).
A. Dobry and H.T. Diep, , 6731, (1995).
---
author:
- |
Marina Rafajlovi[ć]{}$^{1,2,*}$, Anna Emanuelsson$^{1}$, Kerstin Johannesson$^{3,2}$, Roger K. Butlin$^{4,2}$, and Bernhard Mehlig$^{1,2}$\
$^1$*Department of Physics, University of Gothenburg, SE-412 96 Gothenburg, Sweden*\
$^2$*The Linnaeus Centre for Marine Evolutionary Biology, University of Gothenburg, SE-405 30 Gothenburg, Sweden*\
$^3$*Department of Marine Sciences, University of Gothenburg, Tjärnö SE-452 96 Strömstad, Sweden*\
$^4$*Department of Animal and Plant Sciences, University of Sheffield, Sheffield S10 2TN, UK*\
$^*$*Corresponding author, [[email protected]]{}*\
title: 'Supplementary information for “A universal mechanism generating clusters of differentiated loci during divergence-with-migration"'
---
Details about selection parameters used in the main results
===========================================================
In the main text, patterns of divergence for the model introduced in [**Materials and Methods**]{} are shown for two values of the selection parameter $\sigma$, that is, for $\sigma=4$, and $\sigma=2.5$. Selection is weaker in the former than in the latter case. In either case, the optimal trait values $\theta^{(1)}$ and $\theta^{(2)}$ in the two demes are set to $\theta^{(1)}=-\theta^{(2)}=2$. Therefore, for $\sigma=4$ we find that a perfectly adapted individual in one deme experiences a fitness disadvantage of about $0.39$ in comparison to perfectly adapted individuals in the other deme, and the fitness disadvantage of the first generation hybrids between perfectly adapted individuals in the two demes is about $0.12$ (also in relation to locally perfectly adapted individuals). Note that in earlier stages of divergence, the fitness disadvantage of the first generation hybrids in comparison to locally favourable individuals (that have not reached their optima) in either deme is smaller than $0.12$. In this respect, the extent of divergent selection in all stages of divergence under $\sigma=4$ in our model corresponds to weak levels of selection under the model of Feder et al. (2012) ($s_{\rm o}<0.12$ in their model). Under the stronger selection ($\sigma=2.5$) in our model, the corresponding fitness disadvantage of individuals that are perfectly adapted for the opposite deme, and of the first generation hybrids are about $0.72$, and $0.27$, respectively.
We also note that when selection is weak, that is, when $\sigma^2$ is large in comparison to the distance between the optima $\theta^{(1)}-\theta^{(2)}$, our fitness function (Eq. (1) in the main text) reduces to that used by Yeaman and Whitlock (2011), with $\gamma=2$, $\theta^{(1)}=-\theta^{(2)}=\theta$, and $\Phi=2\theta^2/\sigma^2$ in their model. In particular, the selection strength corresponding to setting $\sigma=2.5$ in our model is similar to that used by Yeaman and Whitlock (2011) for their parameter $\Phi=0.75$ (see above).
Finally, the standard deviation $\sigma_\mu$ of the Gaussian distribution from which mutation-effect sizes are drawn is set to $\sigma_\mu=0.05$ in the main text. With this choice, we find that in the first population (with the positive optimal phenotype) the selective advantage of a heterozygote with allele-effect sizes $0|0.05$ (the latter corresponding roughly to one mutation of effect-size $0.05$ landing on an allele of effect size $0$) over the homozygote with allele-effect sizes $0|0$ is about $0.006$ for the weaker selection tested ($\sigma=4$), whereas for the stronger selection ($\sigma=2.5$) it is about $0.016$ (which is approximately three times larger than under the weaker selection).
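The fitness disadvantages and selective advantages quoted in the two preceding paragraphs follow directly from the Gaussian fitness function of Eq. (1) in the main text (reproduced as Eq. (\[eq:fitness\]) in [**S4**]{}). A minimal script such as the following sketch (ours, not the simulation code used for the paper) reproduces the quoted numbers:

```python
import numpy as np

def fitness(z, theta, sigma):
    # Gaussian stabilising selection around the local optimum theta (Eq. (1) of the main text).
    return np.exp(-(z - theta) ** 2 / (2 * sigma ** 2))

theta = 2.0                       # optimal phenotypes are +2 and -2 in the two demes
for sigma in (4.0, 2.5):
    w_local = fitness(theta, theta, sigma)                        # perfectly locally adapted, z = +2
    dis_opposite = 1 - fitness(-theta, theta, sigma) / w_local    # individual adapted to the other deme, z = -2
    dis_hybrid = 1 - fitness(0.0, theta, sigma) / w_local         # first-generation hybrid, z = 0
    adv_het = fitness(0.05, theta, sigma) / fitness(0.0, theta, sigma) - 1  # 0|0.05 heterozygote vs 0|0 homozygote
    print(f"sigma = {sigma}: disadvantage (other deme) = {dis_opposite:.2f}, "
          f"hybrid disadvantage = {dis_hybrid:.2f}, heterozygote advantage = {adv_het:.3f}")
```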
Two-locus establishment model
=============================
We use a two-locus establishment model to analyse the importance of the establishment advantage of mutations landing close to a diverged locus in comparison to those landing at a distance. In these simulations we assume that one of the two loci has diverged before a new mutation lands in the genome. This locus is assumed to have alleles of effect sizes $Y_{\rm s} >0$ and $-Y_{\rm s} $ that are in a migration-selection balance. The initial frequencies of these two allele-effect sizes in the two populations are determined using a set of recursive deterministic equations (see [**S4**]{}). Note that in populations of infinite size, a balance between allele-effect sizes $Y_{\rm s} $ and $-Y_{\rm s} $ (both having nonzero frequencies, see [**S4**]{}) will be established and maintained under any migration rate due to the symmetries assumed in the model (Yeaman and Otto 2011) (but the allele frequencies depend on the migration rate). With these settings, the extent of local genomic divergence $D_{\rm s} $ at this locus prior to mutation is $D_{\rm s} =4Y_{\rm s} $. In addition, the second locus is assumed to have alleles of effect size zero prior to mutation. Therefore the extent of local genomic divergence $D_{\rm w} $ at this locus is equal to zero. Thereafter we assume that a mutation lands on the undifferentiated locus (or the diverged one, see below) in the population where it is locally beneficial, and we simulate the dynamics of genotype frequencies under drift to estimate the probability that the mutation successfully establishes in the populations. In these simulations, the mutation-effect size $\epsilon>0$ is assumed to be fixed and equal to the standard deviation $\sigma_\mu$ of the mutation-effect size distribution. Because the mutation of size $\epsilon>0$ is beneficial in the first population (with the positive optimal phenotype) we assume it lands in the first population. Thereafter, new mutations are not allowed. We neglect mutations landing in the population where they are locally deleterious because these are much less likely to establish successfully in comparison to mutations landing in the population where they are locally favourable. Each simulation is advanced until either the mutant allele experiences extinction, or until it becomes most common (frequency $>50\%$) at the locus in the population where it is beneficial. In the latter case, a successful establishment event is noted. We run at least $10^5$ such independent simulations, and the establishment probability is estimated as the proportion of successful establishment events among all independent runs made. We estimate the establishment probabilities of a new mutation landing at various recombination distances $r_j$ from the diverged locus. The values of $r_j$ are chosen as $r_j=j r$ ($j=0,\ldots,50$), where $r=0.0005$ corresponds to the recombination distance between adjacent loci set in Fig. 1. When $j=0$, a mutation lands on the diverged locus (‘stacking’ à la Yeaman and Whitlock (2011)). In this case, we additionally assume that a mutation lands on a locally favourable allele, giving rise to the mutant allele that is locally advantageous in comparison to either allele at this locus prior to the mutation. When $j=50$, the distance between the two loci corresponds to a half of the total recombination distance assumed in Fig. 1. Varying $Y_{\rm s} $ from zero to unity we approximate different stages of divergence from no divergence to perfect local adaptation in both populations. 
Results obtained in this model are shown in Figs. S2-S3.
Two-locus gain-loss model
=========================
In this appendix we explain the assumptions used in the two-locus gain-loss model that is introduced in the main text. In this model the two loci are assumed to be at a recombination distance $r_j>0$. Both loci are assumed to be differentiated initially, one with a stronger and the other with a weaker extent of divergence. Each locus has two alleles with effect sizes that are symmetric around zero ($Y_{\rm s} >0$ and $-Y_{\rm s} $ at the more strongly diverged locus, and $0<Y_{\rm w} <Y_{\rm s} $ and $-Y_{\rm w} $ at the weakly diverged one). We set the allele-effect sizes at the weakly diverged locus to $Y_{\rm w}=\sigma_\mu$, and $-Y_{\rm w}$. We choose this value as a representative beneficial mutation-effect size in the first population in a situation when mutation-effect sizes are drawn from a Gaussian distribution with a zero mean and a standard deviation $\sigma_\mu$ (assuming that when divergence starts all loci have alleles with effect sizes zero). Mutation-effect sizes much smaller than this value appear with a higher probability, but they suffer from a lower establishment probability. By contrast, mutation-effect sizes larger than this value appear with a much smaller probability. The initial haplotype frequencies are assumed to be equal to those in the deterministically expected stable steady state of the system (see [**S4**]{}). After initialisation we run two sets of simulations. In one we aim to estimate the rates of local loss at the two loci (neglecting new mutations). In the other we aim at estimating the rate of local gain upon introducing a mutation. These two sets of simulations are described in the main text, where we also show and discuss the results obtained under the gain-loss model.
Deterministic approximation for a two-locus model {#app:twoloci}
=================================================
In this appendix we list a set of recursive two-locus deterministic equations for adaptive divergence that we use to determine haplotype frequencies at the start of simulations of the establishment, and gain-loss models introduced in [**Materials and Methods**]{} in the main text. The deterministic approximation for the dynamics of haplotype frequencies is valid in the limit of infinitely large populations.
The main assumptions of the model of adaptive divergence are introduced in the main text. The populations are assumed to be diploid and of equal size $N$ that is constant over time. For purposes of this appendix, we assume here that $N\rightarrow\infty$. The environmental conditions are assumed to be different in the two demes, so that a given phenotype is under divergent selection. The optimal phenotype in the first (second) deme is denoted by $\theta^{(1)}$ ($\theta^{(2)}$), and we assume that $\theta^{(1)}>0$, and $\theta^{(2)}=-\theta^{(1)}$. In the two-locus model, the phenotype of an individual is assumed to be determined by the diploid genotype at two adaptive loci. Each allele at a given locus is assigned an allele-effect size by which it additively contributes to the phenotype. Selection is assumed to be soft, so that a contribution of individual $i$ with phenotype $z$ to the gamete pool in deme $k$ is proportional to the fitness $w_i^{(k)}$ of this individual relative to the fitness of all individuals in this deme, where
$$\label{eq:fitness}
w_i^{(k)}={\rm e}^{-\frac{(z-\theta^{(k)})^2}{2\sigma^2}}\,\,.$$
The strength of selection is determined by the parameter $\sigma$ in such a manner that selection is weaker when $\sigma$ is larger, and vice versa. In the model, individuals firstly migrate to the opposite deme at a rate $m$ per individual, generation. Thereafter, random mating, recombination and selection occur locally within each deme. Recombination is assumed to occur at a rate $r$ per gamete, individual, generation. When $r=0$, the model described corresponds to a single-locus model.
In this appendix we assume that each locus has two possible alleles, and we aim at estimating the haplotype frequencies in the steady state of the system. The effect sizes of these alleles are assumed to be symmetric around zero, and we denote them by $Y_{\rm s} >0$ and $-Y_{\rm s} $ at one locus, and $Y_{\rm w} >0$ and $-Y_{\rm w} $ at the other locus. We assume that $Y_{\rm s} $, and $Y_{\rm w} $ are advantageous over $-Y_{\rm s} $, and $-Y_{\rm w} $, respectively in the first population. The opposite is true in the second population. When $Y_{\rm s} =Y_{\rm w} $ the two loci do not differ in the extents of their divergence, whereas for $Y_{\rm s} >Y_{\rm w} $, the first locus has a higher extent of divergence than the second one. In what follows, we use a deterministic approximation to find the haplotype (and allele) frequencies at the two loci in the stable steady state.
When two divergent populations are initialised with allele-effect sizes $x_1=Y_{\rm s} $ and $x_2=-Y_{\rm s} $ at one locus, and with $y_1=Y_{\rm w} $ and $y_2=-Y_{\rm w} $ at the other locus, a deterministic approximation shows that each locus establishes a stable dimorphism (see also Yeaman and Otto (2011)). This conclusion can be arrived at by iterating a system of recursive equations for the evolution of frequencies $p^{(k)}_{x_i,y_j;\tau}$, of haplotypes $x_i,y_j$ ($i,j=1,2$) in the two populations ($k=1,2$) from generation $\tau$ to generation $\tau+1$. The dynamics are fully determined by a set of six equations. For simplicity, however, we show here the corresponding equation for $p^{(1)}_{x_1,y_1;\tau+1}$ noting that the remaining five equations are obtained similarly:
$$\begin{aligned}
p^{(1)}_{x_1,y_1;\tau+1}=&\Bigl[(1-m) \Bigl(p^{(1)}_{x_1,y_1;\tau}\Bigr)^2+m \Bigl(p^{(2)}_{x_1,y_1;\tau}\Bigr)^2\Bigr]\frac{w^{(1)}_{x_1,y_1|x_1,y_1}}{\langle w^{(1)}_\tau \rangle}\nonumber\\
&+\Bigl[(1-m) p^{(1)}_{x_1,y_1;\tau} p^{(1)}_{x_1,y_2;\tau} +m p^{(2)}_{x_1,y_1;\tau}p^{(2)}_{x_1,y_2;\tau} \Bigr]\frac{w^{(1)}_{x_1,y_1|x_1,y_2}}{\langle w^{(1)}_\tau \rangle}\nonumber\\
&+\Bigl[(1-m) p^{(1)}_{x_1,y_1;\tau} p^{(1)}_{x_2,y_1;\tau} +m p^{(2)}_{x_1,y_1;\tau}p^{(2)}_{x_2,y_1;\tau} \Bigr]\frac{w^{(1)}_{x_1,y_1|x_2,y_1}}{\langle w^{(1)}_\tau \rangle}\nonumber\\
&+r\Bigl[(1-m) p^{(1)}_{x_1,y_2;\tau} p^{(1)}_{x_2,y_1;\tau} +m p^{(2)}_{x_1,y_2;\tau}p^{(2)}_{x_2,y_1;\tau} \Bigr]\frac{w^{(1)}_{x_1,y_2|x_2,y_1}}{\langle w^{(1)}_\tau \rangle}\nonumber\\
&+(1-r)\Bigl[(1-m) p^{(1)}_{x_1,y_1;\tau} p^{(1)}_{x_2,y_2;\tau} +m p^{(2)}_{x_1,y_1;\tau}p^{(2)}_{x_2,y_2;\tau} \Bigr]\frac{w^{(1)}_{x_1,y_1|x_2,y_2}}{\langle w^{(1)}_\tau \rangle}\,\,.\label{eq:1}\end{aligned}$$
Here, $\langle w^{(1)}_\tau \rangle$ denotes the average fitness of parents in the first population in generation $\tau$ and it is given by
$$\langle w^{(1)}_\tau\rangle=\sum_{i=1}^2\sum_{j=1}^2\sum_{l=1}^2\sum_{a=1}^2 \Bigl[(1-m) p^{(1)}_{x_i,y_j;\tau}p^{(1)}_{x_l,y_a;\tau}+m p^{(2)}_{x_i,y_j;\tau}p^{(2)}_{x_l,y_a;\tau}\Bigr]w^{(1)}_{x_i,y_j|x_l,y_a}\,\,,$$
where the subscripts $i$ and $j$ denote alleles at the two loci at one chromosome (similarly, $l=1,2$ and $a=1,2$ are used for the corresponding pair at the other chromosome). The superscript $k=1,2$ stands for the first and second population, respectively. The fitnesses $w^{(k)}_{x_i,y_j|x_l,y_a}$ ($i,j,k,l,a=1,2$) are given by Eq. (\[eq:fitness\]) with $z=x_i+y_j+x_l+y_a$.
Using Eq. (\[eq:1\]) and the remaining five equations of the system, we recursively find the state to which the system eventually relaxes within a predetermined numerical precision. As a stopping condition for finding this state we require that during $1000$ successive generations none of the allele frequencies changes by more than $10^{-8}$. The maximum number of generations for finding the steady state is set to $10^5$. For all parameter values tested, we find a stable dimorphism with nonzero allele frequencies at both loci, as well as linkage disequilibrium between the loci, the extent of which can be determined from the haplotype frequencies obtained.
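For completeness, the deterministic recursion defined by Eq. (\[eq:1\]) and its five companions can be iterated with a short script. The sketch below is our own illustrative implementation (the per-generation convergence test and the symmetric initialisation are simplifications of the procedure described above):

```python
import itertools
import numpy as np

def fitness(z, theta, sigma):
    # Gaussian fitness function defined above, with z the sum of the four allele effects.
    return np.exp(-(z - theta) ** 2 / (2 * sigma ** 2))

def steady_state(Ys, Yw, m, r, sigma, theta=2.0, tol=1e-8, max_gen=10**5):
    """Deterministic two-locus haplotype frequencies at migration-selection balance.

    Haplotypes are (x, y) with x in {Ys, -Ys} and y in {Yw, -Yw}; the returned
    array p has shape (deme, haplotype).
    """
    haps = [(x, y) for x in (Ys, -Ys) for y in (Yw, -Yw)]
    thetas = (theta, -theta)
    p = np.full((2, 4), 0.25)                   # all four haplotypes initially present
    for _ in range(max_gen):
        new = np.zeros_like(p)
        for k in range(2):                      # deme in which offspring are produced
            kk = 1 - k                          # the other deme (source of migrants)
            for a, b in itertools.product(range(4), repeat=2):   # ordered parental haplotypes
                # parental genotype frequency after migration, as in the recursion above
                freq = (1 - m) * p[k, a] * p[k, b] + m * p[kk, a] * p[kk, b]
                if freq == 0.0:
                    continue
                w = fitness(sum(haps[a]) + sum(haps[b]), thetas[k], sigma)
                (xa, ya), (xb, yb) = haps[a], haps[b]
                # gametes: parental types with prob (1-r)/2 each, recombinants with prob r/2 each
                for gam, prob in (((xa, ya), (1 - r) / 2), ((xb, yb), (1 - r) / 2),
                                  ((xa, yb), r / 2), ((xb, ya), r / 2)):
                    new[k, haps.index(gam)] += freq * w * prob
            new[k] /= new[k].sum()              # normalisation by the mean fitness of deme k
        if np.max(np.abs(new - p)) < tol:       # simplified stopping rule (per-generation change)
            return new
        p = new
    return p

# Illustration: a strongly diverged locus (Ys = 1) and a weak one (Yw = 0.05), strong selection.
print(np.round(steady_state(Ys=1.0, Yw=0.05, m=0.1, r=5e-4, sigma=2.5), 4))
```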
In the special case when $r=0$ and the system is initialised with haplotypes $Y_{\rm s}, Y_{\rm w} $ and $-Y_{\rm s},-Y_{\rm w} $ (effectively corresponding to allele-effect sizes $Y_{\rm s} +Y_{\rm w} $ and $-Y_{\rm s} -Y_{\rm w} $ at a single locus), a deterministic approximation shows that, independently of the migration rate, the stable steady state of the system corresponds to both alleles having nonzero frequencies (but the actual frequencies depend on the migration rate and on the selective advantage of the locally beneficial allele). This stems from the analysis performed in Yeaman and Otto (2011) upon assuming symmetric migration in their model, and that the alleles labelled $A$ and $a$ have effect sizes $-Y_{\rm s} -Y_{\rm w} $ and $Y_{\rm s} +Y_{\rm w} $, respectively. A stable dimorphism in the single-locus case is a consequence of the symmetries assumed in the model. As mentioned already, the allele frequencies in the stable state attained depend on the migration rate, so that the frequency of locally favourable alleles is smaller under stronger migration.
We finalise this appendix by noting that we use the equations given above only to estimate the initial haplotype (and hence allele) frequencies in simulations under the two-locus establishment model, and under the two-locus gain-loss model introduced in the main text. All simulations are otherwise stochastic and performed under random genetic drift.
References
==========
[.25in]{}[1]{} Feder, J. L., R. Gejji, S. Yeaman, and P. Nosil. 2012. Establishment of new mutations under divergence and genome hitchhiking. Philos. Trans. R. Soc. Lond. B Biol. Sci. 367(1587): 461–474.
[.25in]{}[1]{} Yeaman, S., and S. P. Otto. 2011. Establishment and maintenance of adaptive genetic divergence under migration, selection, and drift. Evolution 65(7): 2123–2129.
[.25in]{}[1]{} Yeaman, S. and M. C. Whitlock. 2011. The genetic architecture of adaptation under migration-selection balance. Evolution 65(7): 1897–1911.
Supplementary Legends
=====================
[**Figure S1.**]{} Same as in Fig. 1[**A**]{} in the main text, but here patterns from two different realisations are shown. For clarity, grey lines here depict the total extent of divergence in intervals of $250$ generations (whereas Fig. 1[**A**]{} shows all measures in intervals of $50$ generations). For the explanation and parameter values used, refer to Fig. 1[**A**]{} in the main text.
[**Figure S2.**]{} Results of the two-locus establishment model. Shown is the probability of establishment of a new mutation of a fixed size landing at an undifferentiated locus as a function of the distance between this locus and the locus that is differentiated prior to the mutation (measured in units of the recombination rate $r$). The establishment probability at distance zero corresponds to the mutation landing at the already differentiated locus. Dashed lines show the probability $1/(2N)$ of fixation of a neutral mutation at a neutral locus in a diploid population of size $N$. Panels differ by the extent of divergence $D_{\rm s} $ at the more strongly diverged locus prior to the mutation. Selection is weaker in [**A**]{}, [**C**]{}, [**E**]{}, [**G**]{} ($\sigma=4$) than in [**B**]{}, [**D**]{}, [**F**]{}, [**H**]{} ($\sigma=2.5$). Remaining parameter values: population size in each deme $N=1000$, migration rate $m=0.1$, recombination rate $r=5\cdot 10^{-4}$, mutation-effect size $\epsilon=0.05$, $10^6$ independent simulations in [**A**]{}, [**C**]{}, [**E**]{}, [**G**]{}, and $10^5$ in [**B**]{}, [**D**]{}, [**F**]{}, [**H**]{}.
[**Figure S3.**]{} Establishment bias in the two-locus establishment model. Shown is the integral of the establishment probability over distances outside of a given region around the diverged locus relative to the integral over distances within the region, as a function of the proportion that the region accounts for (out of $L=100$ loci). Dashed line indicates the ratio of unity. Note that in [**A**]{} the ratio is, as expected, below unity (approximately $0.95$) when the proportion of the region around the diverged locus is equal to $0.5$, but this is difficult to observe due to the scale of the $y$-axis used. Selection is weaker in [**A**]{} ($\sigma=4$) than in [**B**]{} ($\sigma=2.5$). Parameters: mutation-effect size $\epsilon=0.05$, the extent of divergence at the already diverged locus $D_{\rm s}=0.4$, population size in each deme $N=1000$, migration rate $m=0.1$, $10^6$ independent simulations in [**A**]{}, and $10^5$ in [**B**]{}.
[**Figure S4.**]{} Patterns of divergence under the parameter values similar to those in Fig. 1[**A**]{}, [**B**]{} in the main text, but here the recombination distance between adjacent loci is two times larger ($r=0.001$). Shown are the results from a single realisation in [**A**]{}, and averages over $54$ independent realisations in [**B**]{}. Remaining parameter values are the same as in Fig. 1[**A**]{} in the main text.
[**Figure S5.**]{} Effect of drift. Same as in Fig. 1 in the main text, but here we contrast the results obtained under a small population size ($N=200$, [**A**]{}, [**B**]{}), and a large population size ($N=1000$, [**C**]{}, [**D**]{}). The mutation rate is set so that its value scaled by the corresponding population size is equal in the two cases ($\mu=10^{-4}$ in [**A**]{} and [**B**]{}, or $\mu=2\cdot10^{-5}$ in [**C**]{} and [**D**]{}). In both cases, the selection parameter is $\sigma=3.5$. Other parameter values are the same as in Fig. 1 in the main text.
[**Figure S6.**]{} Same as in Fig. 3 in the main text but for the selection parameter $\sigma=3.5$ (corresponding to that used in Fig. S5). For the explanation of the figure refer to Fig. 3 in the main text. Panels differ by the initial extent of divergence $D_{\rm s}$ at the more strongly diverged locus. Population size: $N=200$ (in [**A**]{}, [**C**]{}, [**E**]{}, [**G**]{} and [**I**]{}), and $N=1000$ (in [**B**]{}, [**D**]{}, [**F**]{}, [**H**]{} and [**J**]{}). Mutation rate: $\mu=10^{-4}$ (in [**A**]{}, [**C**]{}, [**E**]{}, [**G**]{} and [**I**]{}), and $2\cdot 10^{-5}$ (in [**B**]{}, [**D**]{}, [**F**]{}, [**H**]{} and [**J**]{}). Mutation-effect size: $\epsilon=0.05$. For each parameter combination, the rate of gain is estimated based on $10^5$ independent simulations. The rate of loss is estimated using $10^3$ independent simulations (in [**A**]{}, [**C**]{}, [**E**]{}, [**G**]{}, [**I**]{}), or $500$ simulations (in [**B**]{}, [**D**]{}, [**F**]{}, [**H**]{}, [**J**]{}). Other parameters are the same as in Fig. S5.
[**Figure S7.**]{} Patterns of divergence under the parameter values similar to those in Fig. 1[**C**]{} in the main text but with $20$ times more adaptive loci ($L=2000$), and $20$ times smaller variance $\sigma_\mu^2$ of mutation-effect sizes. Panel [**A**]{}: the extent of divergence at all loci in a single stochastic realisation of the model. The solid line shows the total extent of divergence in the underlying single realisation of the model (the values are depicted on the $y$-axis on the right). Panel [**B**]{}: correlations at pairs of loci as a function of time and the distance between them (measured in units of recombination rate $r$) averaged over $10$ independent realisations. The corresponding average total extent of divergence is shown by the solid line. Panel [**C**]{}: same as in [**A**]{}, but depicting a cluster of diverged loci that accounts for most of the total extent of divergence in the realisation shown. Loci are assumed to reside on two chromosomes, so that the loci labelled $1,\ldots,1000$ are on one chromosome, and loci labelled $1001,\ldots,2000$ are on the other. Root mean square of mutation-effect sizes: $\sigma_\mu=0.05/\sqrt{20}$. Remaining parameters are the same as in Fig. 1[**C**]{}.
[**Figure S8.**]{} Same as in Fig. 3 in the main text but for the parameters corresponding to those in Fig. S7. Panels differ by the initial extent of divergence $D_{\rm s}$ at the more strongly diverged locus. Mutation-effect size: $\epsilon=0.05/\sqrt{20}$. The extent of divergence $D_{\rm w} $ at the weakly diverged locus is set to $D_{\rm w}=4\epsilon$ (i. e. $D_{\rm w}=0.2/\sqrt{20}$). The rate of gain is estimated using $10^5$ independent simulations. The rate of loss is based on $200$ simulations. Other parameters are the same as in Fig. S7.
[**Figure S9.**]{} Patterns of divergence for the parameter values corresponding to those in Figure 1 in the main text, but here mutation-effect sizes are drawn from an exponential distribution mirrored around zero (that is, positive and negative effects are assumed to be equally likely). For the explanation of the results shown refer to the caption of Figure 1 in the main text. Number of independent realisations in panels [**B**]{}, [**D**]{}: $50$. Remaining parameter values are the same as in Fig. 1 in the main text.
[**Figure S10.**]{} A comparison between patterns of divergence in a single stochastic realisation of the model, but shown using two different measures for the extent of divergence at locus $j$, that is, in [**A**]{} we use the measure $D_l$ introduced in the main text (the total extent of divergence is equal to $\sum_{l=1}^L D_l$), and in [**B**]{} we use instead twice the difference between average allele-effect sizes at locus $l$ in the two populations (the average extent of divergence is equal to the sum of average allele-effect sizes at all $L$ loci simulated). All parameter values correspond to those in Figure 1[**C**]{} in the main text, but here the result of a different stochastic realisation is shown. For further explanation of the results shown refer to the caption of Figure 1[**C**]{} in the main text.
Supplementary Figures
=====================
[![\[fig:difr\_real\_s4p0\] ](figs/Figure_S1_Rafajlovic.eps "fig:"){width="16.5cm"}]{}
[![\[fig:IN\_est\] ](figs/Figure_S2_Rafajlovic.eps "fig:"){width="16.5cm"}]{}
[![\[fig:IN\_est1\]](figs/Figure_S3_Rafajlovic.eps "fig:"){width="16.5cm"}]{}
[![\[fig:r0p001\]](figs/Figure_S4_Rafajlovic.eps "fig:"){width="16.5cm"}]{}
[![\[fig:drift\]](figs/Figure_S5_Rafajlovic.eps "fig:"){width="16.5cm"}]{}
[![\[fig:loss\_gain\]](figs/Figure_S6_Rafajlovic.eps "fig:"){width="16.5cm"}]{}
[![\[fig:2000Loci\_scMu\]](figs/Figure_S7_Rafajlovic.eps "fig:"){width="16.5cm"}]{}
[![\[fig:loss\_gain\_L2000\] ](figs/Figure_S8_Rafajlovic.eps "fig:"){width="8cm"}]{}
---
abstract: 'We consider the problem of rate and power allocation for a sensor network under the pairwise distributed source coding constraint. For noiseless source-terminal channels, we show that the minimum sum rate assignment can be found by finding a minimum weight arborescence in an appropriately defined directed graph. For orthogonal noisy source-terminal channels, the minimum sum power allocation can be found by finding a minimum weight matching forest in a mixed graph. Numerical results are presented for both cases showing that our solutions always outperform previously proposed solutions. The gains are considerable when source correlations are high.'
author:
- |
\
[^1] [^2]
bibliography:
- 'tip.bib'
- 'RGBIB.bib'
title: Rate and power allocation under the pairwise distributed source coding constraint
---
distributed source coding, Slepian-Wolf theorem, matching forest, directed spanning tree, resource allocation
Introduction {#sec:intro}
============
The availability of low-cost sensors has enabled the emergence of large-scale sensor networks in recent years. Sensor networks typically consist of sensors that have limited power and are moreover energy constrained since they are usually battery-operated. The data that is sensed by sensor networks and communicated to a terminal[^3] is usually correlated. Thus, for sensor networks it is important to allocate resources such as rates and power by taking the correlation into account. The famous Slepian-Wolf theorem [@slepianwolf] shows that the distributed compression (or distributed source coding) of correlated sources can in fact be as efficient as joint compression. Coding techniques that approach the Slepian-Wolf bounds have been investigated [@pradhandiscus] and their usage proposed in sensor networks [@xiongspmag]. Typically one wants to minimize metrics such as the total rate or total power expended by the sensors in such situations. A number of authors have considered problems of this flavor [@razvan; @ramamoorthy07; @cristescuBV05]. These papers assume the existence of Slepian-Wolf codes that work for a large number of sensors.
In practice, the design of low-complexity Slepian-Wolf codes is well understood only for the case of two sources (denoted $X$ and $Y$) and there have been constructions that are able to operate on the boundary of the Slepian-Wolf region. In particular, the design of codes (e.g., [@liverisXG02],[@aaronG02],[@schonbergRP02]) is easiest for the corner points (asymmetric Slepian-Wolf coding) where the rate pair is either $(H(X), H(Y|X))$ or $(H(X|Y), H(Y))$. Several symmetric code designs are proposed in [@schonbergRP04],[@totozarasoaRG08],[@baiYBH08] in which the authors mainly focus on two correlated sources. In [@liverisXG02], the correlation between two binary sources is assumed to be symmetric and the LDPC code is designed for a virtual BSC correlation channel, while the codes designed in [@schonbergRP02], [@schonbergRP04] and [@totozarasoaRG08] are suitable for arbitrary correlation between the two binary sources. The authors of [@stankovicLXG06] proposed code designs for multiple sources. For two uniformly distributed binary sources whose correlation can be modeled as a BSC channel, their design supports both symmetric and asymmetric coding and approaches the Slepian-Wolf bound. However, when it comes to more than two sources, in order to achieve the optimum rate (joint entropy), they make a strong assumption on the correlation model, namely, that the correlation among all the sources is solely described by their modulo-2 sum. Thus, given the current state of the art in code design, it is of interest to consider coding strategies for sensor networks where pairs of nodes can be decoded at a time instead of all at once. This observation was made in the work of Roumy and Gesbert in [@roumyG07jour]. In that work they formulated the pairwise distributed source coding problem and presented algorithms for rate and power allocation under different scenarios. In particular, they considered the case when there exist direct channels between each source node and the terminal. Furthermore, the terminal can only decode the sources pairwise. We briefly review their work below. The work of [@roumyG07jour] considers two cases.
- [*Case 1 - Noiseless node-terminal channels.*]{}\
Under this scenario, they considered the problem of deciding which particular nodes should be decoded together at the terminal and their corresponding rate allocations so that the total sum rate is minimized.
- [*Case 2 - Orthogonal noisy node-terminal channels.*]{}\
In this case the channels were assumed to be noisy and orthogonal and the objective was to decide which nodes would be paired so that overall power consumption is minimized.
In [@roumyG07jour], the problem was mapped onto the problem of choosing the minimum weight matching [@kleinbergT05] of an appropriately defined weighted undirected graph. Each node participates in joint decoding only once.
In this paper we consider a class of pairwise distributed source coding solutions that is larger than the ones considered in [@roumyG07jour]. The basic idea is that previously decoded data can be used as side information for other sources. A simple example demonstrates that it is not necessary to only consider matchings. Consider four correlated sources $X_1, X_2, X_3$ and $X_4$. The solution of [@roumyG07jour] constructs a complete graph on the four nodes $X_1, \dots, X_4$ and assigns the edge weights as the joint entropies, i.e., the edge $(X_i, X_j)$ is assigned weight $H(X_i, X_j)$. A minimum weight matching algorithm is then run on this graph to find the minimum sum rate and the rate allocation. Suppose that this yields the matching $(X_1, X_3)$ and $(X_2, X_4)$ so that the sum rate becomes $$\sum_{i = 1}^4 R_i = H(X_1, X_3) + H(X_2, X_4).$$ Since conditioning reduces entropy, it is simple to observe that
$$H(X_1, X_3) + H(X_2, X_4) \geq H(X_1) + H(X_3|X_1) + H(X_2|X_3) + H(X_4 |X_2).$$
We now show that an alternative rate allocation: $R_1 =
H(X_1), R_2 = H(X_2|X_3), R_3 = H(X_3|X_1)$ and $R_4 = H(X_4|X_2)$ can still allow pairwise decoding of the sources at the terminal. Note that at the decoder we have,
- $X_1$ is known since $R_1 = H(X_1)$.
- $X_3$ can be recovered by jointly decoding for $X_3$ and $X_1$ since $X_1$ is known and the decoder has access to $H(X_3|X_1)$ amount of data.
- $X_2$ can be recovered since $X_3$ is known (from above) and the decoder has access to $H(X_2|X_3)$ amount of data.
- Similarly, $X_4$ can be recovered.
As we see above, the sources can be decoded at the terminal in a pipelined manner. Note that we can leverage the coding solutions proposed for two sources at the corner points in this case since the encoder for $X_3$ can be designed assuming that $X_1$ is known perfectly, the encoder for $X_2$ can be designed assuming that $X_3$ is known perfectly etc. The method of source-splitting [@rimoldi_Source_Sp; @source_splitting] is closely related to this approach. Given $M$ sources and an arbitrary rate point in their Slepian-Wolf region, it converts the problem into a rate allocation at a Slepian-Wolf corner point for appropriately defined $2M - 1$ sources. However as pointed out before, code designs even for corner points are not that well understood for more than two sources. Thus, while using source-splitting can result in sum-rate optimality i.e. the sum rate is the joint entropy, it may not be very practical given the current state of the art. Moreover, for $M$ sources it requires the design of approximately twice as many encoders and more decoding sub-modules that also comes at the cost of complexity.
In this paper, motivated by complexity issues, we present an alternate formulation of the pairwise distributed source coding problem that is more general than [@roumyG07jour]. We demonstrate that for noiseless channels the minimum sum rate allocation problem becomes one of finding a minimum weight arborescence of an appropriately defined directed graph. Next, we show that in the case of noisy channels, the minimum sum power allocation problem can be mapped onto finding the minimum weight matching forest of an appropriately defined mixed graph[^4]. Simulation results show that our solutions are significantly better than those in [@roumyG07jour] in the cases when correlations are high.
This paper is organized as follows. We formulate the problem and briefly review previous solutions based on matching in Section \[sec:formulation\]. In Sections \[sec:noiseless-case\] and \[sec:noisy-case\] we present our solutions for noiseless and noisy channels, respectively. Numerical results for both cases are given in Section \[sec:results\], and Section \[sec:conclusion\] concludes the paper.
Problem formulation and overview of related work {#sec:formulation}
================================================
Consider a set of correlated sources $X_1, X_2, \dots, X_n$ transmitting data to one sink in a wireless sensor network. We assume that every source can transmit data directly to the terminal. The source $X_i$ compresses its data at rate $R_i$ and sends it to the sink. We assume that the sources encode only their own data. Furthermore, we consider the class of solutions where the sink can recover a given source with the help of at most one other source. The problem has two cases.
- [*Case 1 - Noiseless node-terminal channels.*]{}\
Assume that there is no noise in the channel. In order to reduce the storage requirement at the sensors, we want to minimize the sum rate, i.e., $\min \sum_{i=1}^{n} R_i$.
- [*Case 2 - Orthogonal noisy node-terminal channels.*]{}\
    Assume that the channels between the sources and the sink are corrupted by additive white Gaussian noise and there is no internode interference. In this case, source-channel separation holds [@barrosS06]. The capacity of the channel between node $i$ and the sink with transmission power $P_i$ and channel gain $\gamma_i$ is $C_i(P_i)\triangleq \log(1+\gamma_iP_i)$, where the noise power is normalized to one and the channel gains are constants known to the terminal. The rate $R_i$ should satisfy $R_i\leq C_i(P_i)$. Let $[n]$ denote the index set $\{1,\ldots,n\}$. The transmission power is constrained by a peak power constraint: $\forall i\in [n], P_i\leq P_{max}$. In this context, our objective is to minimize the sum power, i.e., $\min \sum_{i=1}^{n} P_i$. Note that, from a practical implementation point of view, we can use joint distributed source and channel coding [@GFZZ07; @ZhongGF05] once the pairings of nodes involved in joint decoding are known from the resource allocation solution.
We now overview the work of [@roumyG07jour]. For the noiseless case, in order for the terminal to recover the data perfectly, the rates for a pair of nodes $i$ and $j$ should be in the Slepian-Wolf region $$SW_{ij}\triangleq \left\{ (R_i,R_j) : R_i\geq H(X_i|X_j), R_j\geq H(X_j|X_i), R_i+R_j\geq H(X_i,X_j) \right\}.$$ Note that $H(X_i,X_j)$ is the minimum sum rate when $i$ and $j$ are paired to perform joint decoding. The matching solution of the problem is as follows. Construct an undirected complete graph $G=(V,E)$, where $|V| = n$. Let $W_E(i,j)$ denote the weight on undirected edge $(i,j)$, $W_E(i,j)=H(X_i,X_j)$. Then, find a minimum weight matching $\mathcal{P}$ of $G$. For $(i,j)\in \mathcal{P}$, the optimal rate allocation $(R_i, R_j)$ can be any point on the slope of the SW region of nodes $i$ and $j$ since all such points give the same sum rate for the pair. We can simply set $(R_i, R_j)$ for $(i,j)\in \mathcal{P}$ to be either $(H(X_i), H(X_j|X_i))$ or $(H(X_j), H(X_i|X_j))$, i.e., at the corner points of the SW region. For the noisy case, the rate region for a pair of nodes is the intersection of the SW region and the capacity region $C_{ij}$: $C_{ij}(P_i,P_j)\triangleq \{(R_i, R_j): R_i\leq C_i(P_i), R_j\leq C_j(P_j)\}$. It is easy to see that for a node $i$ with rate $R_i$ and power $P_i$, at the optimum $R_i^*=C_i(P_i^*)$, i.e., the constraint $R_i\leq C_i(P_i)$ is met with equality. Thus, the power assignment is given by the inverse function of $C_i$, which we denote by $Q_i(R_i)$, i.e., $P_i^*= Q_i(R_i^*)=(2^{R_i^*}-1)/\gamma_i$. This problem can also be solved by finding a minimum weight matching on an undirected graph. However, the weights in this case are the minimum sum power for each pair of nodes. The solution has two steps:
1. Find optimal rate-power allocations for all possible node pairs: $\forall (i,j)\in [n]^2$ s.t. $i<j$: $$\label{eq:NoisyRGS1}
(R_{ij}^*(i),R_{ij}^*(j))=\arg\min Q_i(R_{ij}(i))+Q_j(R_{ij}(j))$$ $$s.t. (R_{ij}(i),R_{ij}(j))\in SW_{ij}\cap C_{ij}(P_{max},P_{max})$$ The power allocations are given by $P_{ij}^*(i)=Q_i(R_{ij}^*(i))$ and $P_{ij}^*(j)=Q_j(R_{ij}^*(j))$. The rates $R_{ij}(i),R_{ij}(j)$ are the rates for node $i$ and node $j$ when $i$ and $j$ are paired. Note that when $i$ and another node $k\neq
j$ are considered as a pair, the rate for $i$ may be different, i.e., $R_{ij}(i)\neq R_{ik}(i)$.
2. Construct an undirected complete graph $G= (V,E)$, where $W_E(i,j)=P_{ij}^*(i)+P_{ij}^*(j)$ for edge $(i,j)$, and find a minimum matching $\mathcal{P}$ in $G$. The power allocation for node pair $(i,j)\in \mathcal{P}$ denoted by $(P_i, P_j)$ is $(P_{ij}^*(i),P_{ij}^*(j))$ and the corresponding rate allocation can be found.
The solution for step (1) is given in [@roumyG07jour] and denoted as $(P_{ij}^*(i), P_{ij}^*(j), R_{ij}^*(i), R_{ij}^*(j))$. This solution is the optimum rate-power allocation between a pair of nodes $i$ and $j$ under the peak power constraint and SW region constraint. Note that in this case, the rate assignments for $i$ and $j$ do not necessarily happen at the corner of the SW region.
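Since each $Q_i$ is increasing in $R_i$, the optimum of the problem in step (1), whenever it is feasible, can always be taken on the dominant face $R_i+R_j=H(X_i,X_j)$ of the SW region, so a one-dimensional search reproduces it numerically. The following sketch is our own illustration (with hypothetical entropies and channel gains), not the closed-form solution of [@roumyG07jour]:

```python
import numpy as np

def Q(R, gamma):
    # Power needed to support rate R on an AWGN channel with C(P) = log2(1 + gamma * P).
    return (2.0 ** R - 1.0) / gamma

def pairwise_min_power(Hi, Hj, Hij, gamma_i, gamma_j, Pmax, grid=2001):
    """Minimum sum power for a jointly decoded pair (i, j), by a 1-D search over the
    dominant face R_i + R_j = H(X_i, X_j) of the Slepian-Wolf region."""
    best = None
    for Ri in np.linspace(Hij - Hj, Hi, grid):       # H(X_i|X_j) <= R_i <= H(X_i)
        Rj = Hij - Ri
        Pi, Pj = Q(Ri, gamma_i), Q(Rj, gamma_j)
        if Pi <= Pmax and Pj <= Pmax and (best is None or Pi + Pj < best[0]):
            best = (Pi + Pj, Ri, Rj, Pi, Pj)
    return best                                      # None if the pair is infeasible under Pmax

# Hypothetical numbers, purely for illustration: H(X_i) = H(X_j) = 1, H(X_i, X_j) = 1.5.
print(pairwise_min_power(Hi=1.0, Hj=1.0, Hij=1.5, gamma_i=1.0, gamma_j=0.5, Pmax=3.0))
```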
Noiseless case {#sec:noiseless-case}
==============
As shown by the example in Section \[sec:intro\], the rate allocation given by matching may not be optimum and in fact there exist other schemes that have a lower rate while still working with the current coding solutions to the two source SW problem. We now present a formal definition of the pairwise decoding constraint.
\[def:pairwise\_prop\] [*Pairwise property of rate assignment.*]{} Consider a set of discrete memoryless sources $X_1, X_2, \dots,
X_n$ and the corresponding rate assignment $\mathbf{R} = (R_1,
R_2, \dots, R_n)$. The rate assignment is said to satisfy the pairwise property if for each source $X_i, i\in [n]$, there exists an ordered sequence of sources $(X_{i_1}, X_{i_2}, \dots,
X_{i_k})$ such that $$\begin{aligned}
R_{i_1} &\geq H(X_{i_1}), \label{eq:pairwise_prop_1}\\
R_{i_j} & \geq H(X_{i_j} | X_{i_{j-1}}) \text{,~~ for $2 \leq j
\leq k$, and } \label{eq:pairwise_prop_2}\\
R_{i} & \geq H(X_{i} | X_{i_{k}}).
\label{eq:pairwise_prop_3}\vspace{-2mm}\end{aligned}$$
Note that a rate assignment that satisfies the pairwise property allows the possibility that each source can be reconstructed at the decoder by solving a sequence of decoding operations at the SW corner points, e.g., for decoding source $X_i$ one can use $X_{i_1}$ (since $R_{i_1} \geq H(X_{i_1})$), then decode $X_{i_2}$ using the knowledge of $X_{i_1}$. Continuing in this manner, $X_i$ can finally be decoded. A rate assignment $\mathbf{R}$ shall be called pairwise valid (or simply valid in this section) if it satisfies the pairwise property. In this section, we focus on looking for a valid rate allocation that minimizes the sum rate. An equivalent definition can be given in graph-theoretic terms by constructing a graph, called the pairwise property test graph, corresponding to the rate assignment. [*Pairwise Property Test Graph Construction*]{}
1. Inputs : the number of nodes $n$, $H(X_i)$ for all $i\in[n]$, $H(X_i | X_j)$ for all $i,j\in [n]^2$ and the rate assignment $\mathbf{R}$.
2. Initialize a graph $G = (V, A)$ with a total of $2n$ nodes i.e. $|V| = 2n$. There are $n$ [*regular*]{} nodes denoted $1, 2, \dots, n$ and $n$ [*starred*]{} nodes denoted $1^*, 2^*, \dots , n^*$.
3. Let $W_A(j\rightarrow
i)$ denote the weight on directed edge $(j\rightarrow i)$. For each $i \in [n]$:
- If $R_i \geq H(X_i)$ then insert edge $(i^* \goes i)$ with $W_A(i^* \goes i)=H(X_i)$.
- If $R_i \geq H(X_i |
X_j)$ then insert edge $(j \goes i)$ with $W_A(j \goes i)=H(X_i |
X_j)$.
4. Remove all nodes that do not participate in any edge.
We denote the resulting graph for a given rate allocation by $G(\mathbf{R}) = (V, A)$. Note that if $\mathbf{R}$ is valid, the graph still contains at least one starred node. Next, based on $G(\mathbf{R})$ we define a set of nodes that are called the parent nodes. $\text{Parent}(\mathbf{R}) = \{i^* | (i^* \goes i)
\in A\}$, i.e., $\text{Parent}(\mathbf{R})$ corresponds to the starred nodes for the set of sources for which the rate allocation is at least the entropy. Mathematically if $i^* \in
\text{Parent}(\mathbf{R})$, then $R_{i} \geq H(X_i)$. We now demonstrate the equivalence between the pairwise property and the construction of the graph above.
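The construction above, together with the reachability condition of Lemma \[lem:Noiseless PP2path\] below, translates directly into a few lines of code. The following sketch (ours, with hypothetical entropy values) builds $G(\mathbf{R})$ as adjacency lists and tests the pairwise property by a breadth-first search from $\text{Parent}(\mathbf{R})$:

```python
from collections import deque

def build_test_graph(H, Hc, R):
    """Pairwise property test graph.

    H[i] : H(X_i); Hc[j][i] : H(X_i | X_j); R[i] : rate of source i.
    The starred node i* is encoded as ('s', i); edge weights (H(X_i) and H(X_i|X_j))
    are omitted since they are not needed for the validity check.
    """
    n = len(R)
    adj = {('s', i): [] for i in range(n)}
    adj.update({i: [] for i in range(n)})
    for i in range(n):
        if R[i] >= H[i]:
            adj[('s', i)].append(i)              # edge i* -> i
        for j in range(n):
            if j != i and R[i] >= Hc[j][i]:
                adj[j].append(i)                 # edge j -> i
    return adj

def pairwise_valid(H, Hc, R):
    """Lemma: R is pairwise valid iff every regular node is reachable from some parent node."""
    adj = build_test_graph(H, Hc, R)
    parents = [('s', i) for i in range(len(R)) if adj[('s', i)]]
    reached, queue = set(parents), deque(parents)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in reached:
                reached.add(v)
                queue.append(v)
    return all(i in reached for i in range(len(R)))

# Hypothetical 3-source example (entropies in bits, chosen only for illustration).
H  = [1.0, 1.0, 1.0]
Hc = [[0.0, 0.4, 0.7],      # Hc[j][i] = H(X_i | X_j)
      [0.4, 0.0, 0.5],
      [0.7, 0.5, 0.0]]
print(pairwise_valid(H, Hc, R=[1.0, 0.4, 0.5]))   # True: X_1 at full rate, then X_2|X_1, X_3|X_2
print(pairwise_valid(H, Hc, R=[0.4, 0.4, 0.5]))   # False: no source is decodable on its own
```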
\[lem:Noiseless PP2path\] Consider a set of discrete correlated sources $X_1, \dots X_n$ and a corresponding rate assignment $\mathbf{R} = (R_1, \dots, R_n)$. Construct $G(\mathbf{R})$ based on the algorithm above. The rate assignment $\mathbf{R}$ satisfies the pairwise property if and only if for all regular nodes $i \in V$ there exists a starred node $j^* \in \text{Parent}(\mathbf{R})$ such that there exists directed path from $j^*$ to $i$ in $G(\mathbf{R})$.
*Proof:* Suppose that $G(\mathbf{R})$ is such that for all regular nodes $i \in V$, there exists a $j^* \in
\text{Parent}(\mathbf{R})$ so that there is a directed path from $j^*$ to $i$. We show that this implies the pairwise property for $X_i$. Let the path from $j^*$ to $i$ be denoted $j^* \goes j
\goes \alpha_1 \dots \goes \alpha_k \goes i$. We note that $R_j
\geq H(X_j)$ by construction. Similarly edge $(\alpha_l \goes
\alpha_{l+1})$ exists in $G(\mathbf{R})$ only because $R_{\alpha_{l+1}} \geq H(X_{\alpha_{l+1}} | X_{\alpha_{l}})$ and likewise $R_i \geq H(X_{i} | X_{\alpha_{k}})$. Thus for source $i$ we have found the ordered sequence of sources $(X_j, X_{\alpha_1},
\dots, X_{\alpha_k})$ that satisfy properties (\[eq:pairwise\_prop\_1\]), (\[eq:pairwise\_prop\_2\]) and (\[eq:pairwise\_prop\_3\]) in definition \[def:pairwise\_prop\].
Conversely, if $\mathbf{R}$ satisfies the pairwise property, then for each $X_i$, there exists an ordered sequence $(X_{i_1}, \dots,
X_{i_k})$ that satisfies properties (\[eq:pairwise\_prop\_1\]), (\[eq:pairwise\_prop\_2\]) and (\[eq:pairwise\_prop\_3\]) from definition \[def:pairwise\_prop\]. This implies that there exists a directed path from $i_1^*$ to $i$ in $G(\mathbf{R})$, since $(i_1^* \goes i_1) \in A$ because $R_{i_1} \geq H(X_{i_1})$ and furthermore $(i_{j-1} \goes i_j) \in A$ because $R_{i_j} \geq
H(X_{i_j} | X_{i_{j-1}})$, for $j=2, \ldots, k$.
We define another set of graphs that are useful for presenting the main result of this section.
[*Specification of $G_{i^*}(\mathbf{R})$.*]{} Suppose that we construct graph $G(\mathbf{R})$ as above and find $\text{Parent}(\mathbf{R})$. For each $i^* \in
\text{Parent}(\mathbf{R})$ we construct $G_{i^*}(\mathbf{R})$ in the following manner: For each $j^* \in \text{Parent}(\mathbf{R})
\backslash \{i^*\}$ remove the edge $(j^* \goes j)$ and the node $j^*$ from $G(\mathbf{R})$.
For the next result we need to introduce the concept of an arborescence [@kleinbergT05].
An *arborescence (also called directed spanning tree)* of a directed graph $G=(V,A)$ rooted at vertex $r \in V$ is a subgraph $T$ of $G$ such that it is a spanning tree if the orientation of the edges is ignored and there is a path from $r$ to all $v \in V$ when the direction of edges is taken into account.
\[theo:noiseless1\] Consider a set of discrete correlated sources $X_1, \dots, X_n$ and let the corresponding rate assignment $\mathbf{R}$ be pairwise valid. Let $G(\mathbf{R})$ be constructed as above. There exists another valid rate assignment $\mathbf{R}^{'}$ that can be described by the edge weights of an arborescence of $G_{i^*}(\mathbf{R})$ rooted at $i^*$ where $i^* \in
\text{Parent}(\mathbf{R})$ such that $R^{'}_j \leq R_j$, for all $j \in [n]$.
*Proof:* We shall show that a new subgraph can be constructed from which $\mathbf{R}^{'}$ can be obtained. This shall be done by a series of graph-theoretic transformations.
Pick an arbitrary starred node $j^* \in \text{Parent}(\mathbf{R})$ and construct $G_{j^*}(\mathbf{R})$. We claim that in the current graph $G_{j^*}(\mathbf{R})$ there exists a path from the starred node $j^*$ to all regular nodes $i \in [n]$. To see this note that since $\mathbf{R}$ is pairwise valid, for each regular node $i$ there exists a path from some starred node to $i$ in $G(\mathbf{R})$. If for some regular node $i$, the starred node is $j^*$, the path is still in $G_{j^*}(\mathbf{R})$. Now consider a regular node $i_1$ and suppose there exists a directed path $k^{*}
\goes k \goes \beta_1 \dots \goes i_1$ in $G(\mathbf{R})$ where $k^* \in \text{Parent}(\mathbf{R}), k^* \neq j^*$. Since $k^* \in
\text{Parent}(\mathbf{R})$, $R_k \geq H(X_k) \geq H(X_k | X_l)
\text{~~ } \forall l \in [n]$. This implies that edge $(l \goes
k)$ is in $G_{j^*}(\mathbf{R}),\forall l \in [n]$, in particular, $(j \goes k)\in G_{j^*}(\mathbf{R})$. Therefore, in $G_{j^*}(\mathbf{R})$ there exists the path $j^* \goes j \goes
k\goes \beta_1 \dots \goes i_1$. This claim implies that there exists an arborescence rooted at $j^*$ in $G_{j^*}(\mathbf{R})$ [@kleinbergT05].
Suppose we find one such arborescence $T_{j^*}$ of $G_{j^*}(\mathbf{R})$. In $T_{j^*}$ every node except $j^*$ has exactly one incoming edge (by the property of an arborescence [@kleinbergT05]). Let $inc(i)$ denote the node such that $(inc(i) \goes i) \in T_{j^*}$. We define a new rate assignment $\mathbf{R}^{'}$ as $$\begin{aligned}
R_{i}^{'} &= W_A(inc(i) \goes i) = H(X_i | X_{inc(i)}) \text{~~(for $i \in [n]$ and $i\neq j$), and}\\
R_{j}^{'} &=W_A(j^*\goes j)= H(X_j).\end{aligned}$$ The existence of edge $(j^*\goes j)\in G(\mathbf{R})$ implies $R_{j}^{'} = H(X_j)\leq R_j$. Similarly, we have $R_i^{'}\leq R_i$ for $i \in [n]\backslash \{j\}$. And it is easy to see that $\mathbf{R}^{'}$ is a valid rate assignment.
Thus, the above theorem implies that valid rate assignments that are described on arborescences of the graphs $G_{i^*}(\mathbf{R})$ are the best from the point of view of minimizing the sum rate. Finally we have the following theorem that says that the valid rate assignment that minimizes the sum rate can be found by finding minimum weight arborescences of appropriately defined graphs. For the statement of the theorem we need to define the following graphs.
- The graph $G^{tot} = (V^{tot}, A^{tot})$ is such that $V^{tot}$ consists of $n$ regular nodes $1, \dots, n$ and $n$ starred nodes $1^*, \dots , n^*$, $|V^{tot}| = 2n$. The edge set $A^{tot}$ consists of edges $(i^* \goes i), W_A(i^* \goes i) =
H(X_i)$ for $i\in[n]$ and edges $(i \goes j), W_A(i \goes j) =
H(X_j | X_i)$ for all $i, j\in[n]^2$.
- For each $i = 1,
\dots, n$ we define $G_{i^*}$ as the graph obtained from $G^{tot}$ by deleting all edges of the form $(j^* \goes j)$ for $j \neq i$ and all nodes in $\{1^*, \dots, n^*\} \backslash \{i^*\}$.
\[theo:noiseless 2\] Consider a set of sources $X_1, \dots, X_n$. Suppose that we are interested in finding a valid rate assignment $\mathbf{R} = (R_1,
\dots, R_n)$ for these sources so that the sum rate $\sum_{i=1}^n
R_i$ is minimum. Let $\mathbf{R}^{i^*}$ denote the rate assignment specified by the minimum weight arborescence of $G_{i^*}$. Then the optimal valid rate assignment is the $\mathbf{R}^{i^*}$ that achieves $$R_{opt} = \min_{i \in \{1, \dots, n\}} \sum_{j=1}^n R_{j}^{i^*}.$$
*Proof.* From Theorem \[theo:noiseless1\] we have that any valid rate assignment $\mathbf{R}$ can be transformed into a new rate assignment that can be described on an arborescence of $G_{i^*}(\mathbf{R})$ rooted at $i^*$ with a suitable weight assignment, and that this new assignment is component-wise lower than $\mathbf{R}$. This implies that if we are interested in a minimum sum rate solution, it suffices to focus our attention on solutions that can be described by all possible arborescences of graphs of the form $G_{i^*}(\mathbf{R})$ over all $i^* = 1^*,
\dots, n^*$ and all possible valid rate assignments $\mathbf{R}$.
Now consider the graph $G_{i^*}$ defined above. We note that all graphs of the form $G_{i^*}(\mathbf{R})$ where $\mathbf{R}$ is valid are subgraphs of $G_{i^*}$. Therefore finding the minimum cost arborescence of $G_{i^*}$ will yield us the best rate assignment possible within the class of solutions specified by $G_{i^*}(\mathbf{R})$. Next, we find the best solutions $\mathbf{R}^{i^*}$ for all $i \in [n]$ and pick the solution with the minimum cost. This yields the optimal rate assignment.
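Theorem \[theo:noiseless 2\] thus reduces the noiseless problem to $n$ minimum weight arborescence computations, for which off-the-shelf implementations of Edmonds' algorithm exist. The sketch below (our own illustration with hypothetical entropies, using the networkx library) is one way to carry this out:

```python
import networkx as nx

def optimal_noiseless_rates(H, Hc):
    """Minimum sum rate pairwise-valid allocation.

    H[i] : H(X_i); Hc[u][v] : H(X_v | X_u). For each candidate root i, build G_{i*},
    find a minimum weight arborescence (Edmonds' algorithm), and keep the best one.
    """
    n = len(H)
    best_sum, best_rates = float('inf'), None
    for i in range(n):
        G = nx.DiGraph()
        G.add_edge(('s', i), i, weight=H[i])                 # only root i keeps its starred edge
        for u in range(n):
            for v in range(n):
                if u != v:
                    G.add_edge(u, v, weight=Hc[u][v])        # edge u -> v with weight H(X_v|X_u)
        arb = nx.minimum_spanning_arborescence(G)
        rates = [0.0] * n
        for u, v in arb.edges():
            rates[v] = G[u][v]['weight']                     # each regular node has one incoming edge
        if sum(rates) < best_sum:
            best_sum, best_rates = sum(rates), rates
    return best_rates, best_sum

H  = [1.0, 1.0, 1.0]
Hc = [[0.0, 0.4, 0.7],
      [0.4, 0.0, 0.5],
      [0.7, 0.5, 0.0]]
print(optimal_noiseless_rates(H, Hc))   # e.g. ([1.0, 0.4, 0.5], 1.9) for these illustrative numbers
```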
Noisy case {#sec:noisy-case}
==========
In this section we consider the case when the sources are connected to the terminal by orthogonal noisy channels. In this case, the objective is to minimize the sum power. Therefore the optimum rate allocation within a pair of sources may not be at the corner points of the SW region. We may want some node pairs to work at corner points while others work on the slope of the SW region. Taking this into account, we generalize the concept of the pairwise property.
For a given rate assignment $\mathbf{R}$, we say that $X_i$ is *initially decodable* if $R_i \geq H(X_i)$, or together with another source $X_j$, $(R_i, R_j)\in SW_{ij}$. If $R_i \geq
H(X_i)$, it can be decoded by itself. If $(R_i, R_j)\in SW_{ij}$, SW codes can be designed for $X_i,X_j$ and they can be recovered by joint decoding. In addition, if we take advantage of previously decoded source data to help decode other sources as we did in the noiseless case, starting with an initially decodable source, more sources can potentially be recovered.
\[def:ged pairwise\_prop\] [*Generalized pairwise property of rate assignment.*]{} Consider a set of discrete memoryless sources $X_1, \dots, X_n$ and the corresponding rate assignment $\mathbf{R} = (R_1, \dots,
R_n)$. The rate assignment is said to satisfy the generalized pairwise property if for each $X_i, i\in [n]$, $X_i$ is initially decodable, or there exists an ordered sequence of sources $(X_{i_1}, X_{i_2}, \dots, X_{i_k})$ such that $$\begin{aligned}
X_{i_1}& \text{is initially decodable}, \label{eq:ged pairwise_prop_1}\\
R_{i_j}& \geq H(X_{i_j} | X_{i_{j-1}}), \text{~~ for $2 \leq j
\leq k$.} \label{eq:ged pairwise_prop_2}\\
R_{i}& \geq H(X_{i} | X_{i_{k}}) \label{eq:ged pairwise_prop_3}\end{aligned}$$
A rate assignment $\mathbf{R}$ shall be called generalized pairwise valid (or valid in this section), if it satisfies the generalized pairwise property and for every rate $R_i\in
\mathbf{R}$, $Q_i(R_i)\leq P_{max}$. A valid rate assignment allows every source to be recovered at the sink. A power assignment $\mathbf{P}=(P_1, P_2, \dots, P_n)$ shall be called valid, if the corresponding rate assignment is valid.
We now introduce the generalized pairwise property test graph. The inputs and initialization are the same as in the pairwise property test graph construction. Then, for each $i \in [n]$:
- If $R_i \geq H(X_i)$ then insert directed edge $(i^*
\goes i)$ with weight $W_A(i^* \goes i)=Q_i(H(X_i))$.
- If $R_i \geq H(X_i | X_j)$ then insert directed edge $(j \goes i)$ with weight $W_A(j \goes i)=Q_i(H(X_i|X_j))$.
- If $(R_i, R_j)\in SW_{ij}$, then insert undirected edge $(i,j)$ with weight $W_E(i,j)=Q_i(R_{ij}^*(i))+Q_j(R_{ij}^*(j))=P_{ij}^*(i)+P_{ij}^*(j)$. Note that as pointed out in Section \[sec:formulation\], $(P_{ij}^*(i), P_{ij}^*(j), R_{ij}^*(i), R_{ij}^*(j))$ are the optimum rate-power allocation between node pair $(i,j)$ given by [@roumyG07jour].
Finally, remove all nodes that do not participate in any edge. We denote the resulting graph for a given rate allocation by $G_M(\mathbf{R}) = (V,E,A)$, where $E$ is undirected edge set and $A$ is directed edge set. Denote the regular node set as $V_R\subset V$.
\[lem:Noise GPP2path\] Consider a set of discrete correlated sources $X_1, \dots X_n$ and a corresponding rate assignment $\mathbf{R} = (R_1, \dots, R_n)$. Suppose that we construct $G_M(\mathbf{R})$ based on the algorithm above. The rate assignment $\mathbf{R}$ is generalized pairwise valid if and only if, $\forall R_i \in \mathbf{R}, Q_i(R_i) \leq
P_{max}$, and for all regular nodes $i \in V_R$, at least one of these conditions holds:
1. $i$ participates in an undirected edge $(i, i^{'})$, $i'\in
V_R$;
2. There exists a starred node $i^*$ and an directed edge $(i^*\rightarrow i)$;
3. There exists a starred node $j^*$ such that there is a directed path from $j^*$ to $i$;
4. There exists a regular node $j$ participating in edge $(j, j^{'})$, $j'\in V_R$ such that there is a directed path from $j$ to $i$;
The proof of this lemma is very similar to that of Lemma \[lem:Noiseless PP2path\]. If one of the conditions 1) and 2) holds, $X_i$ is initially decodable, and vice versa. If one of the conditions 3) and 4) holds, $X_i$ can be decoded in a sequence of decoding procedures which starts from an initially decodable source $X_j$, and vice versa. Next, we introduce some definitions crucial to the rest of the development.
\[def:head\] Given a mixed graph $G=(V,E,A)$, if $e=(i\rightarrow j)\in A$, $i$ is the tail and $j$ is the head of $e$. If $e=(i,j)\in E$, we call both $i$ and $j$ the head of $e$. For a node $i\in V$, $h_G(i)$ denotes the number of edges for which $i$ is the head.
\[def:UUG\] The *underlying undirected graph* of a mixed graph $G$ denoted by $UUG(G)$ is the undirected graph obtained from the mixed graph by forgetting the orientations of the directed edges, i.e., treating directed edges as undirected edges.
As pointed out previously, we want some nodes to work at corner points of two-dimensional SW region and others to work on the slope. Thus, we need to somehow combine the two concepts of arborescence and matching. The appropriate concept for our purpose is the notion of a matching forest first introduced in the work of Giles [@Giles1].
\[def:mf\] Given a mixed graph $G=(V,E,A)$, a subgraph $F$ of $G$ is called a *matching forest* [@Giles1] if $F$ contains no cycles in $UUG(F)$ and any node $i\in V$ is the head of at most one edge in $F$, i.e. $\forall i\in V, h_F(i)\leq1$.
In the context of this section we also define a strict matching forest. For a mixed graph $G$ containing regular nodes and starred nodes, a matching forest $F$ satisfying $h_F(i)=1, \forall i\in V_R$ (i.e., every regular node is the head of exactly one edge) is called a *strict matching forest (SMF)*. In the noisy case, the SMF plays a role similar to that of the arborescence in the noiseless case. Now, we introduce a theorem similar to Theorem \[theo:noiseless1\].
\[theo:n1\] Given a generalized pairwise valid rate assignment $\mathbf{R}$ and corresponding power assignment $\mathbf{P}$, let $G_M(\mathbf{R})$ be constructed as above. There exists another valid rate assignment $\mathbf{R}^{'}$ and power assignment $\mathbf{P}^{'}$ that can be described by the edge weights of a strict matching forest of $G_M(\mathbf{R})$ such that $\sum_{i=1}^n P_i^{'} \leq \sum_{i=1}^n P_i$.
*Proof.* In order to find such an SMF, we first change the weights of $G_M(\mathbf{R})$, yielding a new graph $G_M^{'}(\mathbf{R})$. Let $W_A^{'}(i\rightarrow j), W_E^{'}(i,j)$ denote the weights in $G_M^{'}(\mathbf{R})$. Let $\Lambda$ be a sufficiently large constant. We perform the following weight transformation on all edges: $$W_E^{'}(i,j) = 2\Lambda-W_E(i,j), \qquad W_A^{'}(i\rightarrow j) = \Lambda-W_A(i\rightarrow j).$$ Denote the sum weight of a subgraph $G^{'}$ of the graph $G_M^{'}(\mathbf{R})$ by $Wt_{G_M^{'}(\mathbf{R})}(G^{'})$. Next, we find a maximum weight matching forest of $G_M^{'}(\mathbf{R})$, which can be done in polynomial time [@Giles2].
\[lem:extSMF\] The maximum weight matching forest $F_M$ in $G_M^{'}(\mathbf{R})$ is a strict matching forest, i.e., it satisfies: $\forall i \in V_R, h_{F_M}(i)=1$.
*Proof.* See Appendix.
Note that each regular node is the head of exactly one edge in $F_M$. The power allocation is performed as follows. Any $i\in V_R$ is the head of one of three kinds of edges in $F_M$, corresponding to three kinds of rate-power assignment:
1. If $\exists (i^*\rightarrow i) \in F_M$, then set $P_i^{'}=Q_i(H(X_i))$ and $R_i^{'}=H(X_i)$. The existence of edge $(i^*\rightarrow i)$ in $G_M(\mathbf{R})$ means that $R_i\geq
H(X_i)$, so $R_i^{'}\leq R_i$ and $P_i^{'}\leq P_i\leq P_{max}$.
2. If $\exists (i,j) \in F_M$, set $P_i^{'}=P_{ij}^*(i)$, $R_i^{'}=R_{ij}^*(i)$ and $P_j^{'}=P_{ij}^*(j)$, $R_j^{'}=R_{ij}^*(j)$. The existence of edge $(i,j)$ in $G_M(\mathbf{R})$ means that $R_i$ and $R_j$ are in the SW region, $P_i\leq P_{max}$ and $P_j\leq P_{max}$. We know that $P_{ij}^*(i),P_{ij}^*(j)$ are the minimum sum power solution for nodes $i$ and $j$ when the rate allocation is in the SW region and the power allocation satisfies the $P_{max}$ constraints. So $P_i^{'}+P_j^{'} \leq P_i+P_j$, $P_i^{'}\leq P_{max}$, $P_j^{'}\leq P_{max}$.
3. If $\exists (j\rightarrow i)\in F_M$, set $P_i^{'}=Q_i(H(X_i|X_j))$ and $R_i^{'}=H(X_i|X_j)$ . The existence of edge $(j\rightarrow i)$ in $G_M(\mathbf{R})$ means that $R_i\geq H(X_i|X_j)$, so $R_i^{'}\leq R_i$ and $P_i^{'}\leq
P_i\leq P_{max}$.
Therefore, the new power allocation $\mathbf{P}^{'}$ does not increase the sum power. Notice that when we assign the new rates to the nodes, the conditions in Definition \[def:ged pairwise\_prop\] still hold, so the new rate assignment $\mathbf{R}^{'}$ is also valid. Hence $\mathbf{P}^{'}$ is a valid power allocation whose sum power is no larger.
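A minimal sketch of this reassignment step is shown below. It reuses the edge-list representation of the earlier sketches; `H`, `H_cond`, `Q` and `pairwise_opt` are the same hypothetical placeholders, and `rate_opt(i, j)` is an assumed helper returning the rates $(R_{ij}^*(i), R_{ij}^*(j))$ of the pairwise optimum.

```python
# Hypothetical rate/power reassignment from a strict matching forest F_M,
# following the three cases above (every regular node is the head of exactly one edge).
def allocate_from_SMF(F_undirected, F_directed, H, H_cond, Q, pairwise_opt, rate_opt):
    R_new, P_new = {}, {}
    for i, j, _ in F_undirected:             # case 2: both nodes work on the SW slope
        P_i, P_j = pairwise_opt(i, j)
        R_i, R_j = rate_opt(i, j)
        R_new[i], R_new[j] = R_i, R_j
        P_new[i], P_new[j] = P_i, P_j
    for tail, head, _ in F_directed:
        if isinstance(tail, tuple) and tail[0] == "star":   # case 1: (i* -> i)
            R_new[head] = H(head)
            P_new[head] = Q(head, H(head))
        else:                                 # case 3: (j -> i), decode with side information
            R_new[head] = H_cond(head, tail)
            P_new[head] = Q(head, H_cond(head, tail))
    return R_new, P_new
```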
The following theorem says that the valid power assignment that minimizes the sum power can be found by computing a minimum weight SMF of an appropriately defined graph.
The graph $G^{tot}=(V^{tot},A^{tot},E^{tot})$ is such that $V^{tot}$ consists of $n$ regular nodes $1,\ldots,n$ and $n$ starred nodes $1^*,\ldots,n^*$, and $|V^{tot}|=2n$. The directed edge set $A^{tot}$ consists of edges $(i^*\rightarrow i),
W_A(i^*\rightarrow i)=Q_i(H(X_i))$ for $\{i:i\in[n] \hbox{ and }
Q_i(H(X_i))\leq P_{max}\}$, and directed edges $(i\rightarrow j),
W_A(i\rightarrow j)=Q_j(H(X_j|X_i))$ for $\{i,j:i,j\in[n]^2 \hbox{
and } Q_j(H(X_j|X_i))\leq P_{max}\}$. The undirected edge set $E^{tot}$ consists of edges $(i,j),W_E(i,j)=P_{ij}^*(i)+P_{ij}^*(j)$ for all $i,j\in[n]^2$.
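The graph $G^{tot}$ can be assembled in the same fashion as the earlier sketch, but without reference to a particular rate assignment; the snippet below is again only an assumed illustration with the same placeholder functions.

```python
# Hypothetical construction of G^tot: include every edge whose associated
# power respects the peak constraint P_max.
def build_G_tot(n, H, H_cond, Q, pairwise_opt, P_max):
    directed, undirected = [], []
    for i in range(n):
        if Q(i, H(i)) <= P_max:                          # (i* -> i)
            directed.append((("star", i), i, Q(i, H(i))))
        for j in range(n):
            if j != i and Q(j, H_cond(j, i)) <= P_max:   # (i -> j)
                directed.append((i, j, Q(j, H_cond(j, i))))
        for j in range(i + 1, n):                        # undirected (i, j)
            P_i, P_j = pairwise_opt(i, j)
            undirected.append((i, j, P_i + P_j))
    nodes = set(range(n)) | {("star", i) for i in range(n)}
    return nodes, undirected, directed
```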
Assuming that $P_{max}$ is large enough so that there exists at least one valid rate-power allocation, the following theorem shows that the optimal rate-power allocation can be found in $G^{tot}$.
Consider a set of sources $X_1,\ldots,X_n$. Suppose that we are interested in finding a valid rate assignment $\mathbf{R}$ and its corresponding power assignment $\mathbf{P}$ for these sources so that the sum power $\sum_{i=1}^n P_i=\sum_{i=1}^n Q_i(R_i)$ is minimum. The optimal valid power assignment can be specified by the minimum weight SMF of $G^{tot}$.
The proof of this theorem is similar to that of Theorem \[theo:noiseless 2\]. Note that a matching is a special case of a matching forest, and also a special case of an SMF in our problem. Therefore, the minimum weight SMF solution is never worse than the minimum matching solution.
We now show that the minimum weight SMF in $G^{tot}$ can be found by finding a maximum weight matching forest in another mixed graph after a weight transformation. We perform the same weight transformation for $G^{tot}$ as we did for $G_M(\mathbf{R})$ and denote the resulting graph by $G^{tot'}$. Find the maximum weight matching forest $F_M^{'}$ in $G^{tot'}$ and denote the corresponding matching forest in $G^{tot}$ by $F_M$. We claim that both $F_M^{'}$ and $F_M$ are SMFs. To see this, note that since there exists a valid rate allocation $\mathbf{R}$, $G_M^{'}(\mathbf{R})$ is a subgraph of $G^{tot'}$. From Lemma \[lem:extSMF\], we know that an SMF exists in $G_M^{'}(\mathbf{R})$; therefore, an SMF also exists in $G^{tot'}$. Because in an SMF no starred node is the head of any edge and every regular node is the head of exactly one edge, the weight transformation rules imply that the weight of an SMF $F_S^{'}$ in $G^{tot'}$ is $$\label{eq:weightSMF}
Wt_{G^{tot'}}(F_S^{'}) = n\Lambda - Wt_{G^{tot}}(F_S),$$ where $F_S$ is the corresponding SMF in $G^{tot}$. The weight of any non-strict matching forest $F_{NS}$ is $Wt_{G^{tot'}}(F_{NS}^{'}) = m\Lambda - Wt_{G^{tot}}(F_{NS})$ with $m<n$. Since $\Lambda$ is sufficiently large, $Wt_{G^{tot'}}(F_S^{'})>Wt_{G^{tot'}}(F_{NS}^{'})$, i.e., SMFs in $G^{tot'}$ always have larger weights. Therefore, the maximum weight matching forest $F_M^{'}$ in $G^{tot'}$ is an SMF, and so is the corresponding matching forest $F_M$ in $G^{tot}$. From (\[eq:weightSMF\]), it is easy to see that in $G^{tot}$ the matching forest corresponding to $F_M^{'}$ (the maximum weight matching forest in $G^{tot'}$) has minimum weight, i.e., $F_M$ is the minimum weight SMF in $G^{tot}$.
Numerical results {#sec:results}
=================
We consider a wireless sensor network example in a square area where the coordinates of the sensors are randomly chosen and uniformly distributed in $[0,1]$. The sources are assumed to be jointly Gaussian distributed such that each source has zero mean and unit variance (this model was also used in [@CristescuB06]). The off-diagonal elements of the covariance matrix $\mathbf{K}$ are given by $K_{ij} = \exp (-cd_{ij})$, where $d_{ij}$ is the distance between node $i$ and $j$, i.e., the nodes far from each other are less correlated. The parameter $c$ indicates the spatial correlation in the data. A lower value of $c$ indicates higher correlation. The individual entropy of each source is $H_1 = \frac{1}{2} \log(2\pi e \sigma^2) = 2.05$.
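The source model of this setup can be reproduced with a few lines of numpy; the sketch below is only an assumed illustration of the simulation parameters described above, using base-2 logarithms so that the individual entropy matches $H_1\approx 2.05$.

```python
import numpy as np

# Hypothetical reproduction of the simulation setup: n sensors placed uniformly
# in the unit square, jointly Gaussian sources with K_ij = exp(-c * d_ij).
rng = np.random.default_rng(0)
n, c = 20, 1.0
pos = rng.uniform(0.0, 1.0, size=(n, 2))
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
K = np.exp(-c * d)                           # unit variances on the diagonal

def h_gauss(cov):
    """Differential entropy (in bits) of a Gaussian vector with covariance cov."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * np.log2((2 * np.pi * np.e) ** k * np.linalg.det(cov))

H1 = h_gauss(np.array([[1.0]]))              # ~2.05 bits per source
# H_cond(i, j) = H(X_i, X_j) - H(X_j) = H(X_i | X_j), used for the edge weights
H_cond = lambda i, j: h_gauss(K[np.ix_([i, j], [i, j])]) - h_gauss(K[np.ix_([j], [j])])
```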
Consider the noiseless case first. Because the rate allocation only depends on entropies and conditional entropies, we do not need to consider the location of the sink. It is easy to see, based on our assumed model, that $H(X_i|X_j)=H(X_j|X_i), \forall i,j\in [n]^2$. Thus, $W_A(i\rightarrow j)=W_A(j\rightarrow i)$. It can be shown that the weights of the minimum weight arborescences on $G_{i^*}, i=1,\ldots, n$ are all the same. Therefore, we only need to find a minimum weight arborescence on $G_{1^*}$. A solution for a sensor network containing 20 nodes is shown in Fig.\[fig:noiselessMST\]. Since the starred node $1^*$ is virtual in the network, we did not put it on the graph. Instead, we marked node 1 as the root of the arborescence, whose transmission rate is its individual entropy $H_1$. Edge $(i\rightarrow j)$ in the arborescence implies that $X_i$ will be decoded in advance and used as side information to help decode $X_j$. The matching solution for the same network is shown in Fig.\[fig:noiselessMat\]. As noted in [@roumyG07jour], the optimum matching tries to match close neighbors together because $H(X_i,X_j)$ decreases with the internode distance. Our arborescence solution shows a similar property, i.e., a node tends to help its close neighbors since the conditional entropies between them are small. In Fig.\[fig:sumrate\], we plot the normalized sum rate $R_{s0}\triangleq \sum_{i=1}^n R_i / H_1$ vs. the number of sensors $n$. If there is no pairwise decoding, i.e., the nodes transmit data individually to the sink, $R_i=H_1$ and $R_{s0}=n$. The matching solution and the minimum arborescence (MA) solution are compared in the figure. We also plotted the optimal normalized sum rate $H(X_1,\ldots, X_n)/H_1$ in the figure. This rate can be achieved theoretically when all sources are jointly decoded together. We observe that if the nodes are highly correlated $(c=1)$, the present solution outperforms the matching solution considerably. Even if the correlation is not high, our MA solution is always better than the matching solution. It is interesting to note that even though we are doing pairwise distributed source coding, our sum rate is quite close to the theoretical limit, which is achieved by $n$-dimensional distributed source coding.
Next, we consider optimizing the total power when there are AWGN channels between the sources and the sink. The channel gain $\gamma_i$ is the reciprocal of the square of the distance between source $X_i$ and the sink. We assume that the coordinates of the sink are $(0,0)$. An example of the strict matching forest (SMF) solution for a network with 16 sensors is given in Fig.\[fig:noisySMF\]. There is one undirected edge in the SMF, implying that the heads of this edge work on the slope of the SW region. The other 14 edges are directed, implying that the tails of the edges are used as side information to help decode their heads. No node is encoded at rate $H_1$. In fact, most minimum SMFs in our simulations exhibit this property, i.e., the minimum SMF contains $1$ undirected edge and $n-2$ directed edges between regular nodes. This fact coincides with our intuition: transmitting at a rate of conditional entropy is the most economical way, while transmitting at a rate of individual entropy consumes the most power. The matching solution for the same network is given in Fig.\[fig:noisyMat\]. We compare the sum powers of the SMF solution and the matching solution in Table \[tab:compareNoise\]. The sum powers were averaged over three realizations of sensor networks. We also found the theoretical optimal sum power when $n$-dimensional distributed source coding is applied by solving the following convex optimization problem.
$$\begin{aligned}
\min_{R_1,\ldots,R_n}\ && \sum_{i=1}^n P_i = \sum_{i=1}^n (2^{R_i} - 1)/\gamma_i\\
\text{s.t.}\ && (2^{R_i} - 1)/\gamma_i \leq P_{max},\ \forall i\\
&& (R_1,\ldots,R_n) \in SW_n\end{aligned}$$ where $SW_n$ is the $n$-dimensional Slepian-Wolf region. From the table, we can observe that our strategy always outperforms the matching strategy regardless of the level of correlation, and comes quite close to the theoretical limit that is achieved by $n$-dimensional SW coding.
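For comparison, the $n$-dimensional benchmark can be written down directly; the sketch below is only an illustration assuming the cvxpy package and the Gaussian source model of the earlier snippet, and it enumerates all $2^n-1$ Slepian-Wolf subset constraints, which is feasible only for small $n$.

```python
import itertools
import numpy as np
import cvxpy as cp

# Hypothetical n-dimensional Slepian-Wolf sum-power minimization (small n only).
# K: source covariance matrix, gamma: channel gains, h_gauss: entropy helper above.
def min_sum_power_nd(K, gamma, P_max, h_gauss):
    n = K.shape[0]
    R = cp.Variable(n)
    power = cp.multiply(1.0 / gamma, cp.exp(np.log(2) * R) - 1)   # (2^R_i - 1)/gamma_i
    constraints = [power <= P_max]
    H_all = h_gauss(K)
    for r in range(1, n + 1):                                     # every non-empty subset S
        for S in itertools.combinations(range(n), r):
            Sc = [k for k in range(n) if k not in S]
            H_S_given_Sc = H_all - (h_gauss(K[np.ix_(Sc, Sc)]) if Sc else 0.0)
            constraints.append(cp.sum(R[list(S)]) >= H_S_given_Sc)
    problem = cp.Problem(cp.Minimize(cp.sum(power)), constraints)
    problem.solve()
    return R.value, problem.value
```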
Conclusion {#sec:conclusion}
==========
The optimal rate and power allocation for a sensor network under pairwise distributed source coding constraint was first introduced in [@roumyG07jour]. We proposed a more general definition of pairwise distributed source coding and provided solutions for the rate and power allocation problem, which can reduce the cost (sum rate or sum power) further. For the case when the sources and the terminal are connected by noiseless channels, we found a rate allocation with the minimum sum rate given by the minimum weight arborescence on a well-defined directed graph. For noisy orthogonal source terminal channels, we found a rate-power allocation with minimum sum power given by the minimum weight strict matching forest on a well-defined mixed graph. All algorithms introduced have polynomial-time complexity. Numerical results show that our solution has significant gains over the solution in [@roumyG07jour], especially when correlations are high.
Future research directions would include extensions to resource allocation problems when joint decoding of three (or more) sources [@LiverisLNXG03] at one time is considered, instead of only two in this paper. Another interesting issue is to consider intermediate relay nodes in the network, which are able to copy and forward data, or even encode data using network coding [@al].
Acknowledgements
================
The authors would like to thank the anonymous reviewers whose comments greatly improved the quality of the paper.
PROOF OF LEMMA \[lem:extSMF\]
We shall first introduce and prove a lemma which facilitates the proof of Lemma \[lem:extSMF\].
\[lem:nopathUUG\] Consider two nodes $i$ and $j$ in a matching forest $F$ such that either $h_F(i)=0$ or $h_F(j)=0$, and they do not have incoming directed edges. Then, there does not exist a path of the form $$\label{eq:UUGpathlem}
i-\alpha_1-\alpha_2-\cdots-\alpha_k-j$$ in $UUG(F)$.
*Proof.* First consider the case when $h_F(i)=h_F(j)=0$, i.e., $i,j$ only have outgoing directed edge(s). Suppose there is such a path (\[eq:UUGpathlem\]). Then edge $(i,\alpha_1)$ has to be directed from $i$ to $\alpha_1$ in $F$ since $h_{F}(i)=0$; similarly, $j\rightarrow \alpha_k$. As depicted in Fig.\[fig:case2a\], at least one node $\alpha_l$ in the path will then have $h_{F}(\alpha_l)=2$. But we know that $h_{F}(t)\leq1$ holds for every node $t\in V$ in a matching forest $F$. So there is no such path in $UUG(F)$. If $h_F(i)=0,h_F(j)=1$ and $j$ connects to an undirected edge $(j,j')$ in $F$, then $i, j$ and $j'$ can only have outgoing directed edge(s). By arguments similar to the above, we know that at least one node $\alpha_l$ on the path is such that $h_F(\alpha_l)=2$. The case when $i$ connects to an undirected edge and $h_F(j)=0$ can be proved in the same manner.
*Proof of Lemma \[lem:extSMF\]:* We will prove this lemma by contradiction. We shall show that if $h_{F_M}(i)=0$ for a regular node $i$, we can find another matching forest $F^{'}$ in $G_M^{'}(\mathbf{R})$ such that $Wt_{G_M^{'}(\mathbf{R})}(F^{'})>Wt_{G_M^{'}(\mathbf{R})}(F_M)$, i.e., $F_M$ is not the maximum matching forest. Since $F_M$ is a matching forest, it satisfies (a) $h_{F_M}(t)\leq 1$ for every node[^5] $t\in V$ and (b) no cycle exists in $UUG(F_M)$. Suppose $h_{F_M}(i)=0$ for a regular node $i$ in $F_M$. We shall make a set of modifications to $F_M$ resulting in a new matching forest $F^{'}$ and prove that these manipulations will eventually increase the sum weight, make $h_{F^{'}}(i)$ become 1 and ensure that there is no cycle in $UUG(F^{'})$. Also, these modifications should guarantee that $h_{F^{'}}(j)=1$ for $j\in\{j:j\in V_R\backslash \{i\} \text{ and } h_{F_M}(j)=1\}$, i.e., nodes that were previously the head of some edge continue to remain that way. During the proof, we shall use the properties of $G_M^{'}(\mathbf{R})$ given in Lemma \[lem:Noise GPP2path\]. Since $\mathbf{R}$ is valid, regular node $i$ has at least one of those four properties in $G_M^{'}(\mathbf{R})$. We shall discuss these cases in more detail:
[*Case 1*]{}. If there exists a directed edge $(i^*\rightarrow i)$ in $G_M^{'}(\mathbf{R})$, add this edge to $F_M$ to form $F'$. Clearly, $Wt_{G_M^{'}(\mathbf{R})}(F')>Wt_{G_M^{'}(\mathbf{R})}(F_M)$. Since there is only one outgoing edge from $i^*$ and it has no incoming edge, no cycle in $UUG(F^{'})$ is produced in our procedure. And $h_{F^{'}}(t)\leq1$ still holds for every node $t\in V$, so $F^{'}$ is still a matching forest.
[*Case 2*]{}. If there exists an undirected edge $(i,j)$ in $G_M^{'}(\mathbf{R})$, we can include this edge to $F_M$ to increase sum weight. Here, $h_{F_M}(i)=0$ and there are two possibilities for $h_{F_M}(j)$, 0 or 1.
[*Case 2a*]{}. If $h_{F_M}(j)=0$, add the undirected edge $(i,j)$ to $F_M$, resulting in a new subgraph $F^{'}$. Obviously, the sum weight is increased by adding one edge. Since $h_{F_M}(i)=h_{F_M}(j)=0$, by Lemma \[lem:nopathUUG\] there does not exist a path of the form (\[eq:UUGpathlem\]) in $UUG(F_M)$. Thus, adding $(i,j)$ does not introduce a cycle in $UUG(F')$, and $F^{'}$ is a matching forest.
[*Case 2b*]{}. If $h_{F_M}(j)=1$, we still add $(i,j)$ but need to perform some preprocessing steps. Based on what kind of edge connects to node $j$, we have two cases:
[*Case 2$b_1$*]{}. If there exists a directed edge $(j^{'} \rightarrow j)$ in $F_M$, delete edge $(j^{'} \rightarrow j)$; we then have an intermediate matching forest $F^{''}$ such that $h_{F^{''}}(j)=0$. Add the undirected edge $(i,j)$ to obtain $F^{'}$. Note that $F^{'}$ is a matching forest by the arguments in Case 2a, and $Wt_{G_M^{'}(\mathbf{R})}(F^{'})>Wt_{G_M^{'}(\mathbf{R})}(F_M)$ because for a sufficiently large $\Lambda$, $2\Lambda-W_E(i,j)>\Lambda-W_A(j^{'}\rightarrow j)$.
[*Case 2$b_2$*]{}. If there exists an undirected edge $(j^{'},j)$ in $F_M$, we notice that the existence of $(j^{'},j)$ in $G_M^{'}(\mathbf{R})$ indicates that $(R_{j^{'}},R_j)\in SW_{j^{'}j}$, so $R_{j^{'}}\geq H(X_{j^{'}}|X_j)$ and $R_j\geq H(X_j|X_{j^{'}})$, which implies that there exist directed edges $(j\rightarrow {j^{'}})$ and $({j^{'}}\rightarrow j)$ in $G_M^{'}(\mathbf{R})$. So we can first delete edge $(j^{'},j)$ and then add edges $(i,j)$ and $(j\rightarrow j^{'})$ to form $F^{'}$. Adding $(j\rightarrow j^{'})$ ensures that $h_{F^{'}}(j^{'})=1$. These modifications are shown in Fig.\[fig:case2b2\]. After removing edge $(j^{'},j)$, we have an intermediate matching forest $F^1$ such that $h_{F^1}(j)=0$ and $h_{F^1}(j^{'})=0$. We add edge $(i,j)$ to obtain $F^2$. Because of Lemma \[lem:nopathUUG\], $F^2$ is still a matching forest and $h_{F^2}(j^{'})=0$. Then we add $(j\rightarrow j^{'})$ to obtain a new subgraph $F^{'}$. From Lemma \[lem:nopathUUG\], we know that $(j\rightarrow j^{'})$ will not introduce a cycle. Therefore, $F^{'}$ is still a matching forest. For a large enough $\Lambda$, $(2\Lambda-W_E(i,j))+(\Lambda-W_A(j\rightarrow j^{'}))> 2\Lambda-W_E(j,j^{'})$ holds, so the sum weight will increase.
[*Case 3*]{}. If there exists a path from $h$ to $i$ in $G_M^{'}(\mathbf{R})$: $h\rightarrow \gamma_1\rightarrow \gamma_2\rightarrow\cdots\rightarrow \gamma_{k_1} \rightarrow i$, where $h$ is a starred node or participates in an undirected edge in $G_M^{'}(\mathbf{R})$, we use the following approach. Note that $\gamma_1,\ldots,\gamma_{k_1}$ may participate in undirected edges. On this path, we find the node $j$ closest to $i$ such that $j$ participates in an undirected edge in $G_M^{'}(\mathbf{R})$ or is a starred node. $j$ may be the same as $h$ or be some $\gamma_l$. We will focus on the path from $j$ to $i$, denoted by $j\rightarrow \alpha_1\rightarrow \alpha_2\rightarrow\cdots\rightarrow \alpha_k \rightarrow i$. The basic idea is to add edge $\alpha_k\rightarrow i$ to $F_M$. However, if we simply add this edge, it may produce a cycle in the underlying undirected graph, so further manipulations are needed.
[*Case 3a*]{}. If $j$ is a starred node, denote $j$ by $j^{*}$; we want to add the path $$\label{eq:pathji}
j^{*}\rightarrow \alpha_1\rightarrow
\alpha_2\rightarrow\cdots\rightarrow \alpha_k \rightarrow i$$ to $F_M$. First, in $F_M$, remove all incoming directed edges to $\alpha_l$ ($1\leq l\leq k$); then we have an intermediate matching forest $F^1$. Note that $j^*$, $i$, and the $\alpha_l$’s only have outgoing edges; by Lemma \[lem:nopathUUG\], we know that there does not exist an undirected path of the form $j^*(\text{or }\alpha_{l_1})-\beta_1-\beta_2-\cdots-\beta_k-i(\text{or }\alpha_{l_2})$ in $UUG(F^1)$, where the $\beta$’s are nodes outside the path (\[eq:pathji\]). Therefore, adding the path (\[eq:pathji\]) into $F^1$ to form $F^{'}$ will not introduce a cycle. For all nodes $\alpha_l$ ($1\leq l\leq k$) on the path, $h_{F^{'}}(\alpha_l)=1$, so $F^{'}$ is a matching forest. Next we shall consider the weights. At some nodes, take $\alpha_l$ for example, although we deleted the directed edge $(\alpha_{l^{'}}\rightarrow\alpha_l)$, where $\alpha_{l^{'}}$ is a node outside the path (\[eq:pathji\]), we add another directed edge $(\alpha_{l-1}\goes\alpha_l)$. The weight might decrease by $(\Lambda-W_A(\alpha_{l{'}}\rightarrow\alpha_l))-(\Lambda-W_A(\alpha_{l-1}\goes\alpha_l))$. Suppose we delete and add edges around $d$ nodes $\alpha_{l_1},\alpha_{l_2},\ldots,\alpha_{l_d}$; the total weight decrease is $\sum_{i=1}^d W_A(\alpha_{l_i-1}\goes\alpha_{l_i})-W_A(\alpha_{l_i{'}}\rightarrow\alpha_{l_i})$. It may be positive, but it does not contain a $\Lambda$ term. At the end, we add $(\alpha_k \rightarrow i)$ without deleting any edge coming into $i$, since $h_{F_M}(i)=0$; the weight will increase by $(\Lambda-W_A(\alpha_k\rightarrow i))$ through this operation. If $\Lambda$ is large enough, the sum weight will therefore increase.
[*Case 3b*]{}. Suppose $j$ participates in an undirected edge $(j^{'},j)$ in $G_M^{'}(\mathbf{R})$. Note that $j^{'}\neq \alpha_1,\ldots,\alpha_k$ since $j$ is the first node on the path that participates in an undirected edge. In this case, if $(j^{'},j)$ is already in $F_M$, we just need to add the path from $j$ to $i$ as we did in the case above to form $F^{'}$. The resulting path is $j^{'}-j\rightarrow \alpha_1\rightarrow \alpha_2\rightarrow\cdots\rightarrow \alpha_k \rightarrow i$. Note that in $F_M$, $j^{'}$ and $j$ do not have incoming directed edges. By an argument similar to the previous case, we know that $F^{'}$ is a matching forest. If $(j^{'},j)$ is not in $F_M$, we want to add $(j^{'},j)$ to $F_M$ and then add the path from $j$ to $i$. We have four possibilities, some of which require preprocessing:
[*Case $3b_1$*]{}. $h_{F_M}(j)=0$ and $h_{F_M}(j^{'})=0$; we can add $(j^{'},j)$ as we did in Case 2a, and then we add the path from $j$ to $i$ as we did above.
[*Case $3b_2$*]{}. $h_{F_M}(j)=0$ and $h_{F_M}(j^{'})=1$; we can add $(j^{'},j)$ after some preprocessing as we did in Case 2$b_1$ and Case 2$b_2$, and then we add the path from $j$ to $i$ as we did above.
Next, we discuss the cases in which $h_{F_M}(j)=1$. Here we only need to consider the situation where some directed edge $(j^{''}\rightarrow j)$ comes into $j$ in $F_M$. If some undirected edge $(j^{''},j)$ connects to $j$ in $F_M$, this case has already been discussed in Case $3b$ above, by treating $j^{''}$ as $j^{'}$.
[*Case $3b_3$*]{}. $h_{F_M}(j)=1, (j^{''}\rightarrow j)$, and $h_{F_M}(j^{'})=0$; we can delete $(j^{''}\rightarrow j)$ and add $(j,j^{'})$ as we did in Case 2$b_1$, with node $j^{'}$ playing the role of $i$ in Case 2$b_1$; it is guaranteed that the resulting subgraph is a matching forest. Then we add the path from $j$ to $i$ as we did above.
[*Case $3b_4$*]{}. $h_{F_M}(j)=1, (j^{''}\rightarrow j)$, and $h_{F_M}(j^{'})=1$; here $j^{'}$ could be the head of an undirected edge or of a directed edge. If $j^{'}$ is the head of an undirected edge $(j^{'},j^{'''})$, we perform the operations shown in Fig.\[fig:case3b41\] to get $F'$. The possible weight decrease during our operations around node $j$ is $(W_A(j^{'}\rightarrow j^{'''})-W_A(j^{''}\rightarrow j))+(W_E(j,j^{'})-W_E(j^{'},j^{'''}))$. We will add edge $(\alpha_k\rightarrow i)$ on the path from $j$ to $i$ with weight $\Lambda-W_A(\alpha_k\rightarrow i)$. Since $\Lambda$ is large enough, the sum weight will still increase. If $j^{'}$ is the head of a directed edge $(j^{'''}\rightarrow j^{'})$, we perform the operations shown in Fig.\[fig:case3b42\] to get $F'$. Similarly, because $\Lambda$ is large enough, the sum weight will increase.
![\[fig:noiselessMST\] Minimum arborescence solution in a WSN with 20 nodes. Noiseless channels are assumed. Correlation parameter $c=1$. The sum rate given by MA equals 21.96, which is less than the sum rate given by matching. The theoretical optimal sum rate is 20.54.](N20c1MST1.eps){width="80mm"}
![\[fig:noiselessMat\] Minimum matching solution in the same WSN as Fig.\[fig:noiselessMST\]. Noiseless channels are assumed. Correlation parameter $c=1$. The sum rate given by matching equals 30.27. Note that if we do not take advantage of the correlation and transmit data individually, the sum rate will be $20\times H_1=40.94$. ](N20c1Mat.eps){width="80mm"}
![\[fig:sumrate\] Normalized sum rate vs. number of sensors ](plotresnew.eps){width="80mm"}
![\[fig:noisySMF\] Minimum strict matching forest solution in a WSN with 16 nodes. AWGN channels are assumed. Correlation parameter $c=1$. Peak power constraint $P_{max}=10$. The sum power given by the SMF equals 16.27. The optimal sum power when we apply $n$-dimensional SW codes is 14.06. ](N16SMF.eps){width="80mm"}
![\[fig:noisyMat\] Minimum matching solution in the same WSN as Fig.\[fig:noisySMF\]. AWGN channels are assumed. Correlation parameter $c=1$. Peak power constraint $P_{max}=10$. The sum power given by matching equals 27.12. Note that if we do not take advantage of the correlation and transmit data individually, the sum power will be 47.11. ](N16c1Mat.eps){width="80mm"}
             $n=4$   $n=8$   $n=12$
  ---------- ------- ------- --------
  SMF        5.57    7.49    11.17
  Matching   6.20    10.71   16.99
  Optimal    5.45    7.06    9.93
  SMF        6.22    16.72   21.15
  Matching   6.30    17.81   23.79
  Optimal    6.17    16.44   20.60
  SMF        9.68    18.65   25.14
  Matching   9.92    18.91   25.83
  Optimal    9.67    18.56   24.96

  : \[tab:compareNoise\] Comparison of sum powers between the minimum strict matching forest (SMF) solution, the matching solution and the theoretical optimum for $n=4$, $8$ and $12$ sensors; each block of three rows corresponds to one level of correlation. $P_{max}=10$.
![\[fig:case2a\] Case 2a: When $h_{F_M}(i)=0, h_{F_M}(j)=0$, the path $i-\alpha_1-\alpha_2-\cdots - j$ cannot exist in $UUG(F_M)$ because it would force at least one node $\alpha_l$ to have $h_{F_M}(\alpha_l)=2$.](Fig1.eps){width="80mm"}
![\[fig:case2b2\] Case 2$b_2$: When $h_{F_M}(i)=0, h_{F_M}(j)=1, (j,j^{'})\in F_M$, by introducing two intermediate matching forests $F^{1}$, $F^{2}$, we can find a new matching forest $F^{'}$ with larger sum weight.](Fig2.eps){width="80mm"}
![\[fig:case3b41\] Case $3b_{4-1}$: When $h_{F_M}(j)=h_{F_M}(j^{'})=1, (j^{'},j)\in G_M^{'}(\mathbf{R}), (j^{''}\goes j)\in F_M,(j^{'},j^{'''})\in F_M$, remove $(j^{''}\goes j)$ to form an intermediate matching forest $F^{1}$ where $h_{F^{1}}(j)=0, h_{F^{1}}(j^{'})=1, \text{ and } (j^{'},j^{'''})\in F^{1}$. Then apply the same operations as in Case $2b_2$, resulting in another matching forest $F^{2}$. Finally add the path from $j$ to $i$ to get $F^{'}$.](Fig3.eps){width="80mm"}
![\[fig:case3b42\] Case $3b_{4-2}$: When $h_{F_M}(j)=h_{F_M}(j^{'})=1, (j^{'},j)\in G_M^{'}(\mathbf{R}), (j^{''}\goes j)\in F_M,(j^{'''}\goes j^{'})\in F_M$, remove $(j^{''}\goes j)$ to form an intermediate matching forest $F^{1}$ where $h_{F^{1}}(j)=0, h_{F^{1}}(j^{'})=1, \text{ and } (j^{'''}\goes j^{'})\in F^{1}$. Then apply the same operations as in Case $2b_1$, resulting in another matching forest $F^{2}$. Finally add the path from $j$ to $i$ to get $F^{'}$.](Fig4.eps){width="80mm"}
[^1]: The material in this work was presented in part at the IEEE Intl. Symp. on Info. Th. 2008.
[^2]: This research was supported in part by NSF grant CNS-0721453.
[^3]: We shall use terminal and sink interchangeably throughout this paper.
[^4]: A mixed graph has both directed and undirected edges.
[^5]: Actually, for a starred node $i^{*}\in V\backslash V_R$, $h_{F}(i^{*})=0$ in every matching forest $F$ of $G_M^{'}(\mathbf{R})$, because there is no incoming edge to $i^{*}$ and $i^{*}$ does not participate in any undirected edge.
---
author:
- |
<span style="font-variant:small-caps;">René Kempen</span>\
*Institut für Mathematik, RWTH Aachen,*\
*Templergraben 55, D-52052 Aachen, Germany*\
<[email protected]>\
\
<span style="font-variant:small-caps;">Stanislaus Maier-Paape</span>\
*Institut für Mathematik, RWTH Aachen,*\
*Templergraben 55, D-52052 Aachen, Germany*\
<[email protected]>
date:
title: ' Survey on log-normally distributed market-technical trend data '
---
> [[**Abstract**]{}]{} In this survey, a short introduction to the recent discovery of log-normally distributed market-technical trend data will be given. The results of the statistical evaluation of typical market-technical trend variables will be presented. It will be shown that the log-normal assumption fits better to empirical trend data than to daily returns of stock prices. This enables the mathematical evaluation of trading systems depending on such variables. In this manner, a basic approach to an anti cyclic trading system will be given as an example. [[**Keywords**]{}]{} log-normal, market-technical trend, MinMax-process, trend statistics, market analysis, empirical distribution, quantitative finance
Introduction
============
The concept of a trend has been fundamental in the field of technical analysis since Charles H. Dow introduced it in the late 19th century. Following Rhea [@RHEA], Dow said e.g. concerning the characterization of *up-trends*:
*Successive rallies penetrating preceding high points, with ensuing declines terminating above preceding low points, offer a bullish indication.*
In Figure \[fig:trend\].(a) an example of the inverse situation is given, i.e. a *down-trend* in a historical setup like Dow used it. Although this is so far “just” a geometrical idea and clearly not precise at all, it is widely accepted among many market participants. Therefore, this geometric idea is fixed by the following market-technical definition of a Dow-trend as which it is also used in this article:
\[trend\] A market is in *up/down-trend* if and only if (at least) the two last relevant *lows* (denoted by P$1$ and P$3$ in an up-trend) and *highs* (denoted by P$2$ in an up-trend) are monotonically increasing/decreasing (see Figure \[fig:trend\].(b)). Otherwise, the market is temporarily *trendless*. In case of an up-trend the phase between a low and the next high is called the *movement*. In the same manner, the phase between a high and the following low is called the *correction*. In case of a down-trend, movement and correction are defined in the exact opposite way.
It is the authors' desire to analyze these Dow-trends as they occur in real-world markets within a statistical framework. In order to do so, however, a mathematically exact method for determining the relevant lows and highs of price data is needed.
While the task of detecting the extreme points in Figure \[fig:trend\].(b) is trivial, it is not as easy when a real price chart is considered (see Figure \[fig:minmax\]). This can be explained by the continuous price fluctuations which make the extreme points $P1-P3$ not obvious to detect. The issue of detection is rooted in the subjectivity of the distinction between usual fluctuations and new extreme points. In other words: the significance of an extreme point has to be evaluated in an algorithmic way to make automatic detection possible. Therefore, we review in section \[sec:1\] the framework necessary for automatic trend-detection going back to Maier-Paape [@SMP_automatic123]. The trend-detection in turn is rooted in the automatic recognition of relevant minima and maxima in charts. With that at hand, empirical studies of trend data can be made as seen in [@EMP123] and [@LEADLAG]. On the one hand, in [@EMP123] Hafizogullari, Maier-Paape and Platen have collected several statistics on the performance of Dow-trends. On the other hand, in [@LEADLAG] Maier-Paape and Platen constructed a geometrical method to detect lead and lag when two markets are directly compared – also based on the automatic detection of relevant highs and lows.
In this article, however, we want to pursue a different path. We are interested in several specific trend data such as the retracement and the relative movement and correction. Since these trend data are central for the whole paper, we here give a precise definition.
The first random variable describing trend data which will be important in the following is the *retracement* denoted by $X$. The retracement is defined as the size-ratio of the correction and the previous movement, i.e. $$\label{eqn_retr}
X:=\frac{Correction}{Movement}.$$ Hence, in case of an up-trend this is given by: $$X=\frac{P2-{P3}_{\text{new}}}{P2-P3}.$$ Another common random variable is the *relative movement* which for an up-trend is defined by the ratio of the movement and the last low, i.e. $$\label{eqn_bew}
M:=\frac{Movement}{last\,Low}=\frac{P2-P3}{P3}$$ and the *relative correction* which for an up-trend is defined as the ratio of the correction and the last high, i.e. $$\label{eqn_kor}
C:=\frac{Correction}{last\,High}=\frac{P2-P3_{\text{new}}}{P2}.$$ In case of a down-trend all situations are mirrored, such that: $$X=\frac{{P3}_{\text{new}}-P2}{P3-P2},\quad
M:=\frac{Movement}{last\, High}=\frac{P3-P2}{P3},\quad
C:=\frac{Correction}{last\,Low}=\frac{P3_{\text{new}}-P2}{P2}.$$ The main scope of this survey is to collect and extend results on how the above defined trend variables (plus several other related ones) can be statistically modeled. By doing this the log-normal distribution occurs frequently. Evidently, the log-normal distribution is very well known in the field of finance. We start off, in section \[sec:2\], by giving a mathematical model of the retracement during Dow-trends and the delay of their recognition. Furthermore, the duration of the retracements and their joint distribution with the retracement will be evaluated. The results on relative movements and relative corrections during trends will be presented in section \[sec:3\]. In section \[sec:4\] it will be demonstrated how the so far gained distributions of trend variables may be used to model trading systems mathematically.
It will be evident, that the described trend data mostly fit very well to the log-normal distribution model, although there are significant aberrations for the duration of retracements (see subsection \[sec:delay\]). In the past there have been several attempts to match the log-normal distribution model to the evolution of stock prices. It already started in 1900 with the PhD Thesis of Louis Bachelier ([@BACHELIER]) and the approach to use the geometric Brownian motion to describe the evolution of stock prices. This yields log-normally distributed daily returns of stock prices. Nowadays, the geometric Brownian motion is widely used to model stock prices (see [@HULL]) especially as part of the Black-Scholes model ([@BLACKSCHOLES]). Nevertheless, it has to be noted that empirical studies have shown that the log-normal distribution model does not fit perfectly to daily returns (e.g. see Fama [@FAMA63], [@FAMA] who refers to Mandelbrot [@MANDELBROT]).
Overall, we got the impression that most of the trend data we describe here fit the log-normal distribution model better than daily returns of stock prices do, although a formal comparison would be beyond the scope of this paper. In any case, the empirical facts of trend data observed here contribute to a completely new understanding of financial markets. Furthermore, with the relatively easy calculations based on the link of the log-normal distribution model to the normal distribution, actually complex market processes can now be discussed mathematically (e.g. with the truncated bivariate moments, see Lemma 1.21 in [@MT_Kempen]).
Detection of Dow-trends {#sec:1}
=======================
The issue of automatic trend-detection has been addressed by Maier-Paape [@SMP_automatic123]. Clearly, the detection of relevant extreme points is a necessary step to detect Dow-trends. Fortunately, the algorithm introduced by Maier-Paape allows automatic detection of relevant extreme points in any market since it constructs so called *MinMax-processes*.
An alternating series of (relevant) highs and lows in a given chart is called a *MinMax-process*.
In Figure \[fig:minmax\] two automatically constructed MinMax-processes are visualized by the corresponding indicator line. The construction is based on SAR-processes (stop and reverse).
An indicator is called a *SAR-process* if it can only take two values (e.g. $-1$ and $1$ which are considered to indicate a $down$ and an $up$ move of the market respectively).
Generally speaking, Maier-Paape’s algorithm looks for relevant highs when the SAR-process indicates an up move and searches for relevant lows when the SAR-process indicates a down move. Thus, the relevant extrema are “fixed” when the SAR-process changes sign. By choosing a specific SAR-process one can affect the sensitivity of the detection while the actual detection algorithm works objectively without the need of any further parameter. For more information see [@SMP_automatic123]. Maier-Paape also explains how to handle specific exceptional situations, e.g. when a new significant low suddenly appears although the SAR-process is still indicating an up move.
It is shown by Theorem $2.13$ in [@SMP_automatic123] that for any combination of SAR-process and market there exists such a MinMax-process which can be calculated “in real time” by the algorithm of Maier-Paape. Based on any MinMax-process in turn it is easy to detect market-technical trends as defined in Definition \[trend\] and then use this information for automatic trading systems as outlined in Figure \[fig:minmax\_struc\].
![General concept of the automatic detection of Dow-trends.[]{data-label="fig:minmax_struc"}](images/minmax_struc){width="0.6\linewidth"}
Calculating the MinMax-process “in real time” means that as time passes and the chart gets more and more candles, the extrema of the MinMax-process are constructed one by another. Besides the most recent extremum which is being searched for, all extrema found earlier are fixed from the moment of their detection, i.e. when the SAR-process changed sign.
Thus, applying the algorithm in real time also reveals some time *delay* in detection. Obviously, the algorithm cannot predict the future progress of the chart it is applied to. Consequently, some delay is indeed needed to evaluate the significance of a possible new extreme value. This circumstance is crucial when considering automatic trading systems based on market-technical trends. Therefore, it also has to affect any mathematical model of such a trading system. An approach to this issue can be made by considering the delay as an inevitable slippage. This means that not the time aspect of the delay but rather its effect on the entry or exit price of a market-technical trading system will be evaluated. In particular, the absolute value of the delay $d_{abs}$ is given by $$\label{def_delay}
d_{abs}=|P[0]-C[0]|$$ with $P[0]$ indicating the last detected extreme value and $C[0]$ the close value of the current bar at which this extreme value was detected.
For this article the MinMax-process together with the *integral MACD SAR-process* (Moving Average Convergence Divergence, see p. $166$ in [@MACD]) was used. The integral MACD SAR (Definition $2.2$ in [@SMP_automatic123]) basically is a normal MACD SAR which in turn indicates an up move if the so called *MACD line* is above the so called *signal line*. Otherwise, it indicates a down move. The MACD line is given by the difference of a fast and a slow (exponential) moving average. The signal line then is an (exponential) moving average of the MACD line.
Consequently, the MACD usually takes three parameters for the fast, slow and signal line (standard values are: fast=12, slow=26, signal=9). To reduce the number of needed parameters from three to one *scaling parameter* only, the ratios of the standard parameters are fixed and consequently scaled by the scaling parameter. In particular, a MACD with scaling parameter $2$ denotes a usual MACD with the parameters $(24/52/18)$. This way, the sensitivity of the MinMax-process solely corresponds to one scaling parameter (see Figure \[fig:minmax\]).
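A minimal sketch of such a single-parameter MACD SAR is given below; it uses pandas exponential moving averages and is only an assumed illustration, not the original implementation of [@SMP_automatic123].

```python
import numpy as np
import pandas as pd

# Hypothetical scaled MACD SAR: +1 while the MACD line is above the signal line
# (up move), -1 otherwise (down move). The standard spans (12, 26, 9) are
# multiplied by the scaling parameter.
def macd_sar(close: pd.Series, scaling: float = 1.0) -> pd.Series:
    fast = close.ewm(span=12 * scaling, adjust=False).mean()
    slow = close.ewm(span=26 * scaling, adjust=False).mean()
    macd_line = fast - slow
    signal_line = macd_line.ewm(span=9 * scaling, adjust=False).mean()
    return pd.Series(np.where(macd_line > signal_line, 1, -1), index=close.index)
```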
For a given MinMax-process it is easy to decide the start and end of a market-technical trend for the candle the trend is initialized and ends in, respectively. The computation of several trend variables such as the retracement is then obvious.
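As an illustration, once the alternating extrema $P3$, $P2$, $P3_{\text{new}}$ of an up-trend have been fixed by the MinMax-process, the trend variables defined in the introduction can be computed directly; the helper below is only a sketch for the up-trend case.

```python
# Hypothetical computation of the up-trend variables from three consecutive
# extrema of a MinMax-process: low P3, high P2 and the new low P3_new.
def up_trend_variables(p3, p2, p3_new):
    movement = p2 - p3
    correction = p2 - p3_new
    X = correction / movement            # retracement
    M = movement / p3                    # relative movement
    C = correction / p2                  # relative correction
    return X, M, C
```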
The automatic detection of Dow-trends and in particular the possibility to deduce a MinMax-process from any market given by candle data makes it possible to create a large dataset of empirical trend variables. The model will be based on empirical data acquired by the MinMax-process based on the integral MACD with scaling parameters 1, 1.2, 1.5, 2 and 3, applied to all stocks of the current $S\&P100$ and $Eurostoxx50$ in the period from January $1989$ until January $2016$.
Retracements {#sec:2}
============
Distribution of the Retracement
-------------------------------
For all combinations of the regarded scalings and markets, the measured retracement data show the same characteristic distribution, as seen in Figures \[fig:histo\_retr\] and \[fig:histo\_retr\_1\]. Indeed, they show the typical asymmetric characteristic of a log-normal distribution, whose density is given by $$f(x;\mu,\sigma)=\frac{1}{\sqrt{2\pi}\sigma
x}\exp{\left (-\frac{(\ln{(x)}-\mu)^2}{2\sigma^2}\right) },\quad x>0$$ for the retracement $X$ and with the (true) parameters $\mu$ and $\sigma$. It is well known how to calculate moments of log-normally distributed random variables. In this particular context, the median of the distribution $X$ equals $e^\mu$ and the mean is given by $${\ensuremath{\mathbb{E}}}(X)=e^{\mu+\frac{\sigma^2}{2}}.$$
To evaluate this distribution assumption, the maximum-likelihood estimators (MLE) denoted by $(\hat{\mu},\hat{\sigma})$ for the log-normal distribution are computed: $$\hat{\mu}:=\frac{1}{n}\sum_{i=1}^n \ln{x_i},\quad
\hat{\sigma}^2:=\frac{1}{n}\sum_{i=1}^n\left(\ln{(x_i)}-\hat{\mu}\right)^2$$ with $x_i$ denoting the $n$ measured retracements. Furthermore, the p-value calculated with the Anderson-Darling test (the EDF test recommended by Stephens in [@Stephens], chapter “Test based on EDF statistics”), applied to the logarithmically transformed data, is checked. The values obtained in this way are summarized in Table \[table:retr\].
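The estimation step can be sketched as follows; the snippet assumes numpy and scipy and applies the Anderson-Darling normality test to the log-transformed data (scipy reports the test statistic together with critical values rather than an exact p-value, so the p-values of the tables are not reproduced here).

```python
import numpy as np
from scipy import stats

# Hypothetical log-normal fit and Anderson-Darling check for one sample of
# measured retracements x (one combination of market and scaling).
def lognormal_fit(x):
    log_x = np.log(np.asarray(x, dtype=float))
    mu_hat = log_x.mean()
    sigma_hat = log_x.std(ddof=0)             # MLE uses the biased (1/n) estimator
    ad = stats.anderson(log_x, dist='norm')   # normality test on ln(x)
    return mu_hat, sigma_hat, ad.statistic, ad.critical_values
```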
The inconsistent p-values reveal that the log-normal model does not fit the measured retracement data perfectly. In fact, all histograms show a slightly sharper density for the measured data, with varying intensity. Consequently, for higher retracement values the log-normal model predicts slightly fewer values than actually observed. Besides these small systematic aberrations, the log-normal model maps the measurements very well – especially for the $Eurostoxx50$. On top of that, the log-normal model obviously fits the retracement distribution much better than, for instance, daily returns of stock prices (see Fama [@FAMA63]).
To conclude the evaluation of the retracement alone, a fundamental observation about the retracement can be made (based on the log-normal assumption and the fit values derived from the data).
\[beob\_retr\] The parameters $\mu$ and $\sigma$ of the log-normal distribution are more or less scale invariant for the retracement. In case of an up-trend, the parameters are also market invariant.
Furthermore, the parameter $\mu$ is affected by the trend direction. It is larger for down-trends, i.e. the retracements in down-trends are overall more likely to be larger than in up-trends. In spite of that, the parameter $\sigma$ is more or less invariant of the trend direction.
Delay after a Retracement {#sec:delay}
-------------------------
As already mentioned, the delay of the MinMax-process is inevitable. Therefore, it will be evaluated in the same way as the retracement. In order to be able to compare the delay $d_{abs}$ after a retracement is recognized (as defined in (\[def\_delay\])) with the retracement $X$ itself, both must have the same unit. So, the delay will also be considered in units of the last movement. It will be denoted as random variable $D_X$: $$D=D_X=\frac{d_{abs}}{Movement}.$$ It should be noted that (at this point) there is no statement made on whether or not $D_X$ may somehow depend on the preceding retracement $X$. The notation with subscript $X$ is only used to denote the delay after a retracement and to distinguish it from other delays to come.
Again, the measured delay data shows the characteristic of a log-normal distribution for each combination of scaling and market as exemplarily shown in Figure \[fig:delay\].
![Measured and log-normal-fit density of the delay in an up-trend with scaling $1$ for $S\&P100$ stocks. The data is visualized with a histogram from $0$ to $5$ with a bin size of $0.11$.[]{data-label="fig:delay"}](images/hist_delay.pdf){width="0.7\linewidth"}
However – as expressed by the significant p-values in Table \[table:delay\] – the log-normal assumption is definitively wrong.
The histograms show a systematical deviation in regard to skewness. The measured delays have a less positive skewness than predicted by the model.
Besides this systematic aberration, the log-normal model maps the characteristic of the measured delay well enough that it will be used for the following analysis.
The retracement and the delay can be considered as one sequence. We therefore look for a combined log-normal distribution of retracement and delay. In this context it is important to evaluate the estimator of the correlation $\rho$ between the logarithms of the two variables, i.e. $$\hat{\rho}_{\ln{X},\ln{D}}=\frac{\frac{1}{n}\sum_{i=1}^n
(\ln({x_i})-\hat{\mu}_X)(\ln{(d_i)}-\hat{\mu}_D)}{\hat{\sigma}_X\cdot
\hat{\sigma}_D}$$ for measured values $(x_i,d_i)$. The estimated values of $\hat{\rho}$ are given in Table \[table:cor\].
This shows that the retracement and the following delay are indeed positively correlated (regarded in the same units). This way it is possible to give a joint bivariate log-normal distribution for the retracement $X$ and the delay $D$ (both in units of the preceding movement) by virtue of its density function. $$\begin{aligned}
\label{eqn_xd}
&\,&f_{X,D}\left(x,d;\mu_X,\mu_{D},\sigma_X,\sigma_{D},\rho\right)=\frac{1}{2\pi
x d \sigma_X \sigma_{D} \sqrt{1-\rho^2}}*\\
&*&\exp\left[-\frac{1}{2(1-\rho^2)}\left(\frac{(\ln(x)-\mu_X)^2}{\sigma_X^2}+\frac{(\ln(d)-\mu_{D})^2}{\sigma_{D}^2}-2\rho\frac{(\ln(x)-\mu_X)(\ln(d)-\mu_{D})}{\sigma_X\sigma_{D}}\right)\right].
\nonumber\end{aligned}$$ For calculations based on this distribution, the (true) parameters $\mu_X,\mu_{D},\sigma_X,\sigma_{D}$ and $\rho$ must be replaced by their estimators.
Finally, the concluding observation regarding the retracement can be expanded by the delay part.
\[beob\_retr\_delay\] The parameters $\mu$ and $\sigma$ of the log-normal distribution are more or less scale invariant for the retracement and the delay. In case of an up-trend, the parameters are also market invariant.
Furthermore, the parameter $\mu$ is affected by the trend direction. In case of the retracement it is larger for down-trends whereas it is smaller for down-trends in case of the delay. In spite of that, the parameter $\sigma$ is more or less invariant of the trend direction.
Finally, the correlation between the logarithms of the retracement and the delay is close to scale and market invariant, while the correlation in up-trends is significantly larger than in down-trends.
Fibonacci Retracements
----------------------
A propagated idea in the field of technical analysis for dealing with retracements is the concept of so called *Fibonacci Retracements*. Based on specific retracement levels derived from several powers of the inverse of the golden ratio one wants to make a priori predictions for future retracement values. Obviously, this assumes that there are such significant retracement values. However, the evaluation of the retracement above reveals that there are no levels with a great statistical significance but the retracements follow a continuous distribution overall. Even a finer histogram as shown in Figure \[fig:fibo\] does not reveal any significant retracements.
If one assumes that there are specific values with statistical significance in some regard, then the $100\%$-level would be the most significant one. For a closer look at significant retracement levels see [@IFTA_FIBO].
![More detailed (finer) histogram of Figure \[fig:histo\_retr\] with scaling $1$ from $0$ to $2$ with a bin size of $0.01$.[]{data-label="fig:fibo"}](images/hist_fibo.pdf){width="0.7\linewidth"}
Duration of the Retracement
---------------------------
Besides the retracement, the *duration of a trend correction*, denoted by $Y$, is also of interest. It is given by the difference in trading days between the last $P2$ and the new $P3$ (see Figure \[fig:trend\].(b)). The distributions of the retracement duration overall show the asymmetric log-normal-like behavior, as exemplarily shown in Figure \[fig:histo\_duration\]. However, the goodness of the log-normal assumption is obviously worse than in the case of the retracement itself. In particular, the measured densities of the retracement duration in a down-trend all show significant aberrations from the log-normal model.
Since every retracement value is associated with a duration, the joint distribution of the retracement and its duration can be examined (see Tables \[table:retr\_dur\] and \[table:retr\_dur\_cor\]).
Figure \[fig:butterfly\] exemplarily shows that the retracement in down-trends tends to have higher values than in up-trends. This was already seen in Table \[table:retr\] and Observation \[beob\_retr\]. However, Figure \[fig:butterfly\] also shows that the retracement in down-trends has larger durations compared to the retracement in up-trends.
![Contour plot of the joint density of the retracement value and its duration in up- and down-trends (left and right resp.) with scaling $1$ for $S\&P100$ stocks.[]{data-label="fig:butterfly"}](images/butterfly){width="0.75\linewidth"}
Movement and Correction {#sec:3}
=======================
Distribution of Relative Movements and Corrections
--------------------------------------------------
As before, all of the measurements show the same characteristic distribution – whether relative movement (\[eqn\_bew\]) or relative correction (\[eqn\_kor\]). Again, the histograms support the log-normal assumption (see Figure \[fig:histo\_bew\]).
Again, the log-normal model does not match the measured data perfectly, but often fails to map the sharp peaks and fat tails. This observation is confirmed by the fluctuating p-values (see Table \[table:bew\] and \[table:kor\]).
Consequently, based on the results of Table \[table:bew\] and \[table:kor\] a fundamental observation can be made which differs from the retracement’s one.
\[beob\_bew\] The parameters of the log-normal distribution $\mu$ and $\sigma$ are market invariant for the relative movement and correction. Furthermore, the $\sigma$ parameter is more or less scale invariant while $\mu$ increases for increasing scaling for the relative movement and correction, i.e. the relative movement and correction are more likely to be larger for higher scaling parameter.
The parameters $\mu$ and $\sigma$ are also more or less trend direction invariant for the relative movement. In case of a relative correction, however, the direction of a trend affects these parameters: Both are larger in case of a down-trend.
The dependency between the $\mu$ parameter and the scaling was to be expected, as outlined above. Obviously, higher scalings yield more significant movements and corrections. Therefore, to reflect this, the $x$-position of the density peak has to increase when the scaling increases. The dependency of the $\mu$ parameter on the trend direction has already been observed for the retracement (see Observation \[beob\_retr\]).
Delay after relative Movements and Corrections
----------------------------------------------
As before, the delay $d$ also has to be taken into account. Its absolute value is given by $$d_{abs}=|(\text{new extremum})-(\text{Close when new extremum is subsequently
detected})|$$ as defined in (\[def\_delay\]). Here, the unit for the delay is the last extreme value. This means for up-trends, the delay after the relative movement is given by $$D_M:=\frac{d_{abs}}{last Low}$$ while the delay after the relative correction is given by $$D_C:=\frac{d_{abs}}{last High}.$$ In both cases, it is sometimes abbreviated as the *relative delay* and has the same unit as the relative movement (\[eqn\_bew\]) and relative correction (\[eqn\_kor\]), respectively.
Eventually, as shown in Figure \[fig:bew\_delay\] the relative delay inherits the same characteristics as known from the delay for the retracement.
As before, the model’s skewness is slightly too positive. Consequently, the conclusion is also the same: the model matches the measurements well enough to be the basis for further analysis.
The evaluation results for the relative delay are shown in Tables \[table:bew\_delay\] to \[table:cor\_kor\]. It reveals the same behavior of the model parameters as already seen for the relative movement and correction. Furthermore, the knowledge of the correlations (Tables \[table:cor\_bew\] and \[table:cor\_kor\]) enables joint considerations of the relative movement/correction and the relative delay with the joint distribution (\[eqn\_xd\]). In sum, this leads to the following expansion of Observation \[beob\_bew\].
\[beob\_bew\_delay\] The parameters of the log-normal distribution $\mu$ and $\sigma$ are market invariant for the relative movement and correction as well as their corresponding relative delays. Additionally, for these trend variables the $\sigma$ parameter is more or less scale invariant while $\mu$ increases for increasing scaling.
The parameters $\mu$ and $\sigma$ are also more or less trend direction invariant for the relative movement (Table \[table:bew\]). In case of the relative correction, however, the direction of a trend affects these parameters: Both are larger in case of a down-trend (Table \[table:kor\]). This is also true for the relative delay after a correction (Table \[table:kor\_delay\]). For the relative delay after a movement $\mu$ is also larger whereas $\sigma$ is smaller for down-trends (Table \[table:bew\_delay\]).
Finally, the correlation between the logarithms of the relative movement/correction and the corresponding relative delay is close to scale invariant.
Period of Movements and Corrections
-----------------------------------
The dependency between $\mu$ and the scaling parameter has already been explained with their connection to the level of trend significance (Observation \[beob\_bew\]). One attribute of trend significance is the duration of a single trend period, hence the time difference between two lows and two highs within the up- and down-trend, respectively. It is called the *period* $T$ of a trend. Figure \[fig:lambda\_scaling\].(a) shows the evolution of $T$ regarding different scaling parameters. Here, for any scaling the $T$ value is the arithmetical mean of all time differences between two consecutive lows and highs within up- and down-trends, respectively.
The period $T$ shows a linear behavior which has already been observed in [@EMP123] for EUR-USD Charts. The fit parameters are similar for both evaluated markets but differ in regard to the type of trend (see Figure \[fig:lambda\_scaling\].(b)). Due to the linear model it is evident how to set the scaling parameter to emphasize a specific period. Consequently, it is also easy to map any of the three different trend classes introduced by Dow – namely the primary, secondary and tertiary trend (see Murphy, chap. “Dow Theory” in [@MURPHY]).
Mathematical Model of Trading Systems {#sec:4}
=====================================
Based on the log-normal distribution model of the retracement an anti cyclic trading system can be modeled for instance. With the joint density of the retracement and delay the return of a basic anti cyclic trading system as shown in Figure \[fig:hs\] can be calculated:
Let an anti cyclic trading system as shown in Figure \[fig:hs\].(a) be given, with entry in the correction at retracement level $a$ and target $t$. As soon as the end of the correction is recognized, the position is closed with delay $d$. Furthermore, the return (in units of the last movement) for a trade with retracement $x$ and delay $d$, denoted by $R(x,d)$, is given by $$R(x,d)=
\begin{cases}
x-a-d,\text{ if }a\leq x<t \quad\text{(retracement does not reach target)}\\
t-a,\text{ if }x\geq t \quad\text{(retracement reaches target)}
\end{cases}.$$ Moreover, the distribution of the random variable for the retracement $X$ and for the delay after the the retracement $D=D_X$ are known from section \[sec:2\]. That is the reason why, the expected value of this return, considering only retracements where the trade is opened (condition $X\geq a$), is given by $$\begin{aligned}
{\ensuremath{\mathbb{E}}}(R(X,D)|X\geq a)=&\,&{\ensuremath{\mathbb{E}}}(X|X\geq
a)-(a+{\ensuremath{\mathbb{E}}}(D|X\geq
a))\\
&+&\frac{1-F_X(t)}{1-F_X(a)}\left[t+{\ensuremath{\mathbb{E}}}(D|X\geq
t)-{\ensuremath{\mathbb{E}}}(X|X\geq
t)\right]\nonumber\end{aligned}$$ with $F_X(x)={\ensuremath{\mathbb{P}}}(X\leq x)$ denoting the distribution function of the retracement $X$ (see section \[sec:2\]).
It should be noted that $a$ and $t$ are parameters which have to be given in the same units as the retracement, i.e. units of the last movement.
For a proof see [@MT_Kempen].
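To illustrate how this expected return can be evaluated numerically, the Python sketch below estimates ${\ensuremath{\mathbb{E}}}(R(X,D)|X\geq a)$ by Monte-Carlo simulation. The log-normal parameters are invented for illustration (they are not the fitted values of section \[sec:2\]), and, for brevity, the delay is drawn independently of the retracement, whereas the formula above is based on their joint density.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative (not fitted) log-normal distributions for the retracement X
# and the delay D, both expressed in units of the last movement.
X = stats.lognorm(s=0.5, scale=np.exp(-0.4))   # sigma = 0.5, mu = -0.4
D = stats.lognorm(s=0.7, scale=np.exp(-2.0))   # sigma = 0.7, mu = -2.0

a, t = 0.382, 1.0   # entry retracement level and target

def trade_return(x, d):
    """R(x, d) of the anti-cyclic trade, in units of the last movement."""
    return np.where(x >= t, t - a, x - a - d)

x = X.rvs(1_000_000, random_state=rng)
d = D.rvs(1_000_000, random_state=rng)
opened = x >= a                                  # trade is opened only if X >= a
expected_return = trade_return(x[opened], d[opened]).mean()
print(f"Monte-Carlo estimate of E[R(X,D) | X >= a]: {expected_return:.4f}")
```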
In the same manner, several other key figures, such as the variance of the return, can be calculated analytically, assuming the empirically observed distributions as real. In this way, an unfavorable chance-to-risk ratio of the anti-cyclic trading model has been revealed (see [@MT_Kempen]). This is, however, in accordance with empirical observations from backtests of that strategy.
Conclusion and Outlook
======================
In this survey the application of the log-normal distribution model to market-technical trend data has been introduced. On the one hand, it is remarkable that the log-normal model obviously fits the trend data presented here better than the daily returns of stock prices. In contrast to the approach for daily returns of stock prices, however, no explanation for this observation has been found yet. In particular, it has not yet been clarified whether the log-normal distribution is the result of a limit process or can be explained with the log-normal model for the daily returns of stock prices. As far as applications in the direction of modeling trading systems are concerned, we introduced a simple model for an anti-cyclic trading setup based on log-normally distributed data.
On the other hand, trend-following, i.e. pro-cyclic, trading systems are more widely used than anti-cyclic ones. In fact, empirical backtests have already shown the profitability of such trading systems. Consequently, there is a need for a mathematical model. Unfortunately, pro-cyclic trading usually implies holding a position over several iterations of movement and correction, as outlined in Figure \[fig:hs\].(b). This makes the problem far more complicated in mathematical terms, since the joint distribution of a random number of relative movements and corrections – with possible correlations – has to be considered. Nevertheless, the log-normal model for the trend data represents a promising approach to this issue as well.
[99]{}
Maier-Paape, S. Automatic one two three. , [*15*]{}, 247–260. DOI: [10.1080/14697688.2013.814922](http://dx.doi.org/10.1080/14697688.2013.814922).
Hafizogullari, Y. ; Maier-Paape, S. ; Platen, A. ; Institute for Mathematics, RWTH Aachen, Report No. 61, 21 pp., 2013
Maier-Paape, S. ; Platen, A. ; Institute for Mathematics, RWTH Aachen, Report No. 79, 22 pp., 2015
Fama, E. F. Mandelbrot and the stable paretian hypothesis , [*36*]{}, 420–429.
Fama, E. F. The behavior of stock-market prices , [*38*]{}, 34–105.
Mandelbrot, B. The variation of certain speculative prices , [*36*]{}, 394–419.
Kempen, R. Fibonaccis are human (made) , 4–9.
Bachelier, L. ; University of Paris, Doctoral Dissertation, 1900 (English translation in: P.H. Cootner (Ed.), [*The Random Character of Stock Market Prices*]{}, MIT Press, Cambridge, MA, 1964, pp. 17–75).
Kempen, R. ; RWTH Aachen University, Master’s Thesis, 2015
Black, F. ; Scholes, M. The pricing of options and corporate liabilities , [*81*]{}, 637–654.
D’Agostino, R.B. ; Stephens, M.A. ; Marcel Dekker, 1986.
Murphy, J.J. ; New York Institute of Finance, 1999.
Rhea, R. ; Fraser Publishing Company, 1993.
Russel, R. ; Snowball Publishing, 1961.
Hull, J. ; Prentice Hall, 2009.
Appel, G. ; Financial Times Prentice Hall, 2005.
---
author:
- 'F. Aharonian'
- 'A.G. Akhperjanian'
- 'A.R. Bazer-Bachi'
- 'B. Behera'
- 'M. Beilicke'
- 'W. Benbow'
- 'D. Berge [^1]'
- 'K. Bernlöhr'
- 'C. Boisson'
- 'O. Bolz'
- 'V. Borrel'
- 'I. Braun'
- 'E. Brion'
- 'A.M. Brown'
- 'R. Bühler'
- 'T. Bulik'
- 'I. Büsching'
- 'T. Boutelier'
- 'S. Carrigan'
- 'P.M. Chadwick'
- 'L.-M. Chounet'
- 'A.C. Clapson'
- 'G. Coignet'
- 'R. Cornils'
- 'L. Costamante'
- 'B. Degrange'
- 'H.J. Dickinson'
- 'A. Djannati-Ataï'
- 'W. Domainko'
- 'L.O’C. Drury'
- 'G. Dubus'
- 'J. Dyks'
- 'K. Egberts'
- 'D. Emmanoulopoulos'
- 'P. Espigat'
- 'C. Farnier'
- 'F. Feinstein'
- 'A. Fiasson'
- 'A. Förster'
- 'G. Fontaine'
- 'Y. Fukui'
- 'Seb. Funk'
- 'S. Funk'
- 'M. Fü[ß]{}ling'
- 'Y.A. Gallant'
- 'B. Giebels'
- 'J.F. Glicenstein'
- 'B. Glück'
- 'P. Goret'
- 'C. Hadjichristidis'
- 'D. Hauser'
- 'M. Hauser'
- 'G. Heinzelmann'
- 'G. Henri'
- 'G. Hermann'
- 'J.A. Hinton [^2]'
- 'A. Hoffmann'
- 'W. Hofmann'
- 'M. Holleran'
- 'S. Hoppe'
- 'D. Horns'
- 'A. Jacholkowska'
- 'O.C. de Jager'
- 'E. Kendziorra'
- 'M. Kerschhaggl'
- 'B. Khélifi'
- 'Nu. Komin'
- 'K. Kosack'
- 'G. Lamanna'
- 'I.J. Latham'
- 'R. Le Gallou'
- 'A. Lemière'
- 'M. Lemoine-Goumard'
- 'J.-P. Lenain'
- 'T. Lohse'
- 'J.M. Martin'
- 'O. Martineau-Huynh'
- 'A. Marcowith'
- 'C. Masterson'
- 'G. Maurin'
- 'T.J.L. McComb'
- 'R. Moderski'
- 'Y. Moriguchi'
- 'E. Moulin'
- 'M. de Naurois'
- 'D. Nedbal'
- 'S.J. Nolan'
- 'J-P. Olive'
- 'K.J. Orford'
- 'J.L. Osborne'
- 'M. Ostrowski'
- 'M. Panter'
- 'G. Pedaletti'
- 'G. Pelletier'
- 'P.-O. Petrucci'
- 'S. Pita'
- 'G. Pühlhofer'
- 'M. Punch'
- 'S. Ranchon'
- 'B.C. Raubenheimer'
- 'M. Raue'
- 'S.M. Rayner'
- 'O. Reimer [^3]'
- 'M. Renaud'
- 'J. Ripken'
- 'L. Rob'
- 'L. Rolland'
- 'S. Rosier-Lees'
- 'G. Rowell [^4]'
- 'B. Rudak'
- 'J. Ruppel'
- 'V. Sahakian'
- 'A. Santangelo'
- 'L. Saugé'
- 'S. Schlenker'
- 'R. Schlickeiser'
- 'R. Schröder'
- 'U. Schwanke'
- 'S. Schwarzburg'
- 'S. Schwemmer'
- 'A. Shalchi'
- 'H. Sol'
- 'D. Spangler'
- '[Ł]{}. Stawarz'
- 'R. Steenkamp'
- 'C. Stegmann'
- 'G. Superina'
- 'T. Takeuchi'
- 'P.H. Tam'
- 'J.-P. Tavernet'
- 'R. Terrier'
- 'C. van Eldik'
- 'G. Vasileiadis'
- 'C. Venter'
- 'J.P. Vialle'
- 'P. Vincent'
- 'M. Vivier'
- 'H.J. Völk'
- 'F. Volpe'
- 'S.J. Wagner'
- 'M. Ward'
date:
- 'Received / Accepted'
title: '**Discovery of very high energy gamma-ray emission coincident with molecular clouds in the (G6.4$-$0.1) field**'
---
Introduction: W 28 and surroundings
===================================
The study of shell-type supernova remnants (SNRs) at $\gamma$-ray energies is motivated by the long-held idea that they are the dominant sites of hadronic Galactic cosmic-ray (CR) acceleration to energies approaching the *knee* ($\sim 10^{15}$ eV) (e.g. Ginzburg & Syrovatskii [@Ginzburg:1], Blandford & Eichler [@Blandford:1]). CRs (hadrons and electrons) are injected into the SNR shock front, and are then accelerated via the diffusive shock acceleration (DSA) process (for a review see Drury [@Drury:2]). Subsequent $\gamma$-ray production from the interaction of these CRs with ambient matter and/or electromagnetic fields is a tracer of such non-thermal particle acceleration, and establishing the hadronic or electronic nature of the parent CRs in any $\gamma$-ray source remains a key issue. Two SNRs, RX J1713.7$-$3946 and RX J0852.0$-$4622, have so far established shell-like morphology in VHE $\gamma$-rays (Aharonian [ ]{}[@HESS_RXJ1713; @HESS_VelaJnr; @HESS_RXJ1713_II; @HESS_VelaJnr_II; @HESS_RXJ1713_III]), with spectra extending to 20 TeV and beyond. In particular for RX J1713.7$-$3946, particle acceleration up to at least 100 TeV is inferred from the H.E.S.S. observations. Although a hadronic origin of the VHE $\gamma$-ray emission is highly likely in the above cases (Aharonian [ ]{}[@HESS_RXJ1713_II; @HESS_VelaJnr_II], Berezhko & Völk [@Berezhko:1], Berezhko, Pühlhofer & Völk [@Berezhko:2]), an electronic origin is not ruled out.
Disentangling the electronic and hadronic components in TeV SNRs may be made easier by studying: (1) SNR $\gamma$-ray spectra well beyond $\sim$10 TeV, an energy regime where electrons suffer strong radiative energy losses and due to Klein-Nishina effects the resulting inverse-Compton spectra tend to show a cut-off; (2) older SNRs (age approaching 10$^5$ yr) in which accelerated electrons have lost much of their energy through radiative cooling and do not reach multi-TeV energies; (3) SNRs interacting with adjacent molecular clouds of very high densities $n> 10^3$ cm$^{-3}$. It is the latter regard especially (and to a certain degree the second) which makes the SNR W 28 (G6.4$-$0.1) an attractive target for VHE $\gamma$-ray studies. In this paper we outline the discovery of VHE $\gamma$-ray emission from several sites in the W 28 field and briefly discuss their relationship with molecular clouds, W 28, and other potential particle accelerators in the region.
W 28 (G6.4$-$0.1) is a mixed-morphology SNR, with dimensions 50$^\prime$x45$^\prime$ and an estimated distance between 1.8 and 3.3 kpc (eg. Goudis [@Goudis:1], Lozinskaya [@Lozinskaya:1]). It is an old-age SNR (age 35000 to 150000 yr; eg. Kaspi [ ]{}[@Kaspi:1]), thought to have entered its radiative phase of evolution (eg. Lozinskaya [@Lozinskaya:1]) in which much of its CRs have escaped into the surrounding interstellar medium (ISM). We note also that the evolutionary status (Sedov and/or radiative) of shell-type SNRs may depend on the density of their surroundings (see eg. Blondin [ ]{}[@Blondin:1]).
W 28 is distinguished by its interaction with a molecular cloud (Wootten [@Wootten:1]) along its north and northeastern boundaries. This interaction is traced by the high concentration of 1720 MHz OH masers (Frail [ ]{}[@Frail:2], Claussen [ ]{}[@Claussen:1; @Claussen:2]), and also the location of very high-density ($n>10^3$ cm$^{-3}$) shocked gas (Arikawa [ ]{}[@Arikawa:1], Reach [ ]{}[@Reach:1]). The shell-like radio emission (Long [ ]{}[@Long:1], Dubner [ ]{}[@Dubner:1]) peaks at the northern and northeastern boundaries where interaction with the molecular cloud is established. Further indication of the influence of W 28 on its surroundings is the expanding HI void at a distance $\sim$1.9 kpc (Velázquez [ ]{}[@Velazquez:1]). The X-ray emission, which overall is well-explained by a thermal model, peaks in the SNR centre but has local enhancements in a region overlapping the northeastern SNR/molecular cloud interaction (Rho & Borkowski [@Rho:2]).
In the neighbourhood of W 28 are the radio-bright HII regions M 20 (Trifid Nebula at $d \sim$1.7 kpc Lynds [ ]{}[@Lynds:1] – with open cluster NGC 6514), M 8 (Lagoon Nebula at $d\sim 2$ kpc Tothill [ ]{}[@Tothill:1] — containing the open clusters NGC 6523 and NGC 6530) and the ultra-compact HII region W 28A2, all of which are representative of the massive star formation taking place in the region. Further discussion concerning the active star formation in this region may be found in van den Ancker [ ]{}([@Ancker:1]) and references therein. Additional SNRs in the vicinity of W 28 have also been identified: G6.67$-$0.42 and G7.06$-$0.12 (Yusef-Zadeh [ ]{}[@Yusef:1]), G5.55+0.32, G6.10+0.53 and G7.20+0.20 (Brogan [ ]{}[@Brogan:1]). The pulsar PSR J1801$-$23 with spin-down luminosity $\dot{E} \sim 6.2\times 10^{34}$ erg s$^{-1}$ and distance $d = 13.5$ kpc (based on its dispersion measure) is at the northern radio edge (Kaspi [@Kaspi:1]). More recent discussion (Claussen [ ]{}[@Claussen:3]) assigns a lower limit of 9.4$\pm$2.4 kpc for the pulsar distance.
W 28 has also been linked to $\gamma$-ray emission detected at $E>300$ MeV by COS-B (Pollock [@Pollock:1]) and $E>100$ MeV by EGRET (Sturner & Dermer [@Sturner:1], Esposito [ ]{}[@Esposito:1], Zhang [ ]{}[@Zhang:1]). The EGRET source, listed in the 3rd catalogue (Hartman [ ]{}[@Hartman:1]) as 3EG J1800$-$2338, is positioned at the southern edge of the radio shell. We have also performed an analysis of EGRET data, with additional data not included in the 3rd catalogue, and results are discussed later in this paper.
Previous observations of the W 28 region at VHE energies by the CANGAROO-I telescope revealed no evidence for such emission (Rowell [ ]{}[@Rowell:1]) and upper limits at the $\sim$0.2 to 0.5 Crab-flux level for energies $E>1.5$ TeV (1.1 to 2.9$\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$) were set for various regions.
Results at VHE and $E>$100 MeV $\gamma$-ray energies
====================================================
H.E.S.S. VHE analysis and results
---------------------------------
The High Energy Stereoscopic System (H.E.S.S.) was used to observe the W 28 region. Operating in the Southern Hemisphere, H.E.S.S. consists of four identical 13 m diameter Cherenkov telescopes (Bernlohr [ ]{}[@Bernlohr:1]). H.E.S.S. employs the stereoscopic imaging atmospheric Cherenkov technique, and is sensitive to $\gamma$-rays above an energy threshold of $\sim$0.1 TeV (Funk [ ]{}[@Funk:1]). An angular resolution of 5$^\prime$ to 6$^\prime$ (Gaussian standard deviation) on an event-by-event basis is achieved, and the large field of view (FoV) with full width at half maximum $\mathrm{FWHM}\sim 3.5^\circ$ permits survey coverage in a single pointing. A point source sensitivity approaching 0.01 Crab flux ($\sim10^{-13}$ erg cm$^{-2}$s$^{-1}$ at 1 TeV) is achieved for a 5$\sigma$ detection after $\sim$25 hr observation. Further details concerning H.E.S.S. can be found in Hinton ([@Hinton:1]) and references therein.
The total observation time covering the W 28 region amounts to $\sim$42 hr in a series of runs (with typical duration $\sim$28 min) spread over the 2004, 2005 and 2006 seasons. Runs were accepted for analysis if they met quality control criteria based on the recorded rate of isotropic CR background events, the number of malfunctioning pixels in each camera, the calibration and the tracking performance (see Aharonian [ ]{}[@HESS_Calibration] for details).
Data were analysed using the moment-based Hillas analysis procedure, the same used in the analysis of the inner Galactic Plane Scan datasets (Aharonian [ ]{}[@HESS_GalScan; @HESS_GalScan_II]). Observations covered a range of zenith angles leading to energy thresholds of $\sim 320$ GeV with [*hard cuts*]{} (Cherenkov image integrated intensity or [*size*]{} $>$200 photoelectrons) and $\sim$150 GeV for [*standard cuts*]{} ([*size*]{} $>$ 80 photoelectrons). [*Hard cuts*]{} were used in VHE $\gamma$-ray images, source location studies and energy spectra. In addition, [*Standard cuts*]{} were used in energy spectra in order to increase the energy coverage of extracted spectra. Generally consistent results were obtained using an alternative analysis based on a model of Cherenkov image parameters (de Naurois [@Mathieu:1]), which also utilises an independent calibration and lower cut on image [*size*]{} of $>$60 photoelectrons. A forthcoming paper will highlight results in detail from this analysis, which achieves improved sensitivities at lower thresholds compared to the pure Hillas-based analysis.
The VHE $\gamma$-ray image (Fig. \[fig:tevskymap\]) reveals two sites of VHE $\gamma$-ray emission in the direction of the northeastern and southern boundaries of the W 28 SNR.
{width="75.00000%"}
The colour scale in this figure depicts the Gaussian-smoothed VHE excess counts above a CR background estimate according to the [*template*]{} model (Rowell [@Rowell:2]), along with significance contours obtained after integrating events within a radius of 0.1$^\circ$ from each bin centre (appropriate for pointlike source searching). Similar images were obtained using alternative CR background models. A smoothing radius of $4.2^\prime$ was used to sufficiently smooth out random fluctuations in the image. An assessment of the VHE post-trial significances was made from our original search for marginally extended sources, which employed an [*a priori*]{} integration radius $\theta=0.2^\circ$. Under this scheme we applied $\sim 2.2\times 10^5$ trials (a very conservative value applied to these data) accumulated in searching for sources in the inner Galactic Plane (as in Aharonian [@HESS_GalScan]). The pre-trial significance of the VHE sources, at $\geq +7\sigma$, is therefore converted to a post-trial significance of $\geq +5\sigma$.
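For readers unfamiliar with on/off counting statistics, the significance of an excess over a cosmic-ray background estimate is conventionally computed with Eq. 17 of Li & Ma (1983), cited in the reference list. The sketch below implements that formula; it is only illustrative, does not reproduce the [*template*]{} background model actually used here, and the counts are invented.

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Eq. 17 of Li & Ma (1983): significance of an on/off counting
    measurement, where alpha is the on/off exposure ratio."""
    n_on, n_off = np.asarray(n_on, float), np.asarray(n_off, float)
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sign(n_on - alpha * n_off) * np.sqrt(2.0 * (term_on + term_off))

# Toy counts integrated within 0.1 deg of a trial position (illustrative only).
print(li_ma_significance(n_on=180, n_off=1200, alpha=0.1))
```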
Based on the significance contours in Fig. \[fig:tevskymap\], we assign labels to the northeastern source, HESS J1801$-$233, and to the complex of sources to the south, HESS J1800$-$240, according to their best fit positions (fitting a 2D Gaussian and ellipse respectively to the unsmoothed excess map). Three components of HESS J1800$-$240 are identified, labeled here A, B and C from East to West. These components represent local peaks $\sim 2\sigma$ above their surrounds. Although not convincingly resolved under this analysis these components may comprise separate sources (or at least in part) due to their possible relationship with distinct multiwavelength counterparts (discussed later).
Differential photon energy spectra were extracted from HESS J1801$-$233 and all three components of HESS J1800$-$240. Spectra were well-fit by pure power laws ($dN/dE = k (E/1 {\rm TeV})^{-\Gamma}$) with photon indices $\Gamma \sim 2.5$ to 2.7 in the energy range $\sim$0.3 to $\sim$5 TeV (see Table \[tab:locations\] for results). Spectral fits were obtained using fluxes from a combination of [*hard*]{} and [*standard*]{} cuts to maximise the energy coverage. Spectral analysis employed the [*reflected background*]{} model (Berge [ ]{}[@Berge:1]), in which control regions reflected through each tracking position (taking care to avoid known VHE $\gamma$-ray sources) were used to estimate the CR background. Within the statistical and systematic errors, the photon indices appear consistent throughout HESS J1800$-$240. Except for HESS J1800$-$240C, all of the VHE sources appear extended with intrinsic radii of $\sim$10$^\prime$. At a distance of 2 kpc, the VHE source luminosities in the energy range 0.3 to 3 TeV would be on the order of $10^{33}$ erg s$^{-1}$.
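The conversion from the fitted power laws to the quoted luminosities can be sketched as follows: the differential spectrum $dN/dE = k\,(E/1\,{\rm TeV})^{-\Gamma}$ is integrated in energy between 0.3 and 3 TeV and scaled by $4\pi d^2$. In the snippet below the normalisation is an assumed illustrative value in cm$^{-2}$ s$^{-1}$ TeV$^{-1}$; with $\Gamma=2.66$ and $d=2$ kpc it returns a luminosity of order $10^{33}$ erg s$^{-1}$, consistent with the statement above.

```python
import numpy as np

TEV_TO_ERG = 1.602   # 1 TeV in erg
KPC_TO_CM = 3.086e21

def energy_flux(k, gamma, e1=0.3, e2=3.0):
    """Energy flux (erg cm^-2 s^-1) of dN/dE = k (E/1 TeV)^-gamma
    integrated between e1 and e2 (TeV); k in cm^-2 s^-1 TeV^-1."""
    g = 2.0 - gamma
    return TEV_TO_ERG * k / g * (e2**g - e1**g)

def luminosity(k, gamma, d_kpc):
    """Isotropic luminosity (erg s^-1) at distance d_kpc."""
    return 4.0 * np.pi * (d_kpc * KPC_TO_CM) ** 2 * energy_flux(k, gamma)

# Illustrative normalisation, assumed to be 7.5e-13 cm^-2 s^-1 TeV^-1 at 1 TeV.
print(f"{luminosity(7.5e-13, 2.66, d_kpc=2.0):.2e} erg/s")   # ~1.5e33
```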
EGRET $E>100$ MeV analysis and results
--------------------------------------
We have also analysed EGRET data for the W 28 region, using CGRO observation cycles (OC) 1 to 6. This slightly expands on the dataset of the 3rd EGRET catalogue (using OCs 1 to 4; Hartman [ ]{}[@Hartman:1]), which revealed the pointlike source 3EG J1800$-$2338 ($E>100$ MeV). Our analysis confirms the presence of a pointlike $E>100$ MeV source in this region, here labeled GRO J1801$-$2320. GRO J1801$-$2320 appears slightly shifted ($\sim$0.2$^\circ$) with respect to the 3EG position. The 3EG position refers to an $E>$100 MeV determination based on the diffuse model of Hunter [ ]{}([@Hunter:1]). Our dedicated analysis of archival EGRET data differs in several respects from that of the 3EG catalogue. First, we employed the finalised EGRET instrumental responses, which were made available by 2001 and are considered mandatory for investigating an EGRET source under conditions applicable from the end of OC 4 (narrow field of view modus, rapidly deteriorating spark chamber efficiency, and other issues). Second, we restricted the analysis by narrowing the data selection to small pointing angles with respect to our region of interest, which avoids the need to invoke a wide-angle point spread function (PSF). Third, the imprecision of the interstellar emission model was countered via adjustments of the analysis parameters [gmult]{} and [gbias]{} to account for local deviations from the large-scale diffuse emission model in the region of interest. The 68% and 95% location contours of GRO J1801$-$2320 are plotted in Fig. \[fig:tevskymap\], and match well the location of HESS J1801$-$233. Since, however, the EGRET degree-scale PSF easily encompasses both of the VHE sources, we cannot rule out a relationship with HESS J1800$-$240. For the energy spectrum of GRO J1801$-$2320, we have used the flux points extracted at the position of 3EG J1800$-$2338, as negligible differences were found between these and those obtained at the nominal 3EG position. Fitting a pure power law we obtained a spectral index of $\Gamma = 2.16\pm0.10$, quite consistent with the published value from Hartman [ ]{}([@Hartman:1]). Comparisons are made with the VHE spectra of HESS J1801$-$233 and HESS J1800$-$240 in §\[sec:discussion\].
[llllllll]{}
Name & R.A. \[deg\] & Dec \[deg\] & $^1$$\sigma_{\rm src}$ \[deg\] & $^2$$S$ \[$\sigma$\] (evts) & $^3 k$ & $^4 \Gamma$ & $^5 L$\
HESS J1801$-$233 & 270.426 $\pm$ 0.031 & $-$23.335 $\pm$ 0.032 & 0.17 $\pm$ 0.03 & +7.9 (281) & 7.50 $\pm$ 1.11 $\pm$ 0.30 & 2.66 $\pm$ 0.27 & 1.5\
HESS J1800$-$240A$^\S$ & 270.491 $\pm$ 0.001 & $-$23.962 $\pm$ 0.001 & 0.15 & +6.0 (180) & 7.65 $\pm$ 1.01 $\pm$ 0.50 & 2.55 $\pm$ 0.18 & 1.5\
HESS J1800$-$240B$^\S$ & 270.110 $\pm$ 0.002 & $-$24.039 $\pm$ 0.009 & 0.15 & +7.8 (236) & 7.58 $\pm$ 0.90 $\pm$ 0.15 & 2.50 $\pm$ 0.17 & 1.4\
HESS J1800$-$240C & 269.715 $\pm$ 0.014 & $-$24.052 $\pm$ 0.006 & 0.02 $\pm$ 0.15 & +4.5 (71) & 4.59 $\pm$ 0.89 $\pm$ 0.20 & 2.31 $\pm$ 0.35 & 0.8\
HESS J1800$-$240$^{\S\S}$ & 270.156 $\pm$ 0.044 & $-$23.996 $\pm$ 0.022 & 0.32$^{\rm RA}$ $\pm$ 0.05 & +10.3 (652) & 18.63 $\pm$ 1.85 $\pm$ 1.20 & 2.49 $\pm$ 0.14 & 3.6\
 & & & 0.17$^{\rm Dec}$ $\pm$ 0.03 & & & & \
GRO J1801$-$2320 & 270.360 $\pm$ 0.150 & $-$23.340 $\pm$ 0.150 & – & +13.2 & 3.35 $\pm$ 0.52 & 2.16 $\pm$ 0.10 & 480.0\
\[tab:locations\]
NANTEN and other observations of Molecular Clouds {#sec:molclouds}
=================================================
In searching for molecular cloud counterparts to the VHE sources, we analysed $^{12}$CO ($J$=1–0) molecular line observations taken by the 4-meter mm/sub-mm NANTEN telescope, at Las Campanas Observatory, Chile (Mizuno & Fukui [@Mizuno:1]). The NANTEN Galactic Plane Survey data of 1999 to 2003 (see Matsunaga [ ]{}([@Matsunaga:1]) and references therein for details) were used, and for the W 28 region, the survey grid spacing was 4$^{\prime}$.
Figure \[fig:co\_tev\] (upper left panel) shows the $^{12}$CO ($J$=1–0) image integrated over the Local Standard of Rest velocity ($V_{\rm LSR}$) range 0 to 10 km s$^{-1}$, while the right panel shows the image integrated over the range $V_{\rm LSR}$=10 to 20 km s$^{-1}$. Two prominent $^{12}$CO features representing molecular clouds centred at ($l, b$)=(6.7$^\circ$, $-$0.3$^\circ$) and ($l, b$)=(5.9$^\circ$, $-$0.4$^\circ$) spatially correspond with the VHE $\gamma$-ray emission. As shown in Fig. \[fig:co\_tev\], these molecular clouds span both $V_{\rm LSR}$ ranges. According to the Galactic rotation model of Brand & Blitz ([@Brand:1]), these $V_{\rm LSR}$ ranges formally correspond to kinematic distances of approximately 0 to $\sim$2.5 kpc (overlapping the Sagittarius arm), and 2.5 to $\sim$4 kpc (reaching the Scutum-Crux arm) respectively. Given the uncertainties in rotation models close to the Galactic centre, such $V_{\rm LSR}$ ranges would cover the distance estimates for W 28, the most prominent SNR in the region. Much discussion has centred on the systemic velocity (SV) of W 28 (and hence its distance), and how much W 28 has influenced matter in the region. H$\alpha$ (Radhakrishman [ ]{}[@Radhakrishman:1]) and HI absorption features (Lozinskaya [ ]{}[@Lozinskaya:1]) have suggested SV$\sim$18 km s$^{-1}$. Claussen [ ]{}([@Claussen:1]) have pointed to SV$\sim$17 km $s^{-1}$. More recent HI studies by Velázquez [ ]{}([@Velazquez:1]) suggest SV=+7 km s$^{-1}$ (which leads to the distance estimate for W 28 at $\sim$1.9 kpc). They also suggest a HI shell may also extend over the $V_{\rm LSR}=$ $-$25 to +38 km s$^{-1}$ range, giving rise to a shock speed of $\sim$30 km s$^{-1}$. Torres [ ]{}([@Torres:1]) and Reach [ ]{}([@Reach:1]) have also studied the large-scale $^{12}$CO(J=1-0) emission for this region using the survey data of Dame [ ]{}([@Dame:1]), suggesting that the parent molecular cloud under the influence of W 28 is presently centred at $V_{\rm LSR}\sim$19 km s$^{-1}$. The Galactic longitude-velocity (l-v) diagram (bottom panels of Fig. \[fig:co\_tev\]) from our NANTEN data integrated over the Galactic latitude ranges $b=-0.125^\circ$ to $-0.5^\circ$ and $b=-0.125^\circ$ to $-0.7^\circ$ shows the distribution of molecular material in relation to the SV of W 28 from the HI studies of Velázquez. The wider, latter $b$ range shows the effect of including the cloud component overlapping HESS J1800$-$240A. A void or dip in CO emission appears at a similar $V_{\rm LSR}$ range as found in the HI data, with much of the molecular material appearing to surround the void in positive $V_{\rm LSR}$ values with respect to the SV of W 28. A similar longitude-velocity picture was revealed by Torres [ ]{}([@Torres:1]) (see their Fig. 22).
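The kinematic distances quoted above depend on the adopted rotation model (Brand & Blitz 1993 in this work). As a rough cross-check, the sketch below uses a simple flat rotation curve with assumed constants $R_0=8.5$ kpc and $V_0=220$ km s$^{-1}$; toward $l\simeq6.4^\circ$ it gives near-side distances of roughly 2.5 kpc at $V_{\rm LSR}=10$ km s$^{-1}$ and 4 kpc at 20 km s$^{-1}$, in line with the ranges quoted, but it is not the model actually used for the analysis.

```python
import numpy as np

R0, V0 = 8.5, 220.0   # kpc, km/s (assumed IAU-style constants)

def kinematic_distances(v_lsr, l_deg):
    """Near/far kinematic distances (kpc) for a flat rotation curve
    (the paper itself uses the Brand & Blitz 1993 model)."""
    sl, cl = np.sin(np.radians(l_deg)), np.cos(np.radians(l_deg))
    R = R0 / (1.0 + v_lsr / (V0 * sl))      # galactocentric radius
    root = np.sqrt(R**2 - (R0 * sl) ** 2)
    return R0 * cl - root, R0 * cl + root    # near, far

for v in (10.0, 20.0):
    near, far = kinematic_distances(v, l_deg=6.4)
    print(f"V_LSR={v:4.1f} km/s -> near {near:.1f} kpc, far {far:.1f} kpc")
```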
The $V_{\rm LSR}$=0 to 10 km s$^{-1}$ component of the northeast cloud overlapping HESS J1801$-$233 is already well studied (see Reach [ ]{}[@Reach:1] and references therein). Shocked $^{12}$CO($J$=3–2) molecular gas as indicated by a broad wing-like line dispersion (Arikawa [ ]{}[@Arikawa:1] — hereafter A99; using the James Clerk Maxwell Telescope (JCMT); in 15$^{\prime\prime}$ grid steps) and a high concentration of OH masers (Claussen [ ]{}[@Claussen:1]), suggests material here has been compressed by the SNR shock in W 28. The line dispersion, $\Delta V \leq70$ km s$^{-1}$, is an indicator of the SNR shock speed in this particular region. The unshocked gas was also mapped by A99 via $^{12}$CO($J$=1–0) observations with the Nobeyama 45 m telescope (in 34$^{\prime \prime}$ grid steps for $V_{\rm LSR}=+4$ to +9 km s$^{-1}$). The shocked and unshocked gas extends to the northeast and northern boundaries of W 28 (see Fig. 3 of A99), and it appears just their northeastern components are positionally coincident with the VHE emission of HESS J1801$-$233. A99 estimate the mass and average density of the shocked gas at $M\sim 2\times 10^3$ M$_\odot$ and $n\sim10^4$ cm$^{-3}$ respectively. For the unshocked gas, A99 obtained $M\sim 4\times 10^3$ M$_\odot$ and $n\sim 10^3$ cm$^{-3}$ respectively. The $V_{\rm LSR}$=10 to 20 km s$^{-1}$ range in our NANTEN data also reveals additional molecular clouds along the line of sight that could contribute to the VHE emission.
The southern cloud overlaps all components of HESS J1800$-$240, with a dominant fraction of the cloud overlapping components A and B. The component of this cloud visible in the $V_{\rm LSR}$ = 0 to 10 km s$^{-1}$ range coincides well with HESS J1800$-$240B and the HII region W 28A2. The strongest CO temperature peak of this component at ($l, b$)=(5.9$^\circ$, -0.4$^\circ$) is within $0.02^\circ$ of W 28A2, and is likely the dense material surrounding this HII region. Moreover the peak’s velocity at $V_{\rm LSR}$ 9–10 km s$^{-1}$ (with dispersion of $\sim$15 km s$^{-1}$), suggests a distance ($\sim$2.4 kpc) similar to that of W 28A2 ($\sim$2 kpc; Acord [ ]{}[@Acord:1]), and also W 28. In the $V_{\rm LSR}$ = 10 km s$^{-1}$ to 20 km s$^{-1}$ range, molecular material appears to coincide with all three VHE components of HESS J1800$-$240. In particular, HESS J1800$-$240A and C have molecular cloud overlaps only in this latter $V_{\rm LSR}$ range.
Using the relation between the hydrogen column density $N(\rm H_{2})$ and the $^{12}$CO($J$=1–0) intensity (the X-factor) $W(^{12}$CO), $N({\rm H_2}) = 1.5 \times 10^{20}\ [W({\rm ^{12}CO})/{\rm (K\ km/s)}]\ {\rm (cm^{-2})}$ (Strong [ ]{}[@Strong:1]), we estimate a total mass for the northeastern cloud from our NANTEN data at $\sim 5 \times 10^4$ M$_\odot$ for $d=$2 kpc within an elliptical region of diameter 0.2$^{\circ} \times 0.4^\circ$ (7$\times$14 pc; centred on HESS J1801$-$233) for the velocity range 0–25 km s$^{-1}$. An average density (for neutral hydrogen) of $\sim$1.4$\times$10$^{3}$ cm$^{-3}$ is also derived. Similarly, the total mass of the southern cloud is estimated at $\sim$1.0$\times 10^{5}$ $M_{\odot}$ for $d$=2 kpc, combining clouds from a circular area of radius 0.15$^{\circ}$ (5 pc) for the velocity range 12–20 km s$^{-1}$ and an area of 0.3$^{\circ} \times 0.6^\circ$ (10.5$\times$21 pc) in diameter for the velocity range 0–12 km s$^{-1}$ (both regions are centred on HESS J1800$-$240B). The corresponding average density is $\sim$1.0$\times$10$^{3}$ cm$^{-3}$. By integrating over the rather broad 0–20 km s$^{-1}$ and 0–25 km s$^{-1}$ ranges we assume that the molecular material along this line of sight is physically connected at the same distance (for example $d\sim$2 kpc) and possibly disrupted or shocked by a local energy source. Systematic effects in the mass estimates arise from the velocity crowding in this part of the Galactic plane, and also from the broad velocity range, for which the X-factor used above may not necessarily apply. In the latter case, the X-factor may underestimate the cloud mass since an appreciable fraction of gas may be heated under the assumption of disrupted and/or shock-heated gas. One must allow for $\sim$4 kpc distances for some or even all of the $V_{\rm LSR}>$10 km s$^{-1}$ cloud components, and therefore the conclusion that they are not related to W 28 and other interesting objects at $d\sim 2$ kpc. If the clouds are related, W 28 could play a disrupting role. The level of this disruption is however unclear since several other plausible candidates related to the star formation (discussed later) in this region could also contribute. Some other molecular cloud complexes have also been discussed as possibly disrupted by adjacent SNRs and/or energetic sources (eg. Yamaguchi [ ]{} [@Yamaguchi:1], Moriguchi [ ]{}[@Moriguchi:1]). In Table \[tab:masses\], we present a full summary of cloud masses and densities (for regions centered on the VHE source coordinates as in Table \[tab:locations\]) for various combinations of cloud components and distances of 2 and 4 kpc. The velocity separation of cloud components is based on their apparent distribution in Fig. \[fig:co\_tev\] (bottom panels).
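The mass estimate described above amounts to converting the velocity-integrated CO intensity into an H$_2$ column density with the quoted X-factor and multiplying by the projected cloud area and the H$_2$ mass. The sketch below reproduces this bookkeeping with an invented mean $W(^{12}{\rm CO})$; whether a $\sim$1.36 correction for helium is included is left as an explicit option, since the text refers to the mass of neutral hydrogen.

```python
import numpy as np

X_CO = 1.5e20          # cm^-2 (K km/s)^-1, as quoted in the text
M_H2 = 3.35e-24        # g, mass of an H2 molecule
MSUN = 1.989e33        # g
PC = 3.086e18          # cm

def cloud_mass(W_co, area_pc2, include_helium=False):
    """H2 mass (Msun) from a mean velocity-integrated CO intensity W_co
    (K km/s) over a projected area area_pc2 (pc^2).  Multiplying by
    ~1.36 would add the helium contribution (not done by default)."""
    N_H2 = X_CO * W_co                       # cm^-2
    mass = N_H2 * M_H2 * area_pc2 * PC**2    # g
    if include_helium:
        mass *= 1.36
    return mass / MSUN

# Illustrative input: a 7 pc x 14 pc ellipse with a mean W(CO) of 60 K km/s.
area = np.pi * 3.5 * 7.0
print(f"M ~ {cloud_mass(60.0, area):.2e} Msun")
```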
Radio to X-ray views
====================
Figure \[fig:mwl\] compares the radio (left panel), infrared and X-ray views (right panel) of the W 28 region with the VHE significance contours. The Very Large Array (VLA) 90 cm continuum radio image from Brogan [ ]{}([@Brogan:1]) illustrates the shell-like SNR morphology peaking strongly along the northern and eastern boundaries. HESS J1801$-$233 can be seen to overlap the northeastern shell of the SNR, coinciding with a strong peak in the 90 cm continuum emission. We note that a thermal component is likely present in this peak, given its spectral index $\alpha \sim -0.2$ (for $S \propto \nu^\alpha$) between 90 and 20 cm (Dubner [ ]{}[@Dubner:1]). Outlines of the SNRs traced by non-thermal radio emission, G6.67$-$0.42 and G7.06$-$0.12 (Yusef-Zadeh [ ]{}[@Yusef:1], Helfand [ ]{}[@Helfand:1], labelled as G6.51$-$0.48 and G7.0$-$0.1 by Brogan [ ]{}[@Brogan:1]), are also indicated. In addition, Brogan [ ]{}note that the non-thermal radio arc G5.71$-$0.08, which overlaps well with HESS J1800$-$240C, could be a partial shell and therefore an SNR candidate. The distances to G6.67$-$0.42 and G5.71$-$0.08 are presently unknown. Directly south of W 28, the ultracompact HII region W 28A2 is a prominent radio source, and is positioned within $0.1^\circ$ of the centroid of HESS J1800$-$240B. The other HII regions G6.1$-$0.6 (Kuchar & Clark [@Kuchar:1]) and 6.225$-$0.569 (Lockman [@Lockman:1]) are also associated with radio emission.
The X-ray morphology, as shown in the ROSAT PSPC (0.5 to 2.4 keV) image from Rho & Borkowski ([@Rho:2]) (Fig. \[fig:mwl\] right panel), reveals the central concentration of X-ray emission, which is predominantly thermal in nature with characteristic temperatures in the range $kT \sim$0.4 to 2 keV. An X-ray peak or [*Ear*]{} lies at the northeastern boundary and just outside the 4$\sigma$ significance contour of HESS J1801$-$233. A non-thermal component to the [*Ear*]{} emission (3$\times$1.5$^\prime$; 2.1$\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ at 1 keV) with a power-law index $\Gamma$=1.3 has been suggested by Ueno [ ]{}([@Ueno:1]) based on XMM-Newton observations in the 0.5 to 10 keV energy range. The total kinetic energy of the SNR is estimated at $\sim 4 \times 10^{50}$ erg, which could be a lower limit due to the possible break-out of the SNR along the southern edge, away from the molecular cloud to the north and east (Rho & Borkowski [@Rho:2]). The HII regions W 28A2 and G6.1$-$0.6 are prominent in the 8.28 $\mu$m image (Fig. \[fig:mwl\] right panel) from the Midcourse Space Experiment (MSX), showing that a high concentration of heated dust still surrounds these very young stellar objects.
Discussion {#sec:discussion}
==========
Our discovery of VHE $\gamma$-ray emission associated with dense ($n \ge 10^3$ cm$^{-3}$) molecular clouds in the W 28 field adds to the list of such associations after the detection of diffuse $\gamma$-ray emission from the Galactic Ridge (Aharonian [ ]{}[@HESS-Diffuse]), the association of HESS J1834$-$087 with the old-age SNR W 41 (Lemiére [ ]{}[@Lemiere:1], Albert [ ]{}[@Albert:1]) and VHE emission discovered from IC 443 (Albert [ ]{}[@Albert:2]). The VHE/molecular cloud association could indicate a hadronic origin for the parent multi-TeV particles where the $\gamma$-ray emission (multi-GeV to TeV energies) arises from the decay of neutral pions resulting from the interaction of accelerated protons (and higher $Z$ nuclei) with ambient matter of density $n$. In this case the $\gamma$-ray flux would scale with cloud mass or density, and the total energy in accelerated particles or CRs penetrating the cloud(s). We note that a perfect correlation between the VHE and molecular cloud morphologies is not expected due to complex time and energy-dependent propagation of CR to and within the cloud (see Gabici [ ]{}[@Gabici:1] for a discussion). Projection effects are also likely to be important for the examples discussed here since the VHE emission could have contributions from clouds at different velocities, not necessarily physically connected to one another. For example the relationship between HESS J1801$-$233 and the W 28/molecular cloud interaction is not entirely clear due to the overlapping molecular cloud components at $V_{\rm LSR}>$10 km s$^{-1}$.
One should also consider accelerated electrons as the source of $\gamma$-ray emission, via inverse-Compton (IC) scattering of ambient soft photon fields and/or non-thermal Bremsstrahlung from the interaction of electrons with dense ambient matter. Maximum electron energies, however, may be considerably lower (a factor of $\sim$10 or more below that of protons) due to synchrotron cooling in magnetic fields and low shock speeds, in the absence of strong electron replenishment. An assessment of the role of accelerated electrons requires consideration of the non-thermal radio and X-ray emission (where a convincing measurement of the latter is so far lacking), and also magnetic fields in this region. Such observations will also provide constraints on synchrotron emission expected from secondary electrons resulting from primary hadron interactions with ambient matter (as discussed above). Relatively high magnetic fields $B\sim 100\, (n/10^4\,{\rm cm^{-3}})^{0.5}$ $\mu$G are inferred in dense molecular clouds (Crutcher [ ]{}[@Crutcher:1]). In addition, higher values are indicated from Zeeman splitting measurements in the compact areas (arcsecond scale) surrounding the 1720 MHz OH masers of the northeastern interaction region (Hoffman [ ]{}[@Hoffman:1]), coinciding with HESS J1801$-$233. To the north of W 28, another potential source of particle acceleration is PSR J1801$-$23, where the VHE emission may arise in an asymmetric pulsar-wind-nebula (PWN) scenario (a primarily leptonic scenario), similar to HESS J1825$-$137 (Aharonian [ ]{}[@hessj1825]). However, with a spin-down power of $\dot{E} \sim 6.2\times 10^{34}$ erg s$^{-1}$ at distance $d>9.4$ kpc, this pulsar appears unlikely to power any of the $\gamma$-ray sources observed in the region. A PWN scenario would therefore require a so far undetected energetic pulsar.
[llllll]{}
VHE Source & $V_{\rm LSR}$ & $d$ & $^\dagger M$ & $^\ddagger n$ & $^\S k_{\rm CR}$\
 & (km s$^{-1}$) & (kpc) & ($10^5$ M$_\odot$) & ($10^3$ cm$^{-3}$) & \
HESS J1801$-$233 & 0-25 & 2.0 & 0.5 & 1.4 & 13\
HESS J1801$-$233 & 0-12 & 2.0 & 0.2 & 2.3 & 32\
HESS J1801$-$233 & 13-25 & 4.0 & 1.1 & 0.6 & 23\
HESS J1800$-$240 & 0-20 & 2.0 & 1.0 & 1.0 & 18\
HESS J1800$-$240A & 12-20 & 4.0 & 1.0 & 0.7 & 28\
HESS J1800$-$240B & 0-12 & 2.0 & 0.4 & 2.3 & 18\
HESS J1800$-$240B & 12-20 & 4.0 & 1.5 & 1.2 & 19\
In the case of a hadronic origin and following Eq.10 of Aharonian ([@Aharonian:1]), we can estimate the CR density enhancement factor $k_{\rm CR}$ in units of the local CR density required to explain the VHE emission, given an estimate for the cloud masses and assumptions on distance. Converting the VHE energy spectra in Table \[tab:locations\] to an integral value for $E>1$ TeV, assuming distances of 2 and 4 kpc for the various cloud components, and that all the VHE emission in each source is associated with the cloud component under consideration, we arrive at values for $k_{\rm CR}$ in the range 13 to 32 (Table \[tab:masses\]).
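For completeness, the commonly quoted form of Eq. 10 of Aharonian (1991) is $F_\gamma(\geq E) \approx 2.85\times10^{-13}\, E_{\rm TeV}^{-1.6}\,(M_5/d_{\rm kpc}^2)\,k_{\rm CR}$ cm$^{-2}$ s$^{-1}$, with $M_5$ the cloud mass in units of $10^5$ M$_\odot$. The sketch below inverts it at 1 TeV for an integral flux obtained from an assumed power-law fit; the numerical inputs are illustrative, and the exact normalisation convention should be checked against the original reference.

```python
def k_cr(flux_gt_1tev, mass_1e5_msun, d_kpc):
    """Cosmic-ray enhancement factor from the commonly quoted form of
    Eq. 10 of Aharonian (1991):
      F(>=E) ~ 2.85e-13 * E_TeV**-1.6 * (M5 / d_kpc**2) * k_CR  [cm^-2 s^-1]
    evaluated here at E = 1 TeV."""
    return flux_gt_1tev / (2.85e-13 * mass_1e5_msun / d_kpc**2)

# Illustrative input: integral flux above 1 TeV from a power-law fit
# dN/dE = k (E/1 TeV)^-Gamma, i.e. F(>1 TeV) = k / (Gamma - 1).
k, gamma = 7.5e-13, 2.66             # assumed units: cm^-2 s^-1 TeV^-1
flux = k / (gamma - 1.0)
print(f"k_CR ~ {k_cr(flux, mass_1e5_msun=0.5, d_kpc=2.0):.0f}")   # ~13 here
```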
Overall, these levels of CR enhancement factor would be expected in the neighbourhood of CR accelerators such as SNRs. If the clouds were all at $\sim$2 kpc, an obvious candidate for such particle acceleration is the SNR W 28, the most prominent SNR in the region. Despite its old age, multi-TeV particle acceleration may still occur in W 28 (Yamazaki [ ]{}[@Yamazaki:1]), with protons reaching energies of several tens of TeV depending on various SNR shock parameters such as speed, size and ambient matter density. In addition, CRs produced at earlier epochs have likely escaped and diffused throughout the region, a situation discussed at length in Aharonian & Atoyan ([@Aharonian:2]). Aharonian & Atoyan show that for slow diffusion (diffusion coefficient at 10 GeV $D_{10}\sim$10$^{26}$ cm$^2$ s$^{-1}$, as might be expected in dense environments), CR enhancement factors in the required range could be found in the vicinity (within 30 pc – note that if at 2 kpc distance, HESS J1800$-$240 would lie $\sim$10 pc from the southern circular boundary of W 28) of a canonical SNR acting as an impulsive accelerator up to $\sim10^5$ yr after the SN explosion (see their Fig. 1). In this sense, W 28 as a source of CRs in the region could be a plausible scenario.
The W 28 field, however, is a rich star formation region, and several additional/alternative sources of CR acceleration may be active. The SNR G6.67$-$0.42 is positioned directly to the southeast of HESS J1801$-$233 (Fig. \[fig:mwl\] left panel) while the SNR G7.06$-$0.12 is situated $\sim 0.25^\circ$ north of HESS J1801$-$233 and on the west side of the HII region M 20. M 20 itself may also be an energy source for the molecular clouds in this region. The SNR candidate G5.71$-$0.08 (Brogan [ ]{}[@Brogan:1]) may also be responsible in some way for HESS J1800$-$240C given the good positional overlap between the two. These radio SNRs/SNR candidates are without a distance estimate, making it unclear how they relate to the molecular clouds in the region. The morphology of HESS J1800$-$240 displays several peaks, perhaps resulting from changes in cloud density and/or the presence of additional particle accelerators and local conditions. For HESS J1800$-$240B, a potential energy source is the unusual ultra-compact HII region W 28A2 (G5.89$-$0.39), representing a massive star in a very young phase of evolution. W 28A2 exhibits very energetic bipolar molecular outflows (Harvey & Forveille [@Harvey:1], Acord [ ]{}[@Acord:1], Sollins [ ]{}[@Sollins:1]) which may arise from the accretion of matter by the progenitor star. The outflow ages are estimated at between $\sim 10^3$ and $10^4$ yr. Recent observations (Klaassen [ ]{}[@Klaassen:1]) suggest both outflows extend over a combined distance of $\sim 2^\prime$ (or $\sim$1.2 pc at $d=2$ kpc), with total kinetic energy of $3.5\times 10^{46}$ erg. Surrounding the outflows is a very dense ($>10^4$ cm$^{-3}$) molecular envelope of diameter 0.5$^\prime$ to 1$^\prime$. Despite the lack of any model to explain multi-TeV particle acceleration in such HII regions, its kinetic energy budget and its spatial overlap with a VHE source make W 28A2 a tempting candidate for such acceleration. Already, there are two examples of VHE emission possibly related to the environments of hot, young stars — TeV J2032+4130 (Aharonian [ ]{}[@HEGRA_TEVJ2032]) and HESS J1023$-$575 (Aharonian [ ]{}[@HESS_WR20A]). In this context, the HII regions G6.1$-$0.6 and 6.225$-$0.569 may also play a similar role for HESS J1800$-$240A. Among the prominent open clusters in the area, NGC 6523 and NGC 6530 $\sim 0.5^\circ$ southeast of HESS J1800$-$240, and NGC 6514 associated with M 20 $\sim 0.7^\circ$ north of HESS J1801$-$233, may also provide energy for CR production. Finally, if the VHE emission is associated with truly distant cloud components approaching the Scutum-Crux arm at $\sim$4 kpc, undetected background particle accelerators would then play a role.
Fig. \[fig:spectrum\] also compares the EGRET and VHE spectra. Given the degree-scale EGRET PSF, GRO J1801$-$2320 remains unresolved at scales of the VHE sources. Although the peak of the EGRET emission coincides with HESS J1801$-$233, we therefore cannot rule out unresolved MeV/GeV components from HESS J1800$-$240. Observations with GLAST will be required to determine the MeV/GeV components of the VHE sources.
![Energy fluxes of HESS J1801$-$233 and HESS J1800$-$240 (for regions defined in Tab. \[tab:locations\]) compared to the $E>100$ MeV counterpart GRO J1801$-$2320. The power law fits and data points (summarised in Tab. \[tab:locations\]) are also indicated: HESS J1801$-$233 (solid blue line and points); HESS J1800$-$240 (open red points and solid line); GRO J1801$-$232 (solid black points and grey 1$\sigma$ confidence band).[]{data-label="fig:spectrum"}](spectrum_hessj1801-233_III.eps){width="9.5cm"}
Conclusions {#sec:conclusion}
===========
In conclusion, our observations with the H.E.S.S. $\gamma$-ray telescopes have revealed VHE $\gamma$-ray sources in the field of W 28 which positionally coincide well with molecular clouds. HESS J1801$-$233 is seen toward the northeast boundary of W 28, while HESS J1800$-$240 situated just beyond the southern boundary of W 28 comprises three components. Our studies with NANTEN $^{12}$CO(J=1-0) data show molecular clouds spanning a broad range in local standard of rest velocity $V_{\rm LSR}=$5 to $\sim$20 km s$^{-1}$, encompassing the distance estimates for W 28 and various star formation sites in the region. If connected, and at a distance $\sim$2 kpc, the clouds may be part of a larger parent cloud possibly disrupted by W 28 and/or additional objects related to the active star formation in the region. Cloud components up to $\sim$4 kpc distance ($V_{\rm LSR}>$10 km s$^{-1}$) however, remain a possibility.
The VHE/molecular cloud association could indicate a hadronic origin for the VHE sources in the W 28 field. Under assumptions of connected cloud components at a common distance of 2 kpc, or, alternatively, separate cloud components at 2 and 4 kpc, a hadronic origin for the VHE emission implies cosmic-ray densities $\sim$10 to $\sim$30 times the local value. W 28 could provide such densities in the case of slow diffusion. Additional and/or alternative particle accelerators such as HII regions representing very young stars, other SNRs/SNR candidates and/or several open clusters in the region may also be contributors. Alternatively, if cloud components at $V_{\rm LSR}>$10 km s$^{-1}$ are at distances $d\sim4$ kpc, as-yet undetected particle accelerators in the Scutum-Crux arm may be responsible. Detailed modeling (beyond the scope of this paper) and further multiwavelength observations of this region are highly recommended to assess further the relationship between the molecular gas and potential particle accelerators in this complex region, as well as the nature of the accelerated particles. In particular, further sub-mm observations (eg. at high CO transitions) will provide more accurate cloud mass estimates, and allow searches for disrupted/shocked gas towards the southern VHE sources. Such studies will be valuable in determining whether or not W 28 and other energetic sources have disrupted molecular material at line velocities $>$10 km s$^{-1}$.
The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S. is gratefully acknowledged, as is the support by the German Ministry for Education and Research (BMBF), the Max Planck Society, the French Ministry for Research, the CNRS-IN2P3 and the Astroparticle Interdisciplinary Programme of the CNRS, the U.K. Particle Physics and Astronomy Research Council (PPARC), the IPNP of the Charles University, the Polish Ministry of Science and Higher Education, the South African Department of Science and Technology and National Research Foundation, and by the University of Namibia. We appreciate the excellent work of the technical support staff in Berlin, Durham, Hamburg, Heidelberg, Palaiseau, Paris, Saclay, and in Namibia in the construction and operation of the equipment. The NANTEN project is financially supported from JSPS (Japan Society for the Promotion of Science) Core-to-Core Program, MEXT Grant-in-Aid for Scientific Research on Priority Areas, and SORST-JST (Solution Oriented Research for Science and Technology: Japan Science and Technology Agency). We also thank Crystal Brogan for the VLA 90 cm image and the referee for valuable comments.
[99]{}
Acord J.M., Walmsley C.M., Churchwell E. 1997 ApJ, 475, 693 Aharonian F. 1991 Ap&S.S.. 180, 305 Aharonian F., Atoyan A.M. 1996, A&A 309, 917 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2004a Nature 432, 75 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2004b Astropart. Phys. 22, 109 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2005a Science 307, 1938 Aharonian F. [ ]{}(HEGRA Collab.) 2005b A&A 431, 197 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2005c A&A 437, L7 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2006a ApJ 636, 777 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2006b A&A 449, 223 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2006c Nature 439, 695 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2006d A&A 460, 365 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2007a A&A 464, 235 Aharonian F. [ ]{}2007b ApJ 661, 236 Aharonian F. [ ]{}(H.E.S.S. Collab.) 2007c A&A 467, 1075 Albert J., Aliu E., Anderhub H. [ ]{}2006 ApJ 643, L53 Albert J., Aliu E., Anderhub H. [ ]{}2007 ApJ 664, L87 van den Ancker M.E., Thé P.S., Feinstein A. [ ]{}1997 A&A (Supp) 123, 63 Arikawa Y., Tatematsu K., Sekimoto Y., Takahashi T. 1999, PASJ 51, L7 Berge D., Funk S., Hinton J. 2007 A&A 466, 1219 Berezhko E.G., Völk H.J. 2006, A&A 451, 981 Berezhko E.G., Pühlhofer G., Völk H.J. 2007, Proc. 30th ICRC (Merida), arXiv:astro-ph/0707.4662 Bernlohr K., [ ]{}Astropart. Phys. 2003 20, 111 Blandford R.D., Eichler D. 1987, Phys. Rep. 154, 1 Blondin J.M., Wright E.B., Borkowski K.J. [ ]{}1998 ApJ 500, 342 Brand J., Blitz L. 1993 A&A 275, 67 Brogan C.L., Gelfand J.D., Gaensler B.M. [ ]{}2006 ApJ 639, L25 Claussen M.J., Frail D.A., Goss W.M., Gaume R.A. 1997 ApJ 489, 143 Claussen M.J., Goss W.M., Frail D.A., Desai K. 1999 ApJ 522, 349 Claussen M.J., Goss, W.M., Desai K.M., Brogan C.L. 2002 ApJ 580, 909 Crutcher R.M. 1999 ApJ 520, 706 Dame T.M., Hartman D., Thaddeus P. 2001 ApJ 547, 792 Drury L.O’C 1983 Rep. Prog. Phys. 46, 973 Dubner G.M., Velázquez P.F., Goss W.M., Holdaway M.A., 2000 AJ 120, 1933 Esposito J.A., Hunter S.D., Kanbach G., Sreekumar P. 1996, ApJ 461, 820 Frail D.A., Goss W.M., Slysh V.I. 1994 ApJ 424, L111 Funk S., Hermann G., Hinton J.[ ]{}2004 Astropart. Phys. 22, 285 Ginzburg V.L. & Syrovatskii, S.I. 1964, The Origin of Cosmic Rays (New York: Macmillan) Gabici S., Aharonian F.A., Blasi P. 2006 In Proc. ’Multi-messenger approach to high energy gamma-rays” Barcelona June2006 (Ap.& SS arXiv:astro-ph/0610032) Goudis C., 1976, Ap&SS 40, 91 Hartman R.C., Bertsch D.L., Bloom S.D., [ ]{}1999, ApJS 123, 79 Harvey P.M., Forveille T. 1988, A&A 197, L19 Helfand D.J., Becker R.H., White R.L. [ ]{}2006 AJ 131, 2525 Hinton J.A. 2004 New Astron. Rev. 48, 331 Hoffman I.M., Goss W.M., Brogan C.L., Claussen M.J. 2005 ApJ 620, 257 Hunter S.D., Bertsch, D.L., Catelli, J.R. [ ]{}1997 ApJ 481, 205 Kaspi V.M., Lyne A.G., Manchester R.N., [ ]{}1993, ApJ 409, L57 Klaassen P.D., Plume R., Ouyed R. [ ]{}ApJ 648, 1079 Koyama, K., Petre, R., Gotthelf, E.V., [ ]{}1995 Nature 378, 255 Koyama, K., Kinugasa, K., Matsuzaki, K. 1997 PASJ 49, L7 Kuchar T.A., Clark F.O. 1997 ApJ 488, 224 Lemiére A. [ ]{}(H.E.S.S. Collab.) in Proc. 29th ICRC (Pune) 4, 105 Li T., Ma Y. 1983, ApJ 272, 317 Lockman F.J. 1989 ApJ (Supp) 71, 469 Long K.S., Blair W.P., White R.L., Matsui Y., 1991, ApJ 373, 567 Lozinskaya T.A. 1981 Sov. Astron. Lett. 7, 17 Lynds B.T., O’Neill E.J. Jr. 1985 ApJ 294, 578 Matsunaga K., Mizuno N., Moriguchi Y. [ ]{}2001 PASJ 53, 1003 Mattox J.R., Bertsch D.L., Chiang J. [ ]{}1996 ApJ 461, 396 Mizuno A., Fukui Y. 2004, ASP Conf. Proc. 317, 59 Moriguchi Y., Yamaguchi N., Onishi T. [ ]{}2000 PASJ 53, 1025 de Naurois M. 
2006 arXiv:astro-ph/0607247 Pollock A.M.T. 1985 A&A 150, 339 Radhakrishman V., Goss W.M., Marray J.E. [ ]{}1972 ApJ(Supp) 24, 49 Reach W.T., Rho J., Jarrett T.H. 2005 ApJ 618, 297 Rho J.H., Borkowski K. 2002 ApJ 575, 201 Rowell G.P, Naito T., Dazeley S.A. [ ]{}2000 A&A 359, 337 Rowell G.P. 2003 A&A 410, 389 Sollins P.K., Hunter T.R., Battat J., [ ]{}2004 ApJ 616, L35 Strong A.W., Moskalenko I.A.W., Reimer O. A&A [ ]{}2004 422, L47 Sturner S.J., Dermer C.D. 1995 A&A 293, L17 Torres D.F., Romero G., Dame T.M. [ ]{}2003 Phys. Rep. 382, 303 Tothill N.F.H. White G.J., Matthews H.E. [ ]{}2002 ApJ 580, 285 Ueno M., Bamba A., Koyama K. 2003a In Proc. 28th ICRC (Tsukuba, Japan), 2401 Velázquez P.F., Dubner G.M., Goss W.M., Green A.J. 2002 AJ 124, 2145 Wootten A. 1981, ApJ 245, 105 Yamaguchi N., Mizuno N., Moriguchi Y. [ ]{}1999 PASJ 51, 765 Yamazaki R., Kazunori K, Yoshida T., Tsuribo T. 2006 MNRAS 371, 1975 Yusef-Zadeh F., Shure M., Wardle M., Kassim N. 2000 ApJ 540, 842 Zhang I., Cheng K.S. 1998, A&A 335, 234
[^1]: now at CERN, Geneva, Switzerland
[^2]: now at School of Physics & Astronomy, University of Leeds, Leeds LS2 9JT, UK
[^3]: now at Stanford University, HEPL & KIPAC, Stanford, CA 94305-4085, USA
[^4]: now at School of Chemistry & Physics, University of Adelaide, Adelaide 5005, Australia
---
abstract: 'Power losses reduction is one of the main targets for any electrical energy distribution company. In this paper, we face the problem of joint optimization of both topology and network parameters in a real smart grid. We consider a portion of the Italian electric distribution network managed by the ACEA Distribuzione S.p.A. located in Rome. We perform both the power factor correction (PFC) for tuning the generators and the distributed feeder reconfiguration (DFR) to set the state of the breakers. This joint optimization problem is faced considering a suitable objective function and by adopting genetic algorithms as global optimization strategy. We analyze admissible network configurations, showing that some of these violate constraints on current and voltage at branches and nodes. Such violations depend only on pure topological properties of the configurations. We perform tests by feeding the simulation environment with real data concerning hourly samples of dissipated and generated active and reactive power values of the ACEA smart grid. Results show that removing the configurations violating the electrical constraints from the solution space leads to interesting improvements in terms of power loss reduction. To conclude, we provide also an electrical interpretation of the phenomenon using graph-based pattern analysis techniques.'
address:
- 'Dept. of Information Engineering, Electronics, and Telecommunications, SAPIENZA University of Rome, Via Eudossiana 18, 00184 Rome, Italy'
- 'Dept. of Computer Science, Ryerson University, 350 Victoria Street, Toronto, ON M5B 2K3, Canada'
author:
- Francesca Possemato
- Maurizio Paschero
- Lorenzo Livi
- Antonello Rizzi
- Alireza Sadeghian
bibliography:
- 'Bibliography.bib'
title: On the impact of topological properties of smart grids in power losses optimization problems
---
Distribution feeder reconfiguration; Power factor correction; Power losses minimization; Smart grid; Graph-based pattern analysis.
Introduction {#sec:intro}
============
In recent years, global warming has led people and companies to demand cleaner energy supplies. Thus, more and more electricity is generated from alternative and heterogeneous sources: wind, solar, biofuel, and geothermal plants. This phenomenon is called Distributed Generation (DG) [@ghosh2010optimal; @moradi2012combination; @kumar2014reliability]. A Smart Grid (SG) [@5535240] constitutes an improvement of a traditional electrical distribution system, conceived to overcome the problems posed by the wide diffusion and high penetration of DGs. A SG can be seen as an intelligent network able to integrate all users (i.e., producers and consumers) with the ultimate purpose of distributing the electrical power in a safe, efficient, and sustainable fashion [@Dahu_2011; @Farhangi_2010; @venayagamoorthy2011dynamic; @gentile2014reactive; @pagani2014power]. With the advent of SGs, the customers of electrical networks become also energy suppliers and the load flow in distribution feeders becomes bidirectional. Moreover, a large number of sensors are installed on the network to obtain complete information on the instantaneous status of the infrastructure – information that could be exploited for predicting faults [@occ_sg_enricods__arxiv; @zhang2011fault; @saha2011fault].
The reduction of power losses is one of the main objectives of energy electrical distribution companies. In the literature [@Chandramohan_2010] it is possible to identify two mainstream approaches: *Power Factor Correction* (PFC) and *Distributed Feeder Reconfiguration* (DFR). The PFC tries to reduce the amount of reactive power present in the network in order to (i) minimize the Joule losses, (ii) increase the capacity of the network, and (iii) increase the quality of service. The DFR, instead, relies on switching a certain number of breakers, physically modifying the topological structure of the network and improving its operating conditions. In doing so, operational constraints on the network must be satisfied, such as ensuring that no loops are formed and that all loads are supplied. Altering the network configuration affects the power losses and relieves overload in the network; thus the DFR problem can be conceptualized as the task of choosing the status of the network breakers resulting in the configuration with minimum power losses, yet still satisfying the operational constraints. The main drawback of DFR is that it results in a complex combinatorial optimization problem, since the status of the switches is a discrete, non-differentiable variable. This makes the optimization problem related to DFR very hard to solve. Many researchers have proposed interesting solutions in the past. @Civanlar_1988 propose a heuristic method based on a formula that expresses the change in losses between network configurations before and after the reconfiguration process. Moreover, in the same paper the authors suggest a method for filtering out configurations that yield a lower reduction in power losses. Other heuristic approaches are presented in Refs. [@Nara_1992; @Merlin_1975]. All the aforementioned methods yield approximate solutions or local optima of the respective optimization problems. To overcome this issue, @McDermott_1999 use a genetic algorithm providing a good compromise between computational burden and quality of the optimization result. More recently, novel meta-heuristic methods based on evolutionary optimization algorithms have been introduced in the same context, showing good experimental results [@olamaei2008application; @niknam2009efficient; @malekpour2013multi; @mazza2014optimal; @rao2013power].
One of the main technical difficulties in dealing with the DFR problem using evolutionary algorithms is the so-called “radiality constraint”. This constraint rules out most of the network configurations achievable by switching the available breakers. For this reason, it is necessary to conceive a procedure able to select, among all possible configurations, those fulfilling the radiality constraint. A first solution to this problem for the SG of ACEA has been proposed by @Storti_2014. The authors conclude that, due to the high complexity of the DFR problem, a desirable method should be able to reduce the solution space as much as possible, eliminating undesirable switching options a priori. Such configurations are critical with respect to both topological and electrical constraints. In [@caschera_2014], the authors propose a heuristic method to compare the admissible network configurations in a purely topological manner, facilitating the optimization algorithm in finding the desired solution. All such studies highlight the need to identify undesirable network configurations, in order to reduce the required convergence time of the optimization problem at hand. In fact, undesirable configurations, which cause violations of one or more electrical constraints, introduce a significant and unnecessary increase of the computational demand for the simulation. This is due to the fact that the solver tries to perform PFC and DFR using also configurations that are intrinsically critical for particular power load profiles.
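To make the radiality constraint concrete, a candidate breaker configuration is admissible from the purely topological point of view when the sub-network formed by the closed branches is a spanning tree of the node set, i.e. it is connected (all loads are supplied) and contains no loops. A minimal Python sketch of such a filter, using the `networkx` library and an invented toy network, could look as follows.

```python
import networkx as nx

def is_radial(nodes, branches, closed):
    """True if the sub-network obtained by keeping only the closed
    branches is a spanning tree: every node is supplied and no loop
    is formed (the radiality constraint of the DFR problem)."""
    g = nx.Graph()
    g.add_nodes_from(nodes)
    g.add_edges_from(b for b, is_closed in zip(branches, closed) if is_closed)
    return nx.is_connected(g) and g.number_of_edges() == g.number_of_nodes() - 1

# Toy 4-node network with 5 switchable branches (names are illustrative).
nodes = ["HV", "n1", "n2", "n3"]
branches = [("HV", "n1"), ("n1", "n2"), ("n2", "n3"), ("n3", "HV"), ("n1", "n3")]
print(is_radial(nodes, branches, closed=[1, 1, 1, 0, 0]))  # True  (a tree)
print(is_radial(nodes, branches, closed=[1, 1, 1, 1, 0]))  # False (a loop)
```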
Concerning other works dealing with the Joule loss minimization problem on real Smart Grids, in Ref. [@Mahdad_2008] a genetic algorithm with fuzzy logic rules is employed to address the optimal power flow problem in the Algerian electric network as a Flexible AC Transmission System, while Ref. [@Amrane_2014] proposes a particle swarm optimization method for solving the optimal reactive power dispatch (ORPD) problem on the Algerian electric power system. Although not concerning the Joule loss minimization problem, it is worth citing the work of @Corsi_2004, where the hierarchical voltage control system presently applied to the Italian transmission grid is described in detail. Moreover, in Ref. [@Senac_2014] the use of capacitors and static reactive power compensators to ensure the voltage stability of the electrical network serving the South-West region of France is studied, with the aim of developing an advanced system for the control of the reactive power compensation.
In our previous works [@Storti_2013; @Possemato_2013; @Storti_2013_b], we addressed both the PFC and DFR problems over a portion of the ACEA electrical grid (ACEA is the company managing the entire distribution grid of Rome, Italy), using genetic algorithms as the optimization strategy. In this paper we elaborate on such studies by first analyzing how undesirable configurations affect the optimization process in terms of both quality of the optimization result and running time of the optimization procedure. Then we give an interpretation of the results from an electrical point of view, exploiting a graph-based pattern analysis technique. Notably, we study two prototype graphs representing two classes of typical network configurations identified in our data. Our work is framed in the research area concerning the application of genetic algorithms to the joint PFC and DFR problem. As concerns similar works, in Refs. [@Kalantar_2006], [@Madeiro_2011], [@Diaz_2003] active power loss minimization is addressed by simultaneous capacitor placement and feeder reconfiguration by means of genetic algorithms. In contrast to our approach, in those works PFC is achieved by installing capacitor banks to compensate the losses produced by reactive currents, while in the SG of ACEA the PFC problem is faced by regulating the power factors of the distributed generators (DGs) in the network. In fact, the telecontrol system of ACEA is capable of remotely modifying the set points of each renewable energy generator. In Ref. [@Singh_2011] the reader can find an interesting review of optimal placement approaches of DGs in power systems for optimizing different objective functions (such as power loss minimization), including some algorithms based on evolutionary computation and swarm intelligence (e.g., genetic algorithms, ant colony optimization, particle swarm optimization). Finally, we stress that in the SG of ACEA the DGs cannot be relocated, since we are dealing with a real network whose physical characteristics cannot be changed.
The paper is structured as follows. In Section \[sec::ProblForm\], we first describe the optimization problem (Section \[sec::opt\_procedure\]) and then we introduce the problem of admissibility for a network configuration (Section \[sec::AdmNetConf\]). Section \[sec:acea\_opt\] provides the essential technical details (in Section \[sec:Acea\_SG\]) of the electrical distribution network under analysis (the ACEA SG) and the related power loss optimization problem (in Section \[sec::Opt\_Prob\_ACEA\]). In Section \[sec:dis\_FF\] we analyze the admissible configurations and we introduce the concept of constraint compliant configuration. In Section \[sec::ExpRes\] we present and discuss the results of the optimization, also providing an electrical interpretation that justifies them. Finally, in Section \[sec:conclusions\] we draw the conclusions and point to future research directions.
Power Loss Minimization Problem {#sec::ProblForm}
===============================
In this paper, we consider the joint PFC and DFR problem for minimum power losses, satisfying constraints on node voltages and branch currents as well as system operating constraints.
Optimization Problem {#sec::opt_procedure}
--------------------
In this section we formulate the problem of active power losses minimization in SGs through PFC and DFR. The problem consists in finding the optimal network parameters and the topological configuration that minimize the value of the power losses in the network, considering the constraints imposed on voltages and currents due to safety or quality of service issues as well as physical topological constraints. Consider an admissible set $E$ of the network parameters and a suitable cost function $J:E\rightarrow\mathbb{R}$ that associates a real number to each element in $E$. Formally, the problem consists in minimizing the function $J$ in $E$. Mathematically, we can express the cost function $J \in [0,1)$ as follows: $$J(\mathbf{k}) = \frac{P_{\mathrm{loss}}(\mathbf{k})}{P_{\mathrm{gen}}(\mathbf{k})}=\frac{P_{\mathrm{gen}}(\mathbf{k})-P_{\mathrm{load}}}{P_{\mathrm{gen}}(\mathbf{k})},
\label{eq::J}$$ where $\mathbf{k}\in E$ represents an instance of the network parameters, $P_{\mathrm{gen}}(\mathbf{k}) \in [P_{\mathrm{load}}, \infty)$ is the total power generated by all sources, $P_{\mathrm{load}}$ is the total power absorbed by the loads, and their difference $P_{\mathrm{loss}}(\mathbf{k})$ represents the total power losses in the network. Notice that in the present formulation of the problem, $P_{\mathrm{load}}$ is independent of the network parameters $\mathbf{k}$, because ACEA can provide only a description of the loads based on the profiles of active and reactive power measured by the meters installed in the network.
Let us consider a generic SG characterized by $n$ real parameters, $m$ integer parameters, and $p$ nominal parameters. We can express the domain of the ordinal parameters as: $$\label{eq::A'}
A^{\mathrm{ord}} = \left\{ \mathbf{k}^{\mathrm{ord}} \in \mathbb{R}^{n} \times \mathbb{Z}^{m} \ \ :\ \mathbf{k}^{\mathrm{ord}}_{\mathrm{min}} \leq \mathbf{k}^{\mathrm{ord}} \leq \mathbf{k}^{\mathrm{ord}}_{\mathrm{max}} \right\},$$ in which $\mathbf{k}^{\mathrm{ord}}_{\mathrm{min}}$ and $\mathbf{k}^{\mathrm{ord}}_{\mathrm{max}}$ represent the vectors of the minimum and maximum values of the network ordinal parameters, $\mathbf{k}^{\mathrm{ord}}$. Concerning the nominal parameters, $\mathbf{k}^{\mathrm{nom}}$, the domain is a set $A^{\mathrm{nom}}$ of all possible admissible elements for such parameters: $$\label{eq::A''}
A^{\mathrm{nom}} = \left\{ \mathbf{k}^{\mathrm{nom}} \in \mathbb{X}_{1} \times \cdots \times \mathbb{X}_{p} \right\},$$ in which $\mathbb{X}_i$ is a generic nominal set with $i\in\{1,...,p\}$. The overall domain $A$ is defined as $A = A^{\mathrm{ord}} \times A^{\mathrm{nom}}$; accordingly $\mathbf{k} = [\mathbf{k}^{\mathrm{ord}}, \mathbf{k}^{\mathrm{nom}}]\in A$. In order to be valid, a solution $\mathbf{k}$ must satisfy the constraints on voltages and currents defined below: $$\label{eq::BC}
\begin{aligned}
B &= \big\{ \mathbf{k} \in A : V_{i}^{\mathrm{min}} \leq V_i(\mathbf{k}) \leq V^{\mathrm{max}}_i ,i=1,...,N \big\} \\
C &= \big\{ \mathbf{k} \in A : |I_j(\mathbf{k})|\leq I^{\mathrm{max}}_j, j=1,...,R \big\},
\end{aligned}$$ where $V_i(\mathbf{k})$ is the voltage magnitude of the $i$-th node for a fixed instance of parameters $\mathbf{k}$, $N$ is the total number of nodes, $V^{\mathrm{min}}_i$ and $V^{\mathrm{max}}_i$ are the voltage limits for the $i$-th node, while $|I_j(\mathbf{k})|$ represents the current magnitude of the $j$-th branch for a particular instance of parameters $\mathbf{k}$, $R$ the number of branches, and finally $I^{\mathrm{max}}_j$ the current upper bound for the $j$-th branch. The definitions given above allow us to define the admissible set $E$ as follows: $$E = A \cap B \cap C .
\label{eq::E}$$
Since it is not practically possible to derive a closed-form expression for $J$ as a function of $\mathbf{k}$, in the following we will employ a “standard” genetic algorithm (GA), a well-known derivative-free approach, as the global optimization algorithm. Moreover, since it is also not practically possible to derive closed forms for $V_i(\mathbf{k})$ and $I_j(\mathbf{k})$ in (\[eq::BC\]), we introduce a new function $\Gamma(\mathbf{k})$, used in the optimization procedure as a measure of the violation of the voltage and current constraints. Therefore, the constrained optimization problem defined above is actually addressed by defining the objective function as a convex combination of the following two conflicting terms: $$F(\mathbf{k}) = \alpha J(\mathbf{k}) + (1-\alpha) \Gamma(\mathbf{k}),
\label{eq::alpha}$$ where $\alpha\in[0, 1]$ is a parameter used to adjust the relative weight of the power loss term, $J(\mathbf{k})$, with respect to the constraint violation term, $\Gamma(\mathbf{k})$. Thus, our objective becomes minimizing the function $F(\mathbf{k})$ in the domain $A$ defined through (\[eq::A'\]) and (\[eq::A''\]), instead of $J(\mathbf{k})$ in $E$ (\[eq::E\]). Note that (\[eq::alpha\]) is meaningful if and only if both $J(\mathbf{k})$ and $\Gamma(\mathbf{k})$ vary in the same range, otherwise the optimization problem is not well-posed and it is not guaranteed that minimizing $F(\mathbf{k})$ in the domain $A$ gives approximately the same result as minimizing $J(\mathbf{k})$ in $E$.
The function $\Gamma(\mathbf{k})$ is defined as follows: $$\Gamma(\mathbf{k})= (1-\beta) \Gamma_I(\mathbf{k}) + \beta \Gamma_V(\mathbf{k}),
\label{eq::beta}$$ in which $\beta\in[0, 1]$ is a parameter used to adjust the relative weight of the violation of current constraints $\Gamma_I(\mathbf{k})$ with respect to the term related to voltages violation $\Gamma_V(\mathbf{k})$. In order to make the constraint violation value $\Gamma(\mathbf{k})$ of the same order as the cost function value $J(\mathbf{k})$, the functions $\Gamma_I(\mathbf{k})$ and $\Gamma_V(\mathbf{k})$ are defined as follows: $$\label{eq::gamma}
\begin{aligned}
\Gamma_V(\mathbf{k})= & \max_{i\in\{1,..., N\}} \left\lbrace G_V\left( V_i(\mathbf{k})/V^{\mathrm{nom}}_i \right) \right\rbrace, \\
\Gamma_I(\mathbf{k})= & \max_{j\in\{1,..., R\}} \left\lbrace G_I\left( I_j(\mathbf{k})/I^{\mathrm{max}}_j \right) \right\rbrace,
\end{aligned}$$ where $V^{\mathrm{nom}}_i$ indicates the nominal voltage value of the $i$-th node. The penalty functions used in (\[eq::gamma\]), that is, $G_V(\cdot)$ and $G_I(\cdot)$, are graphically shown in Figure \[fig::penalty\_functions\].
![Penalty functions: (a) $G_V(\cdot)$, (b) $G_I(\cdot)$.[]{data-label="fig::penalty_functions"}](FitnessVoltage "fig:")
![](FitnessCurrent "fig:")
Further details of the optimization procedure can be found in [@Storti_2013].
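To make the structure of the objective concrete, the following sketch assembles $F(\mathbf{k})$ from the power-loss term (\[eq::J\]) and the penalty terms (\[eq::beta\])-(\[eq::gamma\]). The simulator call `power_flow` and the exact shapes of $G_V(\cdot)$ and $G_I(\cdot)$ are placeholders introduced only for illustration, since the actual network model and penalty curves are not reproduced here.

```python
def fitness(k, power_flow, alpha=0.9, beta=0.2):
    """Sketch of the objective F(k) of Eq. (eq::alpha)."""
    # `power_flow(k)` stands for the network simulator; it is assumed to return
    # generated power, load power, node voltages with nominal values, and
    # branch current magnitudes with their limits.
    p_gen, p_load, v, v_nom, i_mag, i_max = power_flow(k)

    # power-loss term J(k) of Eq. (eq::J)
    J = (p_gen - p_load) / p_gen

    # illustrative penalties: flat inside the admissible range, growing outside
    def G_V(r):   # r = V_i / V_i^nom, admissible within +/- 10%
        return max(0.0, abs(r - 1.0) - 0.1) ** 2

    def G_I(r):   # r = |I_j| / I_j^max, admissible below 1
        return max(0.0, r - 1.0) ** 2

    gamma_V = max(G_V(vi / vn) for vi, vn in zip(v, v_nom))
    gamma_I = max(G_I(ii / im) for ii, im in zip(i_mag, i_max))
    Gamma = (1.0 - beta) * gamma_I + beta * gamma_V   # Eq. (eq::beta)

    return alpha * J + (1.0 - alpha) * Gamma          # Eq. (eq::alpha)
```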
Admissible Network Configurations {#sec::AdmNetConf}
---------------------------------
Consider a general SG consisting of several Medium Voltage (MV) feeders, some High Voltage (HV) substations, some DGs, and several loads. In order to perform the power loss minimization through DFR, we decided to represent the SG as a non-oriented graph $\mathcal{G} \langle N, E \rangle $, in which $N$ and $E$ are the nodes and the edges of the real network, respectively. We introduce $\hat{\mathcal{G}} \langle \hat{N}, \hat{E} \rangle$, the *reduced graph* of the network, to properly describe all possible system reconfigurations satisfying the topology constraints. The reduced graph of the network $\hat{\mathcal{G}} \langle \hat{N}, \hat{E} \rangle$ does not contain all the information of the original network graph $\mathcal{G} \langle N, E \rangle $, because for our purposes we only need information about the connections of different portions of the network and not their detailed internal structure. As described in [@Storti_2014], $\mathcal{G} \langle N, E \rangle $ is mapped into $\hat{\mathcal{G}} \langle \hat{N}, \hat{E} \rangle$ through two main steps:
- The nodes $\hat{N}$ of the reduced graph are used to model two different types of original nodes $N$. The first type represents nodes at 150 kV providing the balance of active and reactive power in the SG. In the following sections we will refer to them as HV nodes. The second type can represent either a single real MV substation connected to loads and DGs, or a set of MV substations powered by a single HV substation (virtual MV). In both cases, we refer to these nodes as MV nodes.
- The edges $\hat{E}$ of the reduced graph are used to model the topology reconfiguration. The series of two switches installed between two consecutive MV substations is mapped into a single edge (virtual breaker) of the reduced graph $\hat{\mathcal{G}}$. Each edge is associated with a label representing its state, i.e., closed or open.
Figure \[fig::graph\] shows an example of the representation of the SG through the reduced graph $\hat{\mathcal{G}} \langle \hat{N}, \hat{E} \rangle$.
*Figure \[fig::graph\]: example of the reduced graph $\hat{\mathcal{G}} \langle \hat{N}, \hat{E} \rangle$ of an SG, in which two HV nodes feed chains of MV nodes connected by virtual breakers.*
Using the above notation we can introduce the following definitions:
A network topology satisfies the Radial Topology Constraint iff each MV substation is fed by only one HV substation via only one path.
\[def::admConf\] A reduced graph $\hat{\mathcal{G}} \langle \hat{N}, \hat{E} \rangle$ satisfying the *radial topology constraint* is said to be an admissible configuration of the network.
The graph representation $\hat{\mathcal{G}} \langle \hat{N}, \hat{E} \rangle$ is used to execute an algorithm that performs an exhaustive search of all admissible configurations of the network. The details of the automatic procedure are described in Ref. [@Storti_2014]. The output of such a procedure is a list of binary strings (encoding the admissible configurations) having length equal to the number of edges $\hat{E}$ of the reduced network. Each bit represents the state of the corresponding edge (virtual breaker). The network topology is specified through a label associated with the string of bits, spanning the rows of the list of all admissible configurations. Because of the nominal nature of the parameter specifying the network topology, the objective function becomes non-differentiable and the related DFR optimization problem is very challenging. For this reason, in this paper we use a heuristic method [@caschera_2014], based on the Hamming distance between network configurations, to improve the smoothness of the objective function with respect to variations of the nominal parameter representing the topological configuration. This technique allows the nominal parameter describing the configuration to be treated as an ordinal parameter during the optimization process. The reader is referred to Ref. [@caschera_2014] for the details.
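As an illustration of the radiality check, the following sketch tests whether a binary switch string yields an admissible configuration in the sense of the radial topology constraint defined above. It assumes a `networkx`-style graph API and generic node and edge lists rather than the actual ACEA data structures.

```python
import networkx as nx

def is_admissible(reduced_edges, switch_states, hv_nodes, mv_nodes):
    """Radial topology check on a reduced graph.

    reduced_edges : list of (u, v) pairs, one per virtual breaker
    switch_states : sequence of bits, 1 = closed, 0 = open
    hv_nodes, mv_nodes : labels of the two node types
    """
    hv = set(hv_nodes)
    g = nx.Graph()
    g.add_nodes_from(hv_nodes)
    g.add_nodes_from(mv_nodes)
    g.add_edges_from(e for e, s in zip(reduced_edges, switch_states) if int(s) == 1)

    for component in nx.connected_components(g):
        sub = g.subgraph(component)
        # each connected component must be a tree: unique feeding path, no loops
        if sub.number_of_edges() != len(component) - 1:
            return False
        # ...and must contain exactly one HV node, so every MV node is fed by one HV substation
        if sum(1 for n in component if n in hv) != 1:
            return False
    return True
```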
The ACEA SG Pilot Network {#sec:acea_opt}
=========================
In this work, we consider a portion of the Italian electric distribution network managed by ACEA Distribuzione S.p.A., located in the west area of Rome. The main goal of ACEA is the overall improvement of (i) the quality of service related to the continuity of electricity distribution, (ii) the capacity of the network, and (iii) the price of electricity offered to the users.
Network Specifications {#sec:Acea_SG}
----------------------
The main specifications of the ACEA network are listed below:
- $N. 6$ Medium Voltage (MV) feeders ($n. 5$ at $20~kV$ and $n. 1$ at $8.4~kV$);
- $N. 2$ High Voltage (HV) Substations;
- $N. 76$ MV Substations ($n. 29$ at $20~kV$ and $n. 47$ at $8.4~kV$);
- about $70~km$ of MV lines ($31~km$ of underground cables and $38~km$ of overhead lines);
- about $1200$ Low Voltage (LV) user loads;
- $N. 6$ DGs ($n. 5$ generator sets and $n. 1$ photovoltaic generator);
- $N. 106$ three-phase breakers;
- $N. 1$ TVR (Thyristor Voltage Regulator).
In each HV substation there is a transformer that converts the voltage from 150 kV at the primary winding to 20 kV at the secondary winding (HV/MV transformer). The cables, the photovoltaic plant, the MV substations, and the TVR are located in the MV portion of the network, whereas the user loads and the five generator sets are located in the LV portion of the network. The TVR is a series voltage compensation device. It performs a bi-directional voltage regulation that maintains the system voltage within specified ranges. The bi-directional relation between the input and the output voltage is defined as follows: $$V_{\mathrm{out}} = V_{\mathrm{in}} + N_{\mathrm{tap}} \Delta V, \quad N_{\mathrm{tap}} \in\{0,\pm 1,\pm 2,\pm 3\},
\label{eq::T}$$ where the values of $V_{\mathrm{in}}$ and $V_{\mathrm{out}}$ are expressed in kV and $\Delta V$ is 0.1 kV. The rated value of $V_{\mathrm{in}}$ is 8.4 kV. Each MV substation is equipped with 2 breakers (switches) that allow the substation to be connected to the electrical network in different ways. By changing the status of these switches, it is possible to modify the topology of the network. Power quality is a very important issue in an electrical network, since it determines the quality of the electrical power provided to consumer devices. The correct setting of the electrical limits allows operating the electrical system in a safe way without significant loss of performance. In order to protect the customers, the Authority for Energy and Gas [^1] has imposed the following constraints on voltage and current on power delivery companies:
- the instantaneous voltage at every node of the network must lie within $\pm~10~\%$ of the nominal voltage;

- the instantaneous current in every branch of the network must be lower than a given threshold.
These constraints are taken into account in the definition of the penalty functions shown in Figure \[fig::penalty\_functions\]. If the voltage or current measured at some node or branch of the network exceeds the admissible range, the value of the penalty function increases dramatically.
Power Loss Optimization Problem Customization for the ACEA Network {#sec::Opt_Prob_ACEA}
------------------------------------------------------------------
Using the network specifications described in Section \[sec:Acea\_SG\], we can customize the optimization procedure introduced in Section \[sec::opt\_procedure\]. In particular, we can control the reactive power of the five generator sets through their phase parameters, $\phi$. On the other hand, it is not possible to control the reactive power of the photovoltaic generator. Moreover, it is possible to choose the $N_{\mathrm{tap}}$ value of the TVR and the topological configuration of the network, selecting it from the set of admissible ones determined beforehand. The phases of the five generator sets, $\phi_1,\phi_2,\phi_3,\phi_4,\phi_5$, span real-valued ranges specified by the capability functions of the corresponding generator sets. The tap $N_{\mathrm{tap}}$ of the TVR spans the discrete range defined in (\[eq::T\]). Finally, according to the list of admissible configurations computed using the procedure described in Section \[sec::AdmNetConf\], the network topology is specified by the index $N_{\mathrm{conf}}$ spanning the rows of such a list. In particular, in the SG under consideration, the total number of admissible configurations is 390.
Summarizing, a candidate solution vector of the optimization problem reads as $\mathbf{k} = [ \phi_1,\phi_2,\phi_3,\phi_4,\phi_5, N_{\mathrm{tap}}, N_{\mathrm{conf}} ]$ and technically belongs to the set $A$ defined by the following ranges, $$\begin{aligned}
\label{eq::A}
-0.2 \leq & \phi_1,\phi_2 & \leq 0.45 \\ \nonumber
-0.2 \leq & \phi_3 & \leq 0.55 \\ \nonumber
0.0 \leq & \phi_4 & \leq 0.64 \\ \nonumber
-0.32 \leq & \phi_5 & \leq 0.45 \\ \nonumber
-3 \leq & N_{\mathrm{tap}} & \leq 3 \\ \nonumber
N_{\mathrm{conf}} \in & \{1,..., 390\}, \end{aligned}$$ where $\{1,..., 390\}$ is the set of indexes of all admissible configurations. We recall that, due to the heuristic ordering mentioned in Section \[sec::AdmNetConf\], we treat the nominal parameter $N_{\mathrm{conf}}$ as ordinal during the optimization procedure. Moreover, in accordance with the Authority requirements, in order to be valid, a candidate solution $\mathbf{k}$ must satisfy the constraints on voltages and currents defined below: $$\begin{aligned}
\label{eq:vinc}
B &= \big\{ \mathbf{k} \in A : 0.9 V^{\mathrm{nom}}_i \leq V_i(\mathbf{k})\leq 1.1 V^{\mathrm{nom}}_i , i=1,...,N \big\} \\
C &= \left\{ \mathbf{k} \in A : |I_j(\mathbf{k})|\leq I^{\mathrm{max}}_j, j=1,...,R \right\},
\end{aligned}$$ in which $N$ and $R$ represent the total number of nodes and branches of the real network, respectively, whereas $V^{\mathrm{nom}}_i$ and $I^{\mathrm{max}}_j$ are the nominal value of the voltage of the $i$-th node and the maximum current allowed in the $j$-th wire, respectively.
Analysis of Admissible Configurations {#sec:dis_FF}
=====================================
According to Def. \[def::admConf\], a reduced graph $\hat{\mathcal{G}} \langle \hat{N}, \hat{E} \rangle$ satisfying the *Radial Topology Constraint* is an *admissible configuration* for the reference SG. In Section \[sec::opt\_procedure\] we recast the problem of minimizing the power loss $J(\mathbf{k})$ over the domain $E$ as the minimization of the convex combination $F(\mathbf{k})$ over $A$. It is worth noting that, in order to make the optimization problem well posed, both terms of (\[eq::alpha\]) must be normalized to the same range (e.g., $[0, 1]$). During several preliminary tests, we noticed that for a certain number of runs of the electrical network simulation the objective function $F(\mathbf{k})$ assumes values greater than unity, hence violating such a requirement. In these situations the GA seems to have difficulties in minimizing the power losses $J(\mathbf{k})$ during the optimization process. According to a preliminary analysis of this phenomenon, this behavior seems to be related only to the actual value of the topological parameter $N_{\mathrm{conf}}$. In fact, for certain selections of the $N_{\mathrm{conf}}$ parameter by the GA, the constraint term $\Gamma(\mathbf{k})$ increases dramatically compared to the $J(\mathbf{k})$ term, causing the power loss minimization procedure to fail. To better understand this aspect, in the following section we perform a numerical analysis of the admissible configurations of the ACEA SG in terms of the violation of electrical constraints, and we introduce the concept of *Constraint Compliant Configuration*.
Constraint Compliant Configurations
-----------------------------------
First we introduce the concept of Constraint Compliant Configurations (CCC) through the following definition.
A reduced graph $\hat{\mathcal{G}} \langle \hat{N}, \hat{E} \rangle$ whose nodes satisfy the voltage constraints and edges satisfy the current constraint (\[eq:vinc\]) is said to be a CCC.
We performed several experiments to show numerically that in the SG under analysis there exist admissible configurations that inherently violate the constraints on voltage at some nodes and/or on current at some branches. This means that, independently of the choice of the parameters $\phi_1$, $\phi_2$, $\phi_3$, $\phi_4$, $\phi_5$, $N_{\mathrm{tap}}$ of the network, the constraints in (\[eq:vinc\]) are violated. Roughly speaking, considering an admissible configuration represented by its reduced graph $\hat{\mathcal{G}}_i \langle \hat{N}, \hat{E} \rangle$ and the corresponding configuration parameter $N_{\mathrm{conf}}=i$, the value of the objective function $F(\mathbf{k})$ associated with it remains of the same order of magnitude as the remaining parameters change. To support this assertion, we analyze all admissible configurations. In particular, once the configuration parameter $N_{\mathrm{conf}}$ is fixed, we perform a random sampling of the subset $A$ defined in (\[eq::A\]) along all the other dimensions. More precisely, for each configuration we randomly choose 2000 points of $A$ with $N_{\mathrm{conf}}$ held fixed.
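A sketch of this sampling procedure is shown below; the bounds are those of (\[eq::A\]), while `evaluate_f` stands for the network simulation returning $F(\mathbf{k})$, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# ordinal parameter bounds taken from Eq. (eq::A)
PHI_BOUNDS = [(-0.20, 0.45), (-0.20, 0.45), (-0.20, 0.55), (0.00, 0.64), (-0.32, 0.45)]
TAP_VALUES = np.arange(-3, 4)

def sample_configuration(n_conf, evaluate_f, n_samples=2000):
    """Sample F(k) at random points of A with N_conf held fixed."""
    values = np.empty(n_samples)
    for s in range(n_samples):
        phases = [rng.uniform(lo, hi) for lo, hi in PHI_BOUNDS]
        n_tap = int(rng.choice(TAP_VALUES))
        values[s] = evaluate_f(phases + [n_tap, n_conf])
    mu, sigma = values.mean(), values.std()
    return mu, sigma, sigma / mu   # mean, standard deviation and the ratio eta
```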
The performed analysis highlights two different behaviors. Here, we report the results for two admissible configurations associated with the two different behaviors. The results obtained for the two configurations labeled $32$ and $81$ are shown in Figure \[fig:campionamentoAlto\_Basso\] parts (a) and (b), respectively.
![Objective function $F(\mathbf{k})$ computed at 2000 randomly selected points, maintaining constant the parameter (a) $N_{\mathrm{conf}} = 32$, (b) $N_{\mathrm{conf}} = 81$. The mean value is highlighted in red, while the standard deviation in dotted black lines.[]{data-label="fig:campionamentoAlto_Basso"}](campionamentoAlto "fig:")
![](campionamentoBasso "fig:")
It is worth noting in Figure \[fig:campionamentoAlto\_Basso\] part (b) that the values of the objective function $F(\mathbf{k})$ are distributed in bands, due to the influence of the parameter $N_{\mathrm{tap}}$. Comparing parts (a) and (b) of Figure \[fig:campionamentoAlto\_Basso\], it can be observed that the mean value (shown as a red line in the figure) of the objective function $F(\mathbf{k})$ is much higher (lower) than unity in the first (second) configuration. This shows that, if the parameter $N_{\mathrm{conf}}=32$ is selected, the GA will probably be unable to make $F(\mathbf{k})$ lower than unity. In fact, the order of magnitude of the objective function value depends only on the parameter $N_{\mathrm{conf}}$ and it does not change with the other parameters. Moreover, by computing the ratio $\eta= \sigma / \mu$ between the standard deviation and the mean value for configurations 32 and 81 we find $\eta = 0.0104$ and $\eta = 0.0582$, respectively. This means that in configuration 32 the variation of the control parameters has a lower influence on the minimization of $F(\mathbf{k})$ than in configuration 81.
To better understand this fact, we analyze the different components of (\[eq::alpha\]) for network configuration 32. The values of the two terms, $J(\mathbf{k})$ and $\Gamma(\mathbf{k})$, for all the samples, together with the respective mean values and standard deviations, are shown in Figure \[fig::J\_gamma\_conf32\]. Recall from Section \[sec::opt\_procedure\] that the term $J(\mathbf{k})$ is responsible for the minimization of the power loss, while the term $\Gamma(\mathbf{k})$ guarantees that a configuration violating the constraints on voltages and/or currents is severely penalized by the optimization algorithm itself (i.e., it is assigned a very poor fitness value). From Figure \[fig::J\_gamma\_conf32\] it is possible to observe that the increase in the objective function is due to the term $\Gamma(\mathbf{k})$, meaning that for this configuration the constraints are violated and no setting of the other parameters (except for $N_{\mathrm{conf}}$) is able to bring the network into a safe condition. We can conclude that configuration 81 is a CCC, while configuration 32 is not.
![Random sampling of the subset $A$, maintaining constant the parameter $N_{\mathrm{conf}}=32$. (a) Power Loss Term $J(\mathbf{k})$, (b) Constraints Term $\Gamma(\mathbf{k})$.[]{data-label="fig::J_gamma_conf32"}](Jconf32 "fig:")
![](gammaConf32 "fig:")
Influence of NCCCs on the Objective Function
--------------------------------------------
As a consequence of what we have empirically shown in the previous section, during the optimization procedure candidates that are admissible configurations but not CCCs will never be solutions of the considered optimization problem. The presence of such configurations in the solution space negatively affects the behavior of the optimization procedure, as we will discuss in detail in Section \[sec::ExpRes\].
Here we give a practical demonstration of the negative influence of those configurations that are not constraint compliant (shortened as NCCC) on the objective function (\[eq::alpha\]). We know that in the convex combination (\[eq::alpha\]) the real parameter $\alpha$ is used to adjust the relative weight of the two terms of the expression. Moreover, in order for the optimization problem to be well posed, both $J(\mathbf{k})$ and $\Gamma(\mathbf{k})$ must be normalized to the same range (in this case $[0, 1]$). However, in the previous section we have shown that the terms $J(\mathbf{k})$ and $\Gamma(\mathbf{k})$ are not necessarily normalized to $[0, 1]$ for all admissible configurations. In particular, by defining $J_{\mathrm{MAX}}(\mathbf{k})$ as the maximum value that the power loss can assume and $\Gamma_{\mathrm{MAX}}(\mathbf{k})$ as the maximum value for $\Gamma(\mathbf{k})$, from (\[eq::alpha\]) we can derive the following expression: $$F(\mathbf{k}) = \alpha J_N(\mathbf{k}) \cdot J_{\mathrm{MAX}}(\mathbf{k}) + (1-\alpha) \Gamma_N(\mathbf{k}) \cdot \Gamma_{\mathrm{MAX}}(\mathbf{k}),
\label{eq::F_normaliz}$$ in which $J_N(\mathbf{k})$ and $\Gamma_{N}(\mathbf{k})$ are the normalized values in $[0, 1]$ of the corresponding functions – normalization is implemented by dividing by the maximum value.
In the following, we derive a value for the weighting parameter $\alpha$, denoted as $\alpha_{\mathrm{eq}}$, such that the convex combination in (\[eq::alpha\]) is safely mapped to the unit interval. Let us define $\alpha_{\mathrm{eq}}$ as the equivalent (i.e., transformed) $\alpha$ coefficient that should be used in $F(\mathbf{k})$ if its terms were normalized. Using simple algebra (details of the calculations are provided in \[sec:appendixA\]) we can derive the analytical expression for $\alpha_{\mathrm{eq}}$: $$\alpha_{\mathrm{eq}} = \frac{\alpha J_{\mathrm{MAX}}(\mathbf{k})}{\alpha J_{\mathrm{MAX}}(\mathbf{k}) + (1-\alpha)\Gamma_{\mathrm{MAX}}(\mathbf{k})}.
\label{eq::alpha_eff}$$
As an example, for configuration 32 considered in the previous section, the maximum value of the constraint term is $\Gamma_{\mathrm{MAX}}(\mathbf{k}) = 40.38$ and that of the power loss term is $J_{\mathrm{MAX}}(\mathbf{k}) = 0.0265$. Therefore, even assuming that in principle we want to give more weight to the minimization of the power loss, i.e., by setting $\alpha = 0.9$, the effective weight actually used during the optimization is $\alpha_{\mathrm{eq}} \simeq 0.0059$. This means that for all individuals in the GA population whose $N_{\mathrm{conf}}$ parameter is set to indexes corresponding to NCCCs, the algorithm is almost entirely devoted to reducing the value of the constraint term instead of the power loss term. However, in the previous sections we have empirically demonstrated that, for such configurations, it is practically impossible to bring the network into a safe condition, regardless of the setting of all other parameters. This fact has a negative impact on the entire optimization process, as we will further discuss in the following section.
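The numbers quoted above can be checked directly with (\[eq::alpha\_eff\]):

```python
def alpha_eq(alpha, j_max, gamma_max):
    """Equivalent weight of Eq. (eq::alpha_eff)."""
    return alpha * j_max / (alpha * j_max + (1.0 - alpha) * gamma_max)

# values quoted in the text for configuration 32
print(round(alpha_eq(0.9, 0.0265, 40.38), 4))   # 0.0059
```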
Simulation Results {#sec::ExpRes}
==================
In this section, we first compare the performance of the GA when it is used to solve the optimization problem presented in Section \[sec::opt\_procedure\] for the electrical network presented in Section \[sec:Acea\_SG\], depending on whether the admissible set $A$ includes only CCCs or all admissible configurations (both NCCCs and CCCs). Subsequently, we also give an electrical interpretation of the obtained results.
Optimization Results
--------------------
Among the 390 admissible configurations described in Section \[sec::Opt\_Prob\_ACEA\], we first manually extract all configurations that are CCCs. The result is that 372 are uniquely classified: 151 are CCCs, while the remaining 221 are NCCCs. We want to compare the performance of the GA in solving the optimization problem of Section \[sec::Opt\_Prob\_ACEA\] when NCCCs are or are not present in the solution space. We refer to the case with all admissible network configurations as “Experiment 1”, while “Experiment 2” refers to the case considering CCCs only. In both experiments, we employ the ordering criterion for the network configurations described in Ref. [@caschera_2014]. We use a simulation model of the ACEA network [@Storti_2013] realized using MATLAB and Simulink, together with the GA implemented as described in Ref. [@deep2009real]. To perform all experiments, we consider as input of the network model the power profiles of distributed generators and loads registered at 1:00PM (for one hour) on January 1st.
In the following, we provide some relevant details about the settings of the GA. The number of individuals in the GA population is set to $20$; the elite individuals are $2$, i.e., only $2$ individuals in the current generation are guaranteed to survive into the next generation; the crossover fraction parameter is $0.8$; the mutation operator is applied to the remaining individuals with rate $0.1$. Furthermore, the $\alpha$ and $\beta$ coefficients used in expressions (\[eq::alpha\]) and (\[eq::beta\]) are set to $0.9$ and $0.2$, respectively. The maximum number of iterations before the algorithm halts is $100$, but the GA may stop earlier if the relative change of the fitness value over $50$ iterations is less than or equal to $10^{-9}$. We execute the GA ten times with different random initialization seeds; the $j$-th execution uses the same initial population $P_j$ for both series of simulations (i.e., only CCCs or all admissible configurations in the solution space). However, for the simulations that consider CCCs only, the individuals of $P_j$ whose $7$-th gene (corresponding to the $N_{\mathrm{conf}}$ variable) specifies an NCCC are replaced by randomly generated individuals whose $7$-th gene is forced to code for a CCC.
Results of the simulations are shown in Table \[tab::performanceUltimoGA\]. The table reports the mean value and standard deviation of the number of generations ($\#\mathrm{gen}$) required for convergence, the fitness value percentage reduction ($\Delta F$), and the reduction of power loss in the network ($\Delta P_{\mathrm{loss}}$) for both experiment settings. More precisely, the last two indicators compare the fitness value and actual power loss at optimal solution with respect to fitness value and power loss of the best individual in the initial population.
                                 **Experiment 1**      **Experiment 2**
  ------------------------------ --------------------- ----------------------
  $\#\mathrm{gen}$               $73 \pm 18.3763$      $65 \pm 11.6858$
  $\Delta F$                     $0.0227 \pm 0.0001$   $0.0205 \pm 0.0010$
  $\Delta P_{\mathrm{loss}}$     $1097 \pm 6.5015$     $1138 \pm 9.6788$

  : Mean and standard deviation for the number of generations (\# $\mathrm{gen}$) required by the GA to converge, the reduction of the fitness value expressed in percentage ($\Delta F$), and the reduction of the power loss ($\Delta P_{\mathrm{loss}}$) in Experiment 1 (all configurations) and Experiment 2 (CCCs only).

\[tab::performanceUltimoGA\]
By analyzing the results in Table \[tab::performanceUltimoGA\], we can observe that (i) there is no statistically significant difference in the average number of iterations required for convergence and (ii) the mean reduction of the achieved fitness function value in Experiment 2 is slightly smaller (but statistically significantly so) than the one observed in Experiment 1. This second fact would lead us to expect an inferior mean reduction of the power loss in Experiment 2 as well. However, the power loss reduction achieved in Experiment 2 is significantly higher than the one observed in Experiment 1 – please note that the differences are statistically significant, as evaluated with a t-test, $p<0.0001$. The reason behind this result can be found in the redefinition of the solution set considered for the optimization problem in Experiment 2, composed of CCCs only. In fact, in the set of network configurations considered in Experiment 1, many candidate solutions systematically violate the constraints on voltages and currents, causing the problems described in the previous section. Such a behavior is even more accentuated by the fact that we have chosen a small value of the coefficient $\beta$, while configurations that do not satisfy the constraints cause particularly strong violations of the electrical current constraints. At first sight, the overall power loss reduction reported here might not appear very significant. However, we recall that all tests simulate only one hour of one specific day of the year. Projecting the achieved power loss reduction to the entire network over a longer period of time may lead to important improvements of the operating conditions of the SG managed by ACEA.
In conclusion, we have shown that removing from the set of admissible configurations those that are NCCCs leads to a significantly larger reduction of the power loss. This fact is particularly important for adopting such an optimization approach in a real-time control system, where the desired solutions must be determined on an hourly basis, usually relying on limited computational resources.
Electrical Interpretation of the Representative Network Configurations {#sec::clustering_analysis}
----------------------------------------------------------------------
As observed in the previous sections, among the admissible configurations it is possible to recognize a subset where current or voltage constraints are systematically violated (NCCCs), regardless of the DGs setting.
In order to provide a meaningful electrical interpretation for this fact, a single representative network configuration for both sets of CCCs and NCCCs is selected. Let $\mathcal{S}$ be a set of $n$ graphs (network configurations in our case). A natural candidate to represent $\mathcal{S}$ is the graph $\hat{\mathcal{G}}^*$ that minimizes the sum of distances (MinSOD) [@delvescovo+livi+rizzi+frattalemascioli2011], which is determined by the following expression: $$\label{eq:minsod}
\hat{\mathcal{G}}^* = {\operatornamewithlimits{arg\ min}}_{\hat{\mathcal{G}}_j\in\mathcal{S}} \sum_{i=1}^n d(\hat{\mathcal{G}}_j, \hat{\mathcal{G}}_i).$$
In Eq. \[eq:minsod\], $d(\cdot, \cdot)$ is a dissimilarity measure between graphs [@gm_survey], which in our case is implemented as the Hamming distance between the adjacency matrix representations of the graphs – we recall that we are considering Boolean graphs of the same order, i.e., graphs with the same number of nodes, entirely described by the presence or absence of edges.
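A minimal sketch of the MinSOD selection, assuming that each configuration of a class is already available as a Boolean adjacency matrix (the actual ACEA data structures are not reproduced here), is the following:

```python
import numpy as np

def hamming(adj_a, adj_b):
    """Hamming distance between two Boolean adjacency matrices of equal order."""
    return int(np.sum(adj_a != adj_b))

def minsod(adjacency_matrices):
    """Index of the MinSOD element of a class of configurations, Eq. (eq:minsod)."""
    sums = [sum(hamming(a, b) for b in adjacency_matrices) for a in adjacency_matrices]
    return int(np.argmin(sums))
```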
*Figure \[fig::minsod\]: reduced graphs of the MinSOD representative configurations of (a) the CCC class and (b) the NCCC class.*
We computed the MinSOD element for the CCC and NCCC classes separately. The MinSOD graphs are shown in Figure \[fig::minsod\]. First of all, it can be noted that in the representative of the NCCCs (Figure \[fig::minsod\], part (b)) all MV substations are fed by a single HV node, whereas the second available HV node is electrically isolated. This configuration results in a very long feeder. As is well known, long feeders can suffer from low-voltage problems at the nodes far from the feed bar (HV nodes) and can exhibit over-current issues close to the feed bar, due to the large amount of user load connected to it. For this reason, configurations containing very long feeders are very likely to be NCCCs, regardless of the DGs setting. Conversely, by analyzing the representative graph of the CCCs (Figure \[fig::minsod\], part (a)), it is possible to note that the user loads are distributed between the two available HV nodes, resulting in shorter and more balanced feeders having a higher probability of fulfilling the constraints on voltage and current.
This observation suggests that, for each admissible network configuration considered in the optimization problem, the user load should be balanced among all the available HV nodes as much as possible, in order to reduce the typical length of the feeders and the corresponding amount of required electrical current at the feed bar.
Conclusions {#sec:conclusions}
===========
In this paper we have presented an improvement of the control system first described in [@Storti_2013; @Possemato_2013; @Storti_2013_b; @Storti_2014]. We performed an analysis of the admissible network configurations to identify undesirable configurations that may slow down the convergence of the optimization procedure. In particular, we have shown that there exist admissible configurations for which it is not possible to avoid voltage/current constraint violations. We performed experiments on real data concerning one hour of power profiles of distributed generators and loads for the SG located in the west area of Rome and managed by the company ACEA Distribuzione. Results showed that, for the network under analysis, removing from the solution space of the optimization problem those configurations that are not constraint compliant leads to overall improvements in terms of power loss reduction. Subsequently, we used graph-based pattern analysis techniques to identify representative networks of both the constraint compliant and the undesirable configuration classes. We then interpreted the results of such an analysis from the electrical point of view. In particular, we noted that only those configurations with user loads suitably balanced among all the available HV nodes can be actual solutions of the considered optimization problem. Accordingly, configurations having very long feeders can be excluded a priori from the search space, without causing any loss of performance in the optimization of the system. In future works, we intend to repeat such an analysis for an extended portion of the network, considering also different time periods. Moreover, we intend to verify whether it is possible, using pattern recognition techniques, to predict the type of a configuration without simulating the entire network, thus improving the overall usability of the proposed control system.
Derivation of Eq. \[eq::alpha\_eff\] {#sec:appendixA}
====================================
Given the expression (\[eq::F\_normaliz\]): $$F(\mathbf{k}) = \alpha J_\mathrm{N}(\mathbf{k}) \cdot J_{\mathrm{MAX}}(\mathbf{k}) + (1-\alpha) \Gamma_\mathrm{N}(\mathbf{k}) \cdot \Gamma_{\mathrm{MAX}}(\mathbf{k}), \nonumber$$ we want to obtain a normalized expression of the form $$F(\mathbf{k}) = \alpha_{\mathrm{eq}} J_\mathrm{N}(\mathbf{k}) + (1-\alpha_{\mathrm{eq}}) \Gamma_\mathrm{N}(\mathbf{k}),$$ in which $\alpha_{\mathrm{eq}}$ is the effective parameter of (\[eq::alpha\]) when both $J(\mathbf{k})$ and $\Gamma(\mathbf{k})$ are normalized in $[0,1]$ using the respective maximum values. First, we impose that the relative weighting provided by the user-defined $\alpha$ is preserved in the analytically calculated $\alpha_{\mathrm{eq}}$. This is done by considering the following ratio: $$\begin{aligned}
\label{eq:ratio}
&\frac{\alpha J_{\mathrm{N}}(\mathbf{k}) J_{\mathrm{MAX}}(\mathbf{k})}{(1-\alpha)\Gamma_{\mathrm{N}}(\mathbf{k})\Gamma_{\mathrm{MAX}}(\mathbf{k})} = \frac{\alpha_{\mathrm{eq}}J_{\mathrm{N}}(\mathbf{k})}{(1-\alpha_{\mathrm{eq}})\Gamma_{\mathrm{N}}(\mathbf{k})} \\
&\Rightarrow \frac{\alpha J_{\mathrm{MAX}}(\mathbf{k})}{(1-\alpha)\Gamma_{\mathrm{MAX}}(\mathbf{k}) } = \frac{\alpha_{\mathrm{eq}}}{1-\alpha_{\mathrm{eq}}}.
\end{aligned}$$
Accordingly, we can compute $\alpha_{\mathrm{eq}}$ through the following manipulations of (\[eq:ratio\]): $$\begin{aligned}
\alpha_{\mathrm{eq}} &= (1-\alpha_{\mathrm{eq}}) \cdot \frac{\alpha J_{\mathrm{MAX}}(\mathbf{k})}{(1-\alpha)\Gamma_{\mathrm{MAX}}(\mathbf{k})}; \nonumber \\
\alpha_{\mathrm{eq}} &= \frac{\alpha J_{\mathrm{MAX}}(\mathbf{k})}{(1-\alpha)\Gamma_{\mathrm{MAX}}(\mathbf{k})} - \alpha_{\mathrm{eq}} \cdot \frac{\alpha J_{\mathrm{MAX}}(\mathbf{k})}{(1-\alpha)\Gamma_{\mathrm{MAX}}(\mathbf{k})}; \nonumber \\
\alpha_{\mathrm{eq}} &\cdot \frac{(1-\alpha)\Gamma_{\mathrm{MAX}}(\mathbf{k}) + \alpha \cdot J_{\mathrm{MAX}}(\mathbf{k})}{(1-\alpha)\Gamma_{\mathrm{MAX}}(\mathbf{k})} = \frac{\alpha J_{\mathrm{MAX}}(\mathbf{k})}{(1-\alpha)\Gamma_{\mathrm{MAX}}(\mathbf{k})}; \nonumber \\
\nonumber\alpha_{\mathrm{eq}} &= \frac{\alpha J_{\mathrm{MAX}}(\mathbf{k})}{(1-\alpha)\Gamma_{\mathrm{MAX}}(\mathbf{k}) + \alpha \cdot J_{\mathrm{MAX}}(\mathbf{k})}.\end{aligned}$$
[^1]: http://www.autorita.energia.it/it/inglese/index.htm
---
abstract: 'We introduce the most general quartic Poisson algebra generated by a second and a fourth order integral of motion of a 2D superintegrable classical system. We obtain the corresponding quartic (associative) algebra for the quantum analog, extend Daskaloyannis’ construction obtained in the context of quadratic algebras, and obtain the realizations as deformed oscillator algebras for this quartic algebra. We obtain the Casimir operator and discuss how these realizations allow one to obtain the finite dimensional unitary irreducible representations of quartic algebras and to derive algebraically the degenerate energy spectrum of superintegrable systems. We apply the construction and the formula obtained for the structure function to a superintegrable system related to the type I Laguerre exceptional orthogonal polynomials introduced recently.'
author:
- Ian Marquette
title: Quartic Poisson algebras and quartic associative algebras and realizations as deformed oscillator algebras
---
Introduction
============
In the literature, a quadratic associative algebra with three generators ($A$, $B$ and $C$) of a superintegrable system refers to the following mathematical object [@Gra1; @Zhe1; @Gra2; @Gra3; @Gra4; @Zhe2; @Das1; @Das2; @Vin1; @Das3; @Das4; @Mil1; @Kre1; @Das5; @Das6; @Que1; @Das7; @Pos1; @Pos2; @Pos3], given by (\[quad1\]), (\[quad2\]) and (\[quad3\]):
$$[A,B]=C, \label{quad1}$$
$$[A,C]= \alpha A^{2} + \beta \{A,B\} + \gamma A + \delta B + \epsilon ,\label{quad2}$$
$$[B,C]= \nu A^{2} + \xi A -\beta B^{2} -\delta B -\alpha \{A,B\} +\zeta, \label{quad3}$$
where $[\cdot,\cdot]$ is the commutator $[A,B]=AB-BA$, $\{\cdot,\cdot\}$ the anticommutator $\{A,B\}=AB+BA$, and $A$ and $B$ are integrals of motion of second order in the momenta of a given Hamiltonian. The structure constants are constrained by the Jacobi identity; they can be polynomials in the Hamiltonian in 2D, and also in the other integrals of an Abelian subalgebra in applications to higher dimensional systems. This algebraic structure is referred to as the quadratic Racah algebra $QR(3)$ in the particular case $\nu=0$. There is a classical analog, i.e. a quadratic Poisson algebra, where the commutator is replaced by the Poisson bracket. Let us mention that a quadratic algebra with four generators was discovered earlier, in the eighties, by Sklyanin [@Skl1; @Skl2] in the context of the Yang-Baxter equation. The two Casimir operators were also obtained and the representations studied.
Later, it was shown how the representations of quadratic algebras can be obtained in a systematic manner [@Das4] using deformed oscillator algebras [@Das8; @Que3]. In this approach the finite dimensional unitary irreducible representations of the quadratic algebra are obtained using the structure function of a deformed oscillator algebra and appropriate constraints.
More recently, cubic Poisson algebras [@Mar1] and cubic associative algebras [@Mar2; @Mar3] were introduced in the context of superintegrable systems with a second and a third order integral of motion. They were applied to a class of such systems separable in Cartesian coordinates and used to obtain algebraically the energy spectrum and explain the degeneracies. For such an algebraic structure the modification of the quadratic algebra is minimal: only a cubic term (i.e. $A^{3}$) is added to the right side of equation (\[quad2\]), and it was observed that such an algebraic structure also allows realizations as deformed oscillator algebras.
Recently, new approaches to constructing polynomial algebras for superintegrable systems were introduced using ladder operators [@Mar4; @Mar5; @Kal1; @Kal2; @Pos4; @Que4; @Que5] for specific families of systems. In the case of systems with separation of variables in Cartesian coordinates, it was observed that the resulting polynomial algebras can be realized as deformed oscillator algebras. However, the existence and systematic study of realizations as deformed oscillator algebras for higher order polynomial algebras is an unexplored subject.
Quadratic, cubic and, more generally, polynomial algebras are rich and interesting objects that have applications in the context of superintegrable systems, but, like other algebraic structures such as Lie algebras, they could find applications in other contexts. The purpose of this paper is to investigate quartic Poisson algebras and, in particular, quartic associative algebras and their realizations in terms of deformed oscillator algebras. Using this approach we intend to provide explicit formulas that allow one to construct finite dimensional unitary representations of quartic algebras.
Let us present the organisation of the paper. In Section 2, we present the most general quartic Poisson algebra and calculate the Casimir operator. In Section 3, we obtain the quantum analog, a quartic associative algebra; we calculate the Casimir operator and obtain the realizations as deformed oscillator algebras. These realizations allow one to construct the Fock type representations of the quartic algebra. In Section 5, in order to illustrate the application of these results in the context of superintegrable systems, we apply this construction to a superintegrable system studied recently and related to Laguerre exceptional orthogonal polynomials (EOP) [@Que4]. We obtain algebraically the energy spectrum and the degeneracies.
Quartic Poisson algebras
========================
We consider a two-dimensional superintegrable Hamiltonian $H$ with well defined integrals of motion of second and fourth order (respectively $A$ and $B$):
$$A=\sum_{i+j \leq 2}f_{ij}(x_{1},x_{2})p_{1}^{i}p_{2}^{j}, \label{intA}$$

$$B=\sum_{i+j \leq 4}g_{ij}(x_{1},x_{2})p_{1}^{i}p_{2}^{j}, \label{intB}$$
where $f_{ij}(\vec{x})$ and $g_{ij}(\vec{x})$ are some unknown functions. We thus have the following Poisson brackets
$$\{H,A\}_{p}=\{H,B\}_{p}=0. \label{commu}$$
The most general quartic Poisson algebra generated by such second and fourth order integrals has the form:
$$\{A,B\}_{p}=C, \label{quartcl1}$$
$$\{A,C\}_{p}= \tau A^{3} + \alpha A^{2} + 2 \beta AB + \gamma A + \delta B + \epsilon, \label{quartcl2}$$
$$\{B,C\}_{p}= \lambda A^{4} + \mu A^{3} + \nu A^{2} + \xi A + \rho B^{2} +\eta B + 2 \omega A^{2}B + 2 \sigma AB +\zeta. \label{quartcl3}$$
The right-hand sides of equations (\[quartcl2\]) and (\[quartcl3\]) are obtained by considering the order of the left-hand side in terms of the momenta: we form the most general polynomial in $A$ and $B$ of this given order. Let us mention, however, that it is not guaranteed that in general a second and a fourth order integral close in such an algebraic structure. As observed for many examples of superintegrable systems, sometimes we need to construct higher order integrals and the corresponding higher order polynomial algebras.
From the Jacobi identity we have the constraint
$$\{A,\{B,C\}_{p}\}_{p}=\{B,\{A,C\}_{p}\}_{p}. \label{jacobic}$$
and we have the relations:
$$\omega=-\frac{3}{2}\tau,\quad \sigma=-\alpha,\quad \rho=-\beta,\quad \eta=-\gamma. \label{jacobiccon}$$
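These relations can be obtained by a direct computation. Using the Leibniz rule for the Poisson bracket together with \[quartcl1\]-\[quartcl3\] (and $\{A,A\}_{p}=\{B,B\}_{p}=0$) one finds $$\{A,\{B,C\}_{p}\}_{p}= 2\omega A^{2}C+2\sigma AC+2\rho BC+\eta C ,\qquad \{B,\{A,C\}_{p}\}_{p}= -3\tau A^{2}C-2\alpha AC-2\beta BC-\gamma C ,$$ and comparing the coefficients of $A^{2}C$, $AC$, $BC$ and $C$ in \[jacobic\] gives \[jacobiccon\].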
The quartic Poisson algebra thus takes the form:
$$\{A,B\}_{p}=C, \label{quartcl1v2}$$
$$\{A,C\}_{p}= \tau A^{3} + \alpha A^{2} + 2 \beta AB + \gamma A + \delta B + \epsilon, \label{quartcl2v2}$$
$$\{B,C\}_{p}= \lambda A^{4} + \mu A^{3} + \nu A^{2} + \xi A -\beta B^{2} -\gamma B -3\tau A^{2}B -2\alpha AB +\zeta, \label{quartcl3v2}$$
where $\tau$, $\lambda$ and $\omega$ are constants and the other parameters are given by $$\alpha=\alpha(H)=\alpha_{0}+\alpha_{1}H,\quad \gamma=\gamma(H)=\gamma_{0}+\gamma_{1}H+\gamma_{2}H^{2}, \quad \delta=\delta(H)=\delta_{0}+\delta_{1}H \label{structureclas}$$ $$\epsilon=\epsilon(H)=\epsilon_{0}+\epsilon_{1}H+\epsilon_{2}H^{2}+\epsilon_{3}H^{3},\quad \mu=\mu(H)=\mu_{0}+\mu_{1}H,\quad \nu=\nu(H)=\nu_{0}+\nu_{1}H+\nu_{2}H^{2}$$ $$\xi=\xi(H)=\xi_{0}+\xi_{1}H+\xi_{2}H^{2}+\xi_{3}H^{3},\quad \rho=\rho(H)=\rho_{0}+\rho_{1}H,\quad \eta=\eta(H)=\eta_{0}+\eta_{1}H+\eta_{2}H^{2}$$ $$\sigma=\sigma(H)=\sigma_{0}+\sigma_{1}H,\quad \zeta=\zeta(H)=\zeta_{0}+\zeta_{1}H+\zeta_{2}H^{2}+\zeta_{3}H^{3}+\zeta_{4}H^{4} .$$
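Let us note that the degrees in $H$ of the structure functions follow from a counting of the order in the momenta. The integral $A$ is of order two and $B$ of order four, so that $C=\{A,B\}_{p}$ is of order five and $\{A,C\}_{p}$ of order six; since $H$ is itself of order two, each coefficient in \[quartcl2v2\] can be a polynomial in $H$ of degree equal to half of six minus the order of the corresponding term, e.g. $$\alpha=\alpha_{0}+\alpha_{1}H \quad (A^{2}\ \text{of order four}),\qquad \epsilon=\epsilon_{0}+\epsilon_{1}H+\epsilon_{2}H^{2}+\epsilon_{3}H^{3} \quad (\text{scalar term}),$$ and similarly for the coefficients in \[quartcl3v2\], which is of order eight.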
In the previous cases the Casimir operator has the form $K=C^{2}+h(A,B)$, where $h(A,B)$ is a polynomial in $A$ and $B$ chosen in such a way that it has the same order as $C^{2}$ in terms of the momenta. We thus consider a Casimir operator of the form: $$K = C^{2}+2c_{1}A^{3}B +2c_{2}A^{2}B+2c_{3}AB^{2}+2c_{4}AB \label{casimirclas1}$$ $$+c_{5}B^{2}+c_{6}B+c_{7}A^{5}+c_{8}A^{4}+c_{9}A^{3}+c_{10}A^{2}+c_{11}A .$$
Using $\{K,A\}_{p}=\{K,B\}_{p}=0$ we obtain a set of linear equations in the coefficients $c_{i}$ that allows us to obtain the solution for the parameters $c_{1},\ldots,c_{11}$ in terms of the structure constants.
We have the following parameters $$c_{1}= -\tau,\quad c_{2} = -\alpha ,\quad c_{3}= -\beta ,\quad c_{4}= -\gamma , \quad c_{5}= -\delta ,\quad c_{6}= -2 \epsilon, \label{casimirclas2}$$ $$c_{7}= \frac{2 \lambda }{5} , \quad c_{8}= \frac{\mu }{2} ,\quad c_{9}= \frac{2 \nu }{3},\quad c_{10}= \xi ,\quad c_{11}= 2 \zeta .$$
This Casimir operator can also be written in terms of the Hamiltonian as a polynomial
$$K= k_{0}+k_{1}H+k_{2}H^{2}+k_{3}H^{3}+k_{4}H^{4}+k_{5}H^{5}.$$
Quartic algebras
================
In the quantum case, we replace the Poisson bracket by the commutator and the momentum takes the form
$$p_{j}=-i\hbar \frac{\partial}{\partial x_{j}}.$$
The integrals $A$ and $B$ are now well-defined, algebraically independent quantum mechanical operators. We thus have
$$[H,A]=[H,B]=0.$$
The most general quartic quantum algebra is thus :
$$[A,B]=C, \label{quart1}$$
$$[A,C]= \tau A^{3} + \alpha A^{2} + \beta \{A,B\} + \gamma A + \delta B + \epsilon , \label{quart2}$$
$$[B,C]= \lambda A^{4} + \mu A^{3} + \nu A^{2} + \xi A + \rho B^{2} +\eta B + \omega \{A^{2},B\} + \sigma \{A,B\} +\zeta. \label{quart3}$$
The Jacobi identity also provides constraints on the structure constants:
$$[A,[B,C]]=[B,[A,C]]. \label{jacobiquant}$$
We obtain the relations between the parameters :
$$\omega=-\frac{3}{2} \tau, \quad \sigma=\frac{\beta \tau}{2}-\alpha,\quad \rho=-\beta,\quad \eta=-\gamma+\frac{\delta \tau}{2}. \label{jacobiquant2}$$
Unlike in the cases of quadratic and cubic associative algebras, the relations obtained differ from those of the classical case. The quartic algebra takes the form:
$$[A,B]=C, \label{quart1v2}$$
$$[A,C]= \tau A^{3} + \alpha A^{2} + \beta \{A,B\} + \gamma A + \delta B + \epsilon , \label{quart2v2}$$
$$[B,C]= \lambda A^{4} + \mu A^{3} + \nu A^{2} + \xi A -\beta B^{2} +(-\gamma+\frac{\delta \tau}{2})B-\frac{3}{2} \tau \{A^{2},B\} + (\frac{\beta \tau}{2}-\alpha) \{A,B\} +\zeta. \label{quart3v2}$$
In the case of quadratic and cubic algebras the classical and quantum forms only differ by a symmetrization/antisymmetrization. Here we see that corrections involving $\tau$ appear in lower order terms. This was observed for the quadratic and cubic cases concerning the Casimir operator; however, for the quartic case correction terms appear already at the level of the algebra. We recover the results for the cubic case by taking $\tau \rightarrow 0$ and $\lambda \rightarrow 0$. We obtain the quadratic case by taking furthermore $ \mu \rightarrow 0$, and the quadratic Racah algebra $QR(3)$ by taking also $\nu \rightarrow 0$.
The Casimir operator has the following form:
$$K = C^{2}+c_{1}\{A^{3},B\} +c_{2}\{A^{2},B\}+c_{3}\{A,B^{2}\}+c_{4}\{A,B\} \label{casimirquant1}$$
$$+c_{5}B^{2}+c_{6}B+c_{7}A^{5}+c_{8}A^{4}+c_{9}A^{3}+c_{10}A^{2}+c_{11}A .$$
Using $[K,A]=[K,B]=0$, the identities given in Appendix A and taking into account the ordering of the operators, we obtain a set of equations that allows us to obtain the solution for the parameters $c_{1},\ldots,c_{11}$ in terms of the structure constants.
We have the following parameters $$c_{1}= -\tau,\quad c_{2} = -\alpha +\frac{3 \beta \tau }{2},\quad c_{3}= -\beta ,\quad c_{4}= -\gamma +\beta(\alpha -\frac{\beta \tau }{2}) , \label{casimirquant2}$$ $$c_{5}= \beta^{2}-\delta ,\quad c_{6}= -2 \epsilon +\beta \gamma - (1/2) \beta\delta \tau ,\quad c_{7}= \frac{2 \lambda }{5} , \quad c_{8}= -\beta \lambda +\frac{\mu }{2} -\frac{9}{4}\tau^{2} ,$$ $$c_{9}= (\frac{8 \beta^{2}}{15}+\frac{2 \delta }{3}) \lambda +\frac{2\beta \mu }{3}+\frac{2 \nu }{3}+3\alpha \tau -\frac{3}{2} \beta \tau^{2},$$ $$c_{10}= \alpha^{2}+(-\frac{2 \beta^{3}}{15}+\frac{\beta \delta }{3}) \lambda -\frac{\beta^{2} \mu }{6}+\frac{\delta \mu }{2}+\frac{\beta \nu }{3}+\xi +(-\alpha \beta +\frac{3\gamma }{2}) \tau +\frac{1}{4} (\beta^{2} -3 \delta ) \tau^{2} ,$$ $$c_{11}= \alpha \gamma +2 \zeta +(-\frac{2 \beta^{2} \delta }{15}-\frac{\delta^{2}}{15}) \lambda -\frac{\beta \delta \mu }{6}+\frac{\delta \nu }{3}+(-\frac{\beta \gamma }{2}-\frac{\alpha \delta }{2}) \tau +\frac{1}{4} \beta \delta \tau^{2} .$$
This Casimir operator can also be written in terms of the Hamiltonian as a polynomial
$$K= k_{0}+k_{1}H+k_{2}H^{2}+k_{3}H^{3}+k_{4}H^{4}+k_{5}H^{5}.$$
Realizations as deformed oscillator algebras
--------------------------------------------
In order to obtain the finite-dimensional unitary representations, we consider realizations of the quartic algebras in terms of deformed oscillator algebras [@Das8] $\{1,N,b^{\dagger},b\}$ satisfying the following equations: $$[N,b^{\dagger}]=b^{\dagger},\quad [N,b]=-b,\quad bb^{\dagger}=\Phi(N+1),\quad b^{\dagger}b=\Phi(N),\label{deforosclalg}$$ where $\Phi(x)$ is a real function called the structure function satisfying $\Phi(0)=0$ and $\Phi(x)>0$ for $x>0$. Fock type representations exist when we impose the existence of an integer $p$ such that $\Phi(p+1)=0$. In this case the deformed oscillator algebra is a parafermionic algebra.
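Two elementary examples of structure functions may help fix ideas: the choice $\Phi(N)=N$ gives back the ordinary harmonic oscillator algebra, whose Fock space is infinite dimensional, while $\Phi(N)=N(p+1-N)$ vanishes at $N=0$ and $N=p+1$ and is positive in between, and therefore carries a finite-dimensional unitary representation of dimension $p+1$ of the parafermionic type.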
We look for a realization of the quartic algebra of the form
$$A=A(N), B=b(N)+b^{\dagger}\rho(N)+\rho(N)b . \label{realiz}$$
From the first relation of the quartic algebra we find
$$[A,B]=b^{\dagger}\Delta A(N) \rho(N)-\rho(N) \Delta A(N)b \equiv C . \label{realiz1}$$
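Here we use the fact that any function of the number operator satisfies $f(N)b^{\dagger}=b^{\dagger}f(N+1)$ and $b f(N)=f(N+1)b$, which follows from \[deforosclalg\]. Hence $$[A(N),b^{\dagger}\rho(N)]=b^{\dagger}\Delta A(N)\rho(N),\qquad [A(N),\rho(N)b]=-\rho(N)\Delta A(N)b,\qquad \Delta A(N)=A(N+1)-A(N),$$ which gives \[realiz1\].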
The second relation of the quartic algebra gives us two difference equations to be satisfied by the functions $A(N)$ and $b(N)$: $$(\Delta A(N))^{2}=\beta (A(N+1)+A(N))+\delta, \label{realiz2a}$$ $$\tau A^{3}(N)+\alpha A^{2}(N)+2 \beta A(N) b(N) +\gamma A(N) +\delta b(N) +\epsilon =0. \label{realiz2b}$$
The following solutions are obtained ($u$ is a constant to be determined from the constraints on the structure function).
Case 1: $\beta=0$, $\delta \neq 0$ $$A(N)=\sqrt{\delta}(N+u) , \label{realiztype1A}$$ $$b(N)=-\sqrt{\delta}\tau (N+u)^{3}-\alpha (N+u)^{2}-\frac{\gamma}{\sqrt{\delta}}(N+u)-\frac{\epsilon}{\delta} . \label{realiztype1B}$$
Case 2: $\beta \neq 0$ $$A(N)= \frac{\beta}{2}\left( (N+u)^{2} -\frac{1}{4} - \frac{\delta}{\beta^{2}}\right), \label{realiztype2A}$$ $$b(N)=-\frac{\beta \tau }{8}\left((N+u)^{2}-\frac{1}{4}\right)^{2}+ \frac{-2\alpha \beta +3 \delta \tau}{8 \beta }\left((N+u)^{2} - \frac{1}{4}\right) \label{realiztype2B}$$ $$+ \frac{-4 \beta^{2} \gamma +4 \alpha \beta \delta -3 \delta^{2} \tau }{8 \beta^{3}} -\frac{-4 \beta^{2} \gamma \delta +2 \alpha \beta \delta^{2}+8 \beta^{3}\epsilon -\delta^{3} \tau }{8 \beta^{5}}\, \frac{1}{(N+u)^{2} - \frac{1}{4}}.$$
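As an elementary check of Case 1: there $\Delta A(N)=\sqrt{\delta}$, so that \[realiz2a\] with $\beta=0$ is satisfied, while solving \[realiz2b\] for $b(N)$ gives $$b(N)=-\frac{\tau A^{3}(N)+\alpha A^{2}(N)+\gamma A(N)+\epsilon}{\delta}=-\sqrt{\delta}\tau (N+u)^{3}-\alpha (N+u)^{2}-\frac{\gamma}{\sqrt{\delta}}(N+u)-\frac{\epsilon}{\delta},$$ which is \[realiztype1B\]. The expressions of Case 2 are obtained in the same way from the quadratic form of $A(N)$.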
From the third equation of the quartic algebra we have 3 difference equations to be satisfied
$$\Delta A(N)- \Delta A(N+1)=-\beta, \label{realiz3a}$$
$$\Delta A(N)(b(N+1)-b(N))=-\beta ( b(N+1)+b(N))+(-\gamma+\frac{\delta \tau}{2})+(A(N+1)+A(N))(-\alpha + \frac{\tau \beta}{2})-\frac{3}{2}\tau (A^{2}(N+1)+A^{2}(N)), \label{realiz3b}$$

$$- 2 \Phi(N)\rho^{2}(N-1)\Delta A(N-1)+2 \Phi(N+1) \rho^{2}(N)\Delta A(N)= \label{realiz3c}$$

$$-\beta(\Phi(N)\rho^{2}(N-1)+\Phi(N+1)\rho^{2}(N)) -\beta b^{2}(N) +\lambda A^{4}(N)+\mu A^{3}(N)$$ $$+\nu A^{2}(N) +\xi A(N)+\zeta -3 \tau A^{2}(N)b(N)+(-2 \alpha +\tau \beta )A(N)b(N)+(-\gamma +\frac{\delta \tau}{2})b(N).$$
The first two equations, i.e. \[realiz3a\] and \[realiz3b\], are satisfied by the two solutions we obtained for $A(N)$ and $b(N)$. This points out how the existence of realizations as deformed oscillator algebras is connected with the fact that the generators satisfy the Jacobi identity.
Using the realizations, the Casimir operator given by \[casimirquant1\] also provides three equations
$$-\beta (A(N+2)+A(N))+(\beta^{2}-\delta)+\Delta A(N+1) \Delta A(N)=0 ,\label{realiz4a}$$
$$-\tau ( A^{3}(N)+A^{3}(N+1))+(-\alpha +\frac{3}{2}\beta \tau )(A^{2}(N)+A^{2}(N+1))-\beta (A(N)b(N)+b(N) A(N+1) \label{realiz4b}$$
$$+A(N)b(N+1)+b(N+1)A(N+1))+(-\gamma+\beta \alpha -\frac{\beta^{2}\tau }{2})(A(N)+A(N+1))$$ $$+(\beta^{2}-\delta)(b(N)+b(N+1))+(-2\epsilon +\beta \gamma -\frac{\beta \tau \delta}{2})=0,$$
$$K= - \Phi(N) \Delta A^{2}(N-1) \rho^{2}(N-1) - \Phi(N+1) \Delta A^{2}(N) \rho^{2}(N) \label{realiz4c}$$

$$+2c_{1} A^{3}(N)b(N) +2 c_{2} A^{2}(N)b(N)+ 2 c_{3} A(N) (\Phi(N) \rho^{2}(N-1) +\Phi(N+1)\rho^{2}(N))+2c_{3}A(N)b^{2}(N)$$ $$+2 c_{4} A(N)b(N) + c_{5}( (\Phi(N) \rho^{2}(N-1) +\Phi(N+1)\rho^{2}(N))+b^{2}(N)) +c_{6}b(N)$$ $$+c_{7}A^{5}(N) +c_{8}A^{4}(N) +c_{9}A^{3}(N) +c_{10} A^{2}(N) +c_{11}A(N) .$$
The first two equations, i.e. \[realiz4a\] and \[realiz4b\], are satisfied by the two solutions we obtained for $A(N)$ and $b(N)$.
Using the remaining equations, \[realiz3c\] from the third relation of the quartic algebra and \[realiz4c\] from the Casimir operator, we can find the structure function in terms of the arbitrary function $\rho(N)$, which can be chosen in such a way that the structure function is a polynomial.
Case 1 : $$\rho(N)=1, \label{rhotype1}$$
$$\label{phitype1}$$
$$\Phi(N)=-\frac{K}{2 \delta }-\frac{\gamma \epsilon }{2 \delta^{3/2}}+\frac{\epsilon ^2}{2 \delta^2}-\frac{\zeta }{2 \sqrt{\delta }}+\frac{\epsilon \tau }{4 \sqrt{\delta }}+\frac{1}{2} N^6 \delta \tau ^2$$ $$+N (-\frac{\gamma ^2}{2 \delta }+\frac{\alpha \gamma }{2 \sqrt{\delta }}+\frac{\gamma \epsilon }{\delta ^{3/2}}-\frac{\alpha \epsilon }{\delta }+\frac{\zeta }{\sqrt{\delta }}-\frac{1}{30} \delta ^{3/2} \lambda +\frac{\sqrt{\delta } \nu }{6}-\frac{\xi }{2}+\frac{\gamma \tau }{4}-\frac{1}{4} \alpha \sqrt{\delta } \tau)$$ $$+N^5 \left(\frac{1}{5} \delta ^{3/2} \lambda +\alpha \sqrt{\delta } \tau -\frac{3 \delta \tau^2}{2}\right)+N^4 (\frac{\alpha^2}{2}-\frac{1}{2} \delta^{3/2} \lambda +\frac{\delta \mu }{4}+\gamma \tau -\frac{5}{2} \alpha \sqrt{\delta } \tau -\frac{9 \delta \tau ^2}{8})$$ $$+N^2 (\frac{\alpha ^2}{2}+\frac{\gamma ^2}{2 \delta }-\frac{3 \alpha \gamma }{2 \sqrt{\delta }}+\frac{\alpha \epsilon }{\delta }+\frac{\delta \mu }{4}-\frac{\sqrt{\delta } \nu }{2}+\frac{\xi }{2}+\frac{3 \gamma \tau }{4}+\frac{1}{4} \alpha \sqrt{\delta } \tau -\frac{3 \epsilon \tau }{2 \sqrt{\delta }}-\frac{3 \delta \tau ^2}{8})$$ $$+N^3 (-\alpha ^2+\frac{\alpha \gamma }{\sqrt{\delta }}+\frac{1}{3} \delta ^{3/2} \lambda -\frac{\delta \mu }{2}+\frac{\sqrt{\delta } \nu }{3}-2 \gamma \tau +\frac{3}{2} \alpha \sqrt{\delta } \tau +\frac{\epsilon \tau }{\sqrt{\delta }}+\frac{\delta \tau ^2}{4}).$$
Case 2: $$\rho(N)^{2}= \frac{1}{\sqrt{3932160 (N+u-1)(N+u)\beta^{10}(1-2(N+u))^{2} }} \label{rhotype2}$$
The structure function is a polynomial of order 12 that we present in Appendix B.
These formulas coincide in the appropriate limit with those obtained for quadratic and cubic algebras. The Casimir operator $K$ can be written in terms of the Hamiltonian only. We have an energy dependent Fock space of dimension $p+1$ if $$\Phi(p+1,u,E)=0, \quad \Phi(0,u,E)=0,\quad \Phi(x)>0, \quad x=1,\ldots ,p. \label{constraints}$$
The Fock space is defined by $$H|E,n>=E|E,n>,\quad N|E,n>=n|E,n>, \quad b|E,0>=0, \label{fock1}$$ $$b^{\dagger}|E,n>=\sqrt{\Phi(n+1,E)}|E,n+1>,\quad b|E,n>=\sqrt{\Phi(n,E)}|E,n-1>. \label{fock2}$$ The energy $E$ and the constant $u$ are solutions of the system of equations given by \[constraints\]. They provide the finite-dimensional unitary representations of dimension $p+1$. Let us mention that in some cases of systems allowing a cubic algebra, the finite-dimensional unitary representations do not allow one to recover all the energy spectrum and the degeneracies [@Que4], and one needs to consider a higher order polynomial algebra [@Que5]. In such cases, a union of finite-dimensional unitary representations needs to be considered for a given energy level. Such phenomena may also occur for quartic associative algebras.
Example
=======
In order to illustrate the application of these formulas in the context of superintegrable systems, let us consider an example. There is no classification of superintegrable systems with a second and a fourth order integral of motion, even for the class of systems with separation of variables in Cartesian coordinates in two-dimensional real Euclidean space. Let us apply the results of the previous section to the following Hamiltonian obtained recently [@Que4]. The Hamiltonian $H$ is given by
$$H_{x}=-\frac{d^{2}}{dx^{2}}+\frac{x^{2}}{4}+\frac{l(l+1)}{x^{2}}+\frac{4}{1+2l+x^{2}}-\frac{8(1+2l)}{(1+2l +x^{2})^{2}}-1, \label{hamilx}$$
$$H_{y}=-\frac{d^{2}}{dy^{2}}+\frac{y^{2}}{4}, \label{hamily}$$
$$H=H_{x}+H_{y}, \label{intH}$$
the second order integral $A$ takes the form
$$A=H_{x}-H_{y}, \label{intA}$$
and the fourth order integral $B$ is
$$B=\frac{1}{16(x+2 l x+x^{3})^{4}}(-256 l^8-256 l^{7}(4+3 x^{2})-128 l^{6}(1+21 x^{2}+7 x^{4})-64 l^{5}(-50-6 x^{2}+42 x^{4}+7 x^{6}) \label{intB}$$
$$-80 l^{4}(-59-96 x^{2}-10 x^{4}+14 x^{6})+16 l^{3}(182+519 x^{2}+380 x^{4}+36 x^{6}+7 x^{10})$$ $$+x^{4} (192-1662 x^{2}+201 x^{4}+16 x^{6}+14 x^{8}+6 x^{10}+x^{12})+8 l^{2}(106+429 x^{2}+648 x^{4}$$ $$+248 x^{6}+33 x^{8}+21 x^{10}+7 x^{12})+4 k (24+126 x^{2}+424 x^{4}-647 x^{6}+66 x^{8}+22 x^{10}+14 x^{12}+3 x^{14})) y$$ $$+\frac{1}{8 x^{2} (1+2 l+x^{2})^{3}}((32 l^{5}+16 l^{4} (5+x^{2})-8 l^{3}(-9-4 x^{2}+6 x^{4})-4 l^{2}(-7-35 x^{2}+18 x^{4}+14 x^{6})$$ $$+x^{2}(30-153 x^{2}+x^{4}-11 x^{6}-3 x^{8})-2 l(-2-62 x^{2}+165 x^{4}+28 x^{6}+11 x^{8})) \frac{\partial}{\partial y} )-\frac{4 l y}{x^{3}}\frac{\partial}{\partial x}$$ $$-\frac{4 l^{2} y}{x^{3}}\frac{\partial}{\partial x}-\frac{64 x^{3} y}{(1+2 l+x^{2})^{3}}\frac{\partial}{\partial x}+\frac{44 x y }{(1+2 l+x^{2})^{2}}\frac{\partial}{\partial x}-\frac{8 l x y}{(1+2 l+x^{2})^{2}}\frac{\partial}{\partial x}$$ $$-\frac{4 x^{3} y }{(1+2 l+x^{2})^{2}}\frac{\partial}{\partial x}+\frac{4 x y}{1+2 l+x^{2}}\frac{\partial}{\partial x}-\frac{l}{x}\frac{\partial}{\partial y}\frac{\partial}{\partial x}-\frac{l^{2}}{x}\frac{\partial}{\partial y}\frac{\partial}{\partial x}$$ $$+\frac{3}{2} x \frac{\partial}{\partial y}\frac{\partial}{\partial x}-l x \frac{\partial}{\partial y}\frac{\partial}{\partial x}-\frac{1}{4} x^{3} \frac{\partial}{\partial y}\frac{\partial}{\partial x}-\frac{12 x^{3}}{(1+2 l+x^{2})^{2}}\frac{\partial}{\partial y}\frac{\partial}{\partial x}$$ $$+\frac{6 x}{1+2 l+x^{2}}\frac{\partial}{\partial y}\frac{\partial}{\partial x}-\frac{4 l x}{1+2 l+x^{2}}\frac{\partial}{\partial y}\frac{\partial}{\partial x}-\frac{2 x^{3}}{1+2 l+x^{2}}\frac{\partial}{\partial y}\frac{\partial}{\partial x}-\frac{3}{2} y \frac{\partial^{2}}{\partial x^{2}}+l y \frac{\partial^{2}}{\partial x^{2}}+\frac{2 l y}{x^2}\frac{\partial^{2}}{\partial x^{2}}+\frac{2 l^{2} y}{x^{2}}\frac{\partial^{2}}{\partial x^{2}}$$ $$+\frac{16 x^{2} y}{(1+2l+x^{2})^{2}}\frac{\partial^{2}}{\partial x^{2}} -\frac{6 y}{1+2 l+x^{2}}\frac{\partial^{2}}{\partial x^{2}}+\frac{4 l y}{1+2 l+x^{2}}\frac{\partial^{2}}{\partial x^{2}}+\frac{2 x^{2} y}{1+2 l+x^{2}}\frac{\partial^{2}}{\partial x^{2}}$$ $$+\frac{3}{2} \frac{\partial}{\partial y}\frac{\partial^{2}}{\partial x^{2}}+x \frac{\partial}{\partial y}\frac{\partial^{3}}{\partial x^{3}}-y \frac{\partial^{4}}{\partial x^{4}}.$$
The structure constants of the quartic algebra are given by
$$\delta=16,\quad \lambda=-\frac{5}{2},\quad \mu=-3H-(10+4l), \label{param}$$
$$\nu=-\frac{3}{2}H^{2}-(15+6l)H-(25+18l),$$ $$\xi=H^{3}-(22 +12 l)H-(35+34l-12l^{2}-8l^{3}),$$ $$\zeta= \frac{3}{4}H^{4}+(5+2l)H^{3}+(3+6l)H^{2}-(20+8l)H-(\frac{1}{4}(175+160-8l^{2}-64l^{3}16l^{4}).$$
We can calculate the Casimir operator and write it only in terms of the Hamiltonian
$$K=-\frac{1}{2}H^{5}-(5+2l)H^{4}-(14+12l)H^{3}+(-1+2l)^{2}(5+2l)H^{2} \label{casimirhamil}$$
$$+\frac{1}{2}(149 +32l +8l^{2}+64l^{3}+16l^{4})H-(4-3+2l)(1+2l)(5+2l).$$
We can calculate the structure function
$$\Phi(H,u,x)=\frac{1}{64}(2+H-4(x+u))(-1+H-2l+4(x+u)) \label{strufunc1}$$
$$(-3+H+2l+4(x+u))(1+H+2l+4(x+u))(5+H+2l+4(x+u)).$$
Thus
$$\Phi(E,u,x)=-16(x+u-(\frac{2+E}{4}))(x+u-(\frac{1}{4}(-5-E-2l))) \label{strufunc2}$$
$$(x+u-(\frac{1}{4}(-1-E-2l)))(x+u-(\frac{1}{4}(3-E-2l)))(x+u-(\frac{1}{4}(1-E+2l))).$$
From $\Phi(E,u,0)=0$ we obtain
$$u_{1}=\frac{2+E}{4},\quad u_{2}=\frac{1}{4}(-5-E-2l), \quad u_{3}=\frac{1}{4}(-1-E-2l), \label{paramu1}$$
$$u_{4}=\frac{1}{4}(3-E-2l), \quad u_{5}=\frac{1}{4}(1-E+2l) .$$
Using $u_{5}$ the structure function takes the form
$$\Phi(x)=\frac{1}{2}(1+2E-2l-4x)x(-1+2l+2x)(1+2l+2x)(3+2l+2x). \label{strufunc3}$$
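For completeness, let us record the substitution: with $u=u_{5}=\frac{1}{4}(1-E+2l)$ the five factors of \[strufunc2\] become $$x+u_{5}-u_{5}=x,\quad x+u_{5}-u_{1}=-\frac{1}{4}(1+2E-2l-4x),\quad x+u_{5}-u_{2}=\frac{1}{2}(2x+2l+3),$$ $$x+u_{5}-u_{3}=\frac{1}{2}(2x+2l+1),\quad x+u_{5}-u_{4}=\frac{1}{2}(2x+2l-1),$$ and collecting the prefactors, $-16\cdot (-\frac{1}{4})\cdot \frac{1}{8}=\frac{1}{2}$, gives \[strufunc3\].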
Using $\Phi(E,u_{5},p+1)=0$ we find the degenerate energy spectrum
$$E=2p+l+\frac{3}{2}, \label{strufuncenerg}$$
with
$$\Phi=2x(p+1-x)(2x+2l-1)(2x+2l+1)(2x+2l+3). \label{strufunc4}$$
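Both statements follow at once from \[strufunc3\]: the condition $\Phi(E,u_{5},p+1)=0$ reads $1+2E-2l-4(p+1)=0$, i.e. $E=2p+l+\frac{3}{2}$, and substituting this value back into \[strufunc3\] turns the first factor into $1+2E-2l-4x=4(p+1-x)$, which gives \[strufunc4\]. All the factors of \[strufunc4\] are positive for $x=1,\ldots ,p$ (whenever $l>-\frac{1}{2}$), so that \[constraints\] is satisfied and the representation is unitary of dimension $p+1$.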
This result corroborates the one obtained using ladder operators [@Que4].
Conclusion
==========
We obtained the most general quartic Poisson and quartic associative algebras, respectively for classical and quantum superintegrable systems with a second and a fourth order integral of motion. In quantum mechanics, this is a deformation of the quadratic Racah algebra $QR(3)$. Unlike in the case of the quadratic and cubic algebras, for quartic algebras the classical and quantum cases differ when we impose the Jacobi identity.
We present the realizations of the quantum quartic algebra in terms of deformed oscillator algebras. We discuss how these results can be used to obtain finite-dimensional unitary representations. This also provides a method to obtain the energy spectrum of superintegrable systems with a second and a fourth order integral of motion. The classification of superintegrable systems beyond quadratically superintegrable ones is difficult; only specific examples with second and fourth order integrals are known, and a classification remains to be carried out for such systems.
Recently, a classification of superintegrable systems and the representation theory of their quadratic algebras was related to various orthogonal polynomials such as the Racah, Wilson, Hahn, Jacobi, Bessel, Krawtchouk, Meixner-Pollaczek, Gegenbauer, Laguerre, Hermite, Tchebicheff and Charlier polynomials. Moreover, specific examples of superintegrable systems with higher order integrals of motion were related to exceptional orthogonal polynomials [@Pos3]. These results point out how superintegrable systems and their algebraic structures are closely related to orthogonal polynomials, and thus the study of the most general quartic associative algebra could provide new connections with orthogonal polynomials.
In addition, in recent years various results for quadratic algebras in three dimensions [@Mil2; @Mil3; @Tan1; @Das9; @Das10; @Mar6] and in five dimensions with a non-Abelian monopole [@Mar7] were obtained, and thus the study of polynomial algebras and their realizations could be extended to three dimensions.
These algebraic structures are constructed in the context of superintegrability; however, they are interesting objects in themselves and, like Lie algebras, they can find applications in other contexts in physics or mathematical physics. Recently some classes of polynomial algebras have found applications in condensed matter physics [@Zha1; @Zha2; @Lin1].
The quadratic Racah algebra $QR(3)$ can be extended to the quadratic Askey-Wilson algebra denoted by $QAW(3)$ [@Gra1] by replacing the commutator by the deformed commutator $[A,B]_{\omega}=e^{\omega}AB-e^{-\omega}BA$. A generalization of the Askey-Wilson algebra with a cubic term and a (p,q)-deformation, denoted $GAW(3)$, was studied in [@Lav1]. The q-deformation of these polynomial associative algebras could also be investigated.
**Acknowledgments**
The research of I.M. was supported by the Australian Research Council through Discovery Project DP110101414. He thanks Phil Isaac for very interesting discussions.
Appendix A
==========
Let us present the following list of identities that allow us to solve the constraints from the Jacobi identity and to calculate the Casimir operator.
1\. $[A,B]=C$\
2. $[ A^{2},B]=\{C,A\}$\
3. $[A^{3},B]= \{C,A^{2}\}+\frac{1}{2}(5)$\
4. $[A^{4},B]=\{C,A^{3}\}+(6)$\
5. $2ACA= \{C,A^{2}\}-\beta \{C,A\}-\delta C $\
6. $ A^{2}CA+ACA^{2}=\{C,A^{3}\}-2\beta \{C,A^{2}\}+(\beta^{2}-\delta)\{A,C\}+\beta \delta C$\
7. $[B,\{A,B\}]=-\{C,B\}$\
8. $[\{A,B\},A]=\{C,A\}$\
9. $[\{A,B\},A^{2}]= - \{C,A^{2}\}-(5)$\
10. $[\{A,B\},A^{3}]=\frac{3}{2}\{C,A^{3}\}+\frac{3}{2}(6)-\frac{\beta}{2}\{C,A^{2}\}-\frac{\beta}{2}(5)-\frac{\delta}{2}\{C,A\}$\
11. $A^{3}CA+ACA^{3}=\{C,A^{4}\}-\delta (3)+\beta (10) $\
12. $[A^{5},B]=\{C,A^{4}\}+(11)+\frac{1}{2}(13)$\
13. $2A^{2}CA^{2}=(11)+\beta (6) -\delta (5)$\
14. $[\{A^{2},B\},A]=-\{C,A^{2}\}$\
15. $[\{A^{3},B\},A]=-\{C,A^{3}\}$\
16. $[B^{2},A]=\{C,B\}$\
17. $\{A,\{B,C\}\}=\{C,\{A,B\}\}+\tau (3) +\alpha (2) -\beta (7) +\gamma C$\
18. $[\{A^{3},B\},B]=\frac{3}{2}(20)-\frac{\beta}{2}(24) -\frac{\delta}{2}\{C,B\}$\
19. $[\{A^{2},B\},B]=(24)$\
20. $\{B,\{C,A^{2}\}\}=\{C,\{A^{2},B\}\}+\beta (21)- (-\gamma +\frac{\delta \tau}{2})(2)-\frac{3\tau}{2}(23) -(-\alpha+\frac{\tau \beta}{2}) (22)$\
21. $[A^{2},B^{2}]=\{C,\{A,B\}\}+\tau (3)+\alpha (2) -\beta (7) + \gamma (1)$\
22. $[A^{2},\{A,B\}]=\{C,A^{2}\}+(5) $\
23. $[A^{2},\{A^{2},B\}]=\{C,A^{3}\}+(6)$\
24. $\{B,\{C,A\}\}=\{C,\{A,B\}\}+\beta \{C,B\}-(-\gamma+\frac{\delta \tau}{2})(1)-\frac{3\tau }{2}(14)+(-\alpha +\frac{\tau \beta}{2})(8)$\
25. $[C^{2},A]=\tau\{C,A^{3}\}-\alpha\{C,A^{2}\}-\beta \{C,\{A,B\}\}-\gamma\{C,A\}-\delta\{C,B\}-2\epsilon C$\
26. $[\{A,B^{2}\},A]=-\{A,\{C,B\}\}=-(17)$\
27. $[C^{2},B]=-\lambda\{C,A^{4}\}-\mu\{C,A^{3}\}-\nu\{C,A^{2}\}-\xi \{C,A\}+\beta \{C,B^{2}\}-(-\gamma+\frac{\delta \tau}{2})\{C,B\}$\
$+\frac{3}{2}\tau \{C,\{A^{2},B\}\}-(-\alpha +\frac{\tau \beta}{2})\{C,\{A,B\}\}-2\zeta C$\
28. $[\{A,B^{2}\},B]=\{C,B^{2}\}$\
29. $[\{A,B\},B]=\{C,B\}$\
Let us note that many of the commutators can be written only in terms of anticommutators of the form $\{C,A^{i}\}$. We also have the following relations
$$[K,A]=(25)+c_{1}(15)+c_{2}(14)+c_{3}(26)+c_{4}(8)+c_{5}(16)+c_{6}(-(1))$$ $$[K,B]=(27)+c_{1}(18)+c_{2}(19)+c_{3}(28)+c_{4}(29)+c_{7}(12)+c_{8}(4)+c_{9}(3)+c_{10}(2)+c_{11}(1)$$
Appendix B
==========
Let us present the structure function for the second type of realization of the quartic algebra.
$$\Phi(N)=-983040 K \beta ^8+8640 \alpha ^2 \beta ^{10}-46080 \alpha \beta ^9 \gamma -184320 \beta ^8 \gamma ^2+46080 \alpha ^2 \beta ^8 \delta +61440 \alpha \beta ^7 \gamma \delta$$ $$-491520 \beta ^6 \gamma ^2 \delta -30720 \alpha ^2 \beta ^6 \delta ^2+737280 \alpha \beta ^5 \gamma \delta ^2+983040 \beta ^4 \gamma ^2 \delta ^2-245760 \alpha ^2 \beta ^4 \delta ^3$$ $$-983040 \alpha \beta ^3 \gamma \delta ^3+245760 \alpha ^2 \beta ^2 \delta ^4-368640 \alpha \beta ^8 \epsilon +983040 \beta ^7 \gamma \epsilon -983040 \alpha \beta ^6 \delta \epsilon$$ $$-3932160 \beta ^5 \gamma \delta \epsilon +1966080 \alpha \beta ^4 \delta ^2 \epsilon +3932160 \beta ^6 \epsilon ^2+245760 \beta ^9 \zeta -983040 \beta ^7 \delta \zeta$$ $$-3204 \beta ^{13} \lambda -10608 \beta ^{11} \delta \lambda +3968 \beta ^9 \delta ^2 \lambda -50688 \beta ^7 \delta ^3 \lambda -128000 \beta ^5 \delta ^4 \lambda$$ $$-12288 \beta ^3 \delta ^5 \lambda -4680 \beta ^{12} \mu -17280 \beta ^{10} \delta \mu +6400 \beta ^8 \delta ^2 \mu +10240 \beta ^6 \delta ^3 \mu +30720 \beta ^4 \delta ^4 \mu$$ $$+11520 \beta ^{11} \nu +46080 \beta ^9 \delta \nu -20480 \beta ^7 \delta ^2 \nu -81920 \beta ^5 \delta ^3 \nu -46080 \beta ^{10} \xi -122880 \beta ^8 \delta \xi$$ $$+245760 \beta ^6 \delta ^2 \xi -15120 \alpha \beta ^{11} \tau +40320 \beta ^{10} \gamma \tau -66240 \alpha \beta ^9 \delta \tau +153600 \beta ^8 \gamma \delta \tau$$ $$-23040 \alpha \beta ^7 \delta ^2 \tau -184320 \beta ^6 \gamma \delta ^2 \tau +92160 \alpha \beta ^5 \delta ^3 \tau -491520 \beta ^4 \gamma \delta ^3 \tau +307200 \alpha \beta ^3 \delta ^4 \tau$$ $$+491520 \beta ^2 \gamma \delta ^4 \tau -245760 \alpha \beta \delta ^5 \tau +322560 \beta ^9 \epsilon \tau +552960 \beta ^7 \delta \epsilon \tau +737280 \beta ^5 \delta ^2 \epsilon \tau$$ $$-983040 \beta ^3 \delta ^3 \epsilon \tau +5535 \beta ^{12} \tau ^2+5400 \beta ^{10} \delta \tau ^2-115440 \beta ^8 \delta ^2 \tau ^2-264960 \beta ^6 \delta ^3 \tau ^2$$ $$-311040 \beta ^4 \delta ^4 \tau ^2-92160 \beta ^2 \delta ^5 \tau ^2+61440 \delta ^6 \tau ^2$$ $$+N (3932160 K \beta ^8-46080 \alpha ^2 \beta ^{10}+307200 \alpha \beta ^9 \gamma +491520 \beta ^8 \gamma ^2-307200 \alpha ^2 \beta ^8 \delta +491520 \alpha \beta ^7 \gamma \delta$$ $$+1966080 \beta ^6 \gamma ^2 \delta -245760 \alpha ^2 \beta ^6 \delta ^2-2949120 \alpha \beta ^5 \gamma \delta ^2+983040 \alpha ^2 \beta ^4 \delta ^3+983040 \alpha \beta ^8 \epsilon$$ $$-3932160 \beta ^7 \gamma \epsilon +3932160 \alpha \beta ^6 \delta \epsilon -1966080 \beta ^9 \zeta +3932160 \beta ^7 \delta \zeta +12576 \beta ^{13} \lambda +38592 \beta ^{11} \delta \lambda$$ $$-38912 \beta ^9 \delta ^2 \lambda +141312 \beta ^7 \delta ^3 \lambda +450560 \beta ^5 \delta ^4 \lambda +49152 \beta ^3 \delta ^5 \lambda +20640 \beta ^{12} \mu +92160 \beta ^{10} \delta \mu$$ $$+66560 \beta ^8 \delta ^2 \mu +81920 \beta ^6 \delta ^3 \mu -122880 \beta ^4 \delta ^4 \mu -61440 \beta ^{11} \nu -307200 \beta ^9 \delta \nu -163840 \beta ^7 \delta ^2 \nu$$ $$+327680 \beta ^5 \delta ^3 \nu +307200 \beta ^{10} \xi +983040 \beta ^8 \delta \xi -983040 \beta ^6 \delta ^2 \xi +72000 \alpha \beta ^{11} \tau -245760 \beta ^{10} \gamma \tau$$ $$+384000 \alpha \beta ^9 \delta \tau -1105920 \beta ^8 \gamma \delta \tau +522240 \alpha \beta ^7 \delta ^2 \tau +245760 \alpha \beta ^5 \delta ^3 \tau +1966080 \beta ^4 \gamma \delta ^3 \tau$$ $$-1228800 \alpha \beta ^3 \delta ^4 \tau -675840 \beta ^9 \epsilon \tau -1474560 \beta ^7 \delta \epsilon \tau -2949120 \beta ^5 \delta ^2 \epsilon \tau -23400 \beta ^{12} \tau ^2$$ 
$$-38880 \beta ^{10} \delta \tau ^2+372480 \beta ^8 \delta ^2 \tau ^2+844800 \beta ^6 \delta ^3 \tau ^2+1013760 \beta ^4 \delta ^4 \tau ^2+368640 \beta ^2 \delta ^5 \tau^2)$$ $$+N^2 (-3932160 K \beta ^8+15360 \alpha ^2 \beta ^{10}-552960 \alpha \beta ^9 \gamma +491520 \beta ^8 \gamma ^2+552960 \alpha ^2 \beta ^8 \delta -3440640 \alpha \beta ^7 \gamma \delta$$ $$-1966080 \beta ^6 \gamma ^2 \delta +1720320 \alpha ^2 \beta ^6 \delta ^2+2949120 \alpha \beta ^5 \gamma \delta ^2-983040 \alpha ^2 \beta ^4 \delta ^3+983040 \alpha \beta ^8 \epsilon$$ $$+3932160 \beta ^7 \gamma \epsilon -3932160 \alpha \beta ^6 \delta \epsilon +5898240 \beta ^9 \zeta -3932160 \beta ^7 \delta \zeta +18976 \beta ^{13} \lambda +72512 \beta ^{11} \delta \lambda$$ $$+346112 \beta ^9 \delta ^2 \lambda +473088 \beta ^7 \delta ^3 \lambda -204800 \beta ^5 \delta ^4 \lambda -49152 \beta ^3 \delta ^5 \lambda +19040 \beta ^{12} \mu -30720 \beta ^{10} \delta \mu$$ $$-250880 \beta ^8 \delta ^2 \mu -573440 \beta ^6 \delta ^3 \mu +122880 \beta ^4 \delta ^4 \mu +20480 \beta ^{11} \nu +552960 \beta ^9 \delta \nu +1146880 \beta ^7 \delta ^2 \nu$$ $$-327680 \beta ^5 \delta ^3 \nu -552960 \beta ^{10} \xi -2949120 \beta ^8 \delta \xi +983040 \beta ^6 \delta ^2 \xi +27840 \alpha \beta ^{11} \tau +307200 \beta ^{10} \gamma \tau$$ $$-353280 \alpha \beta ^9 \delta \tau +2580480 \beta ^8 \gamma \delta \tau -1628160 \alpha \beta ^7 \delta ^2 \tau +2949120 \beta ^6 \gamma \delta ^2 \tau -2703360 \alpha \beta ^5 \delta ^3 \tau$$ $$-1966080 \beta ^4 \gamma \delta ^3 \tau +1228800 \alpha \beta ^3 \delta ^4 \tau -1536000 \beta ^9 \epsilon \tau -1474560 \beta ^7 \delta \epsilon \tau +2949120 \beta ^5 \delta ^2 \epsilon \tau$$ $$-21000 \beta ^{12} \tau ^2+96480 \beta ^{10} \delta \tau ^2+433920 \beta ^8 \delta ^2 \tau ^2+814080 \beta ^6 \delta ^3 \tau ^2-92160 \beta ^4 \delta ^4 \tau ^2-368640 \beta ^2 \delta ^5 \tau ^2)$$ $$+N^3 (307200 \alpha ^2 \beta ^{10}-491520 \alpha \beta ^9 \gamma -1966080 \beta ^8 \gamma ^2+491520 \alpha ^2 \beta ^8 \delta +5898240 \alpha \beta ^7 \gamma \delta$$ $$-2949120 \alpha ^2 \beta ^6 \delta ^2-3932160 \alpha \beta ^8 \epsilon -7864320 \beta ^9 \zeta -120448 \beta ^{13} \lambda -367616 \beta ^{11} \delta \lambda -860160 \beta ^9 \delta ^2 \lambda$$ $$-1720320 \beta ^7 \delta ^3 \lambda -491520 \beta ^5 \delta ^4 \lambda -197120 \beta ^{12} \mu -614400 \beta ^{10} \delta \mu -368640 \beta ^8 \delta ^2 \mu +983040 \beta ^6 \delta ^3 \mu$$ $$+409600 \beta ^{11} \nu+491520 \beta ^9 \delta \nu -1966080 \beta ^7 \delta ^2 \nu -491520 \beta ^{10} \xi +3932160 \beta ^8 \delta \xi -599040 \alpha \beta ^{11} \tau$$ $$+860160 \beta ^{10} \gamma \tau -1781760 \alpha \beta ^9 \delta \tau -983040 \beta ^8 \gamma \delta \tau -245760 \alpha \beta ^7 \delta ^2 \tau -5898240 \beta ^6 \gamma \delta ^2 \tau$$ $$+4915200 \alpha \beta ^5 \delta ^3 \tau +3440640 \beta ^9 \epsilon \tau +5898240 \beta ^7 \delta \epsilon \tau +204000 \beta ^{12} \tau ^2+69120 \beta ^{10} \delta \tau ^2$$ $$-1981440 \beta ^8 \delta ^2 \tau ^2-4300800 \beta ^6 \delta ^3 \tau ^2-1843200 \beta ^4 \delta ^4 \tau ^2)$$ $$+N^4 (-522240 \alpha ^2 \beta ^{10}+2703360 \alpha \beta ^9 \gamma +983040 \beta ^8 \gamma ^2-2703360 \alpha ^2 \beta ^8 \delta -2949120 \alpha \beta ^7 \gamma \delta$$ $$+1474560 \alpha ^2 \beta ^6 \delta ^2+1966080 \alpha \beta ^8 \epsilon +3932160 \beta ^9 \zeta +12608 \beta ^{13} \lambda -77312 \beta ^{11} \delta \lambda -307200 \beta ^9 \delta ^2 \lambda$$ $$+614400 \beta ^7 \delta ^3 \lambda +245760 \beta ^5 \delta ^4 \lambda +136960 \beta 
^{12} \mu +1044480 \beta ^{10} \delta \mu +2027520 \beta ^8 \delta ^2 \mu -491520 \beta ^6 \delta ^3 \mu$$ $$-696320 \beta ^{11} \nu -2703360 \beta ^9 \delta \nu +983040 \beta ^7 \delta ^2 \nu +2703360 \beta ^{10} \xi -1966080 \beta ^8 \delta \xi +622080 \alpha \beta ^{11} \tau$$ $$-2396160 \beta ^{10} \gamma \tau +3962880 \alpha \beta ^9 \delta \tau -4423680 \beta ^8 \gamma \delta \tau +6266880 \alpha \beta ^7 \delta ^2 \tau +2949120 \beta ^6 \gamma \delta ^2 \tau$$ $$-2457600 \alpha \beta ^5 \delta ^3 \tau +737280 \beta ^9 \epsilon \tau -2949120 \beta ^7 \delta \epsilon \tau -150000 \beta ^{12} \tau ^2-933120 \beta ^{10} \delta \tau ^2$$ $$-1313280 \beta ^8 \delta ^2 \tau ^2+1290240 \beta ^6 \delta ^3 \tau ^2+921600 \beta ^4 \delta ^4 \tau ^2)$$ $$+N^5 (-245760 \alpha ^2 \beta ^{10}-2949120 \alpha \beta ^9 \gamma +2949120 \alpha ^2 \beta ^8 \delta +373760 \beta ^{13} \lambda +1165312 \beta ^{11} \delta \lambda$$ $$+2457600 \beta ^9 \delta ^2 \lambda +1474560 \beta ^7 \delta ^3 \lambda +547840 \beta ^{12} \mu +491520 \beta ^{10} \delta \mu -2211840 \beta ^8 \delta ^2 \mu -327680 \beta ^{11} \nu$$ $$+2949120 \beta ^9 \delta \nu -2949120 \beta ^{10} \xi +1259520 \alpha \beta ^{11} \tau +983040 \beta ^{10} \gamma \tau -245760 \alpha \beta ^9 \delta \tau +5898240 \beta ^8 \gamma \delta \tau$$ $$-7372800 \alpha \beta ^7 \delta ^2 \tau -2949120 \beta ^9 \epsilon \tau -441600 \beta ^{12} \tau ^2$$ $$+1428480 \beta ^{10} \delta \tau ^2+6819840 \beta ^8 \delta ^2 \tau ^2+3686400 \beta ^6 \delta ^3 \tau ^2)$$ $$+N^6(1228800 \alpha ^2 \beta ^{10}+983040 \alpha \beta ^9 \gamma -983040 \alpha ^2 \beta ^8 \delta -185344 \beta ^{13} \lambda -161792 \beta ^{11} \delta \lambda$$ $$-491520 \beta ^9 \delta ^2 \lambda -491520 \beta ^7 \delta ^3 \lambda -803840 \beta ^{12} \mu -2457600 \beta ^{10} \delta \mu +737280 \beta ^8 \delta ^2 \mu +1638400 \beta ^{11} \nu$$ $$-983040 \beta ^9 \delta \nu +983040 \beta ^{10} \xi -2426880 \alpha \beta ^{11} \tau +1966080 \beta ^{10} \gamma \tau -5652480 \alpha \beta ^9 \delta \tau$$ $$-1966080 \beta ^8 \gamma \delta \tau +2457600 \alpha \beta ^7 \delta ^2 \tau +983040 \beta ^9 \epsilon \tau +764160 \beta ^{12} \tau ^2$$ $$+1428480 \beta ^{10} \delta \tau ^2-2396160 \beta ^8 \delta ^2 \tau ^2-1228800 \beta ^6 \delta ^3 \tau ^2)$$ $$+N^7 (-983040 \alpha ^2 \beta ^{10}-446464 \beta ^{13} \lambda -1556480 \beta ^{11} \delta \lambda -1966080 \beta ^9 \delta ^2 \lambda -204800 \beta ^{12} \mu$$ $$+1966080 \beta ^{10} \delta \mu -1310720 \beta ^{11} \nu +245760 \alpha \beta ^{11} \tau -1966080 \beta ^{10} \gamma \tau$$ $$+4915200 \alpha \beta ^9 \delta \tau -291840 \beta ^{12} \tau ^2-4792320 \beta ^{10} \delta \tau ^2-3686400 \beta ^8 \delta ^2 \tau ^2)$$ $$+N^8 (245760 \alpha ^2 \beta ^{10}+123904 \beta ^{13} \lambda +20480 \beta ^{11} \delta \lambda +491520 \beta ^9 \delta ^2 \lambda +972800 \beta ^{12} \mu -491520 \beta ^{10} \delta \mu$$ $$+327680 \beta ^{11} \nu +1781760 \alpha \beta ^{11} \tau +491520 \beta ^{10} \gamma \tau\ -1228800 \alpha \beta ^9 \delta \tau$$ $$-618240 \beta ^{12} \tau ^2+1751040 \beta ^{10} \delta \tau ^2+921600 \beta ^8 \delta ^2 \tau ^2)$$ $$+N^9 (368640 \beta ^{13} \lambda +1228800 \beta ^{11} \delta \lambda -614400 \beta ^{12} \mu -1228800 \alpha \beta ^{11} \tau +1259520 \beta ^{12} \tau ^2+1843200 \beta ^{10} \delta \tau ^2)$$ $$+N^{10} (73728 \beta ^{13} \lambda -245760 \beta ^{11} \delta \lambda +122880 \beta ^{12} \mu +245760 \alpha \beta ^{11} \tau -460800 \beta ^{12} \tau ^2-368640 \beta ^{10} \delta \tau ^2)$$ $$+N^{11} (-294912 \beta 
^{13} \lambda -368640 \beta ^{12} \tau ^2)$$ $$+N^{12} (49152 \beta ^{13} \lambda +61440 \beta ^{12} \tau ^2)$$
[99]{}
Zhedanov A. S., Hidden Symmetry of Askey-Wilson Polynomials, [ *Theor and Math. Phys.*]{} [**89**]{} 2 (1991) 1146-1157
Granovsky Ya.A, Zhedanov A.S. and Lutzenko I.M., Quadratic algebra as a “hidden” symmetry of the Hartmann potential, [*J.Phys.A:Math.Gen.*]{} [**4**]{} 3887-3894 (1991)
Granovskii Ya.I., Zhedanov A.S. and Lutzenko I.M., Quadratic algebras and dynamics in curved spaces. II. The Kepler problem [*Theoret. and Math. Phys.*]{} [**89**]{} (1992) 474-480, [*Theoret. and Math. Phys.*]{} [**91**]{} (1992) 604-612
Granovskii Ya.I., Lutzenko I.M., Zhedanov A.S., Mutual integrability, quadratic algebras, and dynamical symmetry, [*Ann. Physics*]{} [**217**]{} (1992), 1-20.
Granovskii Ya.I., Zhedanov A.S., Lutsenko I.M., Quadratic algebras and dynamics in curved space. I. Oscillator, [*Theoret. and Math. Phys.*]{} [**91**]{} (1992), 474-480.
Zhedanov A.S., Hidden symmetry algebra and overlap coefficients for two ring-shaped potentials, [*J.Phys.A.:Math.gen.*]{} [**26**]{} 4633-4641 (1993)
Bonatsos D., Daskaloyannis C. and Kokkotas K., Quantum-algebraic description of quantum superintegrable systems in two dimensions, [*Phys.Rev.*]{} A [**48**]{} (1993) R3407-R3410
Bonatsos D., Daskaloyannis C. and Kokkotas K., Deformed oscillator algebras for two-dimensional quantum superintegrable systems, [*Phys. Rev.*]{} A [**50**]{} (1994) 3700-3709
Létourneau P. and Vinet L., Superintegrable systems: polynomial algebras and quasi-exactly solvable Hamiltonians, [*Ann. Phys.*]{} (New York) [**243**]{} (1995) 144-168
Bonatsos D., Daskaloyannis C., Quantum groups and their applications in nuclear physics, [*Prog.Part.Nucl.Phys.*]{} [**43**]{} (1999) 537-618
Daskaloyannis C., Quadratic Poisson algebras of two-dimensional classical superintegrable systems and quadratic associative algebras of quantum superintegrable systems, [*J.Math.Phys.*]{} [**42**]{} (2001) 1100-1119
Kalnins E. G., Kress J. M. and Miller W. Jr.,Second order superintegrable systems in conformally flat spaces. 1. 2D classical structure theory, [*J. Math. Phys.*]{} [**46**]{} 053509 (2005)
Kress J.M., Equivalence of superintegrable systems in two dimensions, [*Phys. Atomic Nuclei*]{} [**70**]{} 560-566 (2007)
Daskaloyannis C., Ypsilantis K., Unified treatment and classification of superintegrable systems with integrals quadratic in momenta on a two dimensional manifold, [*J. Math. Phys.*]{} [**47**]{} (2006), 042904, 38 pages
Daskaloyannis C., Tanoudis Y., Quantum superintegrable systems with quadratic integrals on a two dimensional manifold, [*J. Math. Phys.*]{} [**48**]{} (2007), 072108, 22 pages
Quesne C., Quadratic algebra approach to an exactly solvable position-dependent mass Schrödinger equation in two dimensions [*SIGMA*]{} [**3**]{} (2007) 067, 14 pages
Daskaloyannis C. and Tanoudis Y., Classification of the quantum two-dimensional superintegrable systems with quadratic integrals and the Stäckel transforms, [*Phys. Atomic Nuclei*]{} [**71**]{} 853-861 (2008)
Post S., Models of Quadratic Algebras Generated by Superintegrable Systems in 2D, [*SIGMA*]{} [**7**]{} (2011) 036, 20 pages
Kalnins E.G., Miller Jr W. and Post S., Models for Quadratic Algebras Associated with Second Order Superintegrable Systems in 2D, [*SIGMA*]{} [**4**]{} (2008) 008, 21 pages
Kalnins E.G., Miller Jr W., Post S., Contractions of 2D 2nd order quantum superintegrable systems and the Askey scheme for hypergeometric orthogonal polynomials arXiv:1212.4766
Sklyanin E. K., On some algebraic structures related to the Yang Baxter equation [*Funkts. Anal. Prilozhen.*]{} [**16**]{} 4 (1982) 22–34
Sklyanin E. K., On some algebraic structures related to the Yang Baxter equation. II. Representations of a quantum algebra, [*Funkts. Anal. Prilozhen.*]{} [**17**]{} 4 (1983) 34–48
Daskaloyannis C, Generalized deformed oscillator and nonlinear algebras [*J.Phys.A: Math.Gen.*]{} [**24**]{} (1991) L789-L794
Quesne C., Generalized deformed parafermions, nonlinear deformations of so(3) and exactly solvable potentials, [*Phys. Lett.*]{} A [**193**]{} (1994), 245-250.
Marquette I. and Winternitz P., Polynomial Poisson algebras for superintegrable systems with a third order integral of motion, [*J.Math. Phys.*]{} [**48**]{} (2007) 012902 16 pages
Marquette I., Superintegrability with third order integrals of motion, cubic algebras and supersymmetric quantum mechanics I:Rational function potentials, [*J. Math. Phys.*]{} [**50**]{} (2009) 012101 23 pages
Marquette I., Superintegrability with third order integrals of motion, cubic algebras and supersymmetric quantum mechanics II: Painlevé transcendent potentials, [*J. Math. Phys.*]{} [**50**]{} (2009) 095202 18 pages
Marquette I., Superintegrability and higher order polynomial algebras, [*J.Phys. A: Math. Theor.*]{} [**43**]{} (2010) 135203 15 pages
Marquette I., An infinite family of superintegrable systems from higher order ladder operators and supersymmetry, Proceeding of the GROUP28: The XXVIII International Colloquium on Group-Theoretical Methods in Physics, [*J.Phys.:Conf. Series*]{} [**284**]{} (2011) 012047 8 pages
Kalnins E.G., Kress J.M. and Miller Jr W., A Recurrence Relation Approach to Higher Order Quantum Superintegrability, [*SIGMA*]{} [**7**]{} 031 (2011) 24 pages
Kalnins E.G., Kress J.M. and Miller W. Jr., Extended Kepler-Coulomb quantum superintegrable systems in 3 dimensions, arXiv:1210.8004
Post S., Tsujimoto S. and Vinet L., Families of superintegrable Hamiltonians constructed from exceptional polynomials, arXiv:1206.0480
Marquette I. and Quesne C., New families of superintegrable systems from Hermite and Laguerre exceptional orthogonal polynomials, [*J. Math. Phys.*]{} [**54**]{} 042102 (2013)
Marquette I. and Quesne C., New ladder operators for a rational extension of the harmonic oscillator and superintegrability of some two-dimensional systems, arXiv:1303.7150
Kalnins E.G., Miller W. Jr and Pogosyan G.S.,Superintegrability in three dimensional Euclidean space [*J. Math. Phys.*]{} [**40**]{} (1999) 708-725
Kalnins E. G., Kress J. M. Miller W. Jr., Second order superintegrable systems in conformally flat spaces. 3. 3D classical structure theory, [*J. Math. Phys.*]{} [**46**]{} 103507 (2005)
Daskaloyannis C. and Tanoudis Y., Quadratic algebras for three-dimensional nondegenerate superintegrable systems with quadratic integrals of motion, [*Talk XXVII Colloquium on Group Theoretical Methods in Physics, Yerevan, Armenia, Aug (2008)*]{} arXiv:0902.0130
Daskaloyannis C., Tanoudis Y., Ternary Poisson algebra for the nondegenerate three dimensional Kepler-Coulomb potential, in Proceedings of the Fourth International Workshop on Group Analysis of Differential Equations and Integrable Systems (October 26-30, 2008, Protaras, Cyprus), University of Cyprus, Nicosia, 2009, 173-181
Daskaloyannis C., Tanoudis Y., Quadratic algebras for three-dimensional superintegrable systems, [*Phys. Atomic Nuclei*]{} [**73**]{} (2010), 214-221
Marquette I, Generalized MICZ-Kepler system, duality, polynomial, and deformed oscillator algebras [*J.Math.Phys.*]{} [**51**]{} (2010) 102105, 10 pages
Marquette I., Generalized five-dimensional Kepler system, Yang-Coulomb monopole and Hurwitz transformation [*J. Math.Phys.*]{} [**53**]{} 022103 (2012)
Terwilliger P., The Universal Askey-Wilson Algebra [*SIGMA*]{} [**7**]{} 069 (2011) 24 pages
Lee Y.-H., Yang W.-L. and Zhang Y.-Z., Polynomial algebras and exact solutions of general quantum nonlinear optical models I: two-mode boson systems, [*J. Phys. A: Math. Theor.*]{} [**43**]{} 185204 (2010) 17 pages
Lee Y.-H., Yang W.-L. and Zhang Y.-Z., Polynomial algebras and exact solutions of general quantum nonlinear optical models: II. Multi-mode boson systems, [*J. Phys. A: Math. Theor.*]{} [**43**]{} 375211 (2010) 12 pages
Lee Y.-H.,Links J. and Zhang Y.-Z., Exact solutions for a family of spin-boson systems, [*Nonlinearity*]{} [**24**]{} (2011) 1975–1986
Lavrenov A.N., Deformation of the Askey-Wilson algebra with three generators, [*J.Phys.A: Math. Gen.*]{} [**28**]{} (1995) L503-L506
---
abstract: 'We give a locally minimal, but not globally minimal bridge position of a knot, that is, an unstabilized, nonminimal bridge position of a knot. It implies that a bridge position cannot always be simplified so that the bridge number monotonically decreases to the minimal.'
address:
- 'Department of Natural Sciences, Faculty of Arts and Sciences, Komazawa University, 1-23-1 Komazawa, Setagaya-ku, Tokyo, 154-8525, Japan'
- 'Department of Mathematics, Graduate School of Science, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka, 560-0043, Japan'
author:
- Makoto Ozawa
- Kazuto Takao
title: 'A locally minimal, but not globally minimal bridge position of a knot'
---
[^1]
Introduction {#intro}
============
A [*knot*]{} is an equivalence class of embeddings of the circle $S^1$ into the 3-sphere $S^3$, where two embeddings are said to be [*equivalent*]{} if an ambient isotopy of $S^3$ deforms one to the other. In knot theory, it is a fundamental and important problem to determine whether given two representatives of knots are equivalent, and furthermore to describe how one can be deformed to the other. In particular, a simplification to a “minimal position" is of great interest.
Let $h:S^3\to \mathbb{R}$ be the standard height function, that is, the restriction of $\mathbb{R}^4=\mathbb{R}^3\times \mathbb{R}\to \mathbb{R}$ to $S^3$. A [*Morse position*]{} of a knot $K$ is a representative $k$ such that $k$ is disjoint from the poles of $S^3$ and the critical points of $h|_k$ are all non-degenerate and have pairwise distinct values. Since $k$ is a circle, there are the same number of local maxima and local minima.
In [@Sc], Schubert introduced the notion of bridge position and bridge number for knots. A [*bridge position*]{} of $K$ is a Morse position $k$ where all the local maxima are above all the local minima with respect to $h$. A level $2$-sphere $S$ separating the local maxima from the local minima is called a [*bridge sphere*]{} of $k$. If $k$ intersects $S$ in $2n$ points, then $k$ is called an [*$n$-bridge position*]{} and $n$ is called the [*bridge number*]{} of $k$. The minimum of the bridge number over all bridge positions of $K$ is called the [*bridge number*]{} of $K$. A knot with the bridge number $n$ is called an [*$n$-bridge knot*]{}. The bridge number is a fundamental geometric invariant of knots as well as the crossing number.
In [@G], Gabai introduced the notion of width for knots. Suppose that $k$ is a Morse position of a knot $K$, let $t_1,\ldots,t_m$ be the critical levels of $h|_k$ such that $t_i<t_{i+1}$ for $i=1,\ldots,m-1$, and choose regular levels $r_1,\ldots,r_{m-1}$ of $h|_k$ so that $t_i<r_i<t_{i+1}$. The [*width*]{} of $k$ is defined as $\sum_{i=1}^{m-1}|h^{-1}(r_i)\cap k|$, and the [*width*]{} of $K$ is the minimum of the width over all Morse positions of $K$.
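For example, if $k$ is an $n$-bridge position of $K$, then $h|_k$ has $2n$ critical points and the intersection numbers $|h^{-1}(r_i)\cap k|$ are $2,4,\ldots ,2n-2,2n,2n-2,\ldots ,4,2$, so that the width of $k$ is $2n^{2}$. In particular, the width of the trivial knot is $2$, realized by the $1$-bridge position.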
Two Morse positions of a knot are [*isotopic*]{} if an ambient isotopy of $S^3$ deforms one to the other keeping it a Morse position except for exchanging two levels of local maxima or two levels of local minima. Such an isotopy preserves the width of a Morse position and the bridge number of a bridge position. The following two types of moves change the isotopy class of a Morse position.
Suppose that $k$ is a Morse position of a knot $K$ and let $t_1,r_1,t_2,\ldots,r_{m-1},t_m$ as above. We say that the level 2-sphere $h^{-1}(r_i)$ is [*thick*]{} if $t_i$ is a locally minimal level of $h|_k$ and $t_{i+1}$ is a locally maximal level of $h|_k$, and that $h^{-1}(r_i)$ is [*thin*]{} if $t_i$ is a locally maximal level and $t_{i+1}$ is a locally minimal level. A [*strict upper*]{} (resp. [*lower*]{}) [*disk*]{} for a thick sphere $S$ is a disk $D\subset S^3$ such that the interior of $D$ does not intersect with $k$ and any thin sphere, the interior of $D$ contains no critical points with respect to $h$, and $\partial D$ consists of a subarc $\alpha$ of $k$ and an arc in $S-k$. Note that the arc $\alpha$ has exactly one local maximum (resp. minimum). First suppose that there exist a strict upper disk $D_+$ and a strict lower disk $D_-$ for a thick sphere $S$ such that $D_+\cap D_-$ consists of a single point of $k\cap S$. Then $k$ can be isotoped along $D_+$ and $D_-$ to cancel the local maximum in $\partial D_+$ and the local minimum in $\partial D_-$. In [@Sch2], Schultens called this move a [*Type I move*]{}. The inverse operation of a Type I move is called a [*stabilization*]{} ([@B], [@O1]) or a [*perturbation*]{} ([@ST], [@T]) and the resultant position is said to be [*stabilized*]{} or [*perturbed*]{}. Next suppose that there exist a strict upper disk $D_+$ and a strict lower disk $D_-$ for a thick sphere $S$ such that $D_+\cap D_-=\emptyset$. Then $k$ can be isotoped along $D_+$ and $D_-$ to exchange the two levels of the local maximum in $\partial D_+$ and the local minimum in $\partial D_-$. Schultens ([@Sch2]) called this move a [*Type II move*]{}.
![Type I move[]{data-label="type1"}](type1.eps){width=".8\linewidth"}
![Type II move[]{data-label="type2"}](type2.eps){width=".8\linewidth"}
The following are fundamental theorems for bridge positions and Morse positions which correspond to Reidemeister’s theorem ([@R2], [@AB]) for knot diagrams.
\[fundamental1\] Two knots are equivalent if and only if their two bridge positions can be related by a sequence of Type I moves and the inverse operations up to isotopy.
\[fundamental2\] Two knots are equivalent if and only if their two Morse positions can be related by a sequence of Type I and Type II moves and the inverse operations up to isotopy.
We say that a bridge position of a knot $K$ is [*globally minimal*]{} if it realizes the bridge number of $K$, and a bridge position is [*locally minimal*]{} if it does not admit a Type I move. Similarly, we say that a Morse position of a knot $K$ is [*globally minimal*]{} if it realizes the width of $K$, and a Morse position is [*locally minimal*]{} if it does not admit a Type I move nor a Type II move. Note that if a bridge (resp. Morse) position of a knot is globally minimal, then it is locally minimal. Otal (later Hayashi–Shimokawa, the first author) proved the converse for bridge positions of the trivial knot.
\[Otal\] A locally minimal bridge position of the trivial knot is globally minimal.
This implies that even complicated bridge positions of the trivial knot can be simplified into the $1$-bridge position only by Type I moves. Furthermore, Otal (later Scharlemann–Tomova) showed that the same statement for 2-bridge knots is true ([@O2], [@ST]), and the first author showed that the same statement for torus knots is also true ([@OZ2]). Then, the following problem is naturally proposed.
\[question\] Is any locally minimal bridge position of any knot globally minimal?
In this paper, we give a negative answer to this problem. It implies that a bridge position cannot always be simplified into a minimal bridge position only by Type I moves.
\[main\] A $4$-bridge position $\kappa $ of a knot $\mathcal{K}$ in Figure \[example\] is locally minimal, but not globally minimal.
![A $4$-bridge position of a knot.[]{data-label="example"}](example.eps "fig:")
To prove this theorem, we show that the Hempel distance of the 4-bridge position is greater than $1$ by the method developed by the second author ([@Ta]). This guarantees that the 4-bridge position is locally minimal (see Lemma \[minimal\]).
On the other hand, Zupan showed that locally minimal, but not globally minimal Morse positions exist even if the knot is trivial.
There exists a locally minimal Morse position of the trivial knot which is not globally minimal.
We remark that this answers Scharlemann’s question [@S1 Question 3.5]. By using this example, Zupan showed that there exist infinitely many locally minimal, but not globally minimal Morse positions for any knot.
Proof of the main theorem
=========================
Note that Figure \[example\] displays a $3$-bridge position of $\mathcal{K}$ after a $(\pi /2)$-rotation of $\kappa $, and so the $4$-bridge position is not globally minimal. To prove that the $4$-bridge position is locally minimal, we apply the following:
\[minimal\] An $n$-bridge position is locally minimal if it has Hempel distance greater than $1$.
\[criterion\] An $n$-bridge position has Hempel distance greater than $1$ if a bridge diagram of it satisfies the well-mixed condition.
Here $n$ is an integer greater than $2$. The notions of [*Hempel distance*]{}, [*bridge diagram*]{} and [*well-mixed condition*]{} are described in the following subsections.
Hempel distance
---------------
Suppose that $k$ is an $n$-bridge position of a knot $K$ and $S$ is a bridge sphere of $k$. Let $B_+,\ B_-\subset S^3$ be the $3$-balls divided by $S$, and $\tau _\varepsilon $ be the $n$ arcs $k\cap B_\varepsilon $ for each $\varepsilon =\pm $.
Consider a properly embedded disk $E$ in $B_\varepsilon $. We call $E$ an [*essential disk*]{} of $(B_\varepsilon ,\tau _\varepsilon )$ if $E$ is disjoint from $\tau _\varepsilon $ and $\partial E$ is essential in the $2n$-punctured sphere $S\setminus k$. Here, a simple closed curve on a surface is said to be [*essential*]{} if it neither bounds a disk nor is peripheral in the surface. The essential simple closed curves on $S\setminus k$ form a $1$-complex ${\mathcal C}(S\setminus k)$, called the [*curve graph*]{} of $S\setminus k$. The vertices of ${\mathcal C}(S\setminus k)$ are the isotopy classes of essential simple closed curves on $S\setminus k$, and a pair of vertices spans an edge of ${\mathcal C}(S\setminus k)$ if the corresponding isotopy classes can be realized as disjoint curves. The [*Hempel distance*]{} of $k$ is defined as $${\rm min}\{ d([\partial E_+],[\partial E_-])\mid E_\varepsilon \text{ is an essential disk of }(B_\varepsilon ,\tau _\varepsilon )\text{ for each }\varepsilon =\pm .\} ,$$ where $d([\partial E_+],[\partial E_-])$ is the minimal distance between $[\partial E_+]$ and $[\partial E_-]$ measured in ${\mathcal C}(S\setminus k)$ with the path metric.
Assume that $k$ has Hempel distance $0$. By the definition, there exist essential disks $E_+,\ E_-$ of $(B_+,\tau _+)$, $(B_-,\tau _-)$, respectively, such that $\partial E_+=\partial E_-$, which requires that $k$ is split. Since the circle $k$ is connected, the Hempel distance is at least $1$. The Hempel distance is $1$ if there exist essential disks $E_+,\ E_-$ of $(B_+,\tau _+),\ (B_-,\tau _-)$, respectively, such that $\partial E_+\cap \partial E_-=\emptyset$. We can find such disks for a not locally minimal bridge position as follows:
Assume that the $n$-bridge position $k$ is not locally minimal. By definition, there exist a strict upper disk $D_+\subset B_+$ and a strict lower disk $D^1_-\subset B_-$ for $S$ such that $D_+\cap D^1_-$ consists of a single point of $k\cap S$. Note that $\tau _-$ is $n$ arcs each of which has a single local minimum. We can choose strict lower disks $D^2_-,\ \ldots ,\ D^n_-\subset B_-$ for $S$ such that $D^1_-,\ D^2_-,\ \ldots ,\ D^n_-$ are pairwise disjoint. Let $\eta (D_+\cup D^1_-)$ denote a closed regular neighborhood of $D_+\cup D^1_-$ in $S^3$. By replacing subdisks of $D^2_-,\ \ldots ,\ D^n_-$ with subdisks of $\partial (\eta (D_+\cup D^1_-))\cap B_-$, we can arrange that $D^1_-,\ D^2_-,\ \ldots ,\ D^n_-$ are disjoint from $D_+$ except for the two points of $k\cap S$. Since we assumed $n>2$, one of the strict lower disks, denoted by $D_-$, is disjoint from $D_+$. The boundary of a regular neighborhood in $S^3$ of each $D_\varepsilon $ intersects $B_\varepsilon $ in an essential disk of $(B_\varepsilon ,\tau _\varepsilon )$. They guarantee that the Hempel distance is $1$.
Bridge diagram
--------------
We continue with the above notation. There are $n$ pairwise disjoint strict upper (resp. lower) disks $D^1_+,\ D^2_+,\ \ldots ,\ D^n_+$ (resp. $D^1_-,\ D^2_-,\ \ldots $, $D^n_-$) for $S$. The knot diagram of $K$ obtained by projecting $k$ into $S$ along these disks is called a [*bridge diagram*]{} of $k$. In the terminology of [@CF], $\tau _+,\ \tau _-$ are the overpasses and the underpasses of $k$.
Now let us describe how we can obtain a bridge diagram of the $4$-bridge position $\kappa $. Isotope $\kappa $ as in Figure \[braid\], and start with a bridge sphere $S=S_0$. There are canonical strict upper disks $D^1_+,\ D^2_+,\ D^3_+$ and $D^4_+$. Figure \[diagram-1\] illustrates a view of the arcs $D^1_+\cap S,\ D^2_+\cap S,\ D^3_+\cap S$ and $D^4_+\cap S$ on $S$ from $B_+$ side. Shifting the bridge sphere $S$ to $S_1$, the arcs can be seen as in Figure \[diagram-2\]. Shifting $S$ further to $S_2$ and to $S_3$, the arcs are as in Figure \[diagram-3\] and \[diagram-4\], respectively. By continuing this process, the arcs are as in Figure \[diagram-5\] when $S$ is at $S_{15}$. The picture grows more and more complicated as $S$ goes down. We include huge pictures in the back of this paper. Figure \[diagram-6\] illustrates the arcs when $S$ is at $S_{20}$, and finally Figure \[diagram-7\] when $S$ has arrived at $S_{25}$. Then we can find canonical strict lower disks $D^1_-,\ D^2_-,\ D^3_-,\ D^4_-$ and obtain a bridge diagram of $\kappa $.
![The position $\kappa $ after the isotopy, with the bridge spheres $S_0,S_1,\ldots ,S_{25}$ and the disks $D^1_\pm ,D^2_\pm ,D^3_\pm ,D^4_\pm $.[]{data-label="braid"}](braid.eps "fig:")
![In $S_0$.[]{data-label="diagram-1"}](diagram-1.eps "fig:")
![In $S_1$.[]{data-label="diagram-2"}](diagram-2.eps)
![In $S_2$.[]{data-label="diagram-3"}](diagram-3.eps)
![In $S_3$.[]{data-label="diagram-4"}](diagram-4.eps)
![In $S_{15}$.[]{data-label="diagram-5"}](diagram-5.eps)
Well-mixed condition
--------------------
Suppose again that $k$ is an $n$-bridge position of a knot $K$ with $n>2$ and $S$ is a bridge sphere of $k$. Let $B_+,\ B_-\subset S^3$ be the $3$-balls divided by $S$, and $\tau _\varepsilon $ be the $n$ arcs $k\cap B_\varepsilon $ for each $\varepsilon =\pm $. Let $D^1_+,\ D^2_+,\ \ldots ,\ D^n_+$ and $D^1_-,\ D^2_-,\ \ldots ,\ D^n_-$ be strict upper and lower disks for $S$ determining a bridge diagram of $k$.
Let $l$ be a loop on $S$ containing the arcs $D^1_-\cap S,\ D^2_-\cap S,\ \ldots ,\ D^n_-\cap S$ such that the arcs are located in $l$ in that order. We can assume that $D^1_+,\ D^2_+,\ \ldots ,\ D^n_+$ have been isotoped so that the arcs $D^1_+\cap S,\ D^2_+\cap S,\ \ldots ,\ D^n_+\cap S$ have minimal intersection with $l$. For the bridge diagram of Figure \[diagram-7\], it is natural to choose $l$ to be the one which is seen as a horizontal line. Let $H_+,\ H_-\subset S$ be the hemi-spheres divided by $l$ and let $\delta _i$ ($1\leq i\leq n$) be the component of $l\setminus (D^1_-\cup D^2_-\cup \cdots \cup D^n_-)$ which lies between $D^i_-\cap S$ and $D^{i+1}_-\cap S$. (Here the indices are considered modulo $n$.) Let ${\mathcal A}_{i,j,\varepsilon }$ be the collection of components of $(D^1_+\cup D^2_+\cup \cdots \cup D^n_+)\cap H_\varepsilon $ separating $\delta _i$ from $\delta _j$ in $H_\varepsilon $ for a distinct pair $i,\ j\in \{ 1,2,\ldots ,n\} $ and $\varepsilon \in \{ +,-\} $. For example, Figure \[(1,2,+)\] roughly displays ${\mathcal A}_{1,2,+}$ for the bridge diagram of Figure \[diagram-7\]. Note that ${\mathcal A}_{i,j,\varepsilon }$ consists of parallel arcs in $H_\varepsilon $.
![The collection ${\mathcal A}_{1,2,+}$ of arcs, which looks like gray bands. The loop $l$ divides $S$ into the hemi-spheres $H_+$ and $H_-$, and the arcs $\delta _1,\ \delta _2,\ \delta _3,\ \delta _4$ and $D^1_-\cap S,\ D^2_-\cap S,\ D^3_-\cap S,\ D^4_-\cap S$ alternate along $l$.[]{data-label="(1,2,+)"}](arcs.eps "fig:")
1. A bridge diagram satisfies the [*$(i,j,\varepsilon )$-well-mixed condition*]{} if in ${\mathcal A}_{i,j,\varepsilon }\subset H_\varepsilon $, a subarc of $D^r_+\cap S$ is adjacent to a subarc of $D^s_+\cap S$ for every distinct pair $r,\ s\in \{ 1,2,\ldots ,n\} $.
2. A bridge diagram satisfies the [*well-mixed condition*]{} if it satisfies the $(i,j,\varepsilon )$-well-mixed condition for every combination of a distinct pair $i,\ j\in \{ 1,2,\ldots ,n\} $ and $\varepsilon \in \{ +,-\} $.
By examining a detailed version of Figure \[(1,2,+)\], one can verify the $(1,2,+)$-well-mixed condition for the bridge diagram of Figure \[diagram-7\]. One can also verify the $(i,j,\varepsilon )$-well-mixed condition for all the other triples $(i,j,\varepsilon )$, which establishes the well-mixed condition. By Theorem \[criterion\], the Hempel distance of $\kappa $ is greater than $1$. By Lemma \[minimal\], we conclude the proof of Theorem \[main\].
We would like to remark that the Hempel distance of $\kappa $ is exactly $2$. Notice that the boundary of a regular neighborhood in $S$ of the closure of $\delta _3$ is a simple closed curve disjoint from both $D^1_+$ and $D^1_-$. Note that the boundary of a regular neighborhood in $S^3$ of each $D^1_\varepsilon $ intersects $B_\varepsilon $ in an essential disk of $(B_\varepsilon ,\tau _\varepsilon )$. They guarantee that the Hempel distance is at most $2$.
Related results and further directions
======================================
The knot of our example
-----------------------
Figure \[example\] shows that the bridge number of $\mathcal{K}$ is at most $3$. Since any locally minimal bridge position of any 2-bridge knot is globally minimal ([@O2], [@ST]), the bridge number of $\mathcal{K}$ is equal to $3$. Since $\mathcal{K}$ has a $4$-bridge position with the Hempel distance $2$, it is a prime knot by the following:
\[composite\] A bridge position of a composite knot has Hempel distance $1$.
Let $k$ be an $n$-bridge position of a composite knot and $S$ be a bridge sphere of $k$. Let $B_+,\ B_-\subset S^3$ be the $3$-balls divided by $S$, and $\tau _\varepsilon $ be the $n$ arcs $k\cap B_\varepsilon $ for each $\varepsilon =\pm $. By the arguments in [@Sc], [@Sch1], it follows that any decomposing sphere for $k$ can be isotoped so that it intersects $S$ in a single loop. Then, on opposite sides of the decomposing sphere, there are two essential disks $E_+,\ E_-$ of $(B_+,\tau _+),\ (B_-,\tau _-)$, respectively, such that $\partial E_+\cap \partial E_-=\emptyset$. This shows that the Hempel distance is $1$.
Furthermore, $\mathcal{K}$ is hyperbolic since any locally minimal bridge position of any torus knot is globally minimal ([@OZ2]). Thus, $\mathcal{K}$ is a hyperbolic 3-bridge knot which admits a $4$-bridge position with the Hempel distance 2.
We expect that not only $\kappa $ but also many $4$-bridge positions of knots with the same projection image as that of Figure \[example\] give examples for Theorem \[main\]. However, only finitely many knots have the same projection image, and we would like to ask the following problem.
For an integer $n>3$, can we generate infinitely many $n$-bridge positions which are locally minimal, but not globally minimal?
We further expect that for some integers $n>m\geq 3$, we can find a locally minimal $n$-bridge position of an $m$-bridge knot with a projection image similar to that of Figure \[example\]. However, it seems difficult to find more than two locally minimal bridge positions of such a knot, and we would like to ask the following problem.
Does any knot have infinitely many locally minimal bridge positions?
It should be remarked that there exist only finitely many bridge positions of given bridge numbers for a hyperbolic knot ([@C]). In particular, there are finitely many globally minimal bridge positions of a hyperbolic knot. It should be also remarked that multiple bridge surfaces restrict Hempel distances ([@T2]).
Essential surfaces
------------------
Composite knots are a simple example of knots with essential surfaces properly embedded in the exteriors of their representatives. Theorem \[composite\] suggests that essential surfaces restrict Hempel distances. Bachman–Schleimer showed it in general.
\[bound\] Let $F$ be an orientable essential surface properly embedded in the exterior of a bridge position $k$ of a knot. Then the Hempel distance of $k$ is bounded above by twice the genus of $F$ plus $|\partial F|$.
By Theorem \[bound\], if a knot exterior contains an essential annulus or an essential torus, then the Hempel distance of a bridge position is at most 2. Therefore, if there exists a bridge position of a knot with the Hempel distance at least 3, then the knot is hyperbolic. The properties of our knot $\mathcal{K}$ can be compared with it.
A knot without an essential surface with meridional boundary in the exterior of its representative is called a [*meridionally small knot*]{}. For example, the trivial knot, 2-bridge knots and torus knots are known to be meridionally small. As we mentioned in Section \[intro\], these knots also have the nice property that any nonminimal bridge position is stabilized. We say that a knot is [*destabilizable*]{} if it has this property. Zupan showed that any cabled knot $J$ of a meridionally small knot $K$ is also meridionally small, and that if $K$ is destabilizable, then $J$ is also destabilizable ([@Z2]). Then, the following problem is naturally proposed.
\[small\] Is there a relation between meridionally small knots and destabilizable knots?
We remark that a bridge position of a meridionally small knot is locally minimal if and only if the Hempel distance is greater than $1$ by Lemma \[minimal\] and the following fundamental result:
\[WR\] If a bridge position of a knot has Hempel distance $1$, then either it is stabilized or the knot exterior contains an essential surface with meridional boundary.
On the other hand, it is not always true that if the knot exterior contains an essential surface with meridional boundary, then a bridge position has Hempel distance $1$. For example, [@HK Example 5.1] shows that a $3$-bridge position of $8_{16}$ has Hempel distance greater than $1$, but the knot exterior contains an essential surface with meridional boundary.
Distance between bridge positions
---------------------------------
Theorem \[fundamental1\] allows us to define a distance between two bridge positions of a knot, which we call the [*Birman distance*]{}. That is to say, the Birman distance between two bridge positions is the minimum number of Type I moves and the inverse operations relating the bridge positions up to isotopy. For example, the Birman distance between an $n$-bridge position and an $m$-bridge position of the trivial knot is always $|n-m|$ by Theorem \[Otal\]. The Birman distance between $\kappa $ and the $3$-bridge position of $\mathcal{K}$ is at least $3$ since $\kappa $ is locally minimal. In fact, we can see that it is at most $5$ by observing the $(\pi /2)$-rotation of $\kappa $.
Johnson–Tomova gave an upper bound for the Birman distance between two bridge positions with high Hempel distance which are obtained from each other by flipping, namely the rotation of $S^3$ exchanging the poles.
For an integer $n\geq 3$, if an $n$-bridge position $k$ of a prime knot has Hempel distance at least $4n$, then the Birman distance between $k$ and the flipped bridge position of $k$ is $2n-2$.
They also gave the following, which holds even if we consider bridge positions modulo flipping.
For an integer $n\geq 2$, there exists a composite knot with a $2n$-bridge position and a $(2n-1)$-bridge position such that the Birman distance is at least $2n-7$.
We remark that the $2n$-bridge position is not locally minimal, and hence it does not answer Problem \[question\]. It turns out that there are two $(2n-1)$-bridge positions such that the Birman distance is at least $2n-6$. The following are major problems.
Determine or estimate the Birman distance in terms of some invariants of the bridge positions.
For a given $n$, does there exist a universal upper bound for the Birman distance between locally minimal bridge positions of every $n$-bridge knot?
[**Acknowledgements.**]{} The authors are grateful to Joel Hass for suggesting the construction of the bridge position $\kappa $. They would also like to thank Alexander Zupan for valuable comments.
[10]{} J. W. Alexander and G. B. Briggs, [*On types of knotted curves*]{}, Ann. of Math. (2) [**28**]{} (1926/27), no. 1-4, 562–586. J. S. Birman, [*On the stable equivalence of plat representations of knots and links*]{}, Canad. J. Math. [**28**]{} (1976), no. 2, 264–290. D. Bachman and S. Schleimer, [*Distance and bridge position*]{}, Pacific J. Math. [**219**]{} (2005), no. 2, 221–235. A. Coward, [*Algorithmically detecting the bridge number of hyperbolic knots*]{}, arXiv:0710.1262. R. H. Crowell and R. H. Fox, Introduction to knot theory, Reprint of the 1963 original, Graduate Texts in Mathematics, [**57**]{}, Springer-Verlag, New York-Heidelberg, 1977. D. Gabai, [*Foliations and the topology of 3-manifolds. III*]{}, J. Differential Geom. [**26**]{} (1987), no. 3, 479–536. C. Hayashi, [*Stable equivalence of Heegaard splittings of $1$-submanifolds in $3$-manifolds*]{}, Kobe J. Math. [**15**]{} (1998), no. 2, 147–156. C. Hayashi and K. Shimokawa, [*Heegaard splittings of the trivial knot*]{}, J. Knot Theory Ramifications [**7**]{} (1998), no. 8, 1073–1085. C. Hayashi and K. Shimokawa, [*Thin position of a pair (3-manifold, 1-submanifold)*]{}, Pacific J. Math. [**197**]{} (2001), no. 2, 301–324. D. J. Heath and T. Kobayashi, [*Essential tangle decomposition from thin position of a link*]{}, Pacific J. Math. [**179**]{} (1997), no. 1, 101–117. J. Johnson and M. Tomova, [*Flipping bridge surfaces and bounds on the stable bridge number*]{}, Algebr. Geom. Topol. [**11**]{} (2011), no. 4, 1987–2005. J.-P. Otal, [*Présentations en ponts du n[œ]{}ud trivial*]{}, C. R. Acad. Sci. Paris Sér. I Math. [**294**]{} (1982), no. 16, 553–556. J.-P. Otal, [*Presentations en ponts des n[œ]{}uds rationnels*]{}, Low-dimensional topology (Chelwood Gate, 1982), 143–160, London Math. Soc. Lecture Note Ser., 95, Cambridge Univ. Press, Cambridge, 1985. M. Ozawa, [*Bridge position and the representativity of spatial graphs*]{}, Topology and its Appl. [**159**]{} (2012), no. 4, 936–947. M. Ozawa, [*Non-minimal bridge positions of torus knots are stabilized*]{}, Math. Proc. Cambridge Philos. Soc. [**151**]{} (2011) 307–317. K. Reidemeister, [*Elementare Begründung der Knotentheorie*]{}, Abh. Math. Sem. Univ. Hamburg [**5**]{} (1927) 24–32. M. Scharlemann, [*Thin position in the theory of classical knots*]{}, Handbook of knot theory, 429–459, Elsevier B. V., Amsterdam, 2005. M. Scharlemann and M. Tomova, [*Uniqueness of bridge surfaces for 2-bridge knots*]{}, Math. Proc. Cambridge Philos. Soc. [**144**]{} (2008), no. 3, 639–650. H. Schubert, [*Über eine numerische Knoteninvariante*]{}, Math. Z. [**61**]{} (1954), 245–288. J. Schultens, [*Additivity of bridge numbers of knots*]{}, Math. Proc. Cambridge Philos. Soc. [**135**]{} (2003), no. 3, 539–544. J. Schultens, [*Width complexes for knots and 3-manifolds*]{}, Pacific J. Math. [**239**]{} (2009), no. 1, 135–156. K. Takao, [*Bridge decompositions with distances at least two*]{}, Hiroshima Math. J. [**42**]{} (2012), no. 2, 161–168. M. Tomova, [*Thin position for knots in a 3-manifold*]{}, J. Lond. Math. Soc. (2) [**80**]{} (2009), no. 1, 85–98. M. Tomova, [*Multiple bridge surfaces restrict knot distance*]{}, Algebr. Geom. Topol. [**7**]{} (2007), 957–1006. A. Zupan, [*Properties of knots preserved by cabling*]{}, Comm. Anal. Geom. [**19**]{} (2011), no. 3, 541–562. A. Zupan, [*Unexpected local minima in the width complexes for knots*]{}, Algebr. Geom. Topol. [**11**]{} (2011), no. 2, 1097–1105.
![The arcs $D^1_+\cap S,\ D^2_+\cap S,\ D^3_+\cap S$ and $D^4_+\cap S$ in $S=S_{20}$, which extend to the next page.[]{data-label="diagram-6"}](diagram-6.eps)
\
The right part of Figure \[diagram-6\].
![A bridge diagram of the $4$-bridge position, which is decomposed into the following 14 pages.[]{data-label="diagram-7"}](parts.eps "fig:")\
$(+,1)$$(+,2)$$(+,3)$$(+,4)$$(+,5)$$(+,6)$$(+,7)$\
$(-,1)$$(-,2)$$(-,3)$$(-,4)$$(-,5)$$(-,6)$$(-,7)$\
\
The part $(+,1)$ of Figure \[diagram-7\].
\
The part $(+,2)$ of Figure \[diagram-7\].
\
The part $(+,3)$ of Figure \[diagram-7\].
\
The part $(+,4)$ of Figure \[diagram-7\].
\
The part $(+,5)$ of Figure \[diagram-7\].
\
The part $(+,6)$ of Figure \[diagram-7\].
\
The part $(+,7)$ of Figure \[diagram-7\].
\
The part $(-,1)$ of Figure \[diagram-7\].
\
The part $(-,2)$ of Figure \[diagram-7\].
\
The part $(-,3)$ of Figure \[diagram-7\].
\
The part $(-,4)$ of Figure \[diagram-7\].
\
The part $(-,5)$ of Figure \[diagram-7\].
\
The part $(-,6)$ of Figure \[diagram-7\].
\
The part $(-,7)$ of Figure \[diagram-7\].
[^1]: The first author is partially supported by Grant-in-Aid for Scientific Research (C) (No. 23540105), The Ministry of Education, Culture, Sports, Science and Technology, Japan
---
abstract: 'We use the analytical model recently introduced in Ref. [@lp92] to investigate the statistics of temperature fluctuations on the cosmic microwave background (CMB) induced by topological defects. The cases of cosmic strings and textures are studied. We derive analytically the characteristic function of the probability distribution for ${\delta T}\over T$ and use it to obtain the lowest twelve moments, including the skewness and the kurtosis. The distribution function is also obtained and compared with the Gaussian distribution, thus identifying long non-Gaussian tails. We show that for both cosmic strings and textures all odd moments (including the skewness) vanish, while the relative deviation from the Gaussian for even moments increases with the order of the moment. The non-Gaussian signatures of textures, derived from the distribution function and the moments, are found to be much more prominent than the corresponding signatures for strings. We discuss the physical origin of this result.'
author:
- 'Leandros Perivolaropoulos[^1] [^2]'
title: '**On the Statistics of CMB Fluctuations Induced by Topological Defects**'
---
**Introduction**
================
Theoretical models for large scale structure formation can be divided into two classes according to the type of primordial perturbations they consider: models based on adiabatic Gaussian perturbations produced during inflation, and models based on non-Gaussian perturbations which are naturally provided by topological defects. The cold dark matter (CDM) model, based on Gaussian primordial fluctuations, produces an evolved density field with small and intermediate scale structure in reasonable agreement with observations[@wdef87]. Recent observations on large scales, however, have created significant challenges for the CDM model. One of the most serious such challenges comes from the recent detection of anisotropy in the CMB by the DMR (Differential Microwave Radiometer) instrument of the COBE (Cosmic Background Explorer) satellite[@cb92].
This discovery has provided a new powerful experimental probe for testing theoretical models for large scale structure formation. The temperature sky maps constructed by DMR were used to obtain the rms sky variation $\sqrt{<({{\Delta T}\over T})^2>}$ (where $\Delta T\equiv
T(\theta_1)-T(\theta_2)$, and $\theta_1-\theta_2=60^\circ$ is the beam separation in the COBE experiment) and the rms quadrupole amplitude. A fit of the data to spherical harmonic expansion has also provided the angular temperature auto-correlation function $C(\Delta\theta)\equiv <{{\delta
T}\over T}(\theta) {{\delta T}\over T} (\theta ^\prime)>$ where $<>$ denotes averaging over all directions in the sky, $\delta T(\theta)\equiv
T(\theta)-<T>$ and $\Delta \theta=\theta -\theta^\prime$. This result was then used to obtain the rms-quadrupole-normalized amplitude $Q_{rms-PS}$ and the index $n$ of the power law fluctuation spectrum assumed to be of the form $P(k) \sim k^n$. The published results are: $$n=1.1 \pm 0.5$$ $$Q_{rms-PS}=(5.96\pm 0.75)\times 10^{-6}
\eqno(1.1)$$ $$({{\Delta T}\over T})_{rms}=(1.1 \pm 0.2)\times 10^{-5}$$ Severe constraints are imposed on several cosmological models due to these results. For example, the CDM model with bias $1.5\leq b_8 \leq 2.5$ is inconsistent with the COBE results for $H_0 > 50
km/(sec\cdot Mpc)$ and is barely consistent for $H_0\simeq 50 km/(sec\cdot
Mpc)$ [@cbt92] [@be87]. It is therefore interesting to investigate the consistency of alternative models with respect to the COBE measurements. The natural alternatives to models based on adiabatic Gaussian perturbations generated during inflation are models where the primordial perturbations are created by topological defects like cosmic strings, global monopoles[@lp92a] or textures[@t89].
In a recent paper [@lp92] we introduced an analytical model and used it to study the effects of cosmic strings on the microwave background. Our model was based on counting random multiple impulses, inflicted on photon trajectories by the string network between the time of recombination and today. After constructing the temperature auto-correlation function, we used it to obtain the effective power spectrum index n, the rms-quadrupole-normalized amplitude $Q_{rms-PS}$ and the rms temperature variation smoothed on small angular scales. For the values of the scaling solution parameters obtained in Refs.[@bb90],[@as90] we showed that $$n=1.14 \pm 0.5$$ $$Q_{rms-PS}=(4.5\pm1.5) G\mu
\eqno(1.2)$$ $$({{\Delta T}\over T})_{rms}=5.5 G\mu$$ where $\mu$ is the mass per unit length of the string (the single free parameter in the cosmic string model for structure formation) and $G$ is Newton’s constant.
Demanding consistency of our results with the COBE results (1.1) leads to [@lp92] $$G\mu=(1.7\pm 0.7)\times 10^{-6}
\eqno(1.3)$$ in good agreement with direct normalization of $\mu$ from galaxy[@tb86] and large scale structure[@vv91][@pbs90] observations (for more recent studies of the cosmic string model for structure formation see Ref. [@as92a]). We concluded that the cosmic string model remains a viable model consistent with the up to now announced COBE data. Similar results to those presented in Ref. [@lp92] have been obtained by numerical simulations of the string network from the time of recombination to a small redshift[@bbs88]. These studies however, in contrast to our analytical model, do not attempt to take into account compensation (for a recent study of the effects of compensation see Ref. [@vs92]) and allow the string deficit angle to extend over the whole volume of their simulation. In addition they are constrained to fixed values of the scaling solution parameters produced by simulations which are still subject to some controversy[@at89][@at85][@as90][@bb90]. Other interesting studies [@ttb86] have used the old picture for the cosmic string network[@at85], based on loops, to analytically calculate the effects of strings on the CMB. Recent simulations[@bb90][@as90] however, have shown that the dominant component of the scaling solution consists of long strings rather than loops.
Our analysis had utilized the data concerning the [*amplitude*]{} and the [*spectrum*]{} of the detected fluctuations in order to test the cosmic string model. This type of test can check the consistency of the model with the data but it can not distinguish it from other consistent theories. There are two basic reasons for this: First, the $n=1$ Zeldovich spectrum is fairly generic in physically motivated theories and second, other theories like standard CDM, can also pass the amplitude normalization test [@s92b] by utilizing tensor mode perturbations.
It is therefore clear that further tests are needed in order to distinguish between the topological defect models and other theories. One of the most interesting such tests is the subject of this paper: [*the study of the statistics of the CMB fluctuations*]{}. This is a particularly interesting issue in view of the existing temperature fluctuation sky maps obtained by COBE which are currently subjected to statistical analysis by the COBE collaboration. Such an analysis can reveal the characteristic signature of the source that produced the CMB fluctuations. The identification of this signature for topological defect models is the focus of the present work.
Models based on fluctuations generated during inflation predict (in their generic form) a Gaussian distribution function for the CMB temperature fluctuations. On the other hand, models based on topological defects, like cosmic strings or textures, are distinguished due to the particular type of non-Gaussian fluctuations they predict.
There is an extensive literature on the statistical effects of various types of non-Gaussian perturbations (for a small sample see Ref. [@ls92]). On the other hand, the literature on the statistics of seed-like perturbations is much more limited (see e.g. [@gpjbbbs90][@b92][@sb91]). Previous analytical studies on the type of non-Gaussian statistics induced by cosmic strings and other seed-like perturbations have focused on the effects of point-like seeds on the statistics of large scale structure[@sb91]. They have considered the superposition of overdense, spherically symmetric kernels, thus obtaining the density distribution function and other statistical properties. Those studies, even though elegant and convenient for large scale structure considerations, can not be easily applied to the CMB case. The basic reason for this is the need for the superposition of variable size (and in general non-spherical) kernels to account for the effects of compensation in different Hubble times and horizon scales. The multiple impulse approximation of Ref. [@lp92] (see also Ref. [@v92] for an alternative application of the method) can incorporate this kernel variability and it is the method that we used in this work.
In particular, we use the multiple impulse approximation to derive the characteristic function, the temperature distribution function and several moments of cosmic string and texture induced CMB fluctuations. These results are then compared with the corresponding Gaussian results in an effort to find signatures of the cosmic string and texture models. In section 2 we give a brief review of our model in order to clarify the basic assumptions made (for a more complete account see Ref.[@lp92]). We also associate with it a set of statistical experiments. In section 3 we introduce some basic statistical quantities and derive the characteristic functions for the statistical processes described in section 2. In particular, we derive the characteristic functions that correspond to string, texture and Gaussian temperature patterns. Finally, in section 4 we use the characteristic functions to obtain the probability distributions and the lowest twelve moments in each of the three cases. We also compare and discuss the obtained results.
**The Model**
=============
We begin by reviewing the model introduced in Ref. [@lp92] which was originally designed to approximate the cosmic string induced temperature fluctuations. It will be seen that with some modifications it can also describe texture induced temperature fluctuations.
According to cosmic string simulations[@at89][@bb90][@as90], at any time $t$ there are about 10 long string segments with a typical curvature radius $t$ passing through each horizon volume. There is a well-defined mechanism[@ks84] by which these segments give rise to localized linear temperature discontinuities (Fig. 1). Photons passing on different sides of a long straight string moving with velocity $v_s$ perpendicular to the line of sight reach the observer with a Doppler shift[@ks84] $${{\delta T}\over T} = \pm 4\pi G\mu v_s \gamma_s$$ Our goal is to find the combined effects of temperature fluctuations induced by strings present between the time of recombination $t_{rec}$ and today.
We approximate the photon path from the recombination time $t_{rec}$ to the present time $t_0$ by a discrete set of $M=16$ Hubble time-steps $t_i$ such that $t_{i+1}=2 t_i$ (${t_0\over t_{rec}}\simeq 2^{16}$ for $z_{rec}=1400$). We assume that the effects of long strings between the time of recombination and today give the dominant contribution to temperature fluctuations and therefore we consider photons emerging from the last scattering surface at $t_{rec}$ with a fixed uniform temperature. A beam of photons coming from a fixed direction will initially suffer $n_h$ ‘kicks’ from the $n_h\simeq 10$ long strings within the horizon at $t_1 = t_{rec}$ (the linear superposition of ‘kicks’ is consistent with the multi-string metric presented in Ref.[@g91]). Each temperature ‘kick’ from a string with arbitrary orientation with respect to the observer induces a fluctuation ${\delta T}\over T$ of the form $${{\delta T}\over T} = \pm 4\pi G\mu \beta
\eqno(2.1)$$ with $$\beta={\hat k}\cdot({\vec v_s}\gamma_s \times {\hat s})$$ where $\hat{k}$, $\hat{s}$ and $\vec{v}_s$ are the unit wave-vector, the unit vector along the string and the string velocity vector respectively. The sign changes along the string[@ks84]. At the next Hubble time-step $t_2$, $n_h$ further ‘kicks’, uncorrelated with the initial ones, will be inflicted on the photon beam by the strings present within the new horizon scale $t_2 = 2 t_1$. The process will continue until the $M=16$ Hubble time-step corresponding to the present time $t_0$ (Fig. 2).
The physical process described above, corresponds to the following set of successive statistical experiments. For simplicity we will first consider the one dimensional case. The generalization to the realistic two dimensional case will then be made in a straightforward way.
Consider a one dimensional (continuous or not) set of temperature pixels with initially uniform temperature distribution and fixed total length L. Consider now a localized step-function perturbation imposed on this surface. Such a perturbation would increase the temperature of a fixed small length $l_0$ by an amount ${{\delta T}\over T}=\delta$ but would decrease the temperature of a neighbouring equal length by the same amount (see Fig. 3). Thus after this ‘trial’ each pixel of the set has probability $p(1)={l_0\over L}\equiv p_0$ to have been positively perturbed, the same probability to have been negatively perturbed ($p(2)=p(1)\equiv p_0$) and probability $p(3)=1-{2l_0\over L}=1-2
p_0$ to have remained unperturbed. Let this ‘trial’ repeat $n_0$ times before the first ‘experiment’ is completed. The next step is to repeat this experiment with a new scale for the step-function $l_1=2 l_0$ and $n_1=n_0/2$ number of ‘trials’. The successive experiments continue until $l_M=l_0 2^M$ and $n_M=n_0/2^M$. We demand $l_M={L\over 2}$ and $n_M=n_h$ where $n_h$ is a fixed positive integer to be identified with the number of long strings per horizon volume (see below). Therefore $l_0={L\over 2^{M+1}}$ and $n_0=n_h 2^M$. This implies that $$p_j= 2^j p_0={2^j\over 2^{M+1}}
\eqno(2.2)$$ $$n_j={n_0 \over 2^j}= n_h {2^M \over 2^j}
\eqno(2.3)$$ with $j=0,1,...,M$. The above described statistical process corresponds to our physical model provided that the following identifications are made:
- The fixed length L is identified with the scale of the present horizon.
- Each step-function perturbation with scale $l_j$ is identified with a cosmic string perturbation induced at the Hubble time-step $t_j$. At this time the horizon scale is $t_j=2 l_j$. Compensation confines the string induced perturbation within the horizon scale $2 l_j$. Clearly $2
l_0$ is to be identified with the horizon scale at recombination while $t_M=2
l_M$ (M is the final step) should be identified with the present horizon scale L.
- Each ‘experiment’ is identified with a Hubble time-step. During the $j$th ‘experiment’ there are $n_j={{n_h 2^M}\over 2^j}$ ‘trials’ (string perturbations) corresponding to the number of strings per horizon length (volume) $n_h$ times the total number of horizons $2^M\over 2^j$ within the fixed length L at the $j$th step. Clearly for $j=M$ there is only the present horizon included and the total number of ‘trials’(string perturbations) is $n_M=n_h$.
The above described process needs to be improved in two ways in order to correctly approximate the physics:
1. The one dimensional set of pixels must be promoted to a two dimensional one.
2. The amplitude of the step-function perturbation must be allowed to vary in order to account for varying string velocities (varying parameter $\beta$ in equation (2.1)). Without introducing this improvement our analysis would still be interesting but it would approximate the perturbations induced by textures rather than strings. In the case of textures there is no velocity parameter involved but there are still equal magnitude positive and negative fluctuations depending on whether the photon ‘falls in’ or ‘climbs out of’ the texture[@ts90].
Before we incorporate these modifications we proceed to study the statistics of the perturbations in the above described simplest case. This will help illustrate our method more clearly while making its generalization a simple and straightforward task.
**The Statistics**
==================
Let us first focus on a particular ‘experiment’ $j$ (Hubble time-step). Define $f(n_j,k_j)$ to be the probability that any given pixel will have been perturbed by ${{\delta T}\over T}=k_j \delta$ at the end of the $n_j$ ‘trials’ (string perturbations). Since at each ‘trial’ there are three possible outcomes with known fixed probabilities $p(1)=p(2)={l_j\over L}\equiv p_j$, $p(3)=1-2{l_j\over L}=1-2p_j$, the probability distribution $f(n_j,k_j)$ may be obtained from the well-known trinomial distribution (a simple generalization of the binomial). The trinomial distribution $f_{n_j}
(x(1),x(2),x(3))$ gives the probability that after $n_j$ ‘trials’ with three possible outcomes, there are $x(i)$ occurences of outcome $i$ ($i=1,2,3$) (obviously $x(1)+x(2)+x(3)=n_j$). The trinomial distribution is: $$f_{n_j} (x(1),x(2),x(3))={{n_j!}\over {x(1) ! x(2) ! x(3) !}}
p(1)^{x(1)} p(2)^{x(2)} p(3)^{x(3)}
\eqno(3.1)$$ Let $x(1)$ ($x(2)$) be the number of positive (negative) temperature shifts for a given pixel while $x(3)$ is the number of ‘trials’ that lead to no shift. Using the relations $x(1)+x(2)+x(3)=n_j$ and $k_j=x(1)-x(2)$ to change variables from $x(1),x(2),x(3)$ to $n_j,k_j,x(3)$ and summing over the possible $x(3)$ we obtain $f(n_j,k_j)$: $$f(n_j,k_j)=\sum_{x(3),2}{{n_j! p_j^{n_j-x(3)} (1-2p_j)^{x(3)}}\over
{({{n_j+k_j-x(3)}\over 2})! ({{n_j-k_j-x(3)}\over 2})! {x(3)}!}}
\eqno(3.2)$$ where the sum extends over all integer values of $x(3)$ for which $n_j\pm
k_j-x(3)$ is even. (Notice that terms involving factorials of negative numbers vanish automatically.)
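As a quick sanity check (not part of the original derivation), the marginal distribution (3.2) can be compared with a direct enumeration over the trinomial outcomes; the following minimal Python sketch, with illustrative values of $n$ and $p$ chosen only for the test, does this and also checks that the probabilities sum to one.

```python
# A minimal sketch (ours, not part of the original derivation): the marginal
# probability (3.2) for k = x(1) - x(2) is compared with a direct enumeration
# over the trinomial outcomes.  The values of n and p below are illustrative.
from math import factorial

def f_marginal(n, p, k):
    """Equation (3.2): sum over x(3) with n +/- k - x(3) even and nonnegative."""
    total = 0.0
    for x3 in range(n + 1):
        a, b = n + k - x3, n - k - x3      # a = 2 x(1), b = 2 x(2)
        if a < 0 or b < 0 or a % 2 or b % 2:
            continue
        total += (factorial(n) / (factorial(a // 2) * factorial(b // 2) * factorial(x3))
                  * p ** (n - x3) * (1 - 2 * p) ** x3)
    return total

def f_direct(n, p, k):
    """The same probability by summing the trinomial (3.1) over x(1), x(2)."""
    total = 0.0
    for x1 in range(n + 1):
        x2 = x1 - k
        if 0 <= x2 <= n - x1:
            x3 = n - x1 - x2
            total += (factorial(n) / (factorial(x1) * factorial(x2) * factorial(x3))
                      * p ** (x1 + x2) * (1 - 2 * p) ** x3)
    return total

n, p = 10, 0.1
assert all(abs(f_marginal(n, p, k) - f_direct(n, p, k)) < 1e-12 for k in range(-n, n + 1))
print(sum(f_marginal(n, p, k) for k in range(-n, n + 1)))   # should be ~1.0
```

Both routes agree up to rounding, as they must, since (3.2) is simply the trinomial distribution marginalized over $x(3)$.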
A more useful and much simpler function that describes the statistics is the [*characteristic function*]{} $\phi (n,\omega)$ of the distribution. This has two important properties:
1. It is the Fourier transform of the probability distribution i.e. $$\phi (n,\omega)=\sum_{k=-n}^n e^{i\omega k} f(n,k)
\eqno(3.3)$$ $$f(n,k)={1\over {2 \pi}}\int_0 ^{2\pi} e^{-i \omega k} \phi (n,\omega)\, d\omega
\eqno(3.4)$$
2. It can generate all moments $<k^m>$ of the distribution by differentiation i.e. $$<k^m>=(-1)^m i^m {{d^m} \over {d \omega ^m}}\phi (n,\omega)\vert_{\omega = 0}
\eqno(3.5)$$
Using the property $$(p(1)+p(2)+p(3))^n=\sum_{x(1)+x(2)+x(3)=n} f_n(x(1),x(2),x(3))$$ and equation (3.3) it is straightforward to show that the characteristic function for the variable $k_j=x(1)-x(2)$ is: $$\phi (n_j,p_j,\omega)=(2p_j\cos(\omega) + (1-2p_j))^{n_j}
\eqno(3.6)$$ However, we are interested in a multiple ‘experiment’ process i.e. we are looking for the distribution function of the variable: $$K=\sum_{i=0}^{M\simeq 16} k_i
\eqno(3.7)$$ where $K$ is to be identified with the total temperature fluctuation ${{\delta
T}\over T}$ at the present time $t_0$. It is straightforward to show [@f71] that the characteristic function for a sum of independent random variables is equal to the product of the individual characteristic functions. Therefore, the characteristic function $\Phi(\omega)$ corresponding to the variable K is: $$\Phi (\omega)= \prod_{j=0}^{M\simeq 16} \phi(n_j,p_j,\omega)
\eqno(3.8)$$ This result may now be used to obtain the probability distribution by the Fourier transform (3.4). It may also be used to obtain all the moments of the distribution either by differentiation (using equation (3.5)) or by direct integration, using the distribution function. However, we must first identify correctly the parameters $n_j$, $p_j$ for the physical problem under consideration. Clearly, equations (2.2), (2.3) need to be modified to account for the propagation of a surface (photon wavefront) rather than a line, through the string network. It is straightforward to generalize arguments leading to (2.2) and (2.3) to the two dimensional case to obtain: $$p_j= 4^j p_0={2\times 4^j\over 4^{M+1}}
\eqno(3.9)$$ $$n_j={n_0 \over 4^j}= n_h {4^M \over 4^j}
\eqno(3.10)$$ Using (3.9) and (3.10) in (3.8) we obtain: $$\Phi_t (\omega)= \prod_{j=0}^{M\simeq 16} (4^{j-M} \cos(\omega) + (1-
4^{j-M}))^{n_h 4^{(M-j)}}
\eqno(3.11)$$ where the subscript $t$ denotes that this result is valid only for the case of texture-like perturbations where the magnitude of the individual perturbations (step-function magnitude) can be considered fixed. In the case of strings we must generalize (3.11) further in order to account for the variable parameter $\beta$ of equation (2.1) which represents the velocity and string orientation dependence of the perturbations. This generalization will lead to the multinomial distribution.
Consider the above described process of successive perturbations with the additional feature of allowing $Q$ possible magnitudes for the applied step-function perturbations. Let $p_j^i$ be the probability for any temperature pixel to be perturbed $i$ units at the $j$th Hubble time-step where $i=-Q,...,+Q$ and $j=0,...,M$. We now have a total of $2Q+1$ possible outcomes for each ‘trial’. Therefore, the distribution function can be obtained from a generalization of the trinomial: [*the multinomial distribution*]{}. The multinomial distribution $f_n (x(1),...,x(R))$ gives the probability that an experiment consisting of n ‘trials’ each with R possible outcomes will result to $x(i)$ occurrences of the $i$th outcome ($i=1,...,R$) given that the probabilities for each outcome are $p(1),...,p(R)$. In direct correspondence with the trinomial, $f_n (x(1),...,x(R))$ is the general term of the multinomial expansion $(p(1)+p(2)+...+p(R))^n$. The interesting variable for our purposes is the total temperature fluctuation which at the $j$th time-step is given by $$k_j=\sum_{i=1}^Q i (x_j^i-x_j^{-i})$$ where $x_j^i$ is the number of $i$ unit perturbations at the $j$th Hubble time-step. Using the multinomial expansion and equation (3.3), it is straightforward to obtain the characteristic function for the distribution of $k_j$. The result is: $$\phi (n_j,p_j^1,...,p_j^Q,\omega)=(2\sum_{i=1}^Q p_j^i \cos(i\hspace{2mm}\omega
) +
(1-2 \sum_{i=1}^Q p_j^i))^{n_j}
\eqno(3.12)$$ In order to proceed further we must specify the probability distribution of the step-function magnitudes i.e. the dependence of $p_j^i$ on the index $i$. The simplest physically interesting choice is the distribution $$p_j^i={{2\times 4^{j-M-1}}\over Q}
\eqno(3.13)$$ which corresponds to a uniform distribution of the parameter $\beta$ in equation (2.1) in the range $[0,\beta_{max}]$ where $\beta_{max}$ should be chosen to be about unity.
Using now (3.12), (3.13) and (3.10) in (3.8) we obtain the generalization of (3.11) appropriate for cosmic strings: $$\Phi_s (\omega)= \prod_{j=0}^{M\simeq 16} ({4^{j-M}\over Q} \sum_{i=1}^Q
\cos(i\hspace{2mm}\omega) + (1- 4^{j-M}))^{n_h 4^{M-j}}
\eqno(3.14)$$ which along with (3.11) is the central result of this work. In what follows we use the results (3.11) and (3.14) to obtain the probability distributions and several moments for temperature fluctuations induced by textures and cosmic strings. In accordance with the accepted values of the scaling solution parameters we will use $n_h=10$ for strings[@bb90][@as90] and $n_h=0.04$ for textures[@stpr91]. Our goal is to compare the derived results with those corresponding to a Gaussian distribution. It is therefore instructive to first investigate if and under what conditions can we obtain a Gaussian limit for the characteristic functions (3.11) and (3.14).
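Before doing so, the two products (3.11) and (3.14) can be checked numerically; the short Python sketch below (ours, with arbitrarily chosen helper names) evaluates them near $\omega = 0$, where every factor is positive and the product can safely be computed in log-space, and reads off the variance of $K$ from the curvature of $\Phi $ at the origin. These variances are exactly the quantities $n_h(M+1)$ and $n_h(M+1)(Q+1)(2Q+1)/6$ that enter the standardizations (3.15) and (3.18) below.

```python
# A small numerical check (ours) of the products (3.11) and (3.14).  Near
# omega = 0 every factor is positive, so the product can be evaluated safely
# in log-space; the curvature of Phi at the origin gives the variance of K.
import numpy as np

def log_phi(omega, kick_cf, M=16, n_h=10.0):
    """log of Prod_j (2 p_j c(omega) + 1 - 2 p_j)^(n_j), with 2 p_j = 4^(j-M)
    and n_j = n_h 4^(M-j); c is the characteristic function of a single kick."""
    w = np.asarray(omega, dtype=float)
    out = np.zeros_like(w)
    for j in range(M + 1):
        out += n_h * 4.0 ** (M - j) * np.log1p(4.0 ** (j - M) * (kick_cf(w) - 1.0))
    return out

def texture_kick(w):                     # single kick of +/- one unit, eq. (3.11)
    return np.cos(w)

def string_kick(w, Q=5):                 # kicks of 1..Q units, eq. (3.14)
    return np.mean(np.cos(np.multiply.outer(np.arange(1.0, Q + 1.0), w)), axis=0)

h = 1e-4                                 # Phi(w) ~ 1 - <K^2> w^2 / 2 near the origin
cases = [("texture", texture_kick, 0.04, 0.04 * 17),        # n_h (M+1)
         ("string ", string_kick, 10.0, 10.0 * 17 * 11)]     # n_h (M+1)(Q+1)(2Q+1)/6, Q=5
for name, kick, n_h, expected in cases:
    phi_h = np.exp(log_phi(np.array([h]), kick, n_h=n_h))[0]
    print(f"{name}: <K^2> ~ {2.0 * (1.0 - phi_h) / h ** 2:.2f}   (expected {expected})")
```

For a small step $h$ the two estimates should agree closely with the variances used in the standardizations (3.15) and (3.18) below.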
Consider first the texture case described by equation (3.11). Since we want to compare with the standard Gaussian distribution which has $\sigma=1$ we must first appropriately normalize the variable $K$ dividing by its standard deviation to match the standard deviation $\sigma=1$ of the standard Gaussian. Consider the new variable $$K_g=\sum_{i=0}^M {{k_i}\over \sqrt{{n_h (M+1)}}}
\eqno(3.15)$$ The characteristic function that generates the moments of $K_g$ is $$\Phi_t^g(\omega)=\Phi_t({{\omega}\over {\sqrt{n_h (M+1)}}})
\eqno(3.16)$$ It is now straightforward to show that $$\lim_{n_h\to \infty}{\Phi_t^g (\omega)}=\lim_{n_h\to \infty}{\prod_{j=0}^M
[(1-{{(4^{j/2}\omega
/\sqrt{M+1})^2} \over{ 2n_h 4^M}})^{n_h 4^M}]^{1/4^j}}=e^{-\omega^2/2}
\eqno(3.17)$$ But $e^{-\omega^2/2}$ is the characteristic function for the standard Gaussian[@f71] distribution. Therefore the distribution of the appropriately normalized variable $K_g$ approaches the standard Gaussian for $n_h\longrightarrow \infty$. Notice that the Gaussian limit is not obtained for $M \longrightarrow \infty$ i.e. for the Gaussian limit to be realized we need several perturbations per horizon volume but evolution over more Hubble time-steps does not help. In a similar way, it may be shown that the Gaussian limit for the string characteristic function (equation (3.14)) is obtained for the variable $$K_g=\sum_{i=0}^M {{k_i}\over \sqrt{{n_h (M+1)(Q+1)(2 Q+1)/6}}}
\eqno(3.18)$$ provided that $n_h \longrightarrow \infty$. For this variable the string characteristic function is: $$\Phi_s^g(\omega)=\Phi_s({{\omega}\over {\sqrt{n_h (M+1)(Q+1)(2 Q+1)/6}}})
\eqno(3.19)$$
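As a numerical illustration of this limit (again a sketch of our own, using the texture form (3.16)), one can check that the standardized characteristic function approaches $e^{-\omega^2/2}$ as $n_h$ grows:

```python
# A numerical illustration (ours) of the Gaussian limit (3.17): the standardized
# texture characteristic function (3.16) approaches exp(-omega^2/2) as n_h grows.
import numpy as np

def log_phi_std(omega, M=16, n_h=1.0):
    """log Phi_t^g(omega) = log Phi_t(omega / sqrt(n_h (M+1))), cf. (3.16)."""
    w = np.asarray(omega, dtype=float) / np.sqrt(n_h * (M + 1))
    out = np.zeros_like(w)
    for j in range(M + 1):
        out += n_h * 4.0 ** (M - j) * np.log1p(4.0 ** (j - M) * (np.cos(w) - 1.0))
    return out

omega = np.linspace(0.0, 3.0, 31)
gauss = np.exp(-omega ** 2 / 2.0)
for n_h in (1.0, 10.0, 100.0, 10000.0):
    deviation = np.max(np.abs(np.exp(log_phi_std(omega, n_h=n_h)) - gauss))
    print(f"n_h = {n_h:8.1f}   max |Phi_t^g - exp(-w^2/2)| = {deviation:.2e}")
```

The maximal deviation from the Gaussian characteristic function decreases as $n_h$ increases, in accordance with (3.17).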
Results-Discussion
==================
We are now in a position to use (3.11) and (3.14) (or (3.16) and (3.19)) to make predictions and compare these with the predictions of the corresponding Gaussian distribution. We first obtain the moments of the distributions. This may be achieved by differentiating the characteristic functions and using equation (3.5) to obtain the moments $<K^m>$. Alternatively we may expand the characteristic functions around $\omega=0$. It is the latter method we used here. We used the symbolic manipulation package [*Mathematica*]{}[@w92] to expand (3.16) and (3.19) around $\omega = 0$ up to order 12. From the values of the coefficients and equation (3.5) we obtained the 12 lowest moments for the texture and string cases and also for the standardized Gaussian. The values of these moments are shown in Table 1.

[**Table 1**]{}: The values of the lowest six non-vanishing moments for the string ($<K_g^m>_s$), texture ($<K_g^m>_t$) and standard Gaussian ($<K_g^m>_G$).
$m$ $<K_g^m>_s$ $<K_g^m>_t$ $<K_g^m>_G$
----- ------------- ------------- -------------
2 1 1 1
4 3.0045 4.124 3
6 15.0675 35.5577 15
8 105.947 440.226 105
10 959.239 9681.3 945
12 10630 159955 10395
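The same moments can also be obtained without symbolic differentiation: since the cumulants of $K$ are additive over the independent ‘kicks’, one may convert the single-kick moments to cumulants, add them with the weights $n_j$ of equations (3.10) and (3.13), rescale to the standardized variable $K_g$ and convert back to moments. The short sketch below is ours (the function names are not part of the original computation); with the parameters quoted in the text it should reproduce the texture column of Table 1, while the string column is sensitive to the assumed velocity distribution (the parameter $Q$).

```python
# A sketch (ours) of an alternative way to obtain the moments of Table 1: the
# cumulants of K are additive over the independent kicks, so we convert the
# single-kick moments to cumulants, sum them with the weights n_j of (3.10),
# rescale to the standardized variable K_g and convert back to moments.
from math import comb

def cumulants_from_moments(m, nmax):
    kappa = [0.0] * (nmax + 1)
    for n in range(1, nmax + 1):
        kappa[n] = m[n] - sum(comb(n - 1, k - 1) * kappa[k] * m[n - k]
                              for k in range(1, n))
    return kappa

def moments_from_cumulants(kappa, nmax):
    m = [1.0] + [0.0] * nmax
    for n in range(1, nmax + 1):
        m[n] = sum(comb(n - 1, k) * kappa[k + 1] * m[n - 1 - k] for k in range(n))
    return m

def standardized_moments(kick_sizes, M=16, n_h=0.04, nmax=12):
    """Moments of K_g for kicks of +/- i units (i in kick_sizes), each taken with
    probability p_j^i = 4^(j-M) / (2 len(kick_sizes)) at the j-th Hubble step."""
    Q = len(kick_sizes)
    kappa_K = [0.0] * (nmax + 1)
    for j in range(M + 1):
        p = 4.0 ** (j - M) / (2 * Q)
        n_j = n_h * 4.0 ** (M - j)
        m_one = [1.0] + [2 * p * sum(i ** r for i in kick_sizes) if r % 2 == 0 else 0.0
                         for r in range(1, nmax + 1)]
        k_one = cumulants_from_moments(m_one, nmax)
        for r in range(1, nmax + 1):
            kappa_K[r] += n_j * k_one[r]
    sigma = kappa_K[2] ** 0.5
    kappa_g = [0.0] + [kappa_K[r] / sigma ** r for r in range(1, nmax + 1)]
    return moments_from_cumulants(kappa_g, nmax)

for kicks, n_h, label in ([1], 0.04, "textures"), ([1, 2, 3, 4, 5], 10.0, "strings, Q=5"):
    m = standardized_moments(kicks, n_h=n_h)
    print(label, [round(m[r], 4) for r in range(2, 13, 2)])
```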
The quantity of interest is the relative deviation from the Gaussian defined as: $$R_m^{s,t}={{<K_g^m>_{s,t}-<K^m>_G}\over <K^m>_G}
\eqno(4.1)$$ where $<K_g^m>_{s,t}$ is the standardized string or texture $m$th moment while $<K^m>_G$ is the corresponding standard Gaussian moment.
In Fig. 4 we plot the relative deviation $R_m^s$ versus $m$ for strings (notice that only even moments are shown since all odd moments vanish) obtained by expanding (3.19) with $M=16$, $Q=5$ and $n_h=10$. Clearly, $R_m^s$ is a rapidly increasing function of $m$, which is evidence for the presence of long non-Gaussian tails in the string distribution function. However, even for the $m=12$ moment the relative deviation does not exceed 3%, implying that the non-Gaussian features in the string distribution function are fairly weak. The reason for this is that in the case of strings we have $n_h=10 \gg 1$, which implies that the Gaussian limit is approached effectively. We have performed tests for different values of $Q$ and found that the values of $R_m^s$ change by less than a factor of 2.
In Fig. 5 we plot the corresponding relative deviations $R_m^t$ for textures. We used equation (3.16) with $M=16$ and $n_h=0.04$. In the case of textures, not only is $R_m^t$ a rapidly increasing function of $m$ but also even the lower moments are significantly larger than the corresponding Gaussian ones. For example, the kurtosis (defined as $<K_g^4>/({{<K_g^2>}^2})$) is predicted to be 40% larger than the kurtosis of the standard Gaussian distribution, while the sixth moment is larger by a factor of three. As in the case of strings, the skewness and all the odd moments are found to vanish. This is due to the fact that the superimposed kernels are symmetric with respect to positive and negative perturbations. Clearly, such an assumption, even though reasonable for CMB considerations, is inapplicable to large scale structure calculations where a non-zero skewness is predicted by seed-based models[@sb91][@ls92].
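As a quick consistency check of the quoted texture kurtosis (this short calculation is ours and uses only the additivity of cumulants over the independent ‘kicks’), the fourth cumulant of a single ‘trial’ at the $j$th step is $2p_j-3(2p_j)^2$, and dividing the sum over all trials by $<K^2>^2=[n_h(M+1)]^2$ gives $$<K_g^4>-3={{\sum_{j=0}^M n_j\left[2p_j-3(2p_j)^2\right]}\over {\left[n_h (M+1)\right]^2}}={{(M+1)-4+4^{-M}}\over {n_h (M+1)^2}}\simeq 1.12$$ for $M=16$ and $n_h=0.04$, consistent with the value $<K_g^4>\simeq 4.12$ of Table 1 and with the quoted $\sim 40\%$ excess of the kurtosis over its Gaussian value.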
The characteristic functions (3.11) and (3.14) can also be used to find the temperature fluctuation distribution functions. These can be obtained by Fourier transforming the characteristic functions according to equation (3.4). The results may then be compared with the corresponding Gaussian with the same standard deviation (in order to keep the Fourier transform simple, we use the original forms (3.11) and (3.14) in this case). In Fig. 6a we show the distribution function $F_s (K)$ for strings, obtained by Fourier transforming (3.14) with $M=16$, $Q=5$ and $n_h=10$. The difference $F_s(K)-F_G
(K)$ between the string induced distribution function $F_s (K)$ and the corresponding Gaussian $F_G (K)$ is shown in Fig. 6b. The relative difference $(F_s(K)-F_G (K))/F_G (K)$ at any given point does not exceed 1% but the presence of long non-Gaussian tails is clear. It is these tails that cause the rapid increase of the moments with the order $m$.
Fig. 7a shows the distribution function for textures obtained by Fourier transforming (3.11) with $M=16$ and $n_h=0.04$. Superimposed is the corresponding Gaussian distribution function. In this case, as expected since $n_h \ll 1$, the non-Gaussian features are fairly clear. The central pronounced peak and the long tails seem to be generic features for temperature fluctuations induced by topological defects. Fig. 7b shows the difference $F_t(K)-F_G (K)$ between the texture distribution and the corresponding Gaussian. It shows the same features as Fig. 6b which applies to strings but in the case of textures the magnitude of the relative difference $(F_t(K)-F_G(K))/F_G (K)$ is almost two orders of magnitude larger. This is reflected in the relative deviation of moments for which we have $R_m^t \gg R_m^s$.
Our results on both the relative deviation of moments from the Gaussian and the distribution function itself show the following:
1. The non-Gaussian signature of cosmic strings is difficult to detect by measuring relative deviations of moments from the Gaussian. Relative deviations need to be measured to within less than 1% in order to distinguish cosmic string fluctuations from Gaussian ones. The origin of this approximately Gaussian behavior of string induced perturbations is the large number of strings per horizon volume ($n_h\simeq 10$). The large number of superimposed non-Gaussian features ‘averages out’ to an approximately Gaussian result, as predicted by the central limit theorem.
2. The measurement of moments provides a much more powerful test for the texture model. This is due to the small number of textures unwinding per horizon volume ($n_h\simeq 0.04$), which avoids the suppression of the texture non-Gaussian features. Measuring the relative deviation of the kurtosis to within 40% should be enough to detect the deviation induced by textures. For the relative deviation of the sixth moment, a measurement accurate to within a factor of less than three would be sufficient to indicate the presence of a texture signature.
The tests based on the relative deviation of moments from the Gaussian that have been studied here have several interesting and powerful features, particularly for testing the texture model. However, they are not sensitive to geometrical and topological features of the temperature fluctuation maps. Such features have been examined in Ref. [@gpjbbbs90] using a numerically obtained realization of cosmic string induced perturbations. It was found that topological and geometrical tests can be a sensitive probe of stringy non-Gaussian features. An interesting extension of the work presented here is the study of the geometrical features of string and texture induced temperature patterns using analytical methods and Monte Carlo simulations. Such a project is currently in progress[@bmp92].
Acknowledgements
================
I wish to thank R. Brandenberger and R. Moessner for interesting discussions and for providing helpful comments after reading the paper. This work was supported by a CfA Postdoctoral Fellowship.
Figure Captions
===============
[**Figure 1:**]{} The production of step-like discontinuities in the microwave temperature for photons passing on different sides of a cosmic string S. The string deficit angle is $\alpha$ and O is the observer.\
[**Figure 2:**]{} The propagation of a photon beam from the recombination time $t_{rec}$ to the present time $t_0$. The horizon in three successive Hubble time-steps is also shown.\
[**Figure 3:**]{} The effects of a step-function perturbation on an initially uniform one dimensional distribution.\
[**Figure 4:**]{} The relative deviation of moments from the standard Gaussian. $R_m^s$ corresponds to moments due to string induced perturbations and is plotted versus $m$, where $m$ is the order of the moment. Odd moments are omitted since they vanish.\
[**Figure 5:**]{} The relative deviation $R_m^t$ for the case of textures. The deviations from the Gaussian are much larger compared to the case of strings.\
[**Figure 6a:**]{} The distribution function $F_s(K)$ for string induced perturbations.\
[**Figure 6b:**]{} The difference $F_s (k) - F_G (K)$ where $F_G (K)$ is the Gaussian distribution with the same standard deviation as $F_s (K)$. The [*relative*]{} difference does not exceed 1% but the presence of long non-Gaussian tails is clear. .5cm [**Figure 7a:**]{} The distribution function $F_t(K)$ for texture induced perturbations superimposed with the Gaussian distribution of the same standard deviation.\
[**Figure 7b:**]{} The difference $F_t (k) - F_G (K)$ where $F_G (K)$ is the Gaussian distribution with the same standard deviation as $F_t (K)$. The [*relative*]{} difference exceeds 10% and is much more prominent than in the case of strings.
[99]{} L. Perivolaropoulos, [*COBE vs Cosmic Strings: An Analytical Model*]{}, [*Phys. Lett.*]{} [**B**]{}, in press (1992). S. White, M. Davis, G. Efstathiou, C. Frenk, [*Nature*]{} [**330**]{}, 451 (1987). G. Smoot [*et. al.*]{}, [*Ap. J. Lett.*]{} [**396**]{}, L1 (1992). E. L. Wright [*et. al.*]{}, [*Ap. J. Lett.*]{} [**396**]{}, L5 (1992). J. R. Bond and Efstathiou, [*Mon. Not. R. Astron. Soc.*]{} [**226**]{}, 665 (1987). M. Barriola and A. Vilenkin, [*Phys. Rev. Lett.*]{} [**63**]{}, 341 (1989)\
S. Rhie and D. Bennett, [Phys. Rev. Lett.]{} [**65**]{}, 1709 (1991).\
L. Perivolaropoulos, [*Mod. Phys. Lett.*]{} [**A7**]{},903 (1992). N. Turok, [*Phys. Rev. Lett.*]{} [**63**]{}, 2625 (1989). D. Bennett and F. Bouchet, [*Phys. Rev.*]{} [**D41**]{}, 2408 (1990). B. Allen and E. P. S. Shellard, [*Phys. Rev. Lett.*]{} [**64**]{}, 119 (1990). N. Turok and R. Brandenberger, [*Phys. Rev.*]{} [**D33**]{}, 2175 (1986)\
A. Stebbins, [*Astrophys. J. Lett.*]{} [**303**]{}, L21 (1986)\
H. Sato, [*Prog. Theor. Phys.*]{} [**75**]{}, 1342 (1986). T. Vachaspati and A. Vilenkin, [*Phys. Rev. Lett*]{} [**67**]{},1057 (1991). L. Perivolaropoulos, R. Brandenberger and A. Stebbins, [*Phys. Rev.*]{} [**D41**]{}, 1764 (1990)\
R. Brandenberger, L. Perivolaropoulos and A. Stebbins, [*Int. J. Mod. Phys. A*]{} [**5**]{}, 1633 (1990). A. Albrecht and A. Stebbins, [*Phys. Rev. Lett.*]{} [**68**]{}, 2121 (1992)\
A. Albrecht and A. Stebbins, Fermilab preprint (1992). F. R. Bouchet, D. P. Bennett and A. Stebbins, [*Nature*]{} [**335**]{}, 410 (1988)\
D. P. Bennett, A. Stebbins and F. R. Bouchet, [*Ap. J. Lett.*]{} 399, L5 (1992). S. Veeraraghavan and A. Stebbins, ‘Large-Scale Microwave Anisotropy from Gravitating Seeds’, Fermilab preprint PUB-92-147-A (1992). A. Albrecht and N. Turok, [*Phys. Rev*]{} [**D40**]{}, 973 (1989). A. Albrecht and N. Turok, [*Phys. Rev. Lett.*]{} [**54**]{}, 1868 (1985). J. Traschen, N. Turok and R. Brandenberger, [*Phys. Rev.*]{} [**D34**]{}, 919 (1986). D. Salopek‘Consequenses of the COBE Satelite for the Inflationary Scenario’ DAMTP-R-92-26 (1992)\
L. Krauss, ‘COBE, Inflation and Light Scalars’, Yale preprint, YCTP-P21-92 (1992)\
R. Davis, H. Hodges, G. Smoot, P. Steinhardt, M. Turner, [*Phys. Rev. Lett.*]{} [**69**]{}, 1856 (1992)\
J. Lidsey and P. Coles ‘Inflation, Gravitational Waves and the CMB. Reconciling CDM with COBE?’, Fermilab preprint (1992)\
F.C. Adams et. al. ‘Natural Inflation: Particle Physics Models, Power Law Spectra for Large Scale Structure and Constraints from COBE’, Fermilab preprint (1992)\
A. Liddle and D. Lyth, [*Phys. Lett*]{} [**B291**]{}, 391 (1992)\
T. Souradeep and V. Sahni, ‘Density Perturbations, Gravity Waves and the CMB’, IUCAA preprint, print-92-0354 (1992). Xiao-chun Luo, D. Schramm, ‘Kurtosis, Skewness and non-Gaussian Cosmological Density Perturbations’ Fermilab preprint PUB-92-214-A (1992)\
E. Gaztanaga, J. Yokoyama, ‘Probing the Statistics of Primordial Fluctuations and its Evolution’, Fermilab preprint, PUB-92-71-A (1992)\
P. Coles and D. Barrow, [*M.N.R.A.S*]{} [**228**]{}, 407 (1987)\
P. Coles, [*M.N.R.A.S*]{} [**234**]{}, 509 (1988) F. Lucchin, S. Matarrese and N. Vittorio, [*Ap. J.*]{} [**330**]{}, L21 (1988)\
D.Salopek [*Phys. Rev.*]{} [**45**]{}, 1139 (1992) J.R.Gott III, C. Park, R. Juszkiewicz, W. Bies, D. Bennett, F. Bouchet, A. Stebbins, [*Ap. J.*]{} [**352**]{}, 1 (1990). R. Brandenberger, ‘Topological Defect Models of Structure Formation after the COBE discovery of CMB Anisotropies’,(Invited Talk at Erice Course, Sep. 1992) BROWN-HET-881(1992). R. Scherrer and E. Bertschinger, [*Ap. J*]{} [**381**]{}, 349 (1991). A. Gooding, C. Park, D. Spergel, N. Turok and R. Gott III,‘The Formation of Cosmic Structure in a Texture Seeded CDM Cosmogony’, [*Ap. J.*]{} in press (1992). T. Vachaspati, [*Phys. Lett*]{} [**B282**]{}, 305 (1992). N. Kaiser and A. Stebbins, [*Nature*]{} [**310**]{}, 391 (1984)\
A. Stebbins, [*Astrophys. J.*]{} [**327**]{}, 584 (1988). R. Gott, [*Phys. Rev. Lett.*]{} [**66**]{}, 1126 (1991) and references therein. N. Turok and D. Spergel, [*Phys. Rev. Lett.*]{} [**64**]{}, 2736 (1990)\
R. Durrer and D. Spergel, ‘Microwave Anisotropies from Texture Seeded Structure Formation’, Princeton Univ. preprint PUTP-91-1247 (1991). W. Feller, ‘An Introduction to Probability Theory and its Applications’, New York: Willey, (1971). D. Spergel, N. Turok, W. Press and B. Ryden, [*Phys. Rev.*]{} [**D43**]{}, 1038 (1991). S. Wolfram, [*Mathematica version 2.0*]{}, Addison-Wesley (1991). R. Brandenberger, R. Moessner and L. Perivolaropoulos, in preparation.
[^1]: Division of Theoretical Astrophysics, Harvard-Smithsonian Center for Astrophysics 60 Garden St. Cambridge, Mass. 02138, USA.
[^2]: also Visiting Scientist, Department of Physics Brown University Providence, R.I. 02912, U.S.A.
---
abstract: 'We give a complete list of hyperbolic two-bridge links which can admit complete exceptional surgeries. All candidate surgery slopes for such surgeries are also given.'
address:
- 'Department of Mathematics, College of Humanities and Sciences, Nihon University, 3-25-40 Sakurajosui, Setagaya-ku, Tokyo 156-8550, Japan.'
- 'Department of Mathematics, Kindai University, 3-4-1 Kowakae, Higashiosaka City, Osaka 577-0818, Japan'
- 'Department of Mathematics Tokyo Institute of Technology 2-12-1 Ookayama, Meguroku, Tokyo 152-8551, Japan.'
author:
- Kazuhiro Ichihara
- In Dae Jong
- Hidetoshi Masai
title: 'Complete exceptional surgeries on two-bridge links'
---
Introduction {#sec:intro}
============
As an extension of the studies of exceptional surgeries on hyperbolic two-bridge links [@GodaHayashiSong2009; @Ichihara2012; @Wu1999], in this paper, we give a complete list of hyperbolic two-bridge links which can admit complete exceptional surgeries. All candidate surgery slopes for such surgeries are also given. Here, by a *complete exceptional* surgery on a hyperbolic link of $n$ components along the slopes $(\gamma_1, \dots, \gamma_n)$, we mean Dehn surgery on all components of the link to obtain a closed non-hyperbolic 3-manifold such that all its proper sub-fillings (namely, those obtained by replacing at least one non-empty surgery slope with an empty one) are hyperbolic.
For a continued fraction $[a_1, \dots , a_k]$ with non-zero integers $a_1, \dots, a_k$, let $L_{[a_1, \dots , a_k]}$ denote the two-bridge link in the 3-sphere $S^3$, which is represented by a diagram in Figure \[fig:2bridge\]. (Also see Figure \[fig:twists\].) If $[a_1, \dots , a_k] = p/q$, then we also denote the link, called the two-bridge link of type $p/q$, by $L_{p/q}$. Note that, following [@FloydHatcher; @HatcherThurston1985], we denote by $[a_1, \dots , a_k]$ the following subtractive continued fraction: $$\dfrac{1}{a_1 - \dfrac{1}{a_2 - \dfrac{1}{a_3 - \cdots -\dfrac{1}{a_k}}}}$$
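For concreteness, the following small Python sketch (ours; the function name is arbitrary) evaluates this subtractive continued fraction as an exact rational number, so that one can check which fraction $p/q$ a given tuple $[a_1, \dots , a_k]$ represents.

```python
# A minimal helper (ours; the name is arbitrary) evaluating the subtractive
# continued fraction [a_1, ..., a_k] = 1/(a_1 - 1/(a_2 - 1/(... - 1/a_k)))
# as an exact rational number.
from fractions import Fraction

def subtractive_cf(a):
    """Value of [a_1, ..., a_k] as a Fraction."""
    v = Fraction(a[-1])
    for a_i in reversed(a[:-1]):
        v = a_i - 1 / v
    return 1 / v

# Illustrative values only (not taken from the tables of the paper):
print(subtractive_cf([3, 2]))       # 2/5
print(subtractive_cf([2, 2, 2]))    # 3/4
print(subtractive_cf([2, -2, 2]))   # 5/12
```

For instance, $[2, 2, 2] = 3/4$, so the tuple $[2, 2, 2]$ represents the two-bridge link of type $3/4$ in this notation.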
Then our main theorem is the following.
\[thm:main\] If a hyperbolic two-bridge link $L$ in $S^3$ admits a complete exceptional surgery along the slopes $(\gamma_1, \gamma_2)$, then $L$ with $(\gamma_1 , \gamma_2)$ are equivalent to one of those given in Tables 1–6 in Section \[sec:computer\].
![A diagram of the two-bridge link $L_{[a_1, \dots , a_k]}$. The boxes labeled $a_1,\ a_2,\ \dots ,\ a_k$ denote twist regions (see Figure \[fig:twists\]); the upper and lower pictures correspond to the cases where $k$ is odd and $k$ is even, respectively.[]{data-label="fig:2bridge"}](fig_2bridge.eps)
![The twist regions: a box labeled $a_i$ denotes $|a_i|$ times right-handed half twists when $a_i>0$, and $|a_i|$ times left-handed half twists when $a_i<0$.[]{data-label="fig:twists"}](fig_twists.eps)
Our study in this paper is motivated by the results given in [@GodaHayashiSong2009; @Ichihara2012; @Wu1999]. In fact, our first approach to obtain Theorem \[thm:main\] is based on the same technique as in the previous study [@Wu1999], that is, the use of an essential branched surface. The other technique we use is the computer-aided search of exceptional surgeries on hyperbolic links developed in [@hikmot] and utilized by the first- and the third-named authors in [@IchiharaMasai].
This paper is organized as follows. In Section \[sec:proof\], we propose a theorem as a key step toward Theorem \[thm:main\]. In fact, we give a list of families of hyperbolic two-bridge links containing all those admitting complete exceptional surgeries (Theorem \[thm:channel\]). In Section \[sec:computer\], by using a computer, we give a complete list of candidates of complete exceptional surgeries on the two-bridge links listed in Theorem \[thm:channel\]. This completes the proof of Theorem \[thm:main\]. In the Appendix, we give proofs of two elementary algebraic lemmas used in the proof of Theorem \[thm:channel\].
Constraints from essential branched surfaces {#sec:proof}
============================================
In this section, unless otherwise specified, $L_{p/q}$ denotes a hyperbolic two-bridge link of type $p/q$, and thus, $p$ is odd and $q$ is non-zero even. The purpose of this section is to show the following.
\[thm:channel\] If $L_{p/q}$ admits a complete exceptional surgery, then $L_{p/q}$ is equivalent to one of the followings:
(a-1) $L_{[2m+1, 2n -1]}$ with $m \ge 1$, $n \ne 0, 1$.
(b-1) $L_{[2m, 2n, 2l]}$ with $m \ge 1$, $|n| \ge 2$, $|l| \ge 2$.
(b-2) $L_{[2m, 2n-1, -2l]}$ with $m \ge 1$, $|n| \ge 2$, $l \ge 1$.
(b-3) $L_{[2m, 2n+1, 2l]}$ with $m \ge 1$, $|n| \ge 2$, $l \ge 1$.
(b-4) $L_{[2m+1, 2n, 2l-1]}$ with $m \ge 1$, $n \ne 0$, $l \ne 0,1$.
(c-1) $L_{[2m+1, 2n, -2 {\mathop{\mathrm{sgn}}\nolimits}(l), 2l-1]}$ with $m\ge 1$, $n \ne 0$, $l \ne 0, 1$.
(c-2) $L_{[2m+1, 2n-1, -2{\mathop{\mathrm{sgn}}\nolimits}(l), 2l]}$ with $m\ge 1$, $n \ne 0,1$, $l \ne 0$.
Here ${\mathop{\mathrm{sgn}}\nolimits}(l)$ denotes $1$ (resp. $-1$) when $l$ is positive (resp. negative). In addition, in (b-1), (b-2) and (b-3), if $m =1$, then $n \le -2$ holds.
Proof of Theorem \[thm:channel\]
--------------------------------
First, we give a proof of Theorem \[thm:channel\] assuming several statements which will be proved in later subsections.
One of the key ingredients in the proof of Theorem \[thm:channel\] is Delman’s construction [@Delman-unpub] of essential branched surfaces, which are described in terms of “allowable paths” with “channels”. In Subsection \[subsec:Delman\], we recall their definitions and give a brief review leading to the following lemma, which should be well known to experts in this area.
\[lem:3ch\] If there exists an allowable path for $p/q$ containing three channels, then $L_{p/q}$ admits no complete exceptional surgery.
In Subsection \[subsec:Channelidx\], we observe how to find channels in an allowable path by using a notion called “channel indices”. In fact, we have the following which will be proved in Subsection \[subsec:Channelidx\].
\[clm:3channels\] If there are three channel indices for an even continued fraction $[b_1,\dots, b_k]$, then there exists an allowable path for $p/q = [b_1,\dots, b_k]$ with three channels except for the cases where $[b_1,\dots, b_k] = [b_1, 2, \dots, 2, 4, 2, \dots, 2]$ or $[b_1,\dots, b_k] = [b_1, -2, \dots, -2, -4, -2, \dots, -2]$.
The following two lemmas are elementary and algebraic, and are independent of the other arguments. Thus, we give their proofs in the Appendix.
\[clm:ch3Exception\] Let $a$ and $a'$ be even integers with $a \ge 4$ and $a' \ge 2$. Each of the even continued fractions $[a, 2,\dots,2, 4, 2,\dots,2]$ and $[a', -2,\dots,-2, -4, -2,\dots,-2]$ is expressed by one of the following continued fractions:
1. $[2m+1, 2n, 2, 2l -1]$ with $m \ge 1$, $n \le -1$, $l \le -1$.
2. $[2m+1, 2n-1, 2, 2l]$ with $m \ge 1$, $n \le -1$, $l \le -1$.
3. $[2m+1, 2n, -2, 2l -1]$ with $m \ge 1$, $n \ge 1$, $l \ge 2$.
4. $[2m+1, 2n-1, -2, 2l]$ with $m \ge 1$, $n \ge 2$, $l \ge 1$.
\[clm:ch2\] Let $[b_1, \dots, b_k]$ be an even continued fraction of $p/q$ with at most two channel indices and $k \ge 3$. Then $p/q$ can be expressed by one of the following continued fractions:
(0) $[2m+1, 2n-1]$ with $m \ge 1$, $n \ne 0,1$.
(1) $[2m, 2n, 2l]$ with $m \ge 1$, $|n| \ge 2$, $|l| \ge 2$.
(2) $[2m, 2n-1, -2l]$ with $m \ge 1$, $|n| \ge 2$, $l \ge 1$.
(3) $[2m, 2n+1, 2l]$ with $m \ge 1$, $|n| \ge 2$, $l \ge 1$.
(4) $[2m+1, 2n, 2l-1]$ with $m \ge 1$, $n \ne 0$, $ l \ne 0,1$.
(5) $[2m+1, 2n, -2, 2l-1]$ with $m \ge 1$, $n \le -1$, $l \ge 2$.
(6) $[2m+1, 2n-1, -2, 2l]$ with $m \ge 1$, $n \le -1$, $l \ge 1$.
(7) $[2m+1, 2n, 2, 2l-1]$ with $m\ge 1$, $n \ge 1$, $l \le -1$.
(8) $[2m+1, 2n-1, 2, 2l]$ with $m\ge 1$, $n \ge 2$, $l \le -1$.
In addition, in (1)–(3), if $m=1$, then $n \le -2$.
Here we give a proof of Theorem \[thm:channel\] assuming Proposition \[clm:3channels\] and Lemmas \[lem:3ch\], \[clm:ch3Exception\], and \[clm:ch2\].
Let $L_{p/q}$ be a hyperbolic two-bridge link. Since $q$ is even and non-zero and $p$ is odd, $p/q$ can be expressed by an *even continued fraction* $[b_1, \dots, b_k]$, that is, all $b_i$’s are even. Here $k$ is odd, again by the parities of $p$ and $q$. Since $L_{p/q}$ is hyperbolic, it is a non-torus two-bridge link, and thus, we may assume that $k \ge 3$. Then, by Proposition \[clm:3channels\], if there are three channel indices for $[b_1, \dots, b_k]$ and if $[b_1, \dots, b_k]$ does not coincide with $[b_1, 2, \dots, 2, 4, 2, \dots, 2]$ or $[b_1, -2, \dots, -2, -4, -2, \dots, -2]$, then we obtain an allowable path for $p/q$ with three channels. Then, by Lemma \[lem:3ch\], $L_{p/q}$ admits no complete exceptional Dehn surgery. Considering the contrapositive, if $L_{p/q}$ admits such a Dehn surgery, then either
- $p/q = [b_1, 2, \dots, 2, 4, 2, \dots, 2]$ or
- $p/q = [b_1, -2, \dots, -2, -4, -2, \dots, -2]$ or
- $p/q = [b_1, \dots, b_k]$ has at most two channel indices with $k \ge 3$.
For the first two even continued fractions, we have Lemma \[clm:ch3Exception\]. To complete the proof of Theorem \[thm:channel\], we have to enumerate even continued fractions with at most two channel indices. Then we have Lemma \[clm:ch2\]. From the continued fractions in Lemma \[clm:ch2\] (0), (1)–(4), we obtain Theorem \[thm:channel\] (a-1), (b-1)–(b-4) respectively. Combining the continued fractions in Lemma \[clm:ch2\] (5) with those in Lemma \[clm:ch3Exception\] (3), and those in Lemma \[clm:ch2\] (7) with those in Lemma \[clm:ch3Exception\] (1), we have the continued fractions $$[2m + 1, 2n, -2, 2l -1] \text{ with } m \ge 1, \ n \ne 0, \ l \ge 2 \, ,$$ and $$[2m + 1, 2n, 2, 2l -1] \text{ with } m \ge 1, \ n \ne 0, \ l \le -1 \, .$$ Combining them, we obtain the continued fractions in Theorem \[thm:channel\] (c-1). Similarly, combining the continued fractions in Lemma \[clm:ch2\] (6) with those in Lemma \[clm:ch3Exception\] (4), and those in Lemma \[clm:ch2\] (8) with those in Lemma \[clm:ch3Exception\] (2), we have the continued fractions $$[2m + 1, 2n-1, -2, 2l] \text{ with } m \ge 1, \ n \ne 0,1, \ l \ge 1 \, ,$$ and $$[2m + 1, 2n-1, 2, 2l] \text{ with } m \ge 1, \ n \ne 0,1, \ l \le -1 \, .$$ Combining them, we obtain the continued fractions in Theorem \[thm:channel\] (c-2). Now we complete the proof of Theorem \[thm:channel\] assuming Proposition \[clm:3channels\] and Lemmas \[lem:3ch\], \[clm:ch3Exception\], and \[clm:ch2\].
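The first step of the proof above, expressing $p/q$ (with $p$ odd and $q$ even) by an even continued fraction, can be carried out greedily. The following is a small illustrative sketch (assuming $|p/q| < 1$ with $\gcd(p, q) = 1$), not code used in this paper:

```python
from fractions import Fraction

def even_expansion(p, q):
    """A greedy even continued-fraction expansion [b_1, ..., b_k] of p/q,
    choosing each b_i as the even integer nearest to the current tail value."""
    z = Fraction(q, p)          # p/q = 1/(b_1 - r) forces b_1 - r = q/p
    bs = []
    while True:
        b = 2 * round(z / 2)    # nearest even integer (no ties occur for these inputs)
        bs.append(b)
        r = b - z               # remainder with |r| < 1
        if r == 0:
            return bs
        z = 1 / r               # next tail satisfies b_{i+1} - r' = 1/r

print(even_expansion(3, 8))     # [2, -2, -2], an even expansion of 3/8 = [3, 3]
```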
Delman’s allowable path {#subsec:Delman}
-----------------------
In this subsection, we briefly review Delman’s branched surfaces and allowable paths to introduce Lemma \[lem:3ch\]. Delman constructed an essential branched surface in a rational tangle space, and studied Dehn surgery on a Montesinos knot in his unpublished preprint [@Delman-unpub]. Actually, he gave a construction of such essential branched surfaces and described them by using a combinatorial object called an allowable path. Based on the work of Li [@Li], Wu [@Wu2012] proposed a sink mark description for branched surfaces. This description has made Delman’s branched surfaces easier to treat. In the following, we briefly review these studies for our purpose, that is, the study of Dehn surgeries on two-bridge links. Our notations are basically the same as those used in [@Wu2012], and we assume that the reader is somewhat familiar with them. For details about the definitions of the terms used in the following, please refer to [@Wu2012] or [@Wu1999 Section 5].
As already mentioned in the proof of Theorem \[thm:channel\], for a hyperbolic two-bridge link $L_{p/q}$, $p/q$ can be expressed by an even continued fraction $[b_1, \dots, b_k]$ with $k \ge 3$, where all $b_i$’s are even. We can construct the *diagram* $D(p/q)$ associated to $p/q$, which is the minimal sub-diagram of the Hatcher-Thurston diagram [@HatcherThurston1985 Figure 4] that contains all minimal paths from $1/0$ to $p/q$ (see [@Wu1999 Section 5] for example). The diagram $D(p/q)$ can be constructed as follows: Let $p/q = [b_1, \dots, b_k]$ be an even continued fraction of $p/q$. To each $b_i$ is associated a “fan” $F_{b_i}$ consisting of $|b_i|$ simplices in $D(p/q)$; see Figure \[fig:fan\] for the fans $F_4$ and $F_{-4}$. The edges labeled $e_1$ are called *initial* edges, and the ones labeled $e_2$ are called *terminal* edges. The diagram $D(p/q)$ can be constructed by gluing the fans $F_{b_1}, \dots, F_{b_k}$ together in such a way that the terminal edge of $F_{b_i}$ is glued to the initial edge of $F_{b_{i+1}}$. Moreover, if $b_{i} b_{i+1} < 0$, then $F_{b_i}$ and $F_{b_{i+1}}$ have one edge in common, and if $b_i b_{i+1} > 0$, then they have a $2$-simplex in common. See Figure \[fig:fanEx\] for the diagram of $[-2, 2, 4, 2]$. As a sub-diagram of the Hatcher-Thurston diagram, to each vertex of $D(p/q)$ is associated an irreducible fraction or possibly $1/0$. There are three possible parities for their numerators and denominators: odd/odd, odd/even, or even/odd, denoted by *o/o*, *o/e*, and *e/o*, respectively. Note that the three vertices of any simplex in $D(p/q)$ have mutually different parities. Also note that a vertex on initial or terminal edges of $F_{b_i}$ always has parity $o/e$ or $e/o$. We use the symbol “$*$” to indicate vertices with parity $o/o$.
Figure \[fig:fan\] (fig\_fan.eps): the fans $F_4$ and $F_{-4}$, with initial edges labeled $e_1$, terminal edges labeled $e_2$, and $*$ marking the vertices of parity o/o.
Figure \[fig:fanEx\] (fig\_fanEx.eps): the diagram $D(p/q)$ of $[-2, 2, 4, 2]$, with $*$ marking the vertices of parity o/o.
Take two simplices in $D(p/q)$ with one edge in common. Assume that the two vertices which are not on the common edge have parity *o/o*. Then each of the arcs indicated in Figure \[fig:channel\] is called a *channel*; this notion was essentially introduced by Delman [@Delman-unpub]. Note that, though a channel connecting two vertices with a common parity other than *o/o* can also be defined, we only use channels connecting two vertices with parity *o/o*.
Figure \[fig:channel\] (fig\_channel.eps): the two arcs (channels) connecting two vertices of parity o/o (marked $*$) in a pair of simplices sharing an edge.
A *path* $\gamma$ in $D(p/q)$ is a union of arcs, each of which is either an edge of $D(p/q)$ or a channel. A path $\gamma$ in $D(p/q)$ is said to be *allowable* if the following three conditions hold (see [@Wu1999 Definition 5.2]).
(1) $\gamma$ passes any point of $D(p/q)$ at most once.
(2) Other than the middle points of channels, $\gamma$ intersects the interior of at most one edge of any given simplex.
(3) $\gamma$ contains at least one channel.
For brevity, by a *path for $p/q$*, we mean a path from $1/0$ to $p/q$ in the diagram $D(p/q)$. Then we have the following lemma which was essentially obtained in [@Delman-unpub].
\[lem:3chBrs\] If there is an allowable path for $p/q$ containing three channels, then we can construct an essential branched surface $\Sigma$ in the exterior $E(L_{p/q})$ of $L_{p/q}$. Furthermore, the two components of $S^3 \setminus Int N(\Sigma)$ containing $L_{p/q}$ form a regular neighborhood of $L_{p/q}$, $N(L_{p/q}) = V_1 \cup V_2$, and each $V_i$ is a sutured solid torus admitting three disjoint meridional cusps on $\partial V_i$ for $i=1,2$.
The construction of $\Sigma$ was originally introduced by Delman [@Delman-unpub]. Wu reformulated the construction of $\Sigma$ and reproved that $\Sigma$ is essential; see [@Wu2012 Theorem 5.3]. Each channel creates two meridional cusps on $\partial N(L_{p/q})$ as in the proof of [@Wu2012 Theorem 5.3]. In addition, one of the two meridional cusps is on $V_1$ and the other is on $V_2$, as in the proof of [@Wu1999 Lemma 5.3]. One can also show this fact directly by drawing the branched surface with the sink mark description introduced in [@Wu2012].
In Delman’s branched surface, the tangencies at the branch points are introduced by a notion called “configuration”. In this paper, we only use type I configuration (see [@Wu1999 Figure 5.2]).
From the above lemma, together with the studies on an essential branched surface in a hyperbolic 3-manifold due to Wu [@Wu1998], we obtain a proof of Lemma \[lem:3ch\].
By Lemma \[lem:3chBrs\], we obtain an essential branched surface $\Sigma$ in the exterior $E(L_{p/q})$ such that the two components of $S^3 \setminus Int N(\Sigma)$ containing $L_{p/q}$ form $N(L_{p/q}) = V_1 \cup V_2$. Moreover, each $V_i$ is a sutured solid torus admitting three disjoint meridional cusps on $\partial V_i$ for $i=1,2$. Then $L_{p/q}$ admits no complete exceptional surgery, as is shown in the same way as [@Wu1998 Theorem 2.5] by using [@Wu1998 Theorem 1.9] instead of [@Wu1998 Theorem 1.6].
Channel index {#subsec:Channelidx}
-------------
In this subsection, we observe how one can find allowable paths with channels in the diagram $D(p/q)$ by using channel indices, and prove Proposition \[clm:3channels\]. Actually, in [@Wu1999 Lemma 5.4], Wu determined the rational numbers $p/q$ such that $D(p/q)$ does not contain an allowable path with at least two channels. Our arguments in this subsection can be regarded as an extension of his.
We start by introducing the channel indices of an even continued fraction. As defined in [@Wu1999 Section 5], for an even continued fraction $[b_1, \dots, b_k]$, an index $i$ is said to be a *channel index* if either $b_i b_{i+1} <0$ or $b_{i} b_{i+1} >4$. By this definition, $i$ is not a channel index if and only if $b_i b_{i+1} \ge 0$ and $b_{i} b_{i+1} \le 4$, which is equivalent to $(b_i, b_{i+1}) = (2,2)$ or $(-2,-2)$ since each $b_j$ is even for $j = 1, \dots, k$.
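This definition is easy to apply mechanically; the following small sketch (an illustration only) lists the channel indices of an even continued fraction:

```python
def channel_indices(b):
    """Channel indices i of an even continued fraction [b_1, ..., b_k]:
    i is a channel index iff b_i * b_{i+1} < 0 or b_i * b_{i+1} > 4,
    i.e. iff (b_i, b_{i+1}) is neither (2, 2) nor (-2, -2)."""
    return [i + 1 for i in range(len(b) - 1)
            if b[i] * b[i + 1] < 0 or b[i] * b[i + 1] > 4]

print(channel_indices([2, 4, 2]))    # [1, 2]: two channel indices, but only a path
                                     # with one channel exists (the [2,4,2] example)
print(channel_indices([2, -2, -2]))  # [1]: a single channel index
```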
We then explain how channels in allowable paths can be found if channel indices exist. We regard $D(p/q)$ as a graph on a disk $D$, with all vertices on $\partial D$, containing $\partial D$ as a sub-graph. Then an edge contained in $\partial D$ is called a *boundary edge*. On the other hand, an edge contained in the interior of $D$ is called an *interior edge*.
First, if $b_i b_{i+1} < 0$, then there is a channel in $F_{b_i} \cup F_{b_{i+1}}$, which starts and ends with boundary edges of $D(p/q)$. See the left side of Figure \[fig:ChannelOdd\] for a channel in $F_2 \cup F_{-2}$.
Next, if $b_i b_{i+1} > 0$ and $b_i \ge 4$, then there is a channel in $F_{b_i} \cup F_{b_{i+1}}$, which starts with a boundary edge and ends with an interior edge, but its union with a boundary edge of $D(p/q)$ is an allowable path. See the center of Figure \[fig:ChannelOdd\] for a channel in $F_4 \cup F_2$.
Similarly, if $b_i b_{i+1} > 0$ and $b_{i+1} \ge 4$, then there is a channel which starts with an interior edge and ends with a boundary edge, but its union with a boundary edge of $D(p/q)$ is an allowable path. See the right side of Figure \[fig:ChannelOdd\] for a channel in $F_2 \cup F_4$.
Figure \[fig:ChannelOdd\] (fig\_ChannelOdd.eps): channels in $F_2 \cup F_{-2}$ (left), $F_4 \cup F_2$ (center), and $F_2 \cup F_4$ (right).
Even if there are $n$ channel indices, there do not necessarily exist $n$ channels in a path. For example, if $[b_i, b_{i+1}, b_{i+2}] = [2,4,2]$, then the indices $i$ and $i+1$ are channel indices. However we cannot find a path with two channels in $F_2 \cup F_4 \cup F_2$, and can only find a path with one channel, see Figure \[fig:channel242\]. This situation also arises for $[b_i, b_{i+1}, b_{i+2}] = [-2,-4,-2]$. On the other hand, if $[b_i, b_{i+1}, b_{i+2}] = [2, b, 2]$ or $[-2,-b,-2]$ with $b \ge 6$, then we have a path with two channels. See Figure \[fig:channel262\] for $[2,6,2]$ and $[-2,-6,-2]$.
Figure \[fig:channel242\] (fig\_channel242.eps): paths with a single channel in the diagram of $[2,4,2]$.
Figure \[fig:channel262\] (fig\_channel262.eps): paths with two channels in the diagrams of $[2,6,2]$ and $[-2,-6,-2]$.
Here we give a proof of Proposition \[clm:3channels\].
Let $[b_1,\dots, b_k]$ be an even continued fraction of $p/q$. We may assume that $k \ge 3$ as already mentioned. As in [@Wu1999 Section 5], we may also assume that either
(i) $b_1 \ge 4$, or
(ii) $b_1 = 2$ and $b_2 \le -2$.
Thus, we can assume that the index $i=1$ is always a channel index. Suppose that there are three channel indices for $p/q=[b_1,\dots, b_k]$. Let $i$ and $j$ be the second and the third channel indices, respectively ($2 \le i < j < k$). We consider a path on $D(p/q)$ starting from $1/0$ with a bottom edge, since $b_1$ is positive by the assumption. The proof proceeds by a case-by-case argument. For each case, we construct an allowable path with three channels by combining the channels introduced in Figure \[fig:ChannelOdd\].
1. Assume that $b_1 b_2 < 0$, $b_i b_{i+1} <0$, and $b_j b_{j+1} < 0$.
This is the easiest case. Since the path starts with a bottom edge, the channel for the index $1$ starts with a bottom edge and ends with a top edge of $D(p/q)$. Since $1,i,j$ are the first three channel indices and $b_1$ is positive, $b_2 , \dots, b_i < 0$ and $b_{i+1}, \dots, b_{j} > 0$, and $b_{j+1}< 0$. Thus, the channel for the index $i$ starts with a top edge and ends with a bottom edge, and the channel for the index $j$ starts with a bottom edge and ends with a top edge. Then the three channels can be connected by boundary edges of $D(p/q)$ to become an allowable path for $p/q$. Note that this works even if $j = i + 1$. As typical situations, see Figure \[fig:path1\] for the paths in the diagrams corresponding to $[2,-2,2,2,-2]$ and $[2,-2,-2,2,-2]$.
Figure \[fig:path1\] (fig\_path1.eps): allowable paths with three channels for $[2,-2,2,2,-2]$ and $[2,-2,-2,2,-2]$.
2. Assume that $b_1 b_2 < 0$, $b_i b_{i+1} <0$, and $b_j b_{j+1} > 4$.
By the similar argument for Case 1, we can find two channels for the indices $1$ and $i$. The channel for the index $i$ ends with a bottom edge. The path constructed as in the center or the right side of Figure \[fig:ChannelOdd\] starts and ends with bottom edges. So they can be joined with boundary edges of $D(p/q)$ to form an allowable path for $p/q$. Note that this works even if $j = i + 1$. As typical situations, see Figure \[fig:path2\] for the paths in the diagrams corresponding to $[2, -2, 2, 2, 4]$ and $[2, -2, 2, 4, 2]$.
Figure \[fig:path2\] (fig\_path2.eps): allowable paths with three channels for $[2,-2,2,2,4]$ and $[2,-2,2,4,2]$.
3. Assume that $b_1 b_2 < 0$, $b_i b_{i+1} > 4$, and $b_j b_{j+1} < 0$.
In this case, the channel for the index $1$ starts with a bottom edge and ends with a top edge, the channel for the index $i$ starts and ends with a top edge, and the channel for the index $j$ starts with a top edge and ends with a bottom edge. Then the three channels can be connected by boundary edges of $D(p/q)$ to become an allowable path for $p/q$. Note that this works even if $j = i + 1$. As typical situations, see Figure \[fig:path3\] for the paths in the diagrams corresponding to $[2,-2,-4,2,2]$ and $[2,-2,-2,-4,2]$.
Figure \[fig:path3\] (fig\_path3.eps): allowable paths with three channels for $[2,-2,-4,2,2]$ and $[2,-2,-2,-4,2]$.
4. Assume that $b_1 b_2 > 4$, $b_i b_{i+1} < 0$, and $b_j b_{j+1} < 0$.
We omit details for this case since the proof is similar to that of Case 2.
5. Assume that $b_1 b_2 > 4$, $b_i b_{i+1} > 4$, and $b_j b_{j+1} < 0$.
In this case, we may assume that $b_1 \ge 4$, because if $b_1 = 2$, then $b_2 \le -2$ and then $b_1 b_2 <0$. Thus, the sub-diagram corresponding to $[2,4,2]$ does not appear even if $i=2$. Each of the channels for the indices $1$ and $i$ starts and ends with a bottom edge, and the channel for the index $j$ starts with a bottom edge and ends with a top edge. Then the three channels can be connected by boundary edges of $D(p/q)$ to become an allowable path for $p/q$. Note that this works even if $j = i + 1$. As typical situations, see Figure \[fig:path5\] for the paths in the diagrams corresponding to $[4,2,4,-2,-2]$ and $[4,4,2,-2,-2]$.
Figure \[fig:path5\] (fig\_path5.eps): allowable paths with three channels for $[4,2,4,-2,-2]$ and $[4,4,2,-2,-2]$.
6. Assume that $b_1 b_2 > 4$, $b_i b_{i+1} < 0$, and $b_j b_{j+1} > 4$.
We can prove this case by the similar argument. As typical situations, see Figure \[fig:path6\] for the paths in the diagrams corresponding to $[4,2,-2,-4, -2]$ and $[4,2,2,-2,-4]$.
Figure \[fig:path6\] (fig\_path6.eps): allowable paths with three channels for $[4,2,-2,-4,-2]$ and $[4,2,2,-2,-4]$.
7. Assume that $b_1 b_2 < 0$, $b_i b_{i+1} > 4$, and $b_j b_{j+1} > 4$.
In this case, we have to be careful since the sub-diagram corresponding to $[-2,-4,-2]$ may appear. By the assumptions, $b_2, \dots, b_i, b_{i+1}, \dots, b_j, b_{j+1} \le -2$. If $b_2 \le -4$, then $i =2$ and we can find two channels in $F_{b_2} \cup F_{b_3}$ and $F_{b_j} \cup F_{b_{j+1}}$ as in Case 5 even if $j = i+1 = 3$. As typical situations, see Figure \[fig:path7\_1\] for the paths in the diagrams corresponding to $[2,-4,-2,-4,-2]$ and $[2,-4,-4,-2,-2]$. Thus, we assume that $b_2 = \dots = b_i = -2$. Then $b_{i+1} \le -4$. If $b_{i+1} \le -6$, then we can find two channels in $F_{b_i} \cup F_{b_{i+1}}$ and $F_{b_j} \cup F_{b_{j+1}}$. As typical situations, see Figure \[fig:path7\_2\] for the paths in the diagrams corresponding to $[2,-2,-6,-2,-2]$ and $[2,-2,-2,-6,-2]$. Thus, we assume that $b_{i+1} = -4$. Then $j = i+1$. The case where $b_{j+1} = -2$ is excluded in Proposition \[clm:3channels\] as $[b_1,\dots, b_k] = [b_1, -2, \dots, -2, -4, -2, \dots, -2]$. Thus, we may assume that $b_{j+1} \le -4$. Then we can construct an allowable path for $p/q$ with three channels. As typical situations, see Figure \[fig:path7\_3\] for the paths in the diagrams corresponding to $[2,-2,-4,-4,-2]$ and $[2,-2,-2,-4,-4]$.
Figure \[fig:path7\_1\] (fig\_path7\_1.eps): allowable paths with three channels for $[2,-4,-2,-4,-2]$ and $[2,-4,-4,-2,-2]$.
Figure \[fig:path7\_2\] (fig\_path7\_2.eps): allowable paths with three channels for $[2,-2,-6,-2,-2]$ and $[2,-2,-2,-6,-2]$.
Figure \[fig:path7\_3\] (fig\_path7\_3.eps): allowable paths with three channels for $[2,-2,-4,-4,-2]$ and $[2,-2,-2,-4,-4]$.
8. Assume that $b_1 b_2 > 4$, $b_i b_{i+1} > 4$, and $b_j b_{j+1} > 4$.
In this case, we also have to be careful since the sub-diagram corresponding to $[2,4,2]$ may appear. We may assume that $b_1 \ge 4$. By the assumptions, $b_2, \dots, b_i, b_{i+1}, \dots, b_j, b_{j+1} >0$. If $b_2 \ge 4$, then $i =2$ and we can find two channels in $F_{b_2} \cup F_{b_3}$ and $F_{b_j} \cup F_{b_{j+1}}$ as in Case 5 even if $j = i+1 = 3$. As typical situations, see Figure \[fig:path8\_1\] for the paths in the diagrams corresponding to $[4,4,4,2,2]$ and $[4,4,2,4,2]$. Thus, we assume that $b_2 = \dots = b_i = 2$. Then $b_{i+1} \ge 4$. If $b_{i+1} \ge 6$, then we can find two channels in $F_{b_i} \cup F_{b_{i+1}}$ and $F_{b_j} \cup F_{b_{j+1}}$. As typical situations, see Figure \[fig:path8\_2\] for the paths in the diagrams corresponding to $[4,2,6,2,2]$ and $[4,2,2,6,2]$. Thus, we assume that $b_{i+1} = 4$. Then $j = i+1$. The case where $b_{j+1} = 2$ is excluded in Proposition \[clm:3channels\] as $[b_1,\dots, b_k] = [b_1, 2, \dots, 2, 4, 2, \dots, 2]$. Thus, we may assume that $b_{j+1} \ge 4$. Then we can construct an allowable path for $p/q$ with three channels. As typical situations, see Figure \[fig:path8\_3\] for the paths in the diagrams corresponding to $[4,2,4,4,2]$ and $[4,2,2,4,4]$.
Now we complete the proof of Proposition \[clm:3channels\].
Figure \[fig:path8\_1\] (fig\_path8\_1.eps): allowable paths with three channels for $[4,4,4,2,2]$ and $[4,4,2,4,2]$.
Figure \[fig:path8\_2\] (fig\_path8\_2.eps): allowable paths with three channels for $[4,2,6,2,2]$ and $[4,2,2,6,2]$.
Figure \[fig:path8\_3\] (fig\_path8\_3.eps): allowable paths with three channels for $[4,2,4,4,2]$ and $[4,2,2,4,4]$.
Computer search of exceptional surgeries {#sec:computer}
========================================
Thanks to Theorem \[thm:channel\], it suffices to investigate the links shown in Figures \[fig:fig\_SD-a\], \[fig:fig\_SD-b\], and \[fig:fig\_SD-c\]. In [@MPR], Martelli-Petronio-Roukema implemented a program, called `find_exceptional_fillings`, which enumerates all candidate exceptional surgeries on a given link (see also [@IchiharaMasai]). It utilizes the hyperbolicity verifier HIKMOT [@hikmot], and hence we can verify that all the slopes which do [*not*]{} appear in the output of `find_exceptional_fillings` give hyperbolic surgeries. We modified the code so that it only investigates the slopes specified in Figures \[fig:fig\_SD-a\], \[fig:fig\_SD-b\], and \[fig:fig\_SD-c\]. However, if we only use `find_exceptional_fillings` and HIKMOT, we get many “candidate exceptional slopes” which are quite likely to be hyperbolic. This is because SnapPy often finds non-geometric solutions (whose solution type is reported as ’contains negatively oriented tetrahedra’ in SnapPy), especially for closed manifolds. Most such manifolds have hyperbolic structures, but unfortunately SnapPy’s randomize function does not help in many cases. To prove those closed manifolds hyperbolic, we used [@hikmot Algorithm 2]: we drill out a closed geodesic and then, by refilling, obtain a new surgery description of the given closed manifold. By this procedure, we have a much better chance of obtaining geometric solutions. In a few cases [@hikmot Algorithm 2] does not suffice, and we need to take finite coverings before applying it. For more details, see the code available as ancillary files of the arXiv version of this paper. The results of the calculations are presented in Tables 1–6. We remark that the link (c-2) given in Theorem \[thm:channel\] yields no candidates, and hence no table is given for it. This completes the proof of the main theorem.
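To illustrate the drill-and-refill idea of [@hikmot Algorithm 2], the following schematic sketch uses SnapPy’s Python interface; it assumes a recent SnapPy whose `verify_hyperbolicity` accepts Dehn-filled descriptions, and it is *not* the `find_exceptional_fillings` code actually used to produce the tables. The census name `L5a1` and the filling slopes in the last two lines are purely illustrative placeholders.

```python
import snappy

def try_verify(M, tries=10):
    """Attempt verified hyperbolicity, retriangulating a few times (a sketch)."""
    for _ in range(tries):
        ok, _shapes = M.verify_hyperbolicity()
        if ok:
            return True
        M.randomize()
    return False

def verified_hyperbolic_filling(link_exterior, slopes):
    """Try to certify that the filling of link_exterior along `slopes` is hyperbolic;
    if the direct description resists verification, drill a dual curve and refill it,
    which gives a new surgery description of the same closed manifold."""
    M = link_exterior.copy()
    M.dehn_fill(slopes)
    if try_verify(M):
        return True
    for k in range(len(M.dual_curves())):
        N = M.drill(k)                           # drill out a (candidate) closed geodesic
        N.dehn_fill((1, 0), N.num_cusps() - 1)   # refill: same manifold, new description
        if try_verify(N):
            return True
    return False

E = snappy.Manifold('L5a1')                      # illustrative two-component link exterior
print(verified_hyperbolic_filling(E, [(-2, 1), (5, 1)]))
```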
Figure \[fig:fig\_SD-a\] (fig\_SD-a.eps): a surgery description of Link (a-1), with filling slopes $-1/m$ and $-1/n$.
Figure \[fig:fig\_SD-b\] (fig\_SD-b.eps): surgery descriptions of Links (b-1)–(b-4), with filling slopes $-1/m$, $-1/n$, and $-1/l$ (slope $1/l$ for (b-2)).
Figure \[fig:fig\_SD-c\] (fig\_SD-c.eps): surgery descriptions of Links (c-1) and (c-2), with filling slopes $-1/m$, $-1/n$, $-1/l$ and a component with framing ${\mathop{\mathrm{sgn}}\nolimits}(l)$.
Link slopes
------------------------- -----------------------------------------------------------------------------
$L_{[3,3]}$ $(-2,-2)$ $(-2,-1)$ $(-1,-4)$ $(-1,-3)$ $(-1,-1)$ $(5,\frac{4}{3})$
$L_{[3,2 n - 1]}$ $(n - 2,n - 2)$ $(n + 3,\frac{2 n - 1}{2})$
$L_{[2 m + 1,-3]}$ $(m - 3,\frac{2 m + 1}{2})$ $(m + 2,m + 2)$
$L_{[2 m + 1,3]}$ $(m - 1,m - 1)$ $(m + 4,\frac{2 m + 1}{2})$
$L_{[2 m + 1,-5]}$ $(m - 5,m)$
$L_{[5,2 n - 1]}$ $(n,n + 5)$
$L_{[2 m + 1,5]}$ $(m + 1,m + 6)$
$L_{[2 m + 1,2 n - 1]}$ $(m + n - 2,m + n + 2)$ $(\frac{2 m + 2 n - 1}{2},\frac{2 m + 2 n + 1}{2})$
: Exceptional fillings on Link (a-1)
Link slopes
------------------- -----------------
$L_{[2,2 n,2 l]}$ $(l - 1,l - 1)$
: Exceptional fillings on Link (b-1)
Link slopes
------------------------- ---------------------
$L_{[2,2 n - 1,- 2 l]}$ $(- l - 1,- l - 1)$
$L_{[2 m,2 n - 1,-2]}$ $(m + 1,m + 1)$
: Exceptional fillings on Link (b-2)
Link slopes
----------------------- ---------------------------------------------------
$L_{[2,2 n + 1,2]}$ $(-3,-1)$ $(-2,-2)$ $(-2,-1)$ $(-1,-4)$ $(-1,-1)$
$L_{[2,2 n + 1,2 l]}$ $(l - 1,l - 1)$
$L_{[2 m,2 n + 1,2]}$ $(m - 1,m - 1)$
: Exceptional fillings on Link (b-3)
Link slopes
----------------------------- -----------------------------------------------------------------
$L_{[3,2,3]}$ $(-3,-1)$ $(-2,-2)$ $(-2,-1)$ $(-1,-4)$ $(-1,-1)$
$L_{[3,2,2 l - 1]}$ $(l - 2,l - 2)$
$L_{[2 m + 1,2,3]}$ $(m - 1,m - 1)$
$L_{[2 m + 1,2,2 l - 1]}$ $(l + m,l + m)$ $(l + m,l + m + 1)$
$L_{[2 m + 1,-2,-3]}$ $(m + 2,m + 2)$
$L_{[2 m + 1,-2,2 l - 1]}$ $(l + m - 1,l + m)$ $(l + m,l + m)$
$L_{[2 m + 1,2 n,2 l - 1]}$ $(l + m + n - 2,l + m + n + 2)$ $(l + m + n - 1,l + m + n + 1)$
$(l + m + n,l + m + n)$
: Exceptional fillings on Link (b-4)
Link slopes
----------------------- -----------------
$L_{[3,2,2,2 l - 1]}$ $(l - 2,l - 2)$
: Exceptional fillings on Link (c-1)
Acknowledgements {#acknowledgements .unnumbered}
----------------
The authors are partially supported by JSPS KAKENHI Grant Numbers 18K0327, 19K03483, and 19K14525, respectively.
[99]{}
M. Culler, N. M. Dunfield, and J. R. Weeks, *SnapPy*, a computer program for studying the geometry and topology of 3-manifolds, available at `http://snappy.computop.org`
C. Delman, *Constructing essential laminations and taut foliations which survive all Dehn surgeries*, preprint (unpublished).
W. Floyd and A. Hatcher, *The space of incompressible surfaces in a $2$-bridge link complement*, Trans. Amer. Math. Soc. [**305**]{} (1988), no. 2, 575–599.

D. Gabai and U. Oertel, *Essential laminations in $3$-manifolds*, Ann. of Math. (2) **130** (1989), no. 1, 41–73.
H. Goda, C. Hayashi and H.-J. Song, *Dehn surgeries on $2$-bridge links which yield reducible $3$-manifolds*, J. Knot Theory Ramifications **18** (2009), no. 7, 917–965.
A. Hatcher and W. Thurston, *Incompressible surfaces in $2$-bridge knot complements*, Invent. Math. **79** (1985), no. 2, 225–246.
N. Hoffman, K. Ichihara, M. Kashiwagi, H. Masai, S. Oishi, and A. Takayasu, Verified Computations for Hyperbolic 3-Manifolds, Exp. Math. [**25**]{} (2016), no. 1, 66–78.
K. Ichihara, *Exceptional surgeries on components of 2-bridge links*, Arch. Math. (Basel), **99** (2012), no. 1, 71–79.
K. Ichihara and H. Masai, *Exceptional surgeries on alternating knots*, Comm. Anal. Geom. [**24**]{} (2016), no. 2, 337–377.

T. Li, *Laminar branched surfaces in $3$-manifolds*, Geom. Topol. **6** (2002), 153–194.
B. Martelli, C. Petronio, and F. Roukema. [*Exceptional Dehn surgery on the minimally twisted five-chain link.*]{} Comm. Anal. Geom. [**22**]{} (2014), no. 4, 689-735.
Y.-Q. Wu, *Sutured manifold hierarchies, essential laminations, and Dehn surgery*, J. Diff. Geom. **48** (1998), 407–437.
Y.-Q. Wu, *Dehn surgery on arborescent links*, Trans. Amer. Math. Soc. **351** (1999), no. 6, 2275–2294.

Y.-Q. Wu, *Persistently laminar branched surfaces*, Comm. Anal. Geom. [**20**]{} (2012), no. 2, 397–434.
Calculations of continued fractions
===================================
We give a lemma used to convert an even continued fraction into a continued fraction which is not necessarily even.
\[lem:cf\] Let $a, k$ be integers with $a \ne 0$, $k \ge 1$, and $y$ a rational number with $y \ne 0$. Then we have the following.
1. $[\underbrace{2,\dots,2}_k] = \dfrac{k}{k+1}$.
2. $[\underbrace{-2,\dots,-2}_k] = -\dfrac{k}{k+1}$.
3. $[a, \underbrace{2,\dots,2}_k, y] = [a-1, -(k+1), y-1]$.
4. $[a, \underbrace{-2,\dots,-2}_k, y] = [a+1, k+1, y+1]$.
The proof is achieved by an induction on $k$. We omit details here.
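Although we omit the inductive proof, the identities are easy to spot-check numerically; the following sketch (an illustration only) verifies them on a few sample values:

```python
from fractions import Fraction

def cf(coeffs):
    """[a_1, ..., a_k] = 1/(a_1 - 1/(a_2 - ... - 1/a_k)); entries may be rational."""
    v = Fraction(0)
    for a in reversed(coeffs):
        v = Fraction(1) / (Fraction(a) - v)
    return v

for k in range(1, 8):
    assert cf([2] * k) == Fraction(k, k + 1)         # identity (1)
    assert cf([-2] * k) == Fraction(-k, k + 1)       # identity (2)
    for a in (4, 6, -2):
        for y in (Fraction(5, 3), Fraction(-7, 2), 3):
            assert cf([a] + [2] * k + [y]) == cf([a - 1, -(k + 1), y - 1])   # identity (3)
            assert cf([a] + [-2] * k + [y]) == cf([a + 1, k + 1, y + 1])     # identity (4)
print("identities (1)-(4) hold on the sampled values")
```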
Here we give a proof of Lemma \[clm:ch3Exception\].
First we consider $[a, \underbrace{2,\dots,2}_{b}, 4, \underbrace{2,\dots,2}_{c}]$. By Lemma \[lem:cf\] (1) and (3), we have $$\begin{aligned}
[a, \underbrace{2,\dots,2}_{b}, 4, \underbrace{2,\dots,2}_{c}]
&= \dfrac{1}{a - \dfrac{1}{2 - \cdots -\dfrac{1}{2 - \dfrac{1}{4 - \dfrac{c}{c+1}}}}} \\
&= [a, \underbrace{2,\dots,2}_{b}, \dfrac{3c+4}{c+1}] \\
&= [a-1, -(b+1), \dfrac{3c+4}{c+1} -1 ] \\
&= [a-1, -(b+1), 2 + \dfrac{1}{c+1}] \\
&= [a-1, -(b+1), 2, -(c+1)] \, , \end{aligned}$$ where $a$ is even and $b,c$ have the opposite parities, and $a \ge 4$, $b \ge 1$, $c \ge 1$. Replacing $a -1$ into $2m+1$, we have $$[2m+1, -(b+1), 2, -(c+1)] \,$$ where $b,c$ have the opposite parities, and $m \ge 1$, $b \ge 1$, $c \ge 1$. If $b$ is odd and $c$ is even, then replacing $-(b+1)$ into $2n$ and $-(c+1)$ into $2l -1$, we have $$[2m+1, 2n, 2, 2l -1] \text{ \ with \ } m \ge 1, \ n \le -1, \ l \le -1.$$ Then we obtain the continued fractions in Lemma \[clm:ch3Exception\] (1). If $b$ is even and $c$ is odd, then replacing $-(b+1)$ into $2n-1$ and $-(c+1)$ into $2l$, we have $$[2m+1, 2n-1, 2, 2l] \text { \ with \ } m \ge 1, \ n \le -1, \ l \le -1.$$ Then we obtain the continued fractions in Lemma \[clm:ch3Exception\] (2).
Next we consider $[a', \underbrace{-2,\dots, -2}_{b}, -4, \underbrace{-2,\dots,-2}_{c}]$. By Lemma \[lem:cf\] (2) and (4), we have $$\begin{aligned}
[a', \underbrace{-2,\dots,-2}_{b}, -4, \underbrace{-2,\dots,-2}_{c}]
&= \dfrac{1}{a' - \dfrac{1}{-2 - \cdots -\dfrac{1}{-2 - \dfrac{1}{-4 - \left(-\dfrac{c}{c+1}\right)}}}} \\
&= [a', \underbrace{-2,\dots,-2}_{b}, \dfrac{-3c-4}{c+1}] \\
&= [a' +1, b+1, \dfrac{-3c-4}{c+1} +1 ] \\
&= [a' +1, b+1, -2 - \dfrac{1}{c+1}] \\
&= [a'+1, b+1, -2, c+1] \, , \end{aligned}$$ where $a'$ is even and $b,c$ have the opposite parities, and $a' \ge 2$, $b \ge 1$, $c \ge 1$. Replacing $a' +1$ into $2m+1$, we have $$[2m+1, b+1, -2, c+1] \,$$ where $b,c$ have the opposite parities, and $m \ge 1$, $b \ge 1$, $c \ge 1$. If $b$ is odd and $c$ is even, then replacing $b+1$ into $2n$ and $c+1$ into $2l -1$, we have $$[2m+1, 2n, -2, 2l -1] \text { \ with \ } m \ge 1, \ n \ge 1, \ l \ge 2.$$ Then we obtain the continued fractions in Lemma \[clm:ch3Exception\] (3). If $b$ is even and $c$ is odd, then replacing $b+1$ into $2n-1$ and $c+1$ into $2l$, we have $$[2m+1, 2n-1, -2, 2l] \text { \ with \ } m \ge 1, \ n \ge 2, \ l \ge 1.$$ Then we obtain the continued fractions in Lemma \[clm:ch3Exception\] (4), and complete the proof of Lemma \[clm:ch3Exception\].
Next we give a proof of Lemma \[clm:ch2\].
Let $[b_1, \dots, b_k]$ be an even continued fraction of $p/q$ with at most two channel indices and $k \ge 3$. As already introduced in the proof of Proposition \[clm:3channels\], we may assume that either (i) $b_1 \ge 4$, or (ii) $b_1 = 2$ and $b_2 \le -2$. First we consider an even continued fraction with just one channel index. Since the index $i=1$ is a channel index, an even continued fraction $[b_1, \dots , b_k]$ with just one channel index is one of either $$[b_1, \underbrace{2,\dots,2}_{k-1}] \, , \,
[b_1, \underbrace{-2,\dots,-2}_{k-1}] \, ,
\text{ or }[b_1, b_2]$$ with $|b_2|\ge 4$. Since $k \ge 3$, $[b_1, b_2]$ is unsuitable. We consider the two cases where (i) $b_1 \ge 4$, and (ii) $b_1 = 2$ and $b_2 \le -2$.
(i) Assume that $b_1 \ge 4$. By Lemma \[lem:cf\] (1), we have $$[b_1, \underbrace{2,\dots,2}_{k-1}] = \dfrac{1}{b_1 - \dfrac{k-1}{k}}
= \dfrac{1}{(b_1 -1) - \dfrac{1}{-k}}
= [b_1 - 1, -k] \, .$$ Replacing $b_1 - 1$ into $2m +1$ and $-k$ into $2n-1$, we have $$[b_1, \underbrace{2,\dots,2}_{k-1}] = [2m+1, 2n-1]\, ,$$ where $m,n$ are integers with $m \ge 1$, $n \le -1$. Similarly, by using Lemma \[lem:cf\] (2) and replacing $b_1 + 1$ into $2m+1$ and $k$ into $2n-1$, we have $$[b_1, \underbrace{-2,\dots,-2}_{k-1}]
= \dfrac{1}{b_1 - \left(- \dfrac{k-1}{k}\right)}
= [b_1 + 1, k] = [2m+1, 2n-1]\, ,$$ where $m,n$ are integers with $m \ge 2$, $n \ge 2$.
(ii) Assume that $b_1 = 2$. Using Lemma \[lem:cf\] (2) and replacing $k$ into $2n-1$, we have $$[2, \underbrace{-2,\dots,-2}_{k-1}]
= \dfrac{1}{2 - \left(- \dfrac{k-1}{k}\right)}
= [3,k] = [3, 2n-1]\, ,$$ where $n$ is an integer with $n \ge 2$.
Combining (i) with (ii), we obtain the continued fractions in Lemma \[clm:ch2\] (0).
Next we consider an even continued fraction with just two channel indices. As in the former case, we consider the two cases where (i) $b_1 \ge 4$, and (ii) $b_1 = 2$ and $b_2 \le -2$. Recall that the index $i=1$ is a channel index in each case.
(i) Assume that $b_1 \ge 4$. Let $a = b_1$. In this case, by the definition of a channel index, an even continued fraction with just two channel indices is one of the following:
(1-1) $[a,b,c]$, where $b,c$ are even, and $|b| \ge 4$, $|c| \ge 4$.
(1-2) $[a, b, \underbrace{2, \dots, 2}_c ]$, where $b$ is even and $c$ is odd, and $|b| \ge 4$, $c \ge 1$.
(1-3) $[a,b, \underbrace{-2, \dots, -2}_c ]$, where $b$ is even and $c$ is odd, and $|b| \ge 4$, $c \ge 1$.
(2-1) $[a, \underbrace{2, \dots, 2}_b, c]$, where $b$ is odd and $c$ is even, and $b \ge 1$, $|c| \ge 4$.
(2-2) $[a, \underbrace{2, \dots, 2}_b, \underbrace{-2, \dots, -2}_c]$, where $b,c$ have the same parity, and $b \ge 1$, $c \ge 1$.
(3-1) $[a, \underbrace{-2, \dots, -2}_b, c]$, where $b$ is odd and $c$ is even, and $b \ge 1$, $|c| \ge 4$.
(3-2) $[a, \underbrace{-2, \dots, -2}_b, \underbrace{2, \dots, 2}_c]$, where $b,c$ have the same parity, and $b \ge 1$, $c \ge 1$.
(ii) Assume that $b_1 = 2$ and $b_2 \le -2$. By the same enumeration, we have the following:
(1-1) $[2,b,c]$, where $b,c$ are even, and $b \le -4$, $|c| \ge 4$.
(1-2) $[2, b, \underbrace{2, \dots, 2}_c ]$, where $b$ is even and $c$ is odd, and $b \le -4$, $c \ge 1$.
(1-3) $[2, b, \underbrace{-2, \dots, -2}_c]$, where $b$ is even and $c$ is odd, and $b \le -4$, $c \ge 1$.
(3-1) $[2, \underbrace{-2, \dots, -2}_b, c]$, where $b$ is odd and $c$ is even, and $b \ge 1$, $|c| \ge 4$.
(3-2) $[2, \underbrace{-2, \dots, -2}_b, \underbrace{2, \dots, 2}_c]$, where $b,c$ have the same parity, and $b \ge 1$, $c \ge 1$.
Combining (i-1-1) with (ii-1-1), (i-1-2) with (ii-1-2), (i-1-3) with (ii-1-3), we obtain the following respectively.
(1-1) $[a,b,c]$, where $a,b,c$ are even, and $a \ge 2$, $|b| \ge 4$, $|c| \ge 4$.
(1-2) $[a, b, \underbrace{2, \dots, 2}_c ]$, where $a,b$ are even and $c$ is odd, and $a \ge 2$, $|b| \ge 4$, $c \ge 1$.
(1-3) $[a,b, \underbrace{-2, \dots, -2}_c ]$, where $a,b$ are even and $c$ is odd, and $a \ge 2$, $|b| \ge 4$, $c \ge 1$.
Note that, in (1-1)–(1-3), if $a = 2$, then $b \le -4$ holds.
From (i-2-1) and (i-2-2), we obtain the following respectively.
(2-1) $[a, \underbrace{2, \dots, 2}_b, c]$, where $a, c$ are even and $b$ is odd, and $a \ge 4$, $b \ge 1$, $|c| \ge 4$.
(2-2) $[a, \underbrace{2, \dots, 2}_b, \underbrace{-2, \dots, -2}_c]$, where $a$ is even and $b,c$ have the same parity, and $a \ge 4$, $b \ge 1$, $c \ge 1$.
Combining (i-3-1) with (ii-3-1), (i-3-2) with (ii-3-2), we obtain the following respectively.
(3-1) $[a, \underbrace{-2, \dots, -2}_b, c]$, where $a, c$ are even and $b$ is odd, and $a \ge 2$, $b \ge 1$, $|c| \ge 4$.
(3-2) $[a, \underbrace{-2, \dots, -2}_b, \underbrace{2, \dots, 2}_c]$, where $a$ is even and $b,c$ have the same parity, and $a \ge 2$, $b \ge 1$, $c \ge 1$.
The continued fractions (1-1) coincide with those in Lemma \[clm:ch2\] (1) by replacing $a$ into $2m$, $b$ into $2n$, $c$ into $2l$.
For the continued fraction (1-2), using Lemma \[lem:cf\] (1), we have $$[a, b, \underbrace{2, \dots, 2}_c ]
= \dfrac{1}{a - \dfrac{1}{b - \dfrac{c}{c+1}}}
= \dfrac{1}{a - \dfrac{1}{(b - 1) - \dfrac{1}{-(c+1)}}} = [a, b-1, -(c+1)] \, ,$$ where $a,b$ are even and $c$ is odd, and $a \ge 2$, $|b| \ge 4$, $c \ge 1$. In addition, if $a = 2$, then $b \le -4$ holds. Replacing $a$ into $2m$, $b -1$ into $2n-1$, $-(c+1)$ into $-2l$, we have $$[2m, 2n-1, -2l]$$ with $m \ge 1$, $|n| \ge 2$, $l \ge 1$. In addition, if $m = 1$, then $n \le -2$ holds. Then we obtain the continued fractions listed in Lemma \[clm:ch2\] (2).
By the similar argument, for the continued fraction (1-3), using Lemma \[lem:cf\] (2) and replacing $a,b,c$ suitably, we obtain the continued fractions listed in Lemma \[clm:ch2\] (3).
For the continued fraction (2-1), using Lemma \[lem:cf\] (3), we have $$[a, \underbrace{2, \dots, 2}_b, c] = [a-1, -(b+1), c-1] \, ,$$ where $a, c$ are even and $b$ is odd, and $a \ge 4$, $b \ge 1$, $|c| \ge 4$. Replacing $a -1$ into $2m+1$, $-(b+1)$ into $2n$, $c-1$ into $2l-1$, we have $$[2m+1, 2n, 2l-1] \text{ with } m \ge 1, \ n \le -1, \ |l| \ge 2. \tag{$*$}$$ For the continued fraction (2-2), using Lemma \[lem:cf\] (2), we have $$[a, \underbrace{2, \dots, 2}_b, \underbrace{-2, \dots, -2}_c]
= \dfrac{1}{a - \dfrac{1}{2 - \cdots -\dfrac{1}{2 - \left(-\dfrac{c}{c+1}\right)}}}
= [a, \underbrace{2, \dots, 2}_b, -\dfrac{c+1}{c}] \, .$$ Then, by Lemma \[lem:cf\] (3), we have $$[a, \underbrace{2, \dots, 2}_b, -\dfrac{c+1}{c}]
= [a-1, -(b+1), -\dfrac{c+1}{c} -1]
= [a-1, -(b+1), -2, c] \, ,$$ where $a$ is even and $b,c$ have the same parity, and $a \ge 4$, $b \ge 1$, $c \ge 1$. Replacing $a -1$ into $2m+1$, we have $$[2m+1, -(b+1), -2, c] \,$$ where $b,c$ have the same parity, and $m \ge 1$, $b \ge 1$, $c \ge 1$. In this case, if $c = 1$, then $b$ is odd, and we have $$[2m+1, -(b+1), -2, 1] = [2m+1, -(b+1), -3]$$ which can be regarded as the case where $l = -1$ in ($*$) by replacing $-(b+1)$ into $2n$.
Thus, from (2-1) and (2-2), we obtain the following three families of continued fractions:
(a) $[2m+1, 2n, 2l-1]$ with $m \ge 1$, $n \le -1$, $l \ne 0, 1$.
(b) $[2m+1, 2n, -2, 2l-1]$ with $m \ge 1$, $n \le -1$, $l \ge 2$.
(c) $[2m+1, 2n-1, -2, 2l]$ with $m \ge 1$, $n \le -1$, $l \ge 1$.
By a similar argument for the continued fractions (3-1) and (3-2), we have the following three families:
(a)' $[2m+1,2n, 2l -1]$ with $m \ge 1$, $n \ge 1$, $l \ne 0,1$.
(b)' $[2m+1, 2n, 2, 2l-1]$ with $m\ge 1$, $n \ge 1$, $l \le -1$.
(c)' $[2m+1, 2n-1, 2, 2l]$ with $m\ge 1$, $n \ge 2$, $l \le -1$.
Combining (a) with (a)’, we obtain the continued fractions listed in Lemma \[clm:ch2\] (4). The continued fractions (b), (c), (b)’, (c)’ coincide with those listed in Lemma \[clm:ch2\] (5), (6), (7), (8), respectively. Now we complete the proof of Lemma \[clm:ch2\].
---
abstract: 'We present and discuss correlations for optical and near-infrared ($5500-10030$ Å) line intensity measurements at many positions in the Crab Nebula. These correlations suggest the existence of gas produced by a range of nuclear processing, from material in which synthesis ended with the CNO-cycle, to some helium-burning and nitrogen depletion, to regions containing enriched products of oxygen-burning. The latter exhibit a gradual, linear rise of $[$Ni II$]$ emission with increasing argon enrichment, whereas gas with less nuclear processing shows markedly different $[$Ni II$]$ emission characteristics, including the highest derived abundances. This suggests two origins for stable, neutron-rich nickel in the nebula: a type of “alpha-rich freezeout” in the more highly processed material, and possibly removal of ions from the neutron star in other regions. In addition, the data indicate that anomalously strong observed $[$C I$]$ emission comes from broad, low-ionization H$^{+}$ to H$^{0}$ transition zones. Although the strongest He I emission could also be enhanced in similar low-ionization gas, correlations between relevant line ratios argue against that interpretation, strengthening the case for an exceptionally high helium mass fraction in some locations.'
author:
- 'Gordon M. MacAlpine, Tait C. Ecklund, William R. Lester, Steven J. Vanderveer'
- 'Louis-Gregory Strolger'
title: |
A SPECTROSCOPIC STUDY OF NUCLEAR PROCESSING\
AND PRODUCTION OF ANOMALOUSLY STRONG LINES\
IN THE CRAB NEBULA [^1]
---
INTRODUCTION
============
Young supernova remnants are excellent laboratories for investigating how stars make elements, and the Crab Nebula in particular can provide unique information about the precursor star, the supernova event, associated heavy element production, and the environment of a highly energetic pulsar. It is the bright remnant of a core-collapse supernova observed in 1054 A.D. Its age and location, roughly 180 pc away from the plane of the Galaxy, suggest that the ejecta are not significantly contaminated by swept-up interstellar material. Furthermore, measured electron temperatures in the gas (Woltjer 1958; Miller 1978; Fesen & Kirshner 1982; MacAlpine et al. 1989, 1996) along with the lack of other possible evidence for shocks (Frail et al. 1995) imply that the line-emitting gas shines primarily because of photoionization by the locally-generated synchrotron radiation field (see also Davidson & Fesen 1985). Therefore it can be analyzed using powerful numerical photoionization modeling codes. It is generally believed that the supernova precursor star initially contained about $9-11$ M$_{\odot}$ (e.g., Arnett 1975; Nomoto 1985), representing the important low end of the Type II supernova mass range, below which stars could ignite carbon degenerately and above which successive nuclear reaction stages would be expected to take place through silicon burning. As discussed by Woosley & Weaver (1986a), the applicable stellar models allow for a number of explosive and nucleosynthesis possibilities.
Spectroscopic and photometric investigations of the Crab Nebula to date have indicated several apparent “gas components.” The majority of the observed nebular gas is helium (e.g., MacAlpine et al. 1989), consisting of less than $2$ M$_{\odot}$ (MacAlpine & Uomoto 1991). It has been postulated that this represents “helium mantle” material from deep within the original star, ejected by the explosive event. This is consistent with some stellar models, or scenarios involving another star, in which outer layers of the precursor were lost prior to the core-collapse event. Most of this helium-mantle gas appears to be nitrogen-rich, confirming its origin from CNO processing (MacAlpine et al. 1996).
There also is a major component of gas, primarily in the southern part of the nebula near the pulsar, which is significantly nitrogen-poor and much of which is sulfur-rich (MacAlpine et al. 1996). It was suggested that this gas resulted from localized oxygen-burning episodes, consistent with stellar models (Woosley & Weaver 1986b, 1995) that involve off-center (in a shell) oxygen-burning. Strolger & MacAlpine (1996) provided a preliminary demonstration that this explanation is probably correct.
Still another significant nebular component or “anomaly” involves an apparent helium-rich torus viewed as an east-west band across the pulsar region, which constitutes approximately 25% of the visible material (Uomoto & MacAlpine 1987; MacAlpine et al. 1989; MacAlpine & Uomoto 1991). The computed helium mass fraction is about 95%, and there is not yet a realistic explanation for this apparent structure.
Other known apparent anomalies in the Crab Nebula include exceptionally strong $[$Ni II$]$ and $[$C I$]$ line emission, resulting in large spatial variations for deduced abundances of nickel (along with iron) and carbon. Strong $[$Ni II$]$ $\lambda$7378 has been reported by numerous authors (Miller 1978; Fesen & Kirshner 1982; Henry et al. 1984; MacAlpine et al. 1989). The latter work suggested neutron-rich nickel isotopic abundance enhancements, compared with solar, by factors of 5-50 at various locations. Iron was also found to vary widely, together with nickel but at a much lower level; deduced nickel/iron abundance ratios are roughly 60-75 times the solar value.
Henry et al. (1984) found surprisingly high $[$C I$]$ $\lambda$$\lambda$9823,9850 emission at several locations in the Crab Nebula, where it was measured to be as much as 7 times stronger than predicted by the photoionization models of Henry & MacAlpine (1982). To account for this, it was postulated that the $[$C I$]$ lines might arise from collisional excitation involving hydrogen atoms as well as electrons.
As part of an effort to develop a more consistent and accurate overall understanding of the Crab Nebula, we have measured relative line intensities of He I $\lambda$5876, $[$O I$]$ $\lambda$6300, H$\alpha$, $[$N II$]$ $\lambda$6583, $[$Ar III$]$ $\lambda$7136, $[$Ni II$]$ $\lambda$7378, $[$S III$]$ $\lambda$9069, $[$S III$]$ $\lambda$9531, and $[$C I$]$ $\lambda$9850 for roughly 200 well-distributed positions throughout the emitting gas. In this paper, we address all of the above issues (apparent gas components and anomalies) using comparisons among these line measurements. A follow-up paper, involving more in-depth photoionization analyses for deriving improved gas physical conditions and chemical abundances, is planned.
The spectroscopic observations are described in § 2. Then § 3 presents emission-line correlations and resulting inferences about the nebular gas. The correlations illustrate a broad range of nuclear processing and confirm significant enrichment with products of oxygen-burning in some areas. They also suggest two distinct nickel/argon line relationships. In addition, the importance of optical depth for enhanced $[$C I$]$ lines is demonstrated, along with the [*lack*]{} of a dominant role for optical depth in the production of strong He I emission. A summary discussion is given in § 4.
OBSERVATIONS
============
Spectroscopy covering the wavelength range from approximately 5500 to 7700 Å was obtained through a long slit at various orientations across the pulsar during the nights of 1995 January 21 and 22, at the 2.4-m Hiltner telescope of the former Michigan-Dartmouth-MIT Observatory, which is located on Kitt Peak. The Mark III spectrograph was used with a TEK 1024$\times$1024 CCD and a 600 lines mm$^{-1}$ grism blazed at 5800 Å, resulting in roughly 2.3 Å pixel$^{-1}$ dispersion. The slit width was 12, projected on the sky, and the effective projected length was about 45. The slit positions employed for this project are illustrated in Figure 1. Each involves alignment through the pulsar and another star, and they are the same as some of the slit orientations used by MacAlpine et al. (1996), in which extensive N-rich and N-poor gas components were first identified. However, the spectral coverage extends further to the red for the optical observations presented here.
Near-infrared spectral coverage (from 7270 to 10030 Å) was obtained during the nights of 2006 January 4 and 5, at the 2.7-m Harlan J. Smith telescope of the McDonald Observatory. The Large Cassegrain Spectrograph was used with the CC1 1024$\times$1024 CCD, grating No. 42, and an RG610 blocking filter. The dispersion through a 2-wide slit was about 2.7 Å pixel$^{-1}$. The approximately 26-long slit was placed in the positions shown with darker outlines in Figure 1 (with overlapping coverage to increase the length in the roughly north-south direction).
Observing conditions on all nights listed above were clear, and exposure times were at least 1 hour at every slit position. All of the optical and near-infrared two-dimensional images were carefully aligned along pixel columns or rows, and the data were reduced to relative flux against linear wavelength using IRAF[^2] software, along with observations of both lamp and moonlit-sky continua, wavelength calibration lamps, and spectrophotometric standard stars. Sky spectral observations near the nebula were employed for removing night sky emission from the nebular spectra.
The goal was to identify and extract useful one-dimensional spectra at many positions spatially along the slits. Resulting measured line intensities were corrected for differential dust extinction of E(B-V) = 0.47 (see Davidson & Fesen 1985 and references therein) using the average interstellar extinction table from Osterbrock (1989).
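As a rough illustration of this correction (the actual reduction was done with IRAF), a measured flux can be dereddened as in the following sketch; the `k_lambda` values below are hypothetical placeholders standing in for the Osterbrock (1989) extinction-table entries, not the values actually used.

```python
# Minimal sketch of the dereddening step, assuming E(B-V) = 0.47 and an
# extinction curve k(lambda) = A(lambda)/E(B-V) taken from a standard table.
E_BV = 0.47

k_lambda = {          # hypothetical A(lambda)/E(B-V) values, for illustration only
    5876: 2.6,
    6563: 2.5,
    7378: 2.2,
    9850: 1.6,
}

def deredden(flux_obs, wavelength):
    """Correct an observed flux for foreground extinction:
    F_corr = F_obs * 10**(0.4 * A_lambda), with A_lambda = E(B-V) * k(lambda)."""
    return flux_obs * 10 ** (0.4 * E_BV * k_lambda[wavelength])

# e.g. a dereddened [Ni II] 7378 / H-alpha ratio from observed fluxes:
ratio = deredden(1.0, 7378) / deredden(2.0, 6563)
print(ratio)
```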
To obtain one-dimensional optical spectra, the dispersed radiation for every spatial pixel along the slits was carefully examined and compared with the associated two-dimensional image. Emission knots were identified, and appropriate combinations of two or three spatial pixels were averaged and extracted for our measurements. In all cases, the individual spectral fluxes for averaged spatial pixels were required to be comparable (within 25% of each other), and no single spatial pixels were used (in order to minimize the possibility of compromising the data by imperfect dispersion pixel alignment). Whereas preliminary measurements were made by more than one person, all of the data employed here were ultimately, consistently measured by the first author.
A sample optical spectrum is presented in Figure 2. This location was selected for illustration because it has relatively strong $[$O I$]$, $[$Ar III$]$, and $[$Ni II$]$ lines. Two emission systems (e.g., from filaments at the front and back of the expanding nebula) are often represented in the spectra; and here a lower-intensity, near-side system with very weak $[$N II$]$ emission can also be seen. Sometimes there is line blending, particularly in the wavelength range with $[$N II$]$ $\lambda$6548, H$\alpha$, and $[$N II$]$ $\lambda$6583. For such cases, IRAF deblending routines were employed, and occasionally the knowledge that $[$N II$]$ $\lambda$6583 $\approx$ 3 $[$N II$]$ $\lambda$6548 was used in estimating line fluxes. In general, repeated measurements of line fluxes were within 5% of each other. Continuum placement in the optical spectra was reasonably straightforward, and the principal source of error was line blending. Experiments showed that using various reasonable combinations of two or three spatial pixels at a filament location could lead to measurement differences up to 25%, but these changes were always aligned along, or consistent with, the trends to be illustrated and discussed in § 3; so specific pixel selection (following established guidelines) should not alter the conclusions. Actual measurement errors for $\lambda$ $<$ 7500 Å lines are estimated to be less than $\pm$10%.
A sample near-infrared spectrum, illustrating strong $[$Ni II$]$, $[$S III$]$, and $[$C I$]$ emission, is presented in Figure 3. Whereas there could have been more than one dynamically different emission system represented, line blending was never as much of a problem as in the H$\alpha$ and $[$N II$]$ region. Also, we note that these spectra generally involved larger numbers of averaged pixels than was the case for the optical spectra. Although the sky was clear, there were atmospheric seeing problems when the near-infrared data were obtained. This and the larger slit width resulted in somewhat less resolution. In order to obtain broad wavelength coverage, potentially useful line-emission positions in each two-dimensional, near-infrared spectral image were identified and subsequently located as accurately as possible in the corresponding optical two-dimensional image. Then optimal numbers of pixels were averaged to obtain consistent (for all wavelengths) one-dimensional spectra for these filaments. Because of the different slit widths and the fact that we could not expect exact correspondence between optical and near-infrared pixel groupings, it was necessary to normalize the spectra using the overlapped $[$Ni II$]$ $\lambda$7378 line in both wavelength ranges. Although these normalization corrections could be quite accurate and were always less than a factor of 2, they still provided a significant potential source of error (estimated as much as 25%) when comparing near-infrared and optical line intensities.
Another source of error, for one near-infrared line, is telluric water vapor absorption. A plot of measured and reddening corrected values for $[$S III$]$ $\lambda$9069 against $[$S III$]$ $\lambda$9531 shows a tightly defined linear relation (with a linear correlation coefficient greater than 0.99), as it should for emission from two transitions that arise from the same upper atomic level. However, the predicted slope is about 2.6 for $[$S III$]$ $\lambda$9531 on the vertical axis, according to the relevant transition probabilities given in the current NIST Atomic Spectra Database, whereas the slope of the line in our data is close to 2.0. This type of situation has been reported before (Vermeij et al. 2002) and is caused by water vapor absorption in the 9531 Å wavelength region. The absorption is clearly seen in our standard star spectra, and it should not significantly affect either the $[$S III$]$ $\lambda$9069 or $[$C I$]$ $\lambda$9850 line measurements. Because of this, in § 3 the well-measured $[$S III$]$ $\lambda$9069 line will be used for examining spectral trends, rather than the stronger $[$S III$]$ $\lambda$9531.
TRENDS IN THE DATA
==================
All measured He I $\lambda$5876, $[$O I$]$ $\lambda$6300, $[$N II$]$ $\lambda$6583, $[$S II$]$ $\lambda$6731, $[$Ar III$]$ $\lambda$7136, $[$Ni II$]$ $\lambda$7378, $[$S III$]$ $\lambda$9069, and $[$C I$]$ $\lambda$9850 emission line intensities were reddening corrected and normalized to the H$\alpha$ line. Then they were plotted against each other in various combinations, as we looked for trends that might provide new insights for understanding the physical conditions and chemical abundances in the Crab Nebula, with the future plan of developing improved photoionization models for the emitting gas.
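The comparisons described here amount to computing linear (Pearson) correlation coefficients between pairs of line ratios; a minimal sketch is given below, with hypothetical stand-in arrays in place of the actual dereddened, H$\alpha$-normalized measurements.

```python
import numpy as np

# Hypothetical stand-ins for dereddened line intensities normalized to H-alpha,
# one entry per measured filament position.
s2_6731 = np.array([0.8, 1.5, 2.1, 0.4, 3.0])   # [S II] 6731 / H-alpha
ar3_7136 = np.array([0.3, 0.6, 0.9, 0.2, 1.2])  # [Ar III] 7136 / H-alpha

# Linear correlation coefficient, as quoted for the line-ratio plots.
r = np.corrcoef(s2_6731, ar3_7136)[0, 1]
print(f"linear correlation coefficient r = {r:.2f} for {len(s2_6731)} positions")
```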
The Range of Nuclear Processing and Confirmation of Regions with Enhanced Products of Oxygen Burning
----------------------------------------------------------------------------------------------------
As mentioned in the Introduction, MacAlpine et al. (1996) identified regions in the Crab Nebula with either very strong or very weak $[$N II$]$ emission (see their Figure 1). These gas regimes appear to be distinct both spatially and dynamically. The $[$N II$]$-weak gas often (but not always, as discussed below) shows unusually strong $[$S II$]$ emission, and it was suggested that the latter probably indicates areas which contain products of oxygen-burning. This would be consistent with some stellar models of Woosley & Weaver (1986b, 1995). MacAlpine et al. then used photoionization model analyses to derive overabundances of nitrogen by factors of 3 to 7 (compared with solar) in $[$N II$]$-strong regions and overabundances of Si, S, and Ar (assuming solar nitrogen) by factors of 10 to 20 for some $[$N II$]$-weak areas.
Enhanced $[$S II$]$ emission could also result from low-ionization, warm H$^{+}$$\rightarrow$H$^{o}$ transition zones in the emitting gas (Henry & MacAlpine 1982), wherein S$^{+}$ ions are effectively collisionally excited by thermal electrons. Therefore, the hypothesis of oxygen-burning products should be further investigated and convincingly demonstrated before its acceptance. There are ways to examine this issue with the current data; the most straightforward involve correlations between nitrogen and sulfur emission, between sulfur emission from different ions, or between emission from different elements expected to be produced together by the oxygen-burning process.
Figure 4 shows $[$N II$]$ $\lambda$6583 plotted against $[$S II$]$ $\lambda$6731. We note the large and comparable range in intensities on both axes. The nitrogen emission can be quite strong with very weak sulfur intensities, representing gas which has progressed no further than the CNO-cycle. Similarly, the strongest sulfur emission correlates only with weak nitrogen, suggestive of advanced processing through oxygen-burning. It may also be seen that weak nitrogen does not always correspond with strong sulfur emission, implying intermediate regions where some helium-burning has taken place and nitrogen has been converted into neon (see Pequignot & Dennefeld 1983; Nomoto 1985; Henry 1986). Regarding the latter point, infrared neon lines have been observed in the Crab Nebula by Temim et al. (2006). Figure 9 of Temim et al. shows particularly strong $[$Ne II$]$ 12.8 $\mu$m emission in an area roughly 15 SW of the pulsar. One of the slits used in this study crosses over an emitting filament at that location, and averages of our measurements there are $[$N II$]$ $\lambda$6583/H$\alpha$ = 0.43 and $[$S II$]$ $\lambda$6731/H$\alpha$ = 1.5. As may be deduced from our Figure 4, these are low $[$N II$]$ and modest $[$S II$]$ values that could be expected for a region where helium-burning and nitrogen depletion (to neon) have occurred, but significant oxygen-burning has not taken place.
The data of Figure 4 illustrate the [*range*]{} of nuclear processing, but not necessarily the relative [*amounts*]{} of material for the various nucleosynthesis stages. Because of line blending and other factors, not all emitting positions could be measured and represented.
Figure 5 is a plot of $[$S III$]$ $\lambda$9069 versus $[$S II$]$ $\lambda$6731, which argues that the latter line does not arise predominantly in optically-thick H$^{+}$$\rightarrow$H$^{o}$ transition regions. Even with potential problems involving normalization of the near-infrared and optical spectra, there is a reasonably strong correlation, with a linear correlation coefficient of 0.75 for 37 points. Therefore, since these different ionization stages (only one of which might be produced in an extended low-ionization zone) increase together, it would appear that strong sulfur emission observed in the Crab Nebula is mainly a result of enhanced sulfur abundance.
In addition to sulfur, other primary products of oxygen-burning include silicon and argon. Silicon emission is not strong in the optical and near-infrared regions, but $[$Ar III$]$ $\lambda$7136 and $\lambda$7751 (weaker) can be measured in our data. Therefore, in Figure 6 we plotted the correlation between $[$Ar III$]$ $\lambda$7136 and $[$S II$]$ $\lambda$6731 for 182 locations from the optical slit spectra. In this case, there is no potential problem with spectral normalization, and the correlation is quite high (linear correlation coefficient of 0.91). This confirms that the argon and sulfur emission must represent abundances with a related origin.
Evidence presented here supports the hypothesis (MacAlpine et al. 1996; Strolger & MacAlpine 1996) that certain regions in the Crab Nebula are heavily enriched with products of oxygen-burning. This has important implications for understanding stellar models, and it will be explicitly considered in our follow-up photoionization model analyses. For instance, as pointed out by Henry (1993), high silicon and sulfur abundances would cause infrared fine-structure lines such as $[$Si II$]$ 34.8 $\mu$m and $[$S III$]$ 33.6 $\mu$m to become more important coolants for the gas, thereby influencing the line spectra in the optical and near-infrared. For lower electron temperature, longer-wavelength collisionally-excited lines such as $[$S II$]$ $\lambda\lambda$6716,6731 and $[$N II$]$ $\lambda\lambda$6548,6583 would be enhanced at the expense of shorter wavelength lines like $[$O II$]$ $\lambda\lambda$3726,3729. Finally, we note that the above infrared fine-structure $[$Si II$]$ and $[$S III$]$ lines were recently observed to have significant intensities at some locations in the Crab Nebula (Temim et al. 2006).
Products of oxygen-burning are not unique to the Crab Nebula. Cassiopeia A, another young, collapse-driven, somewhat more massive supernova remnant, also exhibits extensive regions with high concentrations of silicon-group elements like sulfur and argon, in which it is recognized that oxygen-burning has taken place while silicon-burning has been incomplete (Chevalier & Kirshner 1979; Willingale et al. 2002).
$[$Ni II$]$ $\lambda$7378 and $[$Ar III$]$ $\lambda$7136 Line Trends: Possible Implications for the Origin of Nickel
--------------------------------------------------------------------------------------------------------------------
In order to investigate observed very strong nickel emission in the Crab Nebula, we plotted $[$Ni II$]$ intensity against every other measured line, and we found a particularly interesting correlation with argon. In Figure 7, all optical data are presented for the $[$Ni II$]$ $\lambda$7378 and $[$Ar III$]$ $\lambda$7136 lines. Points below the diagonal show a linear trend (correlation coefficient of 0.86) involving more highly processed gas, whereas points above the diagonal show the strongest $[$Ni II$]$ emission and tend to represent regions where less nucleosynthesis has occurred.
To investigate or highlight these apparent correlations further, we considered only data for the very lowest and highest nitrogen emission. Following guidelines similar to those used by MacAlpine et al. (1996), we identified the subset of points with measured $[$N II$]$ $\lambda$6583 $<$ 0.55 H$\alpha$ and $[$N II$]$ $\lambda$6583 $<$ $[$S II$]$ $\lambda$6731 as being extreme “low-N” locations where advanced nuclear processing has taken place. Similarly those points with $[$N II$]$ $\lambda$6583 $>$ 3 H$\alpha$ and $[$N II$]$ $\lambda$6583 $>$ $[$S II$]$ $\lambda$6731 were selected as being “high-N,” where nucleosynthesis stopped with the CNO-cycle. Figure 8 illustrates how these data appear in the $[$Ni II$]$ $\lambda$7378 versus $[$Ar III$]$ $\lambda$7136 plane, where filled squares represent high-N and open squares (with a +) denote low-N. The separation of trends in this plot is remarkable, with the low-N (more highly processed) positions having a linear correlation coefficient of 0.94.
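The low-N and high-N subsets are defined by simple cuts on the reddening-corrected, H$\alpha$-normalized line ratios, so the selection is straightforward to reproduce. The following sketch uses hypothetical arrays in place of our measured intensities:

```python
import numpy as np

# Hypothetical reddening-corrected intensities, already normalized to
# H-alpha, one entry per measured slit position.
nii6583   = np.array([0.30, 3.50, 0.45, 4.10, 0.50, 2.90])
sii6731   = np.array([1.20, 0.80, 0.90, 0.60, 1.40, 0.50])
niii7378  = np.array([0.25, 0.60, 0.20, 0.70, 0.30, 0.45])
ariii7136 = np.array([0.10, 0.05, 0.08, 0.06, 0.12, 0.04])

# "Low-N": [N II] 6583 < 0.55 H-alpha and [N II] 6583 < [S II] 6731.
low_n = (nii6583 < 0.55) & (nii6583 < sii6731)
# "High-N": [N II] 6583 > 3 H-alpha and [N II] 6583 > [S II] 6731.
high_n = (nii6583 > 3.0) & (nii6583 > sii6731)

# Linear correlation of [Ni II] with [Ar III] within each subset.
r_low_n = np.corrcoef(ariii7136[low_n], niii7378[low_n])[0, 1]
r_high_n = np.corrcoef(ariii7136[high_n], niii7378[high_n])[0, 1]
print(low_n, high_n, r_low_n, r_high_n)
```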
Some points below the diagonal of Figure 7 do not appear in Figure 8, either because they do not have measurable $[$N II$]$ emission or because it is somewhat higher than the imposed limit for inclusion in Figure 8. We also note that, whereas emission from several pixel groups along one extended filament might conceivably create an almost linear structure of points in a diagram, the measurements for Figures 7 and 8 come from widely separated locations in all of the slits.
The linear correlation between $[$Ni II$]$ and $[$Ar III$]$ emission from the most highly processed gas could result from a type of “alpha-rich freezeout” (see Woosley & Weaver 1995; Jordan et al. 2003), whereby silicon-group elements in core-collapse supernovae may be heated by a shock wave and broken down into nucleons and alpha particles. Then, as the gas cools, these particles can reassemble into various stable iron-peak nuclei. Another possibly contributing process has been discussed by Thielemann & Arnett (1985), who wrote: “During O-burning, temperatures favor the photodisintegration of heavy nuclei (produced by the s-process) into Fe-peak nuclei. This is seen for $^{60}$Ni and partially for $^{62}$Ni and $^{58}$Fe, depending on the prior neutron excess $\eta$.”
The above explanations would not apply to the steeper correlations in Figures 7 and 8, where $[$Ni II$]$ $\lambda$7378 can be strongest in less-processed gas. This may be an indication that some iron-peak, neutron-rich nuclei were removed from the surface of the neutron star. For pulsars in general, this possibility was investigated by Ruderman & Sutherland (1975), who concluded that extremely strong surface magnetic fields would not permit the release of heavy ions for most pulsars. However, they also stated that the Crab Nebula’s neutron star may be an exception to this rule because of very high surface temperature. In this regard, we note that an extremely high temperature region on this young neutron star may have been detected by Weisskopf et al. (2004). Furthermore, Michel et al. (1991) mapped an extensive north-south relativistic wind for the nebula, in the directions of the highest apparent concentrations of nickel (MacAlpine et al. 1989). For line-emitting knots immersed in this wind, the highest $[$Ni II$]$ emission was measured on the side facing the pulsar in each case (MacAlpine et al. 1994; MacAlpine & Lawrence 1994). As Freiburghaus et al. (1999) have noted, the origins of neutron-rich heavy elements are not yet well understood. If ions can leave the surface of the neutron star in the Crab Nebula, then young pulsars in general could represent sources for some heavy nuclei.
On The Origin of Strong $[$C I$]$ Emission
------------------------------------------
Henry et al. (1984) measured the strength of $[$C I$]$ $\lambda$9850 in some filaments to be at least several times stronger than predicted by their photoionization models, and they suggested that previously neglected collisional excitation by H$^{0}$ may need to be considered as a potentially important process for production of this emission.
If we can understand where and how the $[$C I$]$ emission is produced, we may be able to use it for estimating the carbon abundance relative to other elements. This, in turn, could provide additional useful insights into the extent of nuclear processing at various locations. As noted by Nomoto (1985) and Henry (1986), the amounts of carbon and oxygen would be depressed somewhat by CNO processing and then would increase (above solar) as a result of helium-burning. Also, Nomoto pointed out that an improved understanding of the carbon abundance, and therefore the amount of processing in the gas, can lead to a more accurate estimate of the stellar precursor mass. Whereas carbon abundances derived from ultraviolet lines like C IV $\lambda$1549 for the Crab Nebula may be significantly influenced by the way in which absorption of He II $\lambda$304 photons is considered (Eastman et al. 1985), that complication should not be an issue for the $[$C I$]$ $\lambda$9850 line, since it arises away from the He II-emitting gas.
New insight regarding the production of $[$C I$]$ emission may be gained from examining its correlation with $[$O I$]$. As previously shown in Figure 2, the Crab Nebula contains some locations with particularly strong $[$O I$]$ $\lambda$6300 emission, probably indicating the existence of broad H$^{+}$$\rightarrow$H$^{o}$ transition zones in the gas. The $[$O I$]$ emission is known to be strengthened in these warm low-ionization regions (which could be expected for photoionization by a relatively flat synchrotron spectrum) because O$^{0}$ follows H$^{0}$ due to very effective charge exchange interactions.
Figure 9 illustrates the correlation between our measured $[$C I$]$ $\lambda$9850 and $[$O I$]$ $\lambda$6300 intensities. Although the plot is affected by the spectrum normalization procedure for near-infrared and optical wavelengths, the linear correlation coefficient is a significant 0.81. Therefore, we conclude that the strong $[$C I$]$ emission is probably enhanced in extended ionization transition zones, by electron collisional excitation and perhaps also by collisions involving H$^{0}$. However, since the C and O contents of the gas could also have been depleted or increased [*together*]{} by the CNO-cycle or helium-burning, additional information is needed.
[*Indirect*]{} support for the idea that $[$C I$]$ emission must be strengthened in low-ionization regions also comes from Figure 10, which shows $[$C I$]$ $\lambda$9850 plotted against $[$N II$]$ $\lambda$6583. There is a rather large (though not extreme) range of $[$N II$]$ emission represented, and (except for the one very high point) $[$C I$]$ looks random over an order of magnitude in the $[$N II$]$ intensities. Since the latter may provide a rough representation of the amount of nucleosynthesis that has taken place, as discussed previously, it would appear that the strong measured $[$C I$]$ emission is not directly correlated with nuclear processing. Clearly the roles of both abundance and ionization structure will be important avenues for exploration in further refinements of photoionization model computations.
Exceptionally Strong He I Emission
----------------------------------
The final abundance or line-intensity anomaly to be considered here involves the exceptionally high helium content derived by MacAlpine et al. (1989) for an apparent helium torus around the pulsar (see also Lawrence et al. 1995). Having measured dereddened He I $\lambda$5876 $\gtrsim$ H$\beta$, those authors used the results of photoionization analyses by Henry & MacAlpine (1982) to infer a helium mass fraction of around 95% in this region.
Henry & MacAlpine (1982) also considered the possibility that He I $\lambda$5876 recombination lines could be significantly enhanced relative to hydrogen emission in low-ionization zones. Because the photoionization cross section for He$^{0}$ scales roughly with frequency as $\nu$$^{-2}$, whereas that for H$^{0}$ scales more like $\nu$$^{-3}$, a high energy synchrotron radiation field can continue to ionize helium significantly beyond where hydrogen starts becoming neutral, thereby producing excess He I recombination emission. This process could be especially important if the ionizing radiation flux is low (see, e.g., Shields 1974). However, Henry & MacAlpine found that photoionization models with relatively high values for the ionizing flux did a much better job of matching the majority of observed line intensities in the Crab Nebula, so they favored very high derived abundance as the most plausible explanation for the strongest helium intensities. Now we have the opportunity to investigate this further.
As with $[$C I$]$ emission, we examined the relation between He I $\lambda$5876 and $[$O I$]$ $\lambda$6300 for our data, as shown in Figure 11. It may be seen that evidence for a meaningful correlation is lacking. Strong He I emission is similarly represented at both high and low $[$O I$]$ intensities, so it is not enhanced primarily in low-ionization gas. Furthermore, we considered the He I $\lambda$5876 correlation with $[$N II$]$ $\lambda$6583 in Figure 12. Again, the strongest He I emission is evident with both high and low $[$N II$]$ measurements, so it apparently is not directly tied to the overall progress of nuclear processing. We conclude that the most reasonable explanation for the strongest He I lines in the apparent torus around the pulsar is anomalously high abundance, the source of which is still in need of a plausible explanation.
SUMMARY
=======
We have measured emission of He I $\lambda$5876, $[$O I$]$ $\lambda$6300, $[$N II$]$ $\lambda$6583, $[$S II$]$ $\lambda$6731, $[$Ar III$]$ $\lambda$7136, $[$Ni II$]$ $\lambda$7378, $[$S III$]$ $\lambda$9069, $[$S III$]$ $\lambda$9531, and $[$C I$]$ $\lambda$9850 at many locations within the Crab Nebula. The different line intensities (or subsets thereof) were plotted against each other in efforts to investigate correlations and improve our understanding of the range of nuclear processing in the gas, as well as the causes of exceptionally strong emission from $[$Ni II$]$, $[$C I$]$, and He I. We identified gas where nucleosynthesis has not progressed significantly beyond the CNO-cycle, gas in which some helium-burning and nitrogen depletion have taken place, and regions where oxygen-burning has occurred. The anomalously strong observed $[$Ni II$]$ emission may have two sources, in one case resulting from high temperature and subsequent cooling in gas enriched with products of oxygen-burning, while in the other case possibly representing the release of nuclei from the neutron star surface. Line correlations indicate that very strong $[$C I$]$ emission arises in low-ionization H$^{+}$$\rightarrow$H$^{o}$ transition regions. On the other hand, exceptionally strong He I $\lambda$5876 does not show similar evidence of a low-ionization zone origin; and it does not appear to correlate with different levels of nuclear processing as represented by $[$N II$]$ emission. Therefore, the apparent high-helium torus around the pulsar may be a distinct component of the nebula.
We are grateful for generous financial and logistical support from Trinity University and the endowed Charles A. Zilker Chair position. We also thank the staffs of the Michigan-Dartmouth-MIT Observatory and the McDonald Observatory for providing excellent technical assistance with this research.
Arnett, W. D. 1975, ApJ, 195, 727
Chevalier, R. A., & Kirshner, R. P. 1979, ApJ, 233, 154
Davidson, K., & Fesen, R. A. 1985, ARA&A, 23, 119
Eastman, R. G., MacAlpine, G. M., Kirshner, R. P., & Henry, R. B. C. 1985, in The Crab Nebula and Related Supernova Remnants, ed. M. Kafatos and R. B. C. Henry, (Cambridge: Cambridge University Press), 19
Fesen, R. A., & Kirshner, R. P. 1982, ApJ, 258, 1
Frail, D. A., Kassim, N. E., Cornwell, T. J., & Goss, W. M. 1995, ApJ, 454, L129
Freiburghaus, C., Rosswog, S., & Thielemann, F-K. 1999, ApJ, 525, L121
Henry, R. B. C. 1986, PASP, 98, 1044
Henry, R. B. C. 1993, MNRAS, 261, 306
Henry, R. B. C., & MacAlpine, G. M. 1982, ApJ, 258, 11
Henry, R. B. C., MacAlpine, G. M., & Kirshner, R. P. 1984, ApJ, 278, 619
Jordan, G. C., Gupta, S. S., & Meyer, B. S. 2003, Phys. Rev. C, 68, 065801
Lawrence, S. S., MacAlpine, G. M., Uomoto, A., Woodgate, B. E., Brown, L. W., Oliversen, R. J., Lowenthal, J. D., & Liu, C. 1995, AJ, 109, 2635
MacAlpine, G. M., & Lawrence, S. S. 1994, in The Analysis of Emission Lines, Poster Papers from The 8th Space Telescope Science Institute Symposium, ed. R. E. Williams and M. Livio, StSci Publication
MacAlpine, G. M., Lawrence, S. S., Brown, B. A., Uomoto, A., Woodgate, B. E., Brown, L. W., Oliversen, R. J., Lowenthal, J. D., & Liu, C. 1994, ApJ, 432, L131
MacAlpine, G. M., Lawrence, S. S., Sears, R. L., Sosin, M. S., & Henry, R. B. C. 1996, ApJ, 463, 650
MacAlpine, G. M., McGaugh, S. S., Mazzarella, J. M., & Uomoto, A. 1989, ApJ, 342, 364
MacAlpine, G. M., & Uomoto, A. 1991, AJ, 102, 218
Michel, F. C., Scowen, P. A., Dufour, R. J., & Hester, J. J. 1991, ApJ, 368, 463
Miller, J. S. 1978, ApJ, 220, 490
NIST Atomic Spectra Database. Version 3.0.3. January 2006. National Institute of Standards and Technology. 10 June 2006 $<$http://physics.nist.gov/PhysRefData/ASD/index.html$>$
Nomoto, K. 1985, in The Crab Nebula and Related Supernova Remnants, ed. M. Kafatos and R. B. C. Henry (Cambridge: Cambridge University Press), 97
Osterbrock, D. E. 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (Mill Valley: University Science Books)
Pequignot, D., & Dennefeld, M. 1983, A&A, 120, 249
Ruderman, M. A., & Sutherland, P. G. 1975, ApJ, 196, 51
Shields, G. A. 1974, ApJ, 191, 309
Strolger, L-G., & MacAlpine, G. M. 1996, BAAS, 28, 950
Temim, T., Gehrz, R. D., Woodward, C. E., Roellig, T. L., Smith, N., Rudnick, L. R., Polomski, E. F., Davidson, K., Yuen, L., & Onaka, T. 2006, preprint (astro-ph/0606321)
Thielemann, F-K., & Arnett, W. D. 1985, in Nucleosynthesis: Challenges and New Developments, eds. Arnett, W. D. and Truran, J. W. (Univ. of Chicago Press: Chicago & London), 170
Uomoto, A., & MacAlpine, G. M. 1987, AJ, 93, 1511
Vermeij, R., Damour, F., van der Hulst, J. M., & Baluteau, J.-P. 2002, A&A, 390, 649
Weisskopf, M. C., O’Dell, S. L., Paerels, F., Elsner, R. F., Becker, W., Tennant, A. F., & Swartz, D. A. 2004, ApJ, 601, 1050
Willingale, R., Bleeker, J. A. M., van der Heyden, K. J., Kaastra, J. S., & Vink, J. 2002, A&A, 381, 1039
Woltjer, L. 1958, Bull. Astr. Inst. Netherlands, 14, 39
Woosley, S. E., & Weaver, T. A. 1986a, ARA&A, 24, 205
Woosley, S. E., & Weaver, T. A. 1986b, in Nucleosynthesis and Its Implications on Nuclear and Particle Physics, ed. J. Audouze & N. Mathieu (Dordrecht: Reidel), 145
Woosley, S. E., & Weaver, T. A. 1995, ApJS, 101, 181
[^1]: This paper involves data obtained at the Michigan-Dartmouth-MIT Observatory and at the McDonald Observatory of The University of Texas at Austin.
[^2]: The Image Reduction and Analysis Facility (IRAF) is distributed by the Association of Universities for Research in Astronomy, Inc., under contract to the National Science Foundation.
---
abstract: |
It is shown that if $\lambda$ is a multiple of a fundamental weight of $\mathfrak{sl}_k$, the lower global basis of the irreducible $U_q(\mathfrak{sl}_k)$-representation $V^{\lambda}$ with highest weight $\lambda$ comprises the disjoint union of the lower global bases of the irreducible $U_q(\mathfrak{sl}_{k-1})$-representations appearing in the decomposition of the restriction of $V^{\lambda}$ to $U_q(\mathfrak{sl}_{k-1})$.
Rhoades’s description of the action of the long cycle on the dual canonical basis of $V^{\lambda}$ is then deduced from Berenstein–Zelevinsky’s description of the action of the long element. This yields a short proof of Rhoades’s result on tableaux fixed under promotion which directly relates it to Stembridge’s result on tableaux fixed under evacuation.
author:
- David B Rush
title: 'Restriction of Global Bases and Rhoades’s Theorem'
---
Introduction
============
Motivation
----------
Let $\Lambda$ be a rectangular partition. Perhaps the most celebrated example of the *cyclic sieving phenomenon* of Reiner, Stanton, and White [@Reiner] is the theorem of Rhoades [@Rhoades] that *jeu-de-taquin* promotion exhibits cyclic sieving:
- on the set $SYT(\Lambda)$ of standard tableaux of shape $\Lambda$ with respect to the $q$-analogue of the Weyl dimension formula for the $(1, \ldots, 1)$-weight space of the irreducible $GL_{|\Lambda|}(\mathbb{C})$-representation with highest weight $\Lambda$;
- on the set $SSYT(\Lambda, k)$ of semistandard tableaux of shape $\Lambda$ with entries in $\lbrace 1, \ldots, k \rbrace$ (for any positive integer $k \geq \ell(\Lambda)$) with respect to the $q$-analogue of the Weyl dimension formula for the irreducible $GL_k(\mathbb{C})$-representation with highest weight $\Lambda$.
Rhoades’s theorem was featured in several surveys of the cyclic sieving phenomenon (cf. Reiner–Stanton–White [@Reiner2] and Sagan [@Sagan]) and inspired activity in and around algebraic combinatorics. The result for standard tableaux has been reproved thrice,[^1] but the result for semistandard tableaux has only been reproved by Shen and Weng [@Shen], who discovered an equivalent result via cluster algebras — although they did not realize their result was the same (cf. Hopkins [@Hopkins] for the details). None of the subsequent proofs is simpler than Rhoades’s original proof adhering to the representation-theoretic paradigm for observing cyclic sieving phenomena.
Rhoades proved that, up to sign, the long cycle $c_{|\Lambda|} \in \mathfrak{S}_{|\Lambda|}$ permutes by promotion the elements — indexed by $SYT(\Lambda)$ — of the *Kazhdan–Lusztig basis* of the irreducible $\mathfrak{S}_{|\Lambda|}$-representation $S^{\Lambda}$ corresponding to $\Lambda$. Then, he proved that, for any positive integer $k \geq \ell(\Lambda)$, up to sign, $c_k \in \mathfrak{S}_k$ permutes by promotion the elements — indexed by $SSYT(\Lambda, k)$ — of the basis (constructed by Du [@Du] and Skandera [@Skandera]) of Kazhdan–Lusztig immanants of (the dual of) the irreducible $GL_k(\mathbb{C})$-representation $V^{\Lambda}$ with highest weight $\Lambda$. After evaluating characters, he obtained his enumerative results.
Fourteen years before Rhoades’s theorem, Stembridge [@Stembridge] discovered that evacuation exhibits the *$q=-1$ phenomenon* (the antecedent, pertaining to involutions, of the cyclic sieving phenomenon) on the same tableaux sets, with respect to the same generating functions, that Rhoades would consider — without the stipulation that $\Lambda$ be rectangular. His proof followed the same framework that Rhoades’s later would: He proved that, up to sign, the long element $w_{0, |\Lambda|} \in \mathfrak{S}_{|\Lambda|}$ permutes by evacuation the Kazhdan–Lusztig basis of $S^{\Lambda}$, and he quoted the result of Berenstein and Zelevinsky [@Berenstein] that, for any positive integer $k \geq \ell(\Lambda)$, up to sign, $w_{0,k} \in \mathfrak{S}_k$ permutes by evacuation the *dual canonical basis* of $V^{\Lambda}$.[^2]
The author first noted this resemblance in Rush [@Rush], in which he devised a combinatorial expression for certain plethysm coefficients by refining Rhoades’s and Stembridge’s results to tableaux he construed to be highest weight. That article treated the stories for promotion and evacuation as separate but parallel. The purpose of this article is to merge the stories and provide a proof of Rhoades’s theorem that elucidates its relationship to Stembridge’s theorem.
The Kazhdan–Lusztig basis
-------------------------
Rhoades did not miss the apparent likeness between his work and Stembridge’s; on the contrary, he explicitly noted that his description of the action of $c_{|\Lambda|}$ on the Kazhdan–Lusztig basis of $S^{\Lambda}$ was analogous to Stembridge’s description of the action of $w_{0, |\Lambda|}$. However, he did miss that the former is actually a consequence of the latter.
For all positive integers $i$, let $\xi_i$ denote the $i^{\text{th}}$ *partial Schützenberger involution*, which acts on a tableau by performing evacuation on the subtableau containing all entries less than or equal to $i$, and let $J$ denote *jeu-de-taquin* promotion. Then, as actions on $SYT(\Lambda)$, $$\xi_{|\Lambda|} \circ \xi_{|\Lambda|-1} = J.$$
Furthermore, $$w_{0,|\Lambda|} \circ w_{0,|\Lambda|-1} = c_{|\Lambda|}.$$ Thus, on the Kazhdan–Lusztig basis of $S^{\Lambda}$, up to sign, $c_{|\Lambda|}$ acts by $J$ if $w_{0, |\Lambda|-1}$ acts by $\xi_{|\Lambda|-1}$.
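The second identity can be checked directly: writing $n := |\Lambda|$, the long elements act by $w_{0,n}(i) = n+1-i$ and $w_{0,n-1}(i) = n-i$ for $1 \leq i \leq n-1$ (with $w_{0,n-1}(n) = n$), so $$w_{0,n}\left(w_{0,n-1}(i)\right) = \begin{cases} i+1, & 1 \leq i \leq n-1, \\ 1, & i = n, \end{cases}$$ which is precisely the long cycle $c_n$.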
Suppose again that $\Lambda$ is rectangular, and let $\widehat{\Lambda}$ be the partition obtained from $\Lambda$ by removing the sole outside corner. Then $S^{\widehat{\Lambda}}$ and $S^{\Lambda}$ are isomorphic as $\mathfrak{S}_{|\Lambda|-1}$-representations, and, up to sign, $w_{0, |\Lambda|-1}$ acts by $\xi_{|\Lambda|-1}$ on the Kazhdan–Lusztig basis of $S^{\widehat{\Lambda}}$. Therefore, it suffices to show that the Kazhdan–Lusztig bases of $S^{\widehat{\Lambda}}$ and $S^{\Lambda}$ are compatible.
Let $a$ and $m$ be positive integers for which $\Lambda = (m^a)$. For a standard tableau $P$ of shape $\Lambda$, let $\widehat{P}$ be the standard tableau of shape $\widehat{\Lambda}$ obtained by removing the sole outside corner. Let $Q$ be the standard tableau of shape $\Lambda$ such that the $i^{\textit{th}}$ column of $Q$ contains the entries $ia-a+1, \ldots, ia$ for all $1 \leq i \leq m$. Let $\mathcal{C}$ be the left cell of $\mathfrak{S}_{ma}$ comprising the permutations with recording tableau $Q$, and let $\widehat{\mathcal{C}}$ be the left cell of $\mathfrak{S}_{ma-1}$ comprising the permutations with recording tableau $\widehat{Q}$.[^3] Then the map $\phi \colon \widehat{\mathcal{C}} \rightarrow \mathcal{C}$ given by $$\widehat{x} \mapsto \widehat{x} (ma \ ma-1 \ \cdots \ ma - a + 1)$$ is a bijection, and $\mu[\phi(\widehat{x}), \phi(\widehat{y})] = \mu[\widehat{x}, \widehat{y}]$ for all $\widehat{x}, \widehat{y} \in \widehat{\mathcal{C}}$ (cf. Fact 11 in Garsia–McLarnan [@Garsia]). Therefore, the map $\tilde{\phi} \colon S^{\widehat{\Lambda}} \rightarrow S^{\Lambda}$ given by $C_{\widehat{x}} \mapsto C_{\phi(\widehat{x})}$ (where $C_{\widehat{x}} \in \mathbb{C}[\mathfrak{S}_{ma-1}]$ and $C_{\phi(\widehat{x})} \in \mathbb{C}[\mathfrak{S}_{ma}]$ are the Kazhdan–Lusztig basis elements associated to $\widehat{x}$ and $\phi(\widehat{x})$, respectively) is an isomorphism of $\mathfrak{S}_{ma-1}$-representations. Identifying the permutations in $\widehat{\mathcal{C}}$ and $\mathcal{C}$ with their respective insertion tableaux, we see that $\tilde{\phi}^{-1}$ is given by $C_{P} \mapsto C_{\widehat{P}}$.
Hence, by Theorem 5.1 of Stembridge [@Stembridge], there exist $\epsilon_{\Lambda}, \epsilon_{\widehat{\Lambda}} \in \lbrace \pm 1 \rbrace$ such that for all $P \in SYT(\Lambda)$, $$\begin{aligned}
c_{|\Lambda|} \cdot C_{P} & = w_{0,|\Lambda|} w_{0, |\Lambda|-1} \cdot C_{P} = w_{0, |\Lambda|} \cdot \tilde{\phi}\left(w_{0, |\Lambda|-1} \cdot C_{\widehat{P}}\right)
\\ & = w_{0, |\Lambda|} \cdot \tilde{\phi}\left(\epsilon_{\widehat{\Lambda}} C_{\xi_{|\Lambda|-1} (\widehat{P})}\right) = w_{0, |\Lambda|} \cdot \epsilon_{\widehat{\Lambda}} C_{\xi_{|\Lambda|-1} (P)}
\\ & = \epsilon_{\Lambda} \epsilon_{\widehat{\Lambda}} C_{\xi_{|\Lambda|}( \xi_{|\Lambda|-1} (P))} = \epsilon_{\Lambda} \epsilon_{\widehat{\Lambda}} C_{J(P)},\end{aligned}$$ whence Rhoades’s result for standard tableaux follows.
The dual canonical basis
------------------------
Since Rhoades obtains his description of the action of $c_k$ on the basis of Kazhdan–Lusztig immanants of (the dual of) $V^{\Lambda}$ from his description of the action of $c_{|\Lambda|}$ on the Kazhdan–Lusztig basis of $S^{\Lambda}$, the argument we have just given is sufficient to conclude Rhoades’s result for semistandard tableaux as well. To do so, however, would be to sidestep the challenge of offering an analogous argument relating Rhoades’s description of the action of $c_k$ to Berenstein and Zelevinsky’s description of the action of $w_{0,k}$. That the basis of Kazhdan–Lusztig immanants is (essentially) the dual canonical basis of $V^{\Lambda}$ (cf. Skandera [@Skandera]) suggests that such an argument should be feasible. But it would entail a subtler invocation of the hypothesis that $\Lambda$ is rectangular, for, unlike the restriction of $S^{\Lambda}$ to $\mathfrak{S}_{|\Lambda|-1}$, the restriction of $V^{\lambda}$ to $U_q(\mathfrak{sl}_{k-1})$ is not irreducible.
To this challenge we devote the remainder of the article. We depart from the approach of Berenstein–Zelevinsky [@Berenstein] and Rhoades [@Rhoades], who rely on explicit constructions of the dual canonical basis (the better to analyze combinatorial actions on the basis elements), and turn to a characterization of canonical bases due to Kashiwara [@Kashiwara2]. In particular, we study the restriction of Kashiwara’s *lower* and *upper global bases*, which coincide with Lusztig’s canonical and dual canonical bases, respectively (cf. Grojnowski–Lusztig [@Grojnowski]). We find that the decomposition of $V^{\lambda}$ as a direct sum of irreducible $U_q(\mathfrak{sl}_{k-1})$-representations respects the global bases: The lower global basis of $V^{\lambda}$ is the disjoint union of the lower global bases of the representations appearing in the decomposition, and the same holds for the upper global basis (up to scaling by Gaussian polynomials in $q$), just as we would wish.
We provide background on quantum groups and crystal bases in section 2 and discuss the crystal structure on tableaux in section 3. We carry out our investigation of global bases in section 4. Then, in section 5, we realize Rhoades’s description of the action of $c_k$ as a consequence of Berenstein and Zelevinsky’s description of the action of $w_{0,k}$, and thereby arrive at a direct proof of Rhoades’s theorem.
Background
==========
Quantum groups
--------------
Let $\mathfrak{h} \subset \mathfrak{sl}_k(\mathbb{C})$ be a Cartan subalgebra, and let $P^{\vee} \subset \mathfrak{h}$ and $P \subset \mathfrak{h}^*$ be the coroot and weight lattices, respectively.
Identify $P$ with the $\mathbb{Z}$-module generated by the symbols $E_1, \ldots, E_k$ subject to the relation $E_1 + \cdots + E_k = 0$. For all $1 \leq i \leq k-1$, set $\alpha_i := E_i - E_{i+1}$.
Choose $\alpha_1, \ldots, \alpha_{k-1} \in P$ to be the simple roots. Let $h_1, \ldots, h_{k-1} \in P^{\vee}$ and $\omega_1, \ldots, \omega_{k-1} \in P$ be the corresponding simple coroots and fundamental weights, respectively. Note that $h_1, \ldots, h_{k-1}$ generate $P^{\vee}$ and $\omega_1, \ldots, \omega_{k-1}$ generate $P$.
A weight $\lambda \in P$ is *dominant* if it can be expressed as a nonnegative linear combination of fundamental weights. Let $P^+ \subset P$ be the semigroup of dominant weights.
The quantum group $U_q(\mathfrak{sl}_k)$ is the (unital) associative algebra over $\mathbb{Q}(q)$ generated by the symbols $e_i, f_i \enspace (1 \leq i \leq k-1)$ and $q^h \enspace (h \in P^{\vee})$ subject to the following relations:
1. $q^0 = 1$ and $q^h q^{h'} = q^{h+h'}$ for $h, h' \in P^{\vee}$;
2. $q^h e_i q^{-h} = q^{\alpha_i(h)} e_i$ and $q^h f_i q^{-h} = q^{-\alpha_i(h)} f_i$ for $h \in P^{\vee}$;
3. $e_i f_j - f_j e_i = \delta_{i,j} \frac{q^{h_i}-q^{-h_i}}{q - q^{-1}}$;
4. $e_i^2 e_j - (q + q^{-1}) e_i e_j e_i + e_j e_i^2 = f_i^2 f_j - (q + q^{-1}) f_i f_j f_i + f_j f_i^2 = 0$ for $|i-j|=1$;
5. $e_i e_j - e_j e_i = f_i f_j - f_j f_i = 0$ for $|i-j|>1$.
For all $n \in \mathbb{Z}$, set $[x; n]_q := \frac{x q^n - x^{-1}q^{-n}}{q-q^{-1}}$ and $[n]_q := [1;n]_q$. For all $n \geq 0$, set $e_i^{(n)} := \frac{e_i^n}{[n]_q!}$ and $f_i^{(n)} := \frac{f_i^n}{[n]_q!}$, where $[n]_q! := [n]_q \cdots [1]_q$ for all $n \geq 1$ and $[0]_q! := 1$.
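These $q$-integers, $q$-factorials, and the Gaussian binomials built from them (which reappear below in the Shapovalov-norm computations) are convenient to manipulate symbolically. A minimal Python sketch using sympy, with helper names of our own choosing, is:

```python
from sympy import symbols, simplify

q = symbols('q')

def q_int(n):
    """Balanced q-integer [n]_q = (q**n - q**(-n)) / (q - q**(-1))."""
    return (q**n - q**(-n)) / (q - q**(-1))

def q_factorial(n):
    """[n]_q! = [n]_q [n-1]_q ... [1]_q, with [0]_q! = 1."""
    result = 1
    for i in range(1, n + 1):
        result *= q_int(i)
    return result

def gauss_binomial(m, j):
    """Gaussian binomial [m choose j]_q = [m]_q! / ([j]_q! [m-j]_q!)."""
    return simplify(q_factorial(m) / (q_factorial(j) * q_factorial(m - j)))

# [2]_q equals q + 1/q, and [4 choose 2]_q equals
# q^4 + q^2 + 2 + q^(-2) + q^(-4), up to how sympy chooses to print them.
print(simplify(q_int(2)), gauss_binomial(4, 2))
```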
Representations and crystal bases
---------------------------------
In this article, by a $U_q(\mathfrak{sl}_k)$-representation we mean a $U_q(\mathfrak{sl}_k)$-module $V$, finite-dimensional as a vector space over $\mathbb{Q}(q)$, admitting a *weight space decomposition* $V = \bigoplus_{\mu \in P} V_{\mu}$, where $$V_{\mu} := \lbrace v \in V : q^h v = q^{\mu(h)}v \quad \forall h \in P^{\vee} \rbrace.$$
Given a $U_q(\mathfrak{sl}_k)$-representation $V$, we say that $\mu \in P$ is a *weight* of $V$ if the $\mu$-weight space $V_{\mu}$ is nonzero, in which case we refer to the nonzero vectors in $V_{\mu}$ as *weight vectors*. If $\lambda$ is a weight of $V$ and there exists a weight vector $v_{\lambda} \in V_{\lambda}$ such that $e_i v_{\lambda} = 0$ for all $1 \leq i \leq k-1$ and $V = U_q(\mathfrak{sl}_k) v_{\lambda}$, we say that $\lambda$ is the *highest weight* of $V$.
For all $\lambda \in P^+$, there exists an irreducible $U_q(\mathfrak{sl}_k)$-representation $V^{\lambda}$ with highest weight $\lambda$. Furthermore, the map $\lambda \mapsto [V^{\lambda}]$ defines a bijection between $P^+$ and the set of isomorphism classes of irreducible $U_q(\mathfrak{sl}_k)$-representations.
To define the global bases of $V^{\lambda}$, we must first define a crystal basis, for which we require Kashiwara’s $\mathbb{Q}(q)$-linear operators $\tilde{e}_i$ and $\tilde{f}_i$.
Let $V$ be a $U_q(\mathfrak{sl}_k)$-representation, and let $\mu$ be a weight of $V$. Each weight vector $u \in V_{\mu}$ uniquely determines a nonnegative integer $N$ and weight vectors $u_n \in V_{\mu + n \alpha_i} \cap \ker e_i$ for all $0 \leq n \leq N$ such that $u = u_0 + f_i u_1 + \cdots + f_i^{(N)} u_N$ (cf. Lemma 4.1.1 in Hong–Kang [@Hong]). Then $$\tilde{e}_i u := \sum_{n=1}^N f_i^{(n-1)} u_n \quad \text{and} \quad \tilde{f}_i u := \sum_{n=0}^N f_i^{(n+1)} u_n.$$
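For instance, if $u = f_i^{(n)} u_n$ for a single weight vector $u_n \in V_{\mu + n\alpha_i} \cap \ker e_i$, then $$\tilde{e}_i u = f_i^{(n-1)} u_n \quad \text{and} \quad \tilde{f}_i u = f_i^{(n+1)} u_n,$$ with the convention that $\tilde{e}_i u = 0$ when $n = 0$; on an $i$-string through a vector killed by $e_i$, the Kashiwara operators simply shift the divided power.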
Let $A_0$ be the localization of the ring $\mathbb{Q}[q]$ at the prime ideal $(q)$.
Let $V$ be a $U_q(\mathfrak{sl}_k)$-representation. A free $A_0$-submodule $L \subset V$ such that $\mathbb{Q}(q) \otimes_{A_0} L \cong V$ is a *crystal lattice* if:
1. $L = \bigoplus_{\mu \in P} L_{\mu}$, where $L_{\mu} := L \cap V_{\mu}$;
2. $\tilde{e}_i L \subset L$ and $\tilde{f}_i L \subset L$ for all $1 \leq i \leq k-1$.
Given a crystal lattice $L$, it follows from condition (2) that the operators $\tilde{e}_i$ and $\tilde{f}_i$ act on the quotient $L/qL$. For all $v \in L$, we denote by $\overline{v}$ the image of $v$ in $L/qL$.
Let $V$ be a $U_q(\mathfrak{sl}_k)$-representation. A pair $(L, B)$ consisting of a crystal lattice $L$ of $V$ and a $\mathbb{Q}$-basis $B$ of $L/qL$ is a *crystal basis* if:
1. $B = \bigsqcup_{\mu \in P} B_{\mu}$, where $B_{\mu} := B \cap L_{\mu}/qL_{\mu}$;
2. $\tilde{e}_i B \subset B \cup \lbrace 0 \rbrace$ and $\tilde{f}_i B \subset B \cup \lbrace 0 \rbrace$ for all $1 \leq i \leq k-1$;
3. $\tilde{f}_i b = b'$ if and only if $\tilde{e}_i b' = b$ for all $b, b' \in B$.
Let $\lambda \in P^+$, and fix a weight vector $v_{\lambda} \in V^{\lambda}_{\lambda}$.
\[lattice\] There exists a unique crystal basis $(L^{\lambda}, B^{\lambda})$ of $V^{\lambda}$ such that $L^{\lambda}_{\lambda} = A_0 v_{\lambda}$ and $B^{\lambda}_{\lambda} = \lbrace \overline{v_{\lambda}} \rbrace$. Furthermore, $$L^{\lambda} = \operatorname{span} \lbrace \tilde{f}_{i_r} \cdots \tilde{f}_{i_1} v_{\lambda} : r \geq 0; 1 \leq i_1, \ldots, i_r \leq k-1 \rbrace,$$ and $$B^{\lambda} = \lbrace \tilde{f}_{i_r} \cdots \tilde{f}_{i_1} \overline{v_{\lambda}} : r \geq 0; 1 \leq i_1, \ldots, i_r \leq k-1 \rbrace \setminus \lbrace 0 \rbrace.$$
Global bases
------------
Set $A := \mathbb{Q}[q, q^{-1}]$. Let $U_A(\mathfrak{sl}_{k}) \subset U_q(\mathfrak{sl}_k)$ be the $A$-subalgebra generated by $e_i^{(n)}, f_i^{(n)}, q^h, \frac{[q^h;0]_q \cdots [q^h;1-n]_q}{[n]_q!} \enspace (n \geq 1)$.
Let $\psi \colon U_q(\mathfrak{sl}_{k}) \rightarrow U_q(\mathfrak{sl}_{k})$ be the $\mathbb{Q}$-algebra automorphism given by $$e_i \mapsto e_i, \quad f_i \mapsto f_i, \quad q \mapsto q^{-1}, \quad \text{and} \quad q^h \mapsto q^{-h},$$ and let $\psi$ also denote the induced $\mathbb{Q}$-linear automorphism of $V^{\lambda}$ given by $p v_{\lambda} \mapsto \psi(p) v_{\lambda}$ for all $p \in U_q(\mathfrak{sl}_k)$.
\[global\] There exists a unique $A_0$-basis $G^{\lambda} = \lbrace G_b \rbrace_{b \in B^{\lambda}}$ of $L^{\lambda}$ such that (i) $G^{\lambda}$ is a $\mathbb{Q}$-basis of $U_A(\mathfrak{sl}_k) v_{\lambda} \cap L^{\lambda} \cap \psi(L^{\lambda})$, and (ii) $\overline{G_b} = b$ and $\psi(G_b) = G_b$ for all $b \in B^{\lambda}$.
Let $\varphi \colon U_q(\mathfrak{sl}_k) \rightarrow U_q(\mathfrak{sl}_k)$ be the $\mathbb{Q}$-algebra antiautomorphism given by $$e_i \mapsto f_i, \quad f_i \mapsto e_i, \quad q \mapsto q, \quad \text{and} \quad q^h \mapsto q^h.$$ The *Shapovalov form* on $V^{\lambda}$ is the unique symmetric bilinear form $(\cdot, \cdot)$ such that $(v_{\lambda}, v_{\lambda}) = 1$ and $(pu, v) = (u, \varphi(p)v)$ for all $u, v \in V^{\lambda}$ and $p \in U_q(\mathfrak{sl}_k)$.
\[perpen\] If $\mu, \nu \in P$ are distinct weights of $V^{\lambda}$, then $(V^{\lambda}_{\mu}, V^{\lambda}_{\nu}) = 0$.
Let $u \in V^{\lambda}_{\mu}$ and $v \in V^{\lambda}_{\nu}$. For all $h \in P^{\vee}$, $$q^{\mu(h)} (u,v) = (q^h u, v) = (u, q^h v) = q^{\nu(h)} (u, v).$$ Since $\mu \neq \nu$, there exists $h \in P^{\vee}$ with $\mu(h) \neq \nu(h)$, and it follows that $(u, v) = 0$.
The *lower global basis* of $V^{\lambda}$ is $G^{\lambda}$, and the *upper global basis* $F^{\lambda}$ of $V^{\lambda}$ is the dual basis to $G^{\lambda}$ with respect to the Shapovalov form.
\[wvec\] $G^{\lambda}_{\mu} := \lbrace G_b \rbrace_{b \in B^{\lambda}_{\mu}}$ and $F^{\lambda}_{\mu} := \lbrace F_b \rbrace_{b \in B^{\lambda}_{\mu}}$ are $\mathbb{Q}(q)$-bases of $V^{\lambda}_{\mu}$ for all $\mu \in P$.
From Lusztig’s construction of the canonical basis (cf. Lusztig [@Lusztig]), we see that $G^{\lambda} \cap V^{\lambda}_{\mu}$ is a $\mathbb{Q}(q)$-basis of $V^{\lambda}_{\mu}$ for all $\mu \in P$. Hence $G^{\lambda}$ consists of weight vectors, so $G^{\lambda}_{\mu} \subset V^{\lambda}_{\mu}$ for all $\mu \in P$, which implies $G^{\lambda}_{\mu} = G^{\lambda} \cap V^{\lambda}_{\mu}$ for all $\mu \in P$.
It follows that $(F^{\lambda}_{\mu}, V^{\lambda}_{\nu}) = 0$ for all distinct $\mu, \nu \in P$ (cf. Proposition \[perpen\]). The Shapovalov form is nondegenerate, so $F^{\lambda}_{\mu} \subset V^{\lambda}_{\mu}$ for all $\mu \in P$.
For all $i$, the entries of the matrices of $e_i$ and $f_i$ on the $\mathbb{Q}(q)$-vector space $V^{\lambda} = \bigoplus_{b \in B^{\lambda}} \mathbb{Q}(q) F^{\lambda}_b$ are regular at $q=1$ (cf. Berenstein–Zelevinsky [@Berenstein]). Thus, specializing the matrices of $e_i$ and $f_i$ at $q = 1$ for all $i$ and letting $h$ act as multiplication by $\mu(h)$ on $F^{\lambda}_{\mu}$ for all $\mu \in P$ and $h \in P^{\vee}$ equips the $\mathbb{C}$-vector space $V^{\lambda}_{\mathbb{C}} := \bigoplus_{b \in B^{\lambda}} \mathbb{C} F^{\lambda}_b$ with the structure of an $\mathfrak{sl}_k$-module.
As an $\mathfrak{sl}_k$-representation, $V^{\lambda}_{\mathbb{C}}$ is irreducible with highest weight $\lambda$ (cf. [@Berenstein]). We refer to $V^{\lambda}_{\mathbb{C}}$ as the $\mathbb{C}$-form of the $q=1$ specialization of $V^{\lambda}$.
Crystal structure on tableaux
=============================
Review
------
Let $\lambda \in P^+$, and let $l_1, \ldots, l_{k-1}$ be nonnegative integers such that $\lambda = l_1 \omega_1 + \cdots + l_{k-1} \omega_{k-1}$. For all $1 \leq i \leq k-1$, set $\lambda_i := l_i + \cdots + l_{k-1}$, and let $\lambda$ also denote the partition $(\lambda_1, \ldots, \lambda_{k-1})$.
The crystal basis $(L^{\lambda}, B^{\lambda})$ is an effective combinatorial model for the $U_q(\mathfrak{sl}_k)$-representation $V^{\lambda}$ in part because there exists an identification of $B^{\lambda}$ with $SSYT(\lambda, k)$ under which the action of Kashiwara’s operators admits a simple description.
Label the boxes in the Young diagram of shape $\lambda$ with the integers in $\lbrace 1, \ldots, |\lambda| \rbrace$ so that, for all $1 \leq i \leq k-1$, the boxes in the $i^{\text{th}}$ row are assigned the labels $\lambda_1 + \cdots + \lambda_{i-1} + 1, \ldots, \lambda_1 + \cdots + \lambda_i$, and the labels increase *from right to left* within each row.
Let $T \in SSYT(\lambda, k)$, and let $w_T \colon \lbrace 1, \ldots, |\lambda| \rbrace \rightarrow \lbrace 1, \ldots, k \rbrace$ be the map associating to each label the entry in the underlying box of $T$. From the set $\lbrace 1, \ldots, |\lambda| \rbrace$, remove all $j$ for which $w_T(j) \notin \lbrace i, i+1 \rbrace$. Then, iteratively remove all consecutive $j < j'$ for which $w_T(j) = i$ and $w_T(j') = i+1$.[^4]
Let $S_i$ denote the remaining set, and set $w_{T,i} := w_T|_{S_i}$.
- If $w_{T,i}^{-1}(i+1)$ is nonempty, let $\tilde{e}_i(T)$ be the tableau obtained from $T$ by changing the entry in the box labeled $\max w_{T,i}^{-1}(i+1)$ from $i+1$ to $i$; otherwise, set $\tilde{e}_i(T) :=0$.
- If $w_{T,i}^{-1}(i)$ is nonempty, let $\tilde{f}_i(T)$ be the tableau obtained from $T$ by changing the entry in the box labeled $\min w_{T,i}^{-1}(i)$ from $i$ to $i+1$; otherwise, set $\tilde{f}_i(T) :=0$.
\[crystal\] The operators $\tilde{e}_i$ and $\tilde{f}_i$ map $SSYT(\lambda, k)$ into $SSYT(\lambda, k) \cup \lbrace 0 \rbrace$ for all $1 \leq i \leq k-1$.
Furthermore, let $T_{\lambda} \in SSYT(\lambda, k)$ be the tableau satisfying $w_{T_{\lambda}}^{-1}(i) = \lbrace \lambda_1 + \cdots + \lambda_{i-1} + 1, \ldots, \lambda_1 + \cdots + \lambda_i \rbrace$ for all $1 \leq i \leq k-1$. Then there exists a bijection $\pi \colon B^{\lambda} \rightarrow SSYT(\lambda, k)$ such that $\pi(\overline{v_{\lambda}}) = T_{\lambda}$ and $\pi$ commutes with $\tilde{e}_i$ and $\tilde{f}_i$.[^5]
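The bracketing rule just described depends only on the word $w_T$ read off in label order (right to left within each row, rows from top to bottom), so it is easy to experiment with computationally. A minimal Python sketch acting on such words is given below; the function names are ours, and the example word is the reading word of $T_{\lambda}$ for $\lambda = 2\omega_2$ in the case $k = 3$.

```python
def surviving_positions(word, i):
    """Positions (in label order) left after iteratively cancelling pairs
    that are adjacent in the restricted subword, with values i, i+1."""
    pos = [p for p, x in enumerate(word) if x in (i, i + 1)]
    cancelled = True
    while cancelled:
        cancelled = False
        for t in range(len(pos) - 1):
            if word[pos[t]] == i and word[pos[t + 1]] == i + 1:
                del pos[t:t + 2]
                cancelled = True
                break
    return pos

def f_tilde(word, i):
    """Change the first surviving i to i+1; None if the word is annihilated."""
    survivors = [p for p in surviving_positions(word, i) if word[p] == i]
    if not survivors:
        return None
    new_word = list(word)
    new_word[min(survivors)] = i + 1
    return new_word

def e_tilde(word, i):
    """Change the last surviving i+1 to i; None if the word is annihilated."""
    survivors = [p for p in surviving_positions(word, i) if word[p] == i + 1]
    if not survivors:
        return None
    new_word = list(word)
    new_word[max(survivors)] = i
    return new_word

# Reading word (labels in increasing order) of the highest weight tableau
# T_lambda for lambda = 2*omega_2 in sl_3: rows 1,1 and 2,2.
w = [1, 1, 2, 2]
print(f_tilde(w, 1))                    # None: f_1 annihilates this tableau
print(f_tilde(w, 2))                    # [1, 1, 3, 2]
print(e_tilde(f_tilde(w, 2), 2) == w)   # True: e_2 undoes f_2
```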
Rectangular tableaux
--------------------
Let $a \in \lbrace 1, \ldots, k-1 \rbrace$, and let $m$ be a positive integer. Set $\lambda := m \omega_a$. We present two lemmas, the latter of which we appeal to in the following section.
\[expl\] For all $a \leq a' \leq k-1$, and all sequences of nonnegative integers $(c_a, \ldots, c_{a'})$, set $$T_{(c_a, \ldots, c_{a'})} := \tilde{f}_{a'}^{c_{a'}} \cdots \tilde{f}_a^{c_a} (T_{\lambda}).$$
Suppose $m \geq c_a \geq \cdots \geq c_{a'} \geq 0$. Set $c_{a-1} := m$ and $c_{a'+1} := 0$. Then $T_{(c_a, \ldots, c_{a'})} \in SSYT(\lambda, k)$, and $w_{T_{(c_a, \ldots, c_{a'})}}$ is described by the following conditions:
- $w_{T_{(c_a, \ldots, c_{a'})}}^{-1}(i) = \lbrace (i-1)m + 1, \ldots, im \rbrace$ for all $1 \leq i \leq a-1$;
- $w_{T_{(c_a, \ldots, c_{a'})}}^{-1}(i) = \lbrace (a-1)m + c_i + 1, \ldots, (a-1)m + c_{i-1} \rbrace$ for all $a \leq i \leq a' + 1$.
We induct on $a'$. Consider the action of $\tilde{f}_{a'}$ on $T_{(c_a, \ldots, c_{a'-1})}$. By the inductive hypothesis, $w_{T_{(c_a, \ldots, c_{a'-1})}}^{-1}(a') = \lbrace (a-1)m + 1, \ldots, (a-1)m + c_{a'-1} \rbrace$, and $w_{T_{(c_a, \ldots, c_{a'-1})}}^{-1}(a'+1)$ is empty. Therefore, $$S_{a'} = \lbrace (a-1)m + 1, \ldots, (a-1)m + c_{a'-1} \rbrace.$$
Provided $c_{a' - 1} \geq c_{a'} \geq 0$, we see by induction on $c_{a'}$ that $\tilde{f}_{a'}^{c_{a'}}$ changes the entries of $T_{(c_a, \ldots, c_{a'-1})}$ in the boxes labeled $(a-1)m + 1, \ldots, (a-1)m + c_{a'}$ from $a'$ to $a'+1$.
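For example, take $k = 4$, $a = 2$, and $m = 3$, so that $\lambda = 3\omega_2$ corresponds to the $2 \times 3$ rectangle, and take $(c_2, c_3) = (2,1)$. The lemma gives $w^{-1}(1) = \lbrace 1, 2, 3 \rbrace$, $w^{-1}(2) = \lbrace 6 \rbrace$, $w^{-1}(3) = \lbrace 5 \rbrace$, and $w^{-1}(4) = \lbrace 4 \rbrace$; since labels increase from right to left within each row, this is the tableau whose first row consists of $1$'s and whose second row reads $2, 3, 4$ from left to right: $$T_{(2,1)} = \tilde{f}_3 \tilde{f}_2^{2} (T_{\lambda}) = \begin{array}{ccc} 1 & 1 & 1 \\ 2 & 3 & 4 \end{array}.$$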
\[zero\] The following assertions hold.
1. For all $a \leq a' \leq k-1$, and all sequences of nonnegative integers $(c_a, \ldots, c_{a'})$, $$\tilde{f}_{a'}^{c_{a'}} \cdots \tilde{f}_a^{c_a} \overline{v_{\lambda}}$$ is nonzero if and only if $m \geq c_a \geq \cdots \geq c_{a'} \geq 0$.
2. For all $a+1 \leq a' \leq k-2$, and all $0 \leq j \leq m$, $$\tilde{f}_{a'} \tilde{f}_{a'+1}^{j} \cdots \tilde{f}_a^{j} \overline{v_{\lambda}} = 0.$$
3. For all $0 \leq j \leq m$, $$\tilde{f}_{a-1}^j \tilde{f}_a^j \overline{v_{\lambda}} \neq 0 \quad \text{and} \quad \tilde{f}_{a-1}^{j+1} \tilde{f}_a^j \overline{v_{\lambda}} = 0.$$
4. For all $0 \leq j \leq m$, $$\tilde{f}_a^{m-j} \tilde{f}_{a+1}^{j} \tilde{f}_a^j \overline{v_{\lambda}} \neq 0 \quad \text{and} \quad \tilde{f}_a^{m-j+1} \tilde{f}_{a+1}^{j} \tilde{f}_a^j \overline{v_{\lambda}} = 0.$$
In view of Theorem \[crystal\], it suffices to show each assertion with $T_{\lambda}$ in place of $\overline{v_{\lambda}}$. We freely apply Lemma \[expl\] throughout.
1. The “if” direction is immediate.
For the “only if” direction, assume the contrary. Set $c_{a-1} := m$, and let $a'$ be minimal for which there exists a sequence $(c_a, \ldots, c_{a'})$ with $c_{a'-1} < c_{a'}$ such that $\tilde{f}_{a'}^{c_{a'}} \cdots \tilde{f}_a^{c_a} \overline{v_{\lambda}} \neq 0$.
Since $w_{T_{(c_a, \ldots, c_{a'-1}, c_{a'-1})}}^{-1}(a')$ is empty, it follows that $\tilde{f}_{a'}$ vanishes on $T_{(c_a, \ldots, c_{a'-1}, c_{a'-1})}$, so $T_{(c_a, \ldots, c_{a'-1}, c_{a'})} = 0$.
2. Immediate.
3. Since $w_{T_{(j)}}^{-1}(a-1) = \lbrace (a-2)m + 1, \ldots, (a-1)m \rbrace$ and $w_{T_{(j)}}^{-1}(a) = \lbrace (a-1)m + j + 1, \ldots, am \rbrace$, we see that $\tilde{f}_{a-1}^j$ changes the entries of $T_{(j)}$ in the boxes labeled $(a-2)m +1, \ldots, (a-2)m + j$ from $a-1$ to $a$, and $\tilde{f}_{a-1}$ annihilates the tableau so obtained.
4. Since $w_{T_{(j,j)}}^{-1}(a) = \lbrace (a-1)m + j + 1, \ldots, am \rbrace$ and $w_{T_{(j,j)}}^{-1}(a+1)$ is empty, we see that $\tilde{f}_a^{m-j}$ changes the entries of $T_{(j,j)}$ in the boxes labeled $(a-1)m + j + 1, \ldots, am$ from $a$ to $a+1$, and $\tilde{f}_a$ annihilates the tableau so obtained.
Restriction of Global Bases
===========================
Restricting $V^{\lambda}$ to $U_q(\mathfrak{sl}_{k-1})$
-------------------------------------------------------
Let $a \in \lbrace 1, \ldots, k-1 \rbrace$, and let $m$ be a positive integer. Set $\lambda := m \omega_a$. We start with two lemmas that assist us in “lifting” to $L^{\lambda}$ the assertions in Lemma \[zero\] concerning elements of $L^{\lambda}/qL^{\lambda}$.
\[lift\] Let $N$ be a positive integer, and let $v \in L^{\lambda}$ be a weight vector. Suppose that there exists a weight vector $v_N \in L^{\lambda}$ such that $e_i v_N = 0$ and $v = f_i^{(N)} v_N$. If $\overline{v} \neq 0$ and $\tilde{f}_i \overline{v} = 0$, then $\tilde{f}_i v = 0$.
Assume the contrary. Since $\tilde{f}_i v = f_i^{(N+1)} v_N$ is nonzero, it follows that $v = \tilde{e}_i \tilde{f}_i v$, whence $\overline{v} = \tilde{e}_i \tilde{f}_i \overline{v} = 0$.
\[convert\] For all sequences of nonnegative integers $(c_1, \ldots, c_{k-1})$, $$\tilde{f}_{k-1}^{c_{k-1}} \cdots \tilde{f}_1^{c_1} v_{\lambda} = f_{k-1}^{(c_{k-1})} \cdots f_1^{(c_1)} v_{\lambda}.$$
Note that $f_{i-1}^{(c_{i-1})} \cdots f_1^{(c_1)} v_{\lambda} \in \ker e_i$ for all $1 \leq i \leq k-1$.
\[tech\] The following assertions hold.
1. For all $a \leq a' \leq k-1$, and all sequences of nonnegative integers $(c_a, \ldots, c_{a'})$, $$f_{a'}^{(c_{a'})} \cdots f_a^{(c_a)} v_{\lambda}$$ is nonzero if and only if $m \geq c_a \geq \cdots \geq c_{a'} \geq 0$.
2. For all $a \leq a' \leq k-2$, and all $0 \leq j \leq m$, $$e_{a'} f_{a'+1}^{(j)} \cdots f_a^{(j)} v_{\lambda} = 0.$$
3. For all $a+1 \leq a' \leq k-2$, and all $0 \leq j \leq m$, $$f_{a'}f_{a'+1}^{(j)} \cdots f_a^{(j)} v_{\lambda} = 0.$$
4. $f_{a-1}^{(j+1)} f_{a}^{(j)} v_{\lambda} = 0$ for all $0 \leq j \leq m$.
5. $f_{a}^{(m-j+1)} f_{a+1}^{(j)} f_{a}^{(j)} v_{\lambda} = 0$ for all $0 \leq j \leq m$.
We freely apply Lemmas \[zero\], \[lift\], and \[convert\] throughout.
1. The “if” direction is immediate.
For the “only if” direction, assume the contrary. Set $c_{a-1} := m$, and let $a'$ be minimal for which there exists a sequence $(c_a, \ldots, c_{a'})$ with $c_{a'-1} < c_{a'}$ such that $f_{a'}^{(c_{a'})} \cdots f_a^{(c_a)} v_{\lambda} \neq 0$.
Note that $$\tilde{f}_{a'}^{c_{a'-1}} \tilde{f}_{a'-1}^{c_{a'-1}} \cdots \tilde{f}_a^{c_a} \overline{v_{\lambda}} \neq 0 \quad \text{and} \quad \tilde{f}_{a'}^{c_{a'-1}+1} \tilde{f}_{a'-1}^{c_{a'-1}} \cdots \tilde{f}_a^{c_a} \overline{v_{\lambda}} = 0.$$
2. For all $a \leq a' \leq k-1$, set $v_{a', j} := f_{a'}^{(j)} \cdots f_{a}^{(j)} v_{\lambda}$, and set $v_{a-1,j} := v_{\lambda}$. From the identity $e_{a'} f_{a'}^{(j)} = f_{a'}^{(j)} e_{a'} + f_{a'}^{(j-1)} [q^{h_{a'}};1-j]_q$ (cf. Lemma 3.2.5 in Hong–Kang [@Hong]), we see that $$\begin{aligned}
e_{a'} v_{a'+1,j} & = f_{a'+1}^{(j)} e_{a'} f_{a'}^{(j)} v_{a'-1,j} \\
& = f_{a'+1}^{(j)} f_{a'}^{(j)} e_{a'} v_{a'-1,j} + f_{a'+1}^{(j)} f_{a'}^{(j-1)} [q^{h_{a'}};1-j]_q v_{a'-1,j}.
\end{aligned}$$
The first summand is zero. Since $[q^{h_{a'}}; 1-j]_q$ acts as a scalar in $\mathbb{Q}(q)$ on the weight vector $v_{a'-1,j}$, and $f_{a'+1}^{(j)} f_{a'}^{(j-1)} v_{a'-1,j} = 0$ by assertion (1), it follows that the second summand is also zero.
3. Since $e_{a'} v_{a'+1,j} = 0$ by assertion (2), it follows that $\tilde{f}_{a'} v_{a'+1,j}= f_{a'} v_{a'+1,j}$. Note that $\overline{v_{a'+1,j}} \neq 0$ and $\tilde{f}_{a'} \overline{v_{a'+1,j}} = 0$.
4. Since $e_{a-1} v_{a,j} = 0$, it follows that $\tilde{f}_{a-1}^{j+1} v_{a,j} = f_{a-1}^{(j+1)} v_{a,j}$. Note that $\tilde{f}_{a-1}^{j} \overline{v_{a,j}} \neq 0$ and $\tilde{f}_{a-1}^{j+1} \overline{v_{a,j}} = 0$.
5. Since $e_a v_{a+1,j} = 0$ by assertion (2), it follows that $\tilde{f}_a^{m-j+1} v_{a+1,j} = f_a^{(m-j+1)} v_{a+1,j}$. Note that $\tilde{f}_a^{m-j} \overline{v_{a+1,j}} \neq 0$ and $\tilde{f}_a^{m-j+1} \overline{v_{a+1,j}} = 0$.
The following proposition follows immediately from Proposition \[tech\].
\[vanish\] For all $0 \leq j \leq m$, set $v_{j} := f_{k-1}^{(j)} \cdots f_{a}^{(j)} v_{\lambda}$. Then, for all $0 \leq j \leq m$, the following conditions hold:
1. $v_{j} \neq 0$;
2. $v_{j} \in \ker e_1 \cap \cdots \cap \ker e_{k-2}$;
3. $v_{j} \in \ker f_1 \cap \cdots \cap \ker f_{a-2} \cap \ker f_{a+1} \cap \cdots \cap \ker f_{k-2}$;
4. $v_{j} \in \ker f_{a-1}^{j+1} \cap \ker f_{a}^{m-j+1}$.
We are finally ready to describe explicitly the decomposition of $V^{\lambda}$ as a direct sum of irreducible $U_q(\mathfrak{sl}_{k-1})$-representations.
\[iso\] For all $0 \leq j \leq m$, the $U_q(\mathfrak{sl}_{k-1})$-subrepresentation $U_q(\mathfrak{sl}_{k-1}) v_{j}$ is isomorphic to the irreducible $U_q(\mathfrak{sl}_{k-1})$-representation with highest weight $\lambda^j := j \omega_{a-1} + (m-j) \omega_a$.
Note that $q^h v_j = q^{m \omega_a(h)-j\alpha_{a}(h) - \cdots - j\alpha_{k-1}(h)} v_j$ for all $h \in P^{\vee}$. Since $$m \omega_a - j \alpha_{a} - \cdots - j \alpha_{k-1} = j \omega_{a-1} + (m-j) \omega_a + j E_k,$$ and the image of $E_k$ in the weight lattice of $\mathfrak{sl}_{k-1}$ is zero, $v_j$ belongs to the $\lambda^j$-weight space of the restriction of $V^{\lambda}$ to $U_q(\mathfrak{sl}_{k-1})$. In view of Proposition \[vanish\], the conclusion follows from Corollary 3.4.7 in Hong–Kang [@Hong].
Given a dominant weight $\widehat{\lambda}$ of $\mathfrak{sl}_{k-1}$, we write $\widehat{V}^{\widehat{\lambda}}$ for the irreducible $U_q(\mathfrak{sl}_{k-1})$-representation with highest weight $\widehat{\lambda}$.
\[decomp\] $V^{\lambda} = \bigoplus_{j=0}^m U_q(\mathfrak{sl}_{k-1}) v_{j}$.
By the Pieri rule, $V^{\lambda}$ is isomorphic as a $U_q(\mathfrak{sl}_{k-1})$-representation to $\bigoplus_{j=0}^m \widehat{V}^{\lambda^j}$. In view of Proposition \[iso\], the conclusion follows from Schur’s lemma.
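For example, with $k = 4$, $a = 2$, and $m = 2$, Theorem \[decomp\] gives $$V^{2\omega_2} \cong \widehat{V}^{2\omega_2} \oplus \widehat{V}^{\omega_1 + \omega_2} \oplus \widehat{V}^{2\omega_1}$$ as $U_q(\mathfrak{sl}_3)$-representations, of dimensions $6 + 8 + 6 = 20$, with highest weight vectors $v_0 = v_{\lambda}$, $v_1 = f_3 f_2 v_{\lambda}$, and $v_2 = f_3^{(2)} f_2^{(2)} v_{\lambda}$.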
Restricting the lower global basis
----------------------------------
For all $0 \leq j \leq m$:
- Fix a weight vector $u_j \in \widehat{V}^{\lambda^j}_{\lambda^j}$;
- Let $\phi_j \colon \widehat{V}^{\lambda^j} \rightarrow U_q(\mathfrak{sl}_{k-1})v_j$ be the isomorphism with $\phi_j(u_j) = v_j$;
- Let $(\widehat{L}^{\lambda^j}, \widehat{B}^{\lambda^j})$ be the crystal basis of $\widehat{V}^{\lambda^j}$ such that $\widehat{L}^{\lambda^j}_{\lambda^j} = A_0 u_j$ and $\widehat{B}^{\lambda^j}_{\lambda^j} = \lbrace \overline{u_j} \rbrace$;
- Let $\widehat{G}^{\lambda^j} = \lbrace \widehat{G}^j_b \rbrace_{b \in \widehat{B}^{\lambda^j}}$ be the lower global basis of $\widehat{V}^{\lambda^j}$.
\[in\] The isomorphism $\phi_j$ sends $\widehat{L}^{\lambda^j}$ into $L^{\lambda}$, and the induced map $\phi_j \colon \widehat{L}^{\lambda^j}/q\widehat{L}^{\lambda^j} \rightarrow L^{\lambda}/qL^{\lambda}$ sends $\widehat{B}^{\lambda^j}$ into $B^{\lambda}$.
Since $v_j = \tilde{f}_{k-1}^{j} \cdots \tilde{f}_{a}^j v_{\lambda}$, the former claim follows from Proposition \[lattice\]. For the latter claim, it suffices to show that $\tilde{f}_{i_r} \cdots \tilde{f}_{i_1} \overline{u_j} \neq 0$ implies $\tilde{f}_{i_r} \cdots \tilde{f}_{i_1} \overline{v_j} \neq 0$ for all $r \geq 0$ and sequences $i_1, \ldots, i_r \in \lbrace 1, \ldots, k-2 \rbrace$.
By Theorem \[crystal\], we may substitute $T_{\lambda^j}$ for $\overline{u_j}$ and $T_j := \tilde{f}_{k-1}^j \cdots \tilde{f}_a^j (T_{\lambda})$ for $\overline{v_j}$. By Lemma \[expl\], the entries of $T_j$ in the rightmost $j$ boxes in the bottom row are all equal to $k$, and the tableau obtained by removing these boxes is $T_{\lambda^j}$. Inducting on $r$, we see that if $\tilde{f}_{i_r} \cdots \tilde{f}_{i_1} (T_{\lambda^j}) \neq 0$, then $\tilde{f}_{i_r} \cdots \tilde{f}_{i_1}(T_j)$ is the unique tableau such that (i) the entries in the rightmost $j$ boxes in the bottom row are all equal to $k$, and (ii) the tableau obtained by removing these boxes is $\tilde{f}_{i_r} \cdots \tilde{f}_{i_1}(T_{\lambda^j})$.
\[res\] The following assertions hold.
1. $L^{\lambda} = \bigoplus_{j=0}^m \phi_j(\widehat{L}^{\lambda^j})$.
2. $B^{\lambda} = \bigsqcup_{j=0}^m \phi_j(\widehat{B}^{\lambda^j})$.
3. $G^{\lambda} = \bigsqcup_{j=0}^m \phi_j(\widehat{G}^{\lambda^j})$. In particular, $\phi_j(\widehat{G}^j_b) = G^{\lambda}_{\phi_j(b)}$ for all $b \in \widehat{B}^{\lambda^j}$.
We freely apply Lemma \[in\].
1. Set $L := \bigoplus_{j=0}^m \widehat{L}^{\lambda^j}$, and define $\phi \colon L \rightarrow L^{\lambda}$ by $\phi := \sum_{j=0}^m \phi_j$. Note that $\phi$ induces maps $\phi_q \colon \mathbb{Q}(q) \otimes_{A_0} L \rightarrow \mathbb{Q}(q) \otimes_{A_0} L^{\lambda}$ and $\phi_{\text{zero}} \colon L/qL \rightarrow L^{\lambda}/qL^{\lambda}$. It follows from Theorem \[decomp\] that $\phi_q$ is an isomorphism of $\mathbb{Q}(q)$-vector spaces. Thus, $\operatorname{rank}_{A_0} L = \operatorname{rank}_{A_0} L^{\lambda}$, which implies $\dim_{\mathbb{Q}} L/qL = \dim_{\mathbb{Q}} L^{\lambda}/qL^{\lambda}$. Since $\phi_{\text{zero}}$ is injective, we see that $\phi_{\text{zero}}$ is an isomorphism of $\mathbb{Q}$-vector spaces, and the conclusion follows from Nakayama’s Lemma.
2. Immediate from the observation that $\phi_{\text{zero}}$ is an isomorphism.
3. By assertion (1), we see that $\bigsqcup_{j=0}^m \phi_j(\widehat{G}^{\lambda^j})$ is an $A_0$-basis of $L^{\lambda}$.
Let $b \in \widehat{B}^{\lambda^j}$. We claim that $\phi_j(\widehat{G}^{j}_b) \in U_A(\mathfrak{sl}_k) v_{\lambda} \cap L^{\lambda} \cap \psi(L^{\lambda})$. Since $\widehat{G}^j_b \in \widehat{L}^{\lambda^j}$, it follows that $\phi_j(\widehat{G}^j_b) \in L^{\lambda}$.
Fix $p \in U_A(\mathfrak{sl}_{k-1})$ such that $\widehat{G}^j_b = p u_j$. Note that $\phi_j(p u_j) = p v_j = p f_{k-1}^{(j)} \cdots f_{a}^{(j)} v_{\lambda} \in U_A(\mathfrak{sl}_k) v_{\lambda}$.
Furthermore, since $p u_j = \psi(p) u_j$, it follows that $p v_j = \psi(p) v_j = \psi(p) f_{k-1}^{(j)} \cdots f_{a}^{(j)} v_{\lambda} = \psi(p) \psi(f_{k-1}^{(j)} \cdots f_{a}^{(j)}) v_{\lambda} = \psi(p v_j) \in \psi(L^{\lambda})$.
Hence $\bigsqcup_{j=0}^m \phi_j(\widehat{G}^{\lambda^j})$ is a $\mathbb{Q}$-basis of $U_A(\mathfrak{sl}_k) v_{\lambda} \cap L^{\lambda} \cap \psi(L^{\lambda})$, for it is a linearly independent subset of cardinality equal to that of $B^{\lambda}$.
Restricting the upper global basis
----------------------------------
For all $0 \leq j \leq m$, let $(\cdot, \cdot)_j$ denote the Shapovalov form on $\widehat{V}^{\lambda^j}$. Note that $(\cdot, \cdot)_j = \frac{(\phi_j(\cdot), \phi_j(\cdot))}{(v_j, v_j)}$.
\[upper\] Let $\widehat{F}^{\lambda^j} = \lbrace \widehat{F}^j_b \rbrace_{b \in \widehat{B}^{\lambda^j}}$ be the upper global basis of $\widehat{V}^{\lambda^j}$. Then $F^{\lambda} = \bigsqcup_{j=0}^m \frac{\phi_j(\widehat{F}^{\lambda^j})}{(v_j, v_j)}$. In particular, $\frac{\phi_j(\widehat{F}^{j}_b)}{(v_j, v_j)} = F^{\lambda}_{\phi_j(b)}$ for all $b \in \widehat{B}^{\lambda^j}$.
Let $b \in \widehat{B}^{\lambda^j}$. For all $b' \in \widehat{B}^{\lambda^j}$, we see that $$\frac{(\phi_j(\widehat{F}^j_b),\phi_j(\widehat{G}^j_{b'}))}{(v_j, v_j)} = (\widehat{F}^j_b, \widehat{G}^j_{b'})_j = \delta_{b,b'}.$$
Furthermore, for all $j' \neq j$ and $b' \in \widehat{B}^{\lambda^{j'}}$, it follows from Propositions \[perpen\] and \[wvec\] that $(\phi_j(\widehat{F}^j_b),\phi_{j'}(\widehat{G}^{j'}_{b'})) = 0$, and the conclusion follows from Theorem \[res\].
\[gauss\] For all $a \leq a' \leq k-1$, set $v_{a',j} := f_{a'}^{(j)} \cdots f_{a}^{(j)} v_{\lambda}$. Then $(v_{a',j}, v_{a',j}) = \begin{bsmallmatrix} m \\ j \end{bsmallmatrix}_q$ for all $0 \leq j \leq m$.
We induct on $a'$. For the base case, note that $$\begin{aligned}
(f_a^{(j)}v_{\lambda}, f_a^{(j)} v_{\lambda}) & = (v_{\lambda}, e_a^{(j)} f_a^{(j)} v_{\lambda})
\\ &= \frac{(v_{\lambda},e_a^{(j-1)}f_a^{(j)}e_a v_{\lambda}) + (v_{\lambda}, e_a^{(j-1)} f_a^{(j-1)} [q^{h_a}; 1-j]_q v_{\lambda})}{[j]_q}
\\ &= \frac{[m+1-j]_q(v_{\lambda}, e_a^{(j-1)} f_a^{(j-1)} v_{\lambda})}{[j]_q},
\end{aligned}$$ so we see by induction on $j$ that $$(f_a^{(j)} v_{\lambda}, f_a^{(j)} v_{\lambda}) = \frac{[m+1-j]_q \cdots [m]_q}{[j]_q \cdots [1]_q} (v_{\lambda}, v_{\lambda}) = \begin{bmatrix} m \\ j \end{bmatrix}_q.$$
For the inductive step, note that $$\begin{aligned}
& (f_{a'}^{(j')} v_{a'-1,j}, f_{a'}^{(j')} v_{a'-1,j})
= (v_{a'-1,j}, e_{a'}^{(j')} f_{a'}^{(j')} v_{a'-1,j})
\\ & = \frac{(v_{a'-1,j}, e_{a'}^{(j'-1)} f_{a'}^{(j')} e_{a'} v_{a'-1,j}) + (v_{a'-1,j}, e_{a'}^{(j'-1)} f_{a'}^{(j'-1)} [q^{h_{a'}}; 1-j']_q v_{a'-1,j})}{[j']_q}
\\ & = \frac{[j+1-j']_q (v_{a'-1,j},e_{a'}^{(j'-1)} f_{a'}^{(j'-1)} v_{a'-1,j})}{[j']_q},
\end{aligned}$$ so we see by induction on $j'$ that $$\begin{aligned}
& (f_{a'}^{(j)} v_{a'-1,j}, f_{a'}^{(j)} v_{a'-1,j}) = \frac{[1]_q \cdots [j]_q}{[j]_q \cdots [1]_q} (v_{a'-1,j}, v_{a'-1,j}) = \begin{bmatrix} m \\ j \end{bmatrix}_q.
\end{aligned}$$
\[homo\] For all $0 \leq j \leq m$, let $\widehat{V}^{\lambda^j}_{\mathbb{C}}$ be the $\mathbb{C}$-form of the $q=1$ specialization of $\widehat{V}^{\lambda^j}$. Let $\tau_j \colon \widehat{V}^{\lambda^j}_{\mathbb{C}} \rightarrow V^{\lambda}_{\mathbb{C}}$ be given by $\widehat{F}^{j}_b \mapsto F^{\lambda}_{\phi_j(b)}$. Then $\tau_j$ is an $\mathfrak{sl}_{k-1}$-module homomorphism.
It follows from Theorem \[upper\] and Lemma \[gauss\] that the $U_q(\mathfrak{sl}_{k-1})$-module homomorphism $\phi_j \colon \widehat{V}^{\lambda^j} \rightarrow V^{\lambda}$ is given by $\widehat{F}^j_b \mapsto \begin{bsmallmatrix} m \\ j \end{bsmallmatrix}_q F^{\lambda}_{\phi_j(b)}$. Hence the $\mathbb{C}$-linear map $\widehat{V}^{\lambda^j}_{\mathbb{C}} \rightarrow V^{\lambda}_{\mathbb{C}}$ given by $\widehat{F}^j_b \mapsto \binom{m}{j} F^{\lambda}_{\phi_j(b)}$ is an $\mathfrak{sl}_{k-1}$-module homomorphism. Since $\binom{m}{j} \neq 0$, rescaling by this constant shows that $\tau_j$ is an $\mathfrak{sl}_{k-1}$-module homomorphism as well.
Proof of Rhoades’s Theorem
==========================
Let $\Lambda = (\Lambda_1, \ldots, \Lambda_k)$ be a partition, and let $V^{\Lambda}$ be the irreducible $GL_k(\mathbb{C})$-representation with highest weight $\Lambda$. Set $\lambda := \Lambda_1 E_1 + \cdots + \Lambda_k E_k$. Then $\lambda$ is the image of $\Lambda$ in the weight lattice of $\mathfrak{sl}_k$, and $V^{\Lambda}$ is isomorphic as an $\mathfrak{sl}_k$-representation to $V^{\lambda}_{\mathbb{C}}$. Thus, we may consider $F^{\lambda}$ as a basis of $V^{\Lambda}$ (such that the $\mathfrak{sl}_k$-action on $V^{\Lambda}$ induced from the $GL_k(\mathbb{C})$-module structure agrees with that induced from the $\mathfrak{sl}_k$-action on $V^{\lambda}_{\mathbb{C}}$).
Identify $B^{\lambda}$ with $SSYT(\Lambda, k)$.
\[bz\] Set $\epsilon_{\Lambda} := (-1)^{\sum_{i=1}^k (i-1) \Lambda_i}$. For all $b \in SSYT(\Lambda, k)$, $$w_{0,k} \cdot F^{\lambda}_b = \epsilon_{\Lambda} F^{\lambda}_{\xi_k(b)}.$$
Suppose $\Lambda$ is rectangular. Let $a$ and $m$ be positive integers for which $\Lambda = (m^a)$. If $a = k$, then $SSYT(\Lambda, k)$ consists of exactly one tableau, so we may assume $a \leq k-1$. For all $0 \leq j \leq m$, set $\Lambda^j := (m^{a-1},m-j)$, and let $\widehat{V}^{\Lambda^j}$ be the irreducible $GL_{k-1}(\mathbb{C})$-representation with highest weight $\Lambda^j$.
\[ghom\] The map $\tau_j \colon \widehat{V}^{\Lambda^j} \rightarrow V^{\Lambda}$ given by $\widehat{F}^{j}_b \mapsto F^{\lambda}_{\phi_j(b)}$ is a $GL_{k-1}(\mathbb{C})$-module homomorphism.
By the Pieri rule, $V^{\Lambda} \cong \bigoplus_{j=0}^m \widehat{V}^{\Lambda^j}$, so there exist $GL_{k-1}(\mathbb{C})$-module homomorphisms $\tau'_j \colon \widehat{V}^{\Lambda^j} \rightarrow V^{\Lambda}$ such that $V^{\Lambda} = \bigoplus_{j=0}^m \tau'_j(\widehat{V}^{\Lambda^j})$. Since $\tau'_j$ and $\tau_j$ are $\mathfrak{sl}_{k-1}$-module homomorphisms (cf. Proposition \[homo\]), it follows that $\tau'_j$ agrees with $\tau_j$ up to scaling by a nonzero constant in $\mathbb{C}$. Since a nonzero scalar multiple of a $GL_{k-1}(\mathbb{C})$-module homomorphism is again a $GL_{k-1}(\mathbb{C})$-module homomorphism, the claim follows.
Identify $\widehat{B}^{\lambda^j}$ with $SSYT(\Lambda^j, k-1)$ for all $j$. Given $b \in SSYT(\Lambda^j, k-1)$, note that $\phi_j(b) \in SSYT(\Lambda, k)$ is the unique tableau such that (i) the entries in the rightmost $j$ boxes in the bottom row are all equal to $k$, and (ii) the tableau obtained by removing these boxes is $b$ (cf. Lemma \[in\]). It follows that $\phi_j$ commutes with $\xi_{k-1}$.
Recall from Theorem \[res\] that $SSYT(\Lambda, k) = \bigsqcup_{j=0}^m \phi_j(SSYT(\Lambda^j, k-1))$. Thus, it suffices to describe the action of $c_k$ on the basis elements in $V^{\Lambda}$ corresponding to tableaux in $\phi_j(SSYT(\Lambda^j, k-1))$, whence Rhoades’s cyclic sieving result follows.
\[main\] For all $b \in SSYT(\Lambda^j, k-1)$, $$c_k \cdot F^{\lambda}_{\phi_j(b)} = (-1)^{(a-1)j} F^{\lambda}_{J(\phi_j(b))}.$$
Note that $\xi_{k} \circ \xi_{k-1} = J$. Invoking Theorem \[bz\] and Proposition \[ghom\], we find $$\begin{aligned}
c_k \cdot F^{\lambda}_{\phi_j(b)} & = w_{0,k} w_{0, k-1} \cdot F^{\lambda}_{\phi_j(b)} = w_{0,k} \cdot \tau_j \left(w_{0,k-1} \cdot \widehat{F}^{j}_b\right)
\\ & = w_{0,k} \cdot \tau_j\left(\epsilon_{\Lambda^j} \widehat{F}^{j}_{\xi_{k-1}(b)}\right) = w_{0,k} \cdot \epsilon_{\Lambda^j} F^{\lambda}_{\phi_j(\xi_{k-1}(b))}
\\ & = w_{0,k} \cdot \epsilon_{\Lambda^j} F^{\lambda}_{\xi_{k-1}(\phi_j(b))} = \epsilon_{\Lambda} \epsilon_{\Lambda^j} F^{\lambda}_{\xi_k(\xi_{k-1}(\phi_j(b)))}
\\ & = (-1)^{(a-1)j} F^{\lambda}_{J(\phi_j(b))}.\end{aligned}$$
Acknowledgments
===============
The author thanks Brendon Rhoades for giving us all something interesting to think about these past years.
[21]{}
A. Berenstein and A. Zelevinsky, Canonical bases for the quantum group of type $A_r$ and piecewise-linear combinatorics, *Duke Math. J.* **82** (1996), 473–502.
J. Du, Canonical bases for irreducible representations of quantum $GL_n$, *Bull. London Math. Soc.* **24** (1992), 325–334.
B. Fontaine and J. Kamnitzer, Cyclic sieving, rotation, and geometric representation theory, *Selecta Math.* **20** (2014), 609–625.
A. Garsia and T. McLarnan, Relations between Young’s natural and the Kazhdan–Lusztig representations of $S_n$, *Adv. Math.* **69** (1988), 32–92.
I. Grojnowski and G. Lusztig, A comparison of bases of quantized enveloping algebras, *Linear Algebraic Groups and Their Representations*, Providence, RI: The American Mathematical Society, 1993, pp. 11–20.
J. Hong and S.-J. Kang, *Introduction to Quantum Groups and Crystal Bases*, Providence, RI: The American Mathematical Society, 2002.
S. Hopkins, Cyclic sieving for plane partitions and symmetry, arXiv:1907.09337.
M. Kashiwara, Global crystal bases of quantum groups, *Duke Math. J.* **69** (1993), 455–485.
M. Kashiwara, On crystal bases, *Representations of Groups*, Providence, RI: The American Mathematical Society, 1995, pp. 155–197.
M. Kashiwara and T. Nakashima, Crystal graphs for representations of the $q$-analogue of classical Lie algebras, *J. Algebra* **165** (1994), 295–345.
G. Lusztig, Canonical bases arising from quantized enveloping algebras, *J. Amer. Math. Soc.* **3** (1990), 447–498.
K. Purbhoo, Wronskians, cyclic group actions, and ribbon tableaux, *Trans. Amer. Math. Soc.* **365** (2013), 1977–2030.
V. Reiner, D. Stanton, and D. White, The cyclic sieving phenomenon, *J. Combin. Theory Ser. A* **108** (2004), 17–50.
V. Reiner, D. Stanton, and D. White, What is... cyclic sieving?, *Notices Amer. Math. Soc.* **61** (2014), 169–171.
B. Rhoades, Cyclic sieving, promotion, and representation theory, *J. Combin. Theory Ser. A* **117** (2010), 38–76.
D. B. Rush, Cyclic sieving and plethysm coefficients, *Trans. Amer. Math. Soc.* **371** (2019), 923–947.
B. Sagan, The cyclic sieving phenomenon: a survey, *Surveys in Combinatorics 2011*, Cambridge, UK: Cambridge University Press, 2011, pp. 183–234.
L. Shen and D. Weng, Cyclic sieving and cluster duality for Grassmannian, arXiv:1803.06901.
M. Skandera, On the dual canonical and Kazhdan–Lusztig bases and 3412, 4231-avoiding permutations, *J. Pure Appl. Algebra* **212** (2008), 1086–1104.
J. R. Stembridge, Canonical bases and self-evacuating tableaux, *Duke Math. J.* **82** (1996), 585–606.
B. Westbury, Invariant tensors and the cyclic sieving phenomenon, *Electron. J. Combin.* **23** (2016), P4.25.
[^1]: Purbhoo [@Purbhoo] used the Wronski map; Fontaine and Kamnitzer [@Fontaine] and Westbury [@Westbury] both used invariant tensors. Fontaine and Kamnitzer recovered Rhoades’s result that promotion exhibits cyclic sieving on the set of semistandard tableaux with fixed (cyclically symmetric) content, which encompasses the set of standard tableaux.
[^2]: Let $\lambda$ be the image of $\Lambda$ in the weight lattice of $\mathfrak{sl}_k$. Then $V^{\Lambda}$ is (isomorphic to) the $\mathbb{C}$-form of the $q=1$ specialization of the $U_q(\mathfrak{sl}_k)$-representation $V^{\lambda}$ with highest weight $\lambda$. By the dual canonical basis of $V^{\Lambda}$, we mean the basis of $V^{\Lambda}$ corresponding to Lusztig’s dual canonical basis of $V^{\lambda}$ (cf. Berenstein–Zelevinsky [@Berenstein]).
[^3]: The reader following along with Rhoades [@Rhoades] should be aware that Rhoades incorrectly identifies a Kazhdan–Lusztig left cell with a fixed insertion tableau instead of a fixed recording tableau.
[^4]: We understand $j < j'$ to be consecutive if either $j' = j+1$ or all integers between $j$ and $j'$ have already been removed from the set.
[^5]: The labeling in [@Kashiwara] of the boxes of $\lambda$ differs from ours, but the theorem holds for any *admissible* labeling (cf. Theorem 7.3.6 in Hong–Kang [@Hong]). The crystal structure on tableaux is originally due to Kashiwara and Nakashima [@KashiwaraN].
---
abstract: 'Tracy-Widom and Baik-Rains distributions appear as universal limit distributions for height fluctuations in the one-dimensional Kardar-Parisi-Zhang (KPZ) *stochastic* partial differential equation (PDE). We obtain the same universal distributions in the spatiotemporally chaotic, nonequilibrium, but statistically steady state (NESS) of the one-dimensional Kuramoto-Sivashinsky (KS) *deterministic* PDE, by carrying out extensive pseudospectral direct numerical simulations to obtain the spatiotemporal evolution of the KS height profile $h(x,t)$ for different initial conditions. We establish, therefore, that the statistical properties of the 1D KS PDE in this state are in the 1D KPZ universality class.'
author:
- Dipankar Roy
- Rahul Pandit
bibliography:
- 'KSE.bib'
title: 'The one-dimensional Kardar-Parisi-Zhang and Kuramoto-Sivashinsky universality class: limit distributions'
---
Fundamental investigations of the statistical properties of hydrodynamical turbulence often use *randomly forced* versions of the *deterministic* Navier-Stokes (NS) equations (3D NSE, in three dimensions); the latter use a non-random forcing term to produce a turbulent, but nonequilibrium, statistically steady state (NESS). A randomly forced 3D, incompressible NS equation (3D RFNSE), proposed first by Edwards [@edwards1964] in 1964, has been studied extensively, via renormalization-group (RG) and other theoretical [@fns1977; @dm1979; @ff1983; @yo1986; @mw1995; @jkb1988; @aav1996; @aav1999] and numerical [@smp1998; @bclst2004] methods; these studies have shown that many statistical properties of turbulence in the 3D RFNSE are akin to their 3D NSE counterparts. In particular, the wave-number $k$ dependence of the energy spectrum [@k1941a; @k1941b; @f1995] $E(k)$, and even the multiscaling corrections [@f1995; @pf1985; @bppv1984; @bf2010; @ms1991] to the Kolmogorov phenomenology [@k1941a; @k1941b; @f1995] of 1941 are similar in both these models.
Can we find such similarity between the statistical properties of NESSs in *deterministic* and related *stochastic* partial differential equations (PDEs) that are simpler than their 3D hydrodynamical counterparts? It has been suggested, since the 1980s, that the Kuramoto-Sivashinsky (KS) PDE, a deterministic interface-growth model for a height field $h({\bf x},t)$, which is used in studies of chemical waves, flame fronts, and the surfaces of thin films flowing under gravity [@kuramoto1976; @siva1977; @sm1980; @ruyer1998; @pz1985; @chen1984; @grinstein1996], is a simplified model for turbulence [@pz1985]. It has been conjectured [@Yakhot1981], and subsequently shown by compelling numerical studies [@hnz1986; @sneppen1992; @hayot1993; @jayaprakash1993; @2d_bch1999; @2d_kkp2015], in both one dimension (1D) and two dimensions (2D), that the long-distance and long-time behaviors of correlation functions, in the spatiotemporally chaotic NESS of the KS PDE, exhibit the same power-law scaling as their counterparts in the Kardar-Parisi-Zhang (KPZ) equation [@kpz1986; @thhzhang1995; @thhkat2015; @quastel2015], a stochastic PDE (SPDE), in which the height field $h({\bf x},t)$ is kinetically roughened. The elucidation of the statistics of $h({\bf x},t)$ in the KPZ SPDE has played a central role in nonequilibrium statistical mechanics, in general, and interface-growth phenomena, in particular. Early KPZ studies [@kpz1986; @thhzhang1995] have concentrated on height-field correlations, the width $w(L,t)$ of the fluctuating KPZ interface, and their power-law dependences on the linear system size $L$ and time $t$, for large $L$ and $t$ (see below); especially for the 1D case, several results can be obtained analytically. The universality of the power-law exponents has been demonstrated by explicit numerical calculations, e.g., in the poly-nuclear growth (PNG) model, directed polymers in random media (DPRM), and the asymmetric simple exclusion process (ASEP), and by experiments in turbulent liquid crystals [@takeuchi2011; @takeuchi2012; @takeuchi2013], all of which lie (in suitable parameter regimes) in the KPZ universality class. The seminal work of Prähofer and Spohn (recently referred to as “the $2^{nd}$ KPZ Revolution” [@thhkat2015]) on the PNG model [@prahofer2000] has led to a new set of studies of the 1D KPZ universality class [@sasamoto2010; @calabrese2011; @imamura2012; @corwin2012; @thhlin2014; @quastel2015; @saberi-naserabadi-krug-2019], which have yielded the remarkable result that, at a point $x$ and at large times $t$, $$h(x,t) - h(x,0) \approx v_{\infty} t + ( \Gamma t)^{\upbeta_{\text{KPZ}}} \upchi_\beta +
o(t^{\upbeta_{\text{KPZ}}}) \ , \ \text{for} \ t \rightarrow \infty,
\label{eq:KPZh}$$ where $v_{\infty}$ and $\Gamma$ are model-dependent constants (Supplemental Material [@supp]), the exponent $\upbeta_{\text{KPZ}}=1/3$, and $\upchi_\beta$ is a random variable distributed according to the Tracy-Widom (TW) distribution for the Gaussian Orthogonal Ensemble (GOE) ($\beta=1$) and for the Gaussian Unitary Ensemble (GUE) ($\beta=2$), familiar from the theory of random matrices [@tracy1994], or the Baik-Rains (BR $F_{0}$) distribution [@baik2000] ($\beta=0$); the value of $\beta$ depends on the initial condition. We show, by extensive direct numerical simulations (DNSs), that the result holds for the NESS of the 1D KS PDE. Thus, the correspondence between the statistical properties of these states, in the 1D KS (PDE) and their counterparts in the 1D KPZ (SPDE), does not stop at the simple correlation functions, investigated so far [@hnz1986; @sneppen1992; @hayot1993; @jayaprakash1993]; we demonstrate that this correspondence includes the universal limit distributions obtained in “the $2^{nd}$ KPZ Revolution” [@thhkat2015]. Such a result has not been obtained hitherto for a spatiotemporally chaotic NESS of a deterministic PDE.
![(Color online) The two-point, time-dependent correlation function $S(k, \delta t)$ (see text). We plot $S(k, \delta t)$ for three different values of $\delta t$; we also show, for comparison, the theoretical result (orange curve PS) obtained by Prähofer and Spohn [@ps2004] for the 1D KPZ equation.[]{data-label="fig:fig2"}](tcor.pdf){width="1\linewidth"}
The KS PDE, which predates the KPZ SPDE, is $$\partial_{t}h(\mathbf{x},t) + \Delta h(\mathbf{x},t) + \Delta^{2}h(\mathbf{x},t) +\frac{1}{2}(\nabla h(\mathbf{x},t))^{2}=0, \label{eq:ks}$$ where $\nabla \equiv \partial/\partial \mathbf{x}$, $\partial_{t} \equiv
\partial/\partial t$, $\Delta \equiv \nabla^{2}$, and $h, \, \mathbf{x}$, and $t$ have been scaled such that the linear system size $L$ is the only control parameter. The dynamical and long-wavelength properties of the 1D KS PDE have been explored via DNSs in Refs. [@hnz1986; @sneppen1992; @hayot1993; @hyman1986; @kevrekidis1990]; several mathematical results have been obtained in Refs. [@collet1992; @jolly1990; @conte1989].
The 1D KPZ SPDE is $$\begin{aligned}
\partial_{t} h(x,t) &=& \nu \Delta h(x,t) +\frac{\lambda}{2} (\nabla h(x,t))^2 + \eta \ , \nonumber \\
\langle \eta(x,t) \eta(x',t') \rangle &=& D \delta(x-x')\delta(t-t')\ , \label{eq:kpz}\end{aligned}$$ where $\nu$, the diffusivity, and $\lambda$, the strength of the nonlinearity, are real parameters, and $\eta$ is a zero-mean Gaussian white noise, with variance $D$.
We solve the 1D KS PDE (\[eq:ks\]), with periodic boundary conditions on a domain of size $L$, by using the pseudospectral method [@cq1981; @chqz2006; @trefethen2000] and the $2/3$ dealiasing rule. For time marching we use the fourth-order, exponential time-differencing Runge-Kutta scheme ETDRK4 [@kassam2005; @cox2002]. For reliable statistics, it is important to carry out long simulations with large values of $L$; we report results with $L=2^{20}$, by far the highest spatial resolution that has been used for a DNS of the 1D KS PDE; for this we have developed a CUDA C code that runs very efficiently on a GPU cluster with NVIDIA Tesla K80 accelerators.
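For readers who wish to reproduce the basic scheme, the following minimal Python sketch integrates Eq. (\[eq:ks\]) pseudospectrally with the $2/3$ dealiasing rule and ETDRK4 time marching; the grid size, time step, and random initial profile in the usage line are illustrative placeholders, not the production parameters ($L=2^{20}$, CUDA C) used for the results reported here.

```python
import numpy as np

def ks_etdrk4(h0, L_dom, dt, nsteps):
    # Pseudospectral ETDRK4 integrator for  h_t = -h_xx - h_xxxx - (1/2)(h_x)^2
    # on a periodic domain of length L_dom, following Kassam & Trefethen (2005).
    N = h0.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L_dom / N)    # angular wave numbers
    Lop = k**2 - k**4                                    # linear part in Fourier space
    E, E2 = np.exp(dt * Lop), np.exp(dt * Lop / 2.0)

    # ETDRK4 coefficients via contour integration (avoids cancellation errors)
    M = 32
    r = np.exp(1j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
    LR = dt * Lop[:, None] + r[None, :]
    Q  = dt * np.real(np.mean((np.exp(LR / 2) - 1) / LR, axis=1))
    f1 = dt * np.real(np.mean((-4 - LR + np.exp(LR) * (4 - 3 * LR + LR**2)) / LR**3, axis=1))
    f2 = dt * np.real(np.mean((2 + LR + np.exp(LR) * (-2 + LR)) / LR**3, axis=1))
    f3 = dt * np.real(np.mean((-4 - 3 * LR - LR**2 + np.exp(LR) * (4 - LR)) / LR**3, axis=1))

    dealias = np.abs(k) < (2.0 / 3.0) * np.abs(k).max()  # 2/3 dealiasing rule

    def nonlin(v):
        # Fourier transform of the nonlinear term -(1/2)(h_x)^2
        hx = np.real(np.fft.ifft(1j * k * v))
        return -0.5 * dealias * np.fft.fft(hx * hx)

    v = np.fft.fft(h0)
    for _ in range(nsteps):
        Nv = nonlin(v)
        a = E2 * v + Q * Nv;  Na = nonlin(a)
        b = E2 * v + Q * Na;  Nb = nonlin(b)
        c = E2 * a + Q * (2 * Nb - Nv);  Nc = nonlin(c)
        v = E * v + Nv * f1 + 2 * (Na + Nb) * f2 + Nc * f3
    return np.real(np.fft.ifft(v))

# Illustrative usage (small grid and short run, not the production parameters):
h0 = 0.01 * np.random.randn(1024)
h_final = ks_etdrk4(h0, L_dom=400.0, dt=0.25, nsteps=4000)
```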
From our DNSs we compute $h(x,t)$ for six different kinds of initial conditions, IC1-IC6, which we depict by plots of $h(x,0)$ versus $x$ in Figs. \[fig:fig1\] (a), (e), (i), (m), (q), and (u); we show the short-time spatiotemporal evolution of $h(x,t)$, in the interval $x \in [-100,100]$, in Figs. \[fig:fig1\] (b), (f), (j), (n), (r), and (v) (see the videos V1-V6 in the Supplemental Material [@supp]). We choose these ICs to mimic the effect of wedge, flat, stationary, wedge-to-stationary, wedge-to-flat, and flat-to-stationary geometries in the ASEP model, which are listed in Refs. [@corwin2012; @bfs2008; @cfp2010] as initial conditions for six different sub-classes of the 1D KPZ universality class. Previous numerical studies [@hayot1993; @sneppen1992] of the 1D KS PDE have shown that two-point, equal-time height-field correlations show the scaling behaviors of their 1D KPZ SPDE counterparts for times greater than a crossover time $t_{c}
\simeq 18700$ and lengths larger than the crossover size $L_{c}\simeq 3600$. Therefore, we use a very large system size $L=2^{20}$ and very long simulation times $t_{max} \geq 2\times10^5$ (see the Supplemental Material [@supp]).
Our results for two-point height correlation functions are consistent with those of earlier investigations [@hayot1993; @sneppen1992] of the statistical properties of the spatiotemporally chaotic state of the 1D KS PDE: We show, e.g., the equal-time compensated spectrum $k^2 E(k) = \langle L
\tilde{h}(k,t) \tilde{h}^{*}(k,t) \rangle_{t} $, where $\langle \cdot
\rangle_{t}$ is the time average, $\tilde{h}(k,t)$ is the spatial Fourier transform of $h(x,t)$, and $k$ is the wave number, in the Supplemental Material [@supp]. In addition, we calculate the time-dependent, two-point correlation function $S(k,
\delta t) = \langle k^2 \tilde{h}(k,t_{0}) \tilde{h}^{*}(k,t_{0} + \delta
t)\rangle_{t_{0}} $ in Fig. \[fig:fig2\], for the IC3 initial condition. We find that the imaginary part of $S(k, \delta t)$ fluctuates around zero and its magnitude is much smaller than that of its real part, which we plot in Fig. \[fig:fig2\]. Our data are consistent with the scaling form of $S(k,
\delta t)$ (orange curve in Fig. \[fig:fig2\]), which has been obtained analytically by Prähofer and Spohn [@ps2004] for the 1D KPZ SPDE; this comparison of $S(k, \delta t)$ for the 1D KS and 1D KPZ equations has not been made hitherto.
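For completeness, a minimal post-processing sketch for estimating $S(k,\delta t)$ from stored height snapshots is given below; the snapshot layout, the normalization of $\tilde{h}(k,t)$, and the averaging over reference times $t_{0}$ are illustrative assumptions, not the exact pipeline used for Fig. \[fig:fig2\].

```python
import numpy as np

def corr_S(snapshots, dt_index, L_dom):
    """Estimate S(k, dt) = < k^2 h~(k, t0) h~*(k, t0 + dt) >_{t0} from an array
    `snapshots` of shape (n_times, N) holding h(x, t) on a periodic grid of
    physical length L_dom; dt_index is the time shift in stored-snapshot units."""
    n_times, N = snapshots.shape
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L_dom / N)
    hk = np.fft.fft(snapshots, axis=1) / N               # h~(k, t); normalization is illustrative
    prods = k**2 * hk[:n_times - dt_index] * np.conj(hk[dt_index:])
    return prods.mean(axis=0)                            # average over reference times t0
```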
The scaling properties of the interface width $w(L,t)$ distinguish different universality classes in interface-growth models; $$w(l,t) = \left( \langle [\Delta_{l} h(x,t)]^2 \rangle_{x, l} \right)^{1/2} ,$$ with $\Delta_{l} h(x,t) = h(x,t) -h(x,0) - \langle h(x,t)-h(x,0) \rangle_{x,l}
$ and $\langle \cdot \rangle_{x,l}$ the spatial average over a region of spatial extent $l$. For $t\gg1$ in the 1D KPZ equation, $w(L,t) \sim
t^{\upbeta}$. Before crossover occurs in systems with $L > L_{c}$, the exponent $\upbeta$ assumes the value $\upbeta_{\text{EW}}=1/4$, which is the Edwards-Wilkinson (EW) result [@edwards1982; @thhzhang1995] for the linear SPDE with $\lambda = 0$ in Eq. (\[eq:kpz\]); finally, $\upbeta$ assumes the KPZ value $\upbeta_{\text{KPZ}}=1/3$ in the NESS (for $t>t_{c}$). Moreover, the growing KPZ surface involves the length scale $\mathcal{L}(t) \sim
t^{1/z}$, where the dynamic exponent $z = 3/2$; and the width $w(l,t)\sim
l^{\alpha}$, for $l \ll \mathcal{L}(t)$, with $\alpha=1/2$ [@takeuchi2011]. We find from our DNSs of the 1D KS equation that these Family-Vicsek scaling [@fv1985] forms are indeed satisfied as we show in Figs. \[fig:fig3\] (a), (c), and (e) for IC1-IC3 (see the Supplemental Material [@supp] for IC4-IC6).
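A minimal sketch of the width computation is given below; binning the domain into non-overlapping windows of $l$ grid points is an illustrative simplification, and it assumes that the number of grid points is an integer multiple of $l$.

```python
import numpy as np

def interface_width(h, h0, l):
    """Local width w(l, t): rms fluctuation of Delta_l h = h - h0 - <h - h0>_{x,l},
    estimated from non-overlapping windows of l grid points (h.size must be a
    multiple of l in this illustrative implementation)."""
    dh = (h - h0).reshape(-1, l)                 # split the domain into windows of size l
    dh = dh - dh.mean(axis=1, keepdims=True)     # subtract the local average <h - h0>_{x,l}
    return np.sqrt((dh**2).mean())               # rms, averaged over all windows
```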
![(Color online) Family-Vicsek scaling [@fv1985]: (a), (c), and (e) show, for IC1-IC3, respectively, plots of $w(l,t)$ versus $l$, for $l \ll L$, and $w(L,t)$ versus $t$ (in the insets); $t_{1} = 5 \times 10^{4}$, $t_{2} = 10^{5}$, $t_{3} = 1.5
\times 10^{5}$, and $t_{4} = 2 \times 10^{5}$. The dotted lines are log-log fits for $w(l,t)=A l^{\alpha}$, with $\alpha= 0.46 \pm 0.07$ for IC1-IC3. In (b), (d), and (f) we plot, for IC1-IC3, respectively, the skewness $\mu_{3}$ and the kurtosis $\mu_{4}$ (see text) versus the time $t$; black lines indicate their large-$t$ values for TW-GUE, TW-GOE, and BR $F_{0}$ PDFs in (b), (d), and (f). (See the Supplemental Material [@supp] for similar plots for IC4-IC6.)[]{data-label="fig:fig3"}](fv-scaling.pdf){width="1\linewidth"}
We define $$\mu_{n} = \langle \left(
\Delta_{L} h(x,t) \right)^{n} \rangle / \langle \left( \Delta_{L} h(x,t)
\right)^{2} \rangle^{n/2} - 3 \delta_{n,4};$$
for $n=3$ ($n=4$), $\mu_{n}$ is the skewness (kurtosis); we plot $\mu_{3}$ and $\mu_{4}$ versus time $t$ in the right panel of Fig. \[fig:fig3\]; for each initial condition, IC1-IC6, we average these quantities for $100$ surfaces, over a time interval of $10^{4}$, and five independent DNS runs; i.e., our overall sample size is $\simeq 5\times 10^8$ data points. \[For our 1D KS, $\mu_3 < 0$ because of the sign of the nonlinear term in Eq. (\[eq:ks\]); we ignore the sign of $\mu_{3}$ for it can be reversed by the transformation $h(x,t) \rightarrow -h(x,t)$.\] In addition, we calculate the probability distribution function (PDF) $\textrm{P}(\upchi)$ of the shifted and rescaled fluctuations, namely, $\upchi = (h(x,t) - v_{\infty} t)/(\Gamma t)^{1/3}$, when both $\mu_{3}$ and $\mu_{4}$ are close to their standard values for the relevant TW or BR $F_{0}$ PDFs; for IC2, e.g., we compute $\textrm{P}(\upchi)$ when we have $\mu_{3} \simeq 0.27$ and $\mu_{4} \simeq 0.19$, which are close to the standard values $\mu_{3,\text{GOE}} \simeq 0.29$ and $\mu_{4,\text{GOE}}
\simeq 0.16$, respectively.
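A minimal sketch of how $\mu_{3}$, $\mu_{4}$, and the rescaled fluctuation $\upchi$ can be computed from a single height profile is given below; the nonuniversal constants $v_{\infty}$ and $\Gamma$ have to be estimated separately (see the Supplemental Material [@supp]) and enter here only as inputs.

```python
import numpy as np

def mu3_mu4(h, h0):
    """Skewness mu_3 and excess kurtosis mu_4 of Delta_L h = h - h0 - <h - h0>."""
    dh = (h - h0) - (h - h0).mean()
    m2 = (dh**2).mean()
    return (dh**3).mean() / m2**1.5, (dh**4).mean() / m2**2 - 3.0

def rescaled_chi(h, t, v_inf, Gamma):
    """Shifted and rescaled fluctuations chi = (h - v_inf*t) / (Gamma*t)^{1/3}."""
    return (h - v_inf * t) / (Gamma * t)**(1.0 / 3.0)
```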
For IC1, IC2, IC3, and IC4 we compare, on semilog plots, the PDFs with TW-GUE, TW-GOE, BR $F_{0}$, and $(F_{\text{GOE}})^2$ [@corwin2012] in Figs. \[fig:fig1\] (d), (h), (l), and (p), respectively. For ease of comparison, we show in Fig. \[fig:fig4\] that the PDFs we obtain from our DNSs of the 1D KS Eq. (\[eq:ks\]) are very close to the TW-GUE, TW-GOE, and BR $F_{0}$ PDFs over *at least three orders of magnitude*. Strictly speaking, we must collect data only from those two points ($x=L/4, 3L/4$) at which the two different types of height profiles meet in cases IC4, IC5 and IC6. However, this leads to inadequate statistics. Therefore, the PDFs of $\upchi$ for IC4-IC6, which we show in Figs. \[fig:fig1\] (t) and (x), have been computed by using data from the regions $ \left[ 7L/32, 9L/32\right]$ and $\left[ 23L/32, 25L/32 \right]$; we see that this averaging procedure already leads to PDFs (Figs. \[fig:fig1\] (p), (t) and (x)) that are distinctly different from TW-GUE, TW-GOE, and BR $F_{0}$ distributions.
![(Color online) Semilog plots of the PDFs $\textrm{P}(\upchi)$ from our DNSs for IC1, IC2, and IC3; we compare these with the Tracy-Widom distributions, TW-GUE and TW-GOE, and the Baik-Rains distributions (BR $F_{0}$).[]{data-label="fig:fig4"}](alldists.pdf){width="1\linewidth"}
The TW distributions, for IC1 and IC2 initial conditions in the 1D KPZ equation, have been studied in the context of $N\times N$ GOE ($\beta = 1$) and GUE ($\beta = 2$) random matrices. The largest eigenvalue (after scaling with $N$) $\Lambda$ of such random matrices is $$\Lambda = \sqrt{2} +\frac{1}{\sqrt{2}} N^{-2/3} \upchi_{\beta} \ , \label{eq:evmax}$$ where $ \upchi_{\beta}$ has the PDF [@satya2014] $$\text{P}( \Lambda, N ) \approx \begin{cases}
\exp[- \beta N^2 \phi_{-}(\Lambda)] , & \Lambda < \sqrt{2}, |\Lambda -\sqrt{2}| \sim \mathcal{O}(1),\\
\sqrt{2} N^{2/3}\textrm{P}_{\textrm{TW},\beta}(\upchi_{\beta}) , \ & |\Lambda-\sqrt{2}| \sim \mathcal{O}(N^{-2/3}), \\
\exp[- \beta N \phi_{+}(\Lambda)] , & \Lambda > \sqrt{2}, |\Lambda -\sqrt{2}| \sim \mathcal{O}(1),
\end{cases}$$ $\textrm{P}_{\textrm{TW},\beta}(\upchi_{\beta})$ denotes TW distributions, and the right and left large-deviation functions (LDFs) $\phi_{+}(\Lambda)$ and $\phi_{-}(\Lambda)$, respectively, display the following asymptotic behaviors: $$\begin{aligned}
\phi_{-}(\Lambda) &\approx \frac{1}{6 \sqrt{2}} ( \sqrt{2} -\Lambda)^{3} \ , \quad \Lambda \rightarrow - \infty; \\
\phi_{+}(\Lambda) &\approx \frac{2^{7/4}}{3} (\Lambda -\sqrt{2})^{3/2} \ , \quad \Lambda \rightarrow + \infty .
\end{aligned}
\label{eq:ldf}$$ The LDFs, which yield the probabilities of atypically large fluctuations, match smoothly with the tails of $\textrm{P}_{\textrm{TW},\beta}(\upchi_{\beta})$. Because of different behaviors of the tails of $\textrm{P}(\Lambda , N )$, a third-order transition [@satya2014] can be associated with $\Lambda$ at $\Lambda_{c}= \sqrt{2}$ by defining the *free energy* $ \propto \ln
F_{\beta}(\Lambda,N)$, $F_{\beta}(\Lambda,N)$ being the cumulative density function (CDF) for $\Lambda$, for we have [@satya2014] $$\lim_{N \rightarrow \infty} - \frac{1}{N^2} \ln F_{\beta}(\Lambda,N) = \begin{cases} \phi_{-}(\Lambda), & \Lambda < \sqrt{2}, \\
0, & \Lambda >\sqrt{2}.
\end{cases}$$ Similarly, we define, for the KS initial conditions IC1 and IC2, the free-energy function $\mathcal{F}(\overline{h})$, for $t, L \rightarrow \infty
$, as follows: $$\mathcal{F}(\overline{h}) = \lim_{t, L \rightarrow \infty} - \frac{1}{t^{2}} \mathrm{ln} \, F(\upchi,t) \ , \label{eq:freenergy} $$ where $\overline{h}= h(x,t)/t$ and $F(\upchi, t)$ is the CDF for $\upchi$ at time $t$. Therefore, for IC1 and IC2, we should obtain a third-order phase transition for $\overline{h}$ at the critical value $\overline{h}_{c}=v_{\infty}$; an explicit demonstration requires much better statistics for $\textrm{P}(\upchi)$ than is possible with our DNS.
We have shown, by extensive pseudospectral DNSs of the 1D KS deterministic PDE, that the statistical properties of its spatiotemporally chaotic NESS are in the 1D KPZ universality class. This is not limited, merely, to the power-law forms of simple correlation functions and the width of the interface. It includes, in addition, (a) the complete scaling form for the two-point time-dependent correlation function $S(k, \delta t)$ (Fig. \[fig:fig2\]), (b) the skewness and kurtosis shown in Fig. \[fig:fig3\], and (c) most important of all, the universal limit distributions in Fig. \[fig:fig1\], obtained in “the $2^{nd}$ KPZ Revolution” [@thhkat2015]. Such results have not been obtained hitherto for a spatiotemporally chaotic NESS of any deterministic PDE. We conjecture that similar conclusions should ensue for the phase-chaos regime of the 1D Complex-Ginzburg-Landau equation [@grinstein1996]. Such studies are also being pursued for the 1D Calogero-Moser model [@aka2019].
We thank Jaya Kumar Alageshan, R. Basu, M. Brachet, P. Ferrari, T. Imamura, K. Khanin, and K. A. Takeuchi for discussions and the National Mathematics Initiative (NMI), DST, UGC, and CSIR (India) for support.
---
abstract: 'We consider linear rank-metric codes in ${\mathbb{F}}_{q^m}^n$. We show that the properties of being MRD (maximum rank distance) and non-Gabidulin are generic over the algebraic closure of the underlying field, which implies that over a large extension field a randomly chosen generator matrix generates an MRD and a non-Gabidulin code with high probability. Moreover, we give upper bounds on the respective probabilities in dependence on the extension degree $m$.'
author:
- Alessandro Neri
- 'Anna-Lena Horlemann-Trautmann'
- Tovohery Randrianarisoa
- Joachim Rosenthal
bibliography:
- './network\_coding\_stuff.bib'
title: 'On the Genericity of Maximum Rank Distance and Gabidulin Codes[^1]'
---
Introduction
============
Codes in the rank-metric have been studied for the last four decades. For linear codes a Singleton-type bound can be derived for these codes. In analogy to MDS codes in the Hamming metric, we call rank-metric codes that achieve the Singleton-type bound MRD (maximum rank distance) codes. Since the works of Delsarte [@de78] and Gabidulin [@ga85a] we know that linear MRD codes exist for any set of parameters. The codes they describe are called Gabidulin codes.
The question whether there are other general constructions of MRD codes that are not equivalent to Gabidulin codes has been of great interest recently. Some constructions of non-Gabidulin MRD codes can be found e.g. in [@co15; @cr15; @sh15], where many of the derived codes are not linear over the underlying field but only linear over some subfield of it. For some small parameter sets, constructions of linear non-Gabidulin MRD codes were presented in [@ho16]. On the other hand, in the same paper it was shown that all MRD codes in ${\mathbb{F}}_{2^4}^4$ are Gabidulin codes. In general, it remains an open question for which parameters non-Gabidulin MRD codes exist, and if so, how many such codes there are.
In this paper we show that the properties of being MRD (maximum rank distance) and non-Gabidulin are generic. This implies that, for a large field extension degree, a randomly chosen generator matrix generates an MRD and a non-Gabidulin code with high probability. Moreover, we give an upper bound on the respective probabilities in dependence on the extension degree.
The paper is structured as follows. In Section \[sec:preliminaries\] we give some preliminary definitions and results, first for rank-metric codes and then for the notion of genericity. Section \[sec:topology\] contains topological results, showing that the properties of being MRD and non-Gabidulin are generic. In Section \[sec:prob\] we derive some upper bounds on the probability of these two code properties in dependence on the extension degree of the underlying finite field. We conclude in Section \[sec:conclusion\].
Preliminaries {#sec:preliminaries}
=============
Finite Fields and Their Vector Spaces
-------------------------------------
The following definitions and results can be found in any textbook on finite fields, e.g. [@li94]. We denote the finite field of cardinality $q$ by ${\mathbb{F}}_q$. It exists if and only if $q$ is a prime power. Moreover, if it exists, ${\mathbb{F}}_q$ is unique up to isomorphism. An extension field of extension degree $m$ is denoted by ${\mathbb{F}}_{q^m}$. If $\alpha$ is a root of an irreducible monic polynomial in ${\mathbb{F}}_q[x]$ of degree $m$, then $${\mathbb{F}}_{q^m} \cong {\mathbb{F}}_q[\alpha].$$ We now recall some basic theory on the trace function over finite fields.
Let ${\mathbb{F}}_q$ be a finite field and ${\mathbb{F}}_{q^m}$ be an extension field. For $\alpha \in {\mathbb{F}}_{q^m}$, the *trace* of $\alpha$ over ${\mathbb{F}}_q$ is defined by $$\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}(\alpha) := \sum_{i=0}^{m-1}\alpha^{q^i}.$$
For every integer $0<s<m$ with $\gcd(m,s)=1$, we denote by $\varphi_s$ the map given by $$\begin{array}{rcl}
\varphi_s:{\mathbb{F}}_{q^m} &\longrightarrow & {\mathbb{F}}_{q^m} \\
\alpha & \longmapsto & \alpha^{q^s}-\alpha.
\end{array}$$ The following result relates the trace with the maps $\varphi_s$.
\[lem:trace\] The trace function satisfies the following properties:
1. $\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}(\alpha) \in {\mathbb{F}}_q$ for all $\alpha \in {\mathbb{F}}_{q^m}$.
2. $\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}$ is a linear surjective transformation from ${\mathbb{F}}_{q^m}$ to ${\mathbb{F}}_q$, where ${\mathbb{F}}_{q^m}$ and ${\mathbb{F}}_q$ are considered as ${\mathbb{F}}_q$-vector spaces.
3. For every $\alpha \in {\mathbb{F}}_{q^m}^*$, the map $\mathrm{T}_{\alpha}$ defined by $$\beta \longmapsto \mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}(\alpha\beta)$$ is a linear surjective transformation from ${\mathbb{F}}_{q^m}$ to ${\mathbb{F}}_q$, where ${\mathbb{F}}_{q^m}$ and ${\mathbb{F}}_q$ are considered as ${\mathbb{F}}_q$-vector spaces.
4. $\varphi_s$ is a linear transformation from ${\mathbb{F}}_{q^m}$ to itself, considered as ${\mathbb{F}}_q$-vector space.
5. For every $s$ coprime to $m$, $\varphi_s(\alpha)=0$ if and only if $\alpha \in {\mathbb{F}}_q$.
6. $\ker (\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q})=\mathrm{Im}(\varphi_s)$ for every $s$ coprime to $m$ and has cardinality $q^{m-1}$.
The statements of 1., 2. and 3. can be found e.g. in [@li94 Theorems 2.23 and 2.24].
4. For $\alpha, \beta \in {\mathbb{F}}_{q^m}$, $\varphi_s(\alpha+\beta)=(\alpha+\beta)^{q^s}-(\alpha+\beta)=\alpha^{q^s}-\alpha+\beta^{q^s}-\beta=\varphi_s(\alpha)+\varphi_s(\beta)$. Moreover, for every $\alpha \in{\mathbb{F}}_{q^m}$ and $c\in {\mathbb{F}}_q$ we have $c^{q^s}=c$, and hence $\varphi_s(c\alpha)=c^{q^s}\alpha^{q^s}-c\alpha=c\left(\alpha^{q^s}-\alpha\right)=c\varphi_s(\alpha)$.
5. We have $\varphi_s(\alpha)=\alpha^{q^s}-\alpha=0$ if and only if $\alpha \in{\mathbb{F}}_{q^s}$. Since $\alpha \in{\mathbb{F}}_{q^m}$, this is true if and only if $\alpha \in {\mathbb{F}}_{q^m}\cap {\mathbb{F}}_{q^s}={\mathbb{F}}_q$, where the last equality holds because $\gcd(m,s)=1$.
6. First we show that $\mathrm{Im}(\varphi_s)\subseteq\ker
(\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q})$. Consider an element $\alpha\in\mathrm{Im}(\varphi_s)$. Then there exists $\beta\in
{\mathbb{F}}_{q^m}$ such that $\alpha=\beta^{q^s}-\beta$. Now $$\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}(\alpha)=\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}(\beta^{q^s}-\beta)=
\sum_{i=0}^{m-1}(\beta^{q^s}-\beta)^{q^i}=\sum_{i=0}^{m-1}\beta^{q^{s+i}}-\sum_{i=0}^{m-1}\beta^{q^i}.$$ We observe now that if $i\equiv j \mod m$, then $\beta^{q^i}=\beta^{q^j}$. Hence the sum $\sum_{i=0}^{m-1}\beta^{q^{s+i}}$ is a rearrangement of $\sum_{i=0}^{m-1}\beta^{q^i}$ and $\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}(\alpha)=0$. At this point observe that the trace function is a polynomial of degree $q^{m-1}$ and so it has at most $q^{m-1}$ roots. This means that $|\ker
(\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q})|\leq q^{m-1}$. By part $4$ and $5$ of this Lemma $$|\mathrm{Im}(\varphi_s)|=\frac{|{\mathbb{F}}_{q^m}|}{|\ker(\varphi_s)|}=q^{m-1}$$ and therefore $\mathrm{Im}(\varphi_s)$ and $\ker
(\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q})$ must be equal.
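The assertions of Lemma \[lem:trace\] are easy to check numerically for small fields. The following Python sketch does so for ${\mathbb{F}}_{2^4}$ over ${\mathbb{F}}_2$ with $s=1$, using hand-rolled arithmetic modulo the irreducible polynomial $x^4+x+1$; the choice of field and polynomial is purely illustrative.

```python
MOD = 0b10011                                    # x^4 + x + 1, irreducible over F_2

def gf_mul(a, b):                                # multiplication in GF(2^4) = F_2[x]/(x^4 + x + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, e):                                # repeated squaring
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def trace(a, m=4):                               # Tr(a) = a + a^2 + a^4 + a^8 (addition is XOR)
    t = 0
    for i in range(m):
        t ^= gf_pow(a, 2**i)
    return t

elements = range(16)
assert all(trace(a) in (0, 1) for a in elements)       # Tr maps into F_2               (item 1)
assert {trace(a) for a in elements} == {0, 1}          # Tr is surjective onto F_2      (item 2)
ker_tr = {a for a in elements if trace(a) == 0}
im_phi = {gf_pow(a, 2) ^ a for a in elements}          # phi_1(a) = a^q - a = a^2 + a
assert ker_tr == im_phi and len(ker_tr) == 8           # ker(Tr) = Im(phi_1), size q^{m-1}  (item 6)
```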
We denote by ${\textnormal{GL}}_n(q):=\{A\in {\mathbb{F}}_q^{n\times n} \mid {\textnormal{rk}}(A) =n\}$ the general linear group of degree $n$ over ${\mathbb{F}}_q$. Furthermore, we need the Gaussian binomial $ \binom{n}{k}_q$, which is defined as the number of $k$-dimensional vector subspaces of ${\mathbb{F}}_q^n$. It is well-known that $$\binom{n}{k}_q = \prod_{i=0}^{k-1} \frac{q^n-q^i}{q^k-q^i}=\frac{\prod_{i=0}^{k-1}(q^n-q^i)}{|{\textnormal{GL}}_k(q)|}.$$ Moreover, the following fact is well-known and easy to see.
\[lem:intersection\] Let $k, n$ be two integers such that $0<k\leq n$, and let ${\mathcal{U}}$ be a $k$-dimensional vector subspace of ${\mathbb{F}}_q^n$. Then, for every $r=0,\ldots,k$, the number of $k$-dimensional subspaces that intersect ${\mathcal{U}}$ in a $(k-r)$-dimensional subspace is $$\binom{k}{k-r}_q \binom{n-k}{r}_q q^{r^2} .$$
There are $ \binom{k}{k-r}_q$ many subspaces ${\mathcal{U}}'$ of ${\mathcal{U}}$ of dimension $(k-r)$ that can be the intersection space. Now, in order to complete ${\mathcal{U}}'$ to a $k$-dimensional vector space, intersecting ${\mathcal{U}}$ only in ${\mathcal{U}}'$, we have $\prod_{i=0}^{r-1}(q^n-q^{k+i})$ choices for the remaining basis vectors. For counting how many of these bases span the same space we just need to count the number of $k\times k$ matrices of the form $$\left[\begin{array}{cc}
I_{k-r} & 0 \\
A & B
\end{array}\right],$$ where $A\in {\mathbb{F}}_q^{r\times (k-r)}$ and $B\in {\textnormal{GL}}_r(q)$. Hence the final count is given by $$\begin{aligned}
\binom{k}{k-r}_q\frac{\prod_{i=0}^{r-1}(q^n-q^{k+i})}{q^{r(k-r)}|{\textnormal{GL}}_r(q)|}&=
\binom{k}{k-r}_q\frac{q^{kr}\prod_{i=0}^{r-1}(q^{n-k}-q^i)}{q^{r(k-r)}|{\textnormal{GL}}_r(q)|}\\
&=\binom{k}{k-r}_q\binom{n-k}{r}_q q^{r^2}.
\end{aligned}$$
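The count of Lemma \[lem:intersection\] can be sanity-checked numerically: summing over all possible intersection dimensions must recover the total number $\binom{n}{k}_q$ of $k$-dimensional subspaces. The following short sketch verifies this for an arbitrarily chosen small example.

```python
def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of F_q^n, via the product formula."""
    num, den = 1, 1
    for i in range(k):
        num *= q**n - q**i
        den *= q**k - q**i
    return num // den

q, n, k = 2, 5, 2                                      # illustrative parameters
total = sum(gaussian_binomial(k, k - r, q) * gaussian_binomial(n - k, r, q) * q**(r * r)
            for r in range(k + 1))
assert total == gaussian_binomial(n, k, q)             # here: 1 + 42 + 112 = 155
```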
Rank-metric Codes
-----------------
Recall that there always exists $\alpha\in {\mathbb{F}}_{q^m}$, such that ${\mathbb{F}}_{q^m}\cong {\mathbb{F}}_q[\alpha] $. Moreover, ${\mathbb{F}}_{q^m}$ is isomorphic (as a vector space over ${\mathbb{F}}_q$) to the vector space ${\mathbb{F}}_q^m$. One then easily obtains the isomorphic description of matrices over the base field ${\mathbb{F}}_q$ as vectors over the extension field, i.e.${\mathbb{F}}_q^{m\times n}\cong {\mathbb{F}}_{q^m}^n$.
The *rank distance* $d_R$ on ${\mathbb{F}}_q^{m\times n}$ is defined by $$d_R(X,Y):= {\textnormal{rk}}(X-Y) , \quad X,Y \in {\mathbb{F}}_q^{m\times n}.$$ Analogously, we define the rank distance between two elements $\boldsymbol x,\boldsymbol y \in {\mathbb{F}}_{q^m}^n$ as the rank of the difference of the respective matrix representations in ${\mathbb{F}}_q^{m\times n}$.
In this paper we will focus on $ {\mathbb{F}}_{q^m}$-linear rank-metric codes in ${\mathbb{F}}_{q^m}^n$, i.e. those codes that form a vector space over $
{\mathbb{F}}_{q^m}$.
An ${\mathbb{F}}_{q^m}$-*linear rank-metric code $\mathcal C$* of length $n$ and dimension $k$ is a $k$-dimensional subspace of ${\mathbb{F}}_{q^m}^n$ equipped with the rank distance. A matrix $G\in{\mathbb{F}}_{q^m}^{k\times n}
$ is called a *generator matrix* for the code $\mathcal C$ if $$\mathcal C={\mathrm{rs}}(G),$$ where ${\mathrm{rs}}(G)$ is the subspace generated by the rows of the matrix $G$, called the *row space* of $G$.
Whenever we talk about linear codes in this work, we will mean linearity over the extension field $ {\mathbb{F}}_{q^m}$. The well-known Singleton bound for codes in the Hamming metric implies also an upper bound for codes in the rank-metric:
[@ga85a Section 2] Let $\mathcal{C}\subseteq {\mathbb{F}}_{q^m}^{n}$ be a linear matrix code with minimum rank distance $d$ of dimension $k$. Then $$k\leq n-d+1 .$$
A code attaining the Singleton bound is called a *maximum rank distance (MRD) code*.
[@ho16 Lemma 5.3]\[lem:systematic\] Any linear MRD code ${\mathcal{C}}\subseteq {\mathbb{F}}_{q^m}^n$ of dimension $k$ has a generator matrix $G \in {\mathbb{F}}_{q^m}^{k\times n}$ in systematic form, i.e. $$G = \left[\begin{array}{c|c}
I_k & X
\end{array}
\right]$$ Moreover, all entries in $X$ are from ${\mathbb{F}}_{q^m} \backslash
{\mathbb{F}}_q$.
For some vector $(v_1,\dots, v_n) \in {\mathbb{F}}_{q^m}^n$ we denote the $k
\times n$ *$s$-Moore matrix* by $$M_{s,k}(v_1,\dots, v_n) := \left( \begin{array}{cccc} v_1 & v_2
&\dots &v_n \\ v_1^{[s]} & v_2^{[s]} &\dots &v_n^{[s]} \\
\vdots&&&\vdots \\ v_1^{[s(k-1)]} & v_2^{[s(k-1)]} &\dots
&v_n^{[s(k-1)]} \end{array}\right) ,$$ where $[i]:= q^i$.
\[def:Gab\] Let $g_1,\dots, g_n \in {\mathbb{F}}_{q^m}$ be linearly independent over ${\mathbb{F}}_q$ and let $s$ be coprime to $m$. We define a *generalized Gabidulin code* $\mathcal{C}\subseteq {\mathbb{F}}_{q^m}^{n}$ of dimension $k$ as the linear block code with generator matrix $M_{s,k}(g_1,\dots, g_n)$. Using the isomorphic matrix representation we can interpret $\mathcal{C}$ as a matrix code in ${\mathbb{F}}_q^{m\times n}$.
Note that for $s=1$ the previous definition coincides with the classical Gabidulin code construction. The following theorem was shown for $s=1$ in [@ga85a Section 4], and for general $s$ in [@ks05].
\[thm:GabisMRD\] A generalized Gabidulin code $\mathcal{C}\subseteq {\mathbb{F}}_{q^m}^{n}$ of dimension $k$ over ${\mathbb{F}}_{q^m}$ has minimum rank distance $n-k+1$. Thus generalized Gabidulin codes are MRD codes.
The dual code of a code $\mathcal{C}\subseteq {\mathbb{F}}_{q^{m}}^{n}$ is defined in the usual way as $$\mathcal{C}^{\perp} := \{\boldsymbol{u} \in {\mathbb{F}}_{q^{m}}^{n} \mid
\boldsymbol{u}\boldsymbol{c}^T=0 \quad \forall \boldsymbol{c}\in
\mathcal{C}\}.$$ In his seminal paper Gabidulin showed the following two results on dual codes of MRD and Gabidulin codes. The result was generalized to $s>1$ later on by Kshevetskiy and Gabidulin.
[@ga85a Sections 2 and 4][@ks05 Subsection IV.C]\[prop:dual1\]
1. Let $\mathcal{C}\subseteq {\mathbb{F}}_{q^{m}}^{n}$ be an MRD code of dimension $k$. Then the dual code $\mathcal{C}^{\perp}\subseteq
{\mathbb{F}}_{q^{m}}^{n}$ is an MRD code of dimension $n-k$.
2. Let $\mathcal{C}\subseteq {\mathbb{F}}_{q^{m}}^{n}$ be a generalized Gabidulin code of dimension $k$. Then the dual code $\mathcal{C}^{\perp}\subseteq {\mathbb{F}}_{q^{m}}^{n}$ is a generalized Gabidulin code of dimension $n-k$.
For more information on bounds and constructions of rank-metric codes the interested reader is referred to [@ga85a].
Denote by $\mathrm{Gal}({\mathbb{F}}_{q^m}/{\mathbb{F}}_q)$ the *Galois group* of ${\mathbb{F}}_{q^m}$, consisting of the automorphisms of ${\mathbb{F}}_{q^m}$ that fix the base field ${\mathbb{F}}_q$ (i.e., for $\sigma \in \mathrm{Gal}({\mathbb{F}}_{q^m}/{\mathbb{F}}_q)$ and $\alpha \in {\mathbb{F}}_q$ we have $\sigma(\alpha) = \alpha$). It is well-known that $\mathrm{Gal}({\mathbb{F}}_{q^m}/{\mathbb{F}}_q)$ is generated by the *Frobenius map*, which takes an element to its $q$-th power. Hence the automorphisms are of the form $x\mapsto x^{[i]}$ for some $0\leq i \leq m$.
Given a matrix (resp. a vector) $A\in {\mathbb{F}}_{q^m}^{k \times n}$, we denote by $A^{([s])}$ the component-wise Frobenius of $A$, i.e., every entry of the matrix (resp. the vector) is raised to its $q^s$-th power. Analogously, given some $\mathcal C \subseteq {\mathbb{F}}_{q^m}^{k
\times n}$, we define $$\mathcal C^{([s])}:=\left\{\mathbf{c}^{([s])}\mid \mathbf{c}\in \mathcal C \right\}.$$
The (semi-)linear rank isometries on ${\mathbb{F}}_{q^m}^{n}$ are induced by the isometries on ${\mathbb{F}}_q^{m\times n}$ and are hence well-known, see e.g.[@be03; @mo14; @wa96]:
[@mo14 Proposition 2]\[isometries\] The semilinear ${\mathbb{F}}_q$-rank isometries on ${\mathbb{F}}_{q^m}^{n}$ are of the form $$(\lambda, A, \sigma) \in \left( {\mathbb{F}}_{q^m}^* \times {\textnormal{GL}}_n(q) \right)
\rtimes \mathrm{Gal}({\mathbb{F}}_{q^m}/{\mathbb{F}}_q) ,$$ acting on $ {\mathbb{F}}_{q^m}^n \ni
(v_1,\dots,v_n)$ via $$(v_1,\dots,v_n) (\lambda, A, \sigma) = (\sigma(\lambda
v_1),\dots,\sigma(\lambda v_n)) A .$$ In particular, if $\mathcal{C}\subseteq {\mathbb{F}}_{q^m}^n$ is a linear code with minimum rank distance $d$, then $$\mathcal{C}' = \sigma(\lambda \mathcal{C}) A$$ is a linear code with minimum rank distance $d$.
One can easily check that ${\mathbb{F}}_q$-linearly independent elements in ${\mathbb{F}}_{q^m}$ remain ${\mathbb{F}}_q$-linearly independent under the actions of ${\mathbb{F}}_{q^m}^*, {\textnormal{GL}}_n(q)$ and $\mathrm{Gal}({\mathbb{F}}_{q^m}/{\mathbb{F}}_q)$. Moreover, the $s$-Moore matrix structure is preserved under these actions, which implies that the class of generalized Gabidulin codes is closed under the semilinear isometries. Thus a code is semilinearly isometric to a generalized Gabidulin code if and only if it is itself a generalized Gabidulin code.
In this work we need the following criteria for both the MRD and the Gabidulin property. The following criterion for MRD codes was given in [@ho16], which in turn is based on a well-known result given in [@ga85a]:
\[prop:MRDCrit\] Let $G\in {\mathbb{F}}_{q^m}^{k\times n}$ be a generator matrix of a rank-metric code $\mathcal{C}\subseteq {\mathbb{F}}_{q^m}^n$. Then $\mathcal{C}$ is an MRD code if and only if $${\textnormal{rk}}(VG^T) =k$$ for all $V\in {\mathbb{F}}_q^{k\times n}$ with ${\textnormal{rk}}(V)=k$.
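For very small parameters the criterion can be checked by brute force. The following sketch builds the Moore-matrix generator of a Gabidulin code with $k=2$, $n=4$, $s=1$ over ${\mathbb{F}}_{2^4}$ (taking $g_1,\dots,g_4$ to be the basis $1,\alpha,\alpha^2,\alpha^3$) and tests ${\textnormal{rk}}(VG^T)=k$ for all full-rank $V\in{\mathbb{F}}_2^{2\times 4}$; the field, the basis, and the parameters are illustrative choices, and the hand-rolled ${\mathbb{F}}_{2^4}$ arithmetic is the same as in the earlier sketch.

```python
from itertools import product

MOD = 0b10011                                   # GF(2^4) = F_2[x]/(x^4 + x + 1)

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

g = [1, 2, 4, 8]                                # the F_2-basis 1, alpha, alpha^2, alpha^3 of GF(16)
G = [g, [gf_mul(x, x) for x in g]]              # 2x4 Moore matrix: rows (g_j) and (g_j^q) with q = 2

def full_rank_over_F2(v0, v1):                  # rank 2 for the two rows of a binary 2x4 matrix
    return v0 != (0,) * 4 and v1 != (0,) * 4 and v0 != v1

mrd = True
for v0, v1 in product(product(range(2), repeat=4), repeat=2):
    if not full_rank_over_F2(v0, v1):
        continue
    V = (v0, v1)
    # (V G^T)_{ij} = sum_l V[i][l] G[j][l]; sums in characteristic 2 are XORs
    M = [[0, 0], [0, 0]]
    for i in range(2):
        for j in range(2):
            acc = 0
            for l in range(4):
                if V[i][l]:
                    acc ^= G[j][l]
            M[i][j] = acc
    det = gf_mul(M[0][0], M[1][1]) ^ gf_mul(M[0][1], M[1][0])
    if det == 0:                                # rank < 2, so the code would not be MRD
        mrd = False
        break
print("MRD:", mrd)                              # True, consistent with Theorem [thm:GabisMRD]
```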
Furthermore, we need the following criterion for the generalized Gabidulin property:
[@ho16 Theorem 4.8]\[thm:GabCrit\] Let $\mathcal{C}\subseteq {\mathbb{F}}_{q^m}^n$ be an MRD code of dimension $k$. $\mathcal{C}$ is a generalized Gabidulin code if and only if there exists $s$ with $\gcd(s,m)=1$ such that $$\dim (\mathcal{C} \cap \mathcal{C}^{([s])}) = k-1 .$$
The Zariski Topology over Finite Fields
---------------------------------------
Consider the polynomial ring ${\mathbb{F}}_q[x_1,\dots,x_r]$ over the base field ${\mathbb{F}}_q$ and denote by $\bar{\mathbb{F}}_q$ the algebraic closure of ${\mathbb{F}}_q$, necessarily an infinite field. For a subset $S\subseteq {\mathbb{F}}_q[x_1,\dots,x_r]$ one defines the algebraic set $$V(S): = \{{\boldsymbol}x \in \bar{\mathbb{F}}_q^r \mid f({\boldsymbol}x) = 0, \forall f \in S\} .$$
It is well-known that the algebraic sets inside $\bar{\mathbb{F}}_q^r$ form the *closed sets* of a topology, called the *Zariski topology*. The complements of the Zariski-closed sets are the *Zariski-open* sets.
One says that a subset $G\subset\bar{\mathbb{F}}_q^r$ defines a *generic set* if $G$ contains a non-empty Zariski-open set.
If the base field is the field of real numbers ($\mathbb{R}$) or of complex numbers ($\mathbb{C}$), then a generic set inside $\mathbb{R}^r$ (respectively inside $\mathbb{C}^r$) is necessarily dense and its complement is contained in an algebraic set of dimension at most $r-1$.
Over a finite field ${\mathbb{F}}_q$ one has to be a little bit more careful. Indeed, for every subset $T\subset{\mathbb{F}}_q^r$ one finds a set of polynomials $S\subseteq {\mathbb{F}}_q[x_1,\dots,x_r]$ such that $$\{{\boldsymbol}x \in {\mathbb{F}}_q^r \mid f({\boldsymbol}x) = 0, \forall f \in S\} =T.$$ This follows simply from the fact that a single point inside ${\mathbb{F}}_q^r$ forms a Zariski-closed set and any subset $T\subset{\mathbb{F}}_q^r$ is a finite union of points. However, if one has an algebraic set $V(S)$, as defined in the beginning of this subsection, then the ${\mathbb{F}}_{q^m}$-rational points defined through $$V(S;{\mathbb{F}}_{q^m}): = \{{\boldsymbol}x \in {\mathbb{F}}_{q^m}^r \mid f({\boldsymbol}x) = 0, \forall f \in S\}$$ become thinner and thinner, in proportion to the whole vector space ${\mathbb{F}}_{q^m}^r$, as the extension degree $m$ increases. This is a consequence of the Schwartz-Zippel Lemma which we will formulate, for our purposes, over a finite field.
[@sc80 Corollary 1]\[lem:SZ\] Let $f\in {\mathbb{F}}_q[x_1,x_2,\dots,x_r]$ be a non-zero polynomial of total degree $d \geq 0$. Let ${\mathbb{F}}_{q^n}$ be an extension field and let $S\subseteq {\mathbb{F}}_{q^n}$ be a finite set. Let $v_1,
v_2, \dots, v_r$ be selected at random independently and uniformly from $S$. Then $$\Pr\big(f(v_1,v_2,\ldots,v_r)=0\big)\leq\frac{d}{|S|}.$$
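As a toy illustration of Lemma \[lem:SZ\], the following Monte Carlo sketch compares the empirical probability of hitting a zero of a fixed polynomial of total degree $3$ with the bound $d/|S|$; the polynomial and the prime field are arbitrary choices.

```python
import random

# f(x, y) = x*y*(x - y) has total degree d = 3; over S = F_p with p = 101 the
# Schwartz-Zippel bound predicts Pr[f(v1, v2) = 0] <= d/|S| = 3/101 ~ 0.0297.
p, d, trials, hits = 101, 3, 200_000, 0
for _ in range(trials):
    x, y = random.randrange(p), random.randrange(p)
    if (x * y * (x - y)) % p == 0:
        hits += 1
print(f"empirical {hits / trials:.4f}  vs.  bound {d / p:.4f}")
```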
Topological Results {#sec:topology}
===================
The idea of this section is to show that the properties of being MRD and non-Gabidulin are generic properties. Recall that, by Lemma \[lem:systematic\], every linear MRD code in ${\mathbb{F}}_{q^m}^n$ of dimension $k$ has a unique representation by its generator matrix $G\in {\mathbb{F}}_{q^m}^{k\times n}$ in systematic form $$G= [\;I_k \mid X\;].$$ Thus we have a one-to-one correspondence between the set of linear MRD codes in ${\mathbb{F}}_{q^m}^n$ and a subset of the set of matrices $
{\mathbb{F}}_{q^m}^{k\times (n-k)}$. Therefore we want to investigate how many matrices $X\in {\mathbb{F}}_{q^m}^{k\times (n-k)}$ give rise to an MRD or a generalized Gabidulin code, when plugged into the above form of a systematic generator matrix.
However, to make sense of the definition of genericity, we need to do this investigation over the algebraic closure of ${\mathbb{F}}_{q^m}$. Unfortunately though, some results in the rank-metric, in particular the definition of and results related to generalized Gabidulin codes, do not hold over infinite fields. Therefore we will actually show that the set of matrices fulfilling the criteria of Corollary \[prop:MRDCrit\] (for being MRD) and Theorem \[thm:GabCrit\] (for being a generalized Gabidulin code) are generic sets over the algebraic closure.
We first show that the set of generator matrices fulfilling the MRD criterion of Corollary \[prop:MRDCrit\] is generic.
\[thm:topMRD\] Let $1\leq k \leq n-1$. The set $$S_\mathrm{MRD} := \{X \in \bar {\mathbb{F}}_{q^m}^{k\times (n-k)} \mid \forall A \in {\mathbb{F}}_q^{n\times k} \textnormal{ of rank } k: \det([I_k \mid X ] A)\neq 0 \}$$ is a generic subset of $ \bar {\mathbb{F}}_{q^m}^{k\times (n-k)}$.
We need to show that $S_\mathrm{MRD}$ contains a non-empty Zariski-open set. In fact we will show that $S_\mathrm{MRD}$ is a non-empty Zariski-open set. The non-emptiness follows from the existence of Gabidulin codes for every set of parameters. Hence it remains to show that it is Zariski-open.
If we denote the entries of $X\in \bar{\mathbb{F}}_{q^m}^{k(n-k)}$ as the variables $x_{1},\dots, x_{k(n-k)}$, then, for a given $A \in
{\mathbb{F}}_q^{n\times k}$, we have $\det([I_k \mid X ] A) \in
{\mathbb{F}}_{q}[x_1,\dots,x_{k(n-k)}]$. Hence we can write $$\begin{aligned}
S_\mathrm{MRD} & = \mathop{\bigcap_{A \in {\mathbb{F}}_q^{n\times k}}}_{{\textnormal{rk}}(A)=k}\{X \in \bar {\mathbb{F}}_{q^m}^{k\times (n-k)} \mid \det([I_k \mid X ] A)\neq 0 \} \\
& = \mathop{\bigcap_{A \in {\mathbb{F}}_q^{n\times k}}}_{{\textnormal{rk}}(A)=k} V(\det([I_k \mid X ] A))^C ,
\end{aligned}$$ i.e., it is a finite intersection of Zariski-open sets. Therefore $S_{MRD}$ is a Zariski-open set.
In Theorem \[thm:topMRD\] we chose the MRD criterion of Corollary \[prop:MRDCrit\] to show that the MRD property (if seen over some finite extension field) is generic. One can do the same by using the MRD criterion of Horlemann-Trautmann-Marshall from [@ho16 Corollary 3].
We now turn to generalized Gabidulin codes. Firstly we rewrite the criterion from Theorem \[thm:GabCrit\] in a more suitable way.
\[lem:reformulation\] Let $\mathcal{C}\subseteq {\mathbb{F}}_{q^m}^n$ be an MRD code of dimension $k$ and let $0<s<m$ with $\gcd(s,m)=1$. $\mathcal{C}$ is a generalized Gabidulin code with parameter $s$ if and only if ${\textnormal{rk}}(X^{(q^s)}-X) = 1$.
We know from Theorem \[thm:GabCrit\] that an MRD code ${\mathcal{C}}={\mathrm{rs}}[I_k \mid X] \subseteq {\mathbb{F}}_{q^m}^n$ is a generalized Gabidulin code if and only if $\dim (\mathcal{C} \cap \mathcal{C}^{(q^s)}) =
k-1$. We get $$\begin{aligned}
\dim (\mathcal{C} \cap \mathcal{C}^{(q^s)}) &= k-1 \\
\iff {\textnormal{rk}}\left[ \begin{array}{c|l} I_k & X \\ I_k & X^{(q^s)} \end{array}\right] &= k+1 \\
\iff {\textnormal{rk}}\left[ \begin{array}{c|c} I_k & X \\ 0 & X^{(q^s)} - X \end{array}\right] &= k+1\\
\iff {\textnormal{rk}}(X^{(q^s)} - X) &= 1 .
\end{aligned}$$
The following theorem shows that the set of generator matrices not fulfilling the generalized Gabidulin criterion of Lemma \[lem:reformulation\] is generic over the algebraic closure.
\[thm:topGab\] Let $1\leq k \leq n-1$ and $0<s<m$ with $\gcd(s,m)=1$. Moreover, let $S_\mathrm{MRD}\subseteq \bar {\mathbb{F}}_{q^m}^{k\times (n-k)}$ be as defined in Theorem \[thm:topMRD\]. The set $$S_{\mathrm{Gab},s} := \{X \in \bar {\mathbb{F}}_{q^m}^{k\times (n-k)} \mid {\textnormal{rk}}(X^{(q^s)}- X) =1 \}\cap S_\mathrm{MRD}$$ is a Zariski-closed subset of the Zariski-open set $S_\mathrm{MRD}$.
Let $X\in S_{\mathrm{Gab},s}$. Since $X\in S_\mathrm{MRD}$, it follows from Lemma \[lem:systematic\] that $X_{ij}\not\in {\mathbb{F}}_q$ for $i=1,\dots, k$ and $j=1,\dots, n-k$. Then the condition ${\textnormal{rk}}(X^{(q^s)}- X) = 1$ is equivalent to ${\textnormal{rk}}(X^{(q^s)}- X) < 2$, which in turn is equivalent to the condition that all $2\times
2$-minors of $(X^{(q^s)}-X)$ are zero. If we denote the entries of $X\in \bar{\mathbb{F}}_{q^m}^{k(n-k)}$ as the variables $x_{1},\dots, x_{k(n-k)}$, then these $2\times 2$-minors of $(X^{(q^s)}-X)$ are elements of $
{\mathbb{F}}_q[x_1,\dots,x_{k(n-k)}]$. Let us call the set of all these minors $S'$. Then $$\begin{aligned}
S_{\mathrm{Gab},s} &= \left\{X \in \bar {\mathbb{F}}_{q^m}^{k\times (n-k)} \mid f(x_{1},\dots,x_{k(n-k)})= 0 , \forall f \in S' \right\}\cap S_\mathrm{MRD} \\
&=V(S')\cap S_\mathrm{MRD}.\end{aligned}$$ Hence it is a Zariski-closed subset of $S_\mathrm{MRD}\subseteq \bar
{\mathbb{F}}_{q^m}^{k\times (n-k)}$.
Theorem \[thm:topGab\] implies that the complement of $S_{\mathrm{Gab},s} $, i.e., the set of matrices that fulfill the MRD criterion but do not fulfill the generalized Gabidulin criterion, is a Zariski-open subset of $S_{\mathrm{MRD}} \subset \bar{\mathbb{F}}_{q^m}^{k\times
(n-k)}$. Thus, if it is non-empty, then the complement of $S_{\mathrm{Gab},s} $ is a generic set. The non-emptiness of this set will be shown in the following section, in Theorem \[thm:main\]. In other words, over the algebraic closure, a randomly chosen generator matrix gives rise to a code that does not fulfill the generalized Gabidulin criterion with high probability.
Probability Estimations {#sec:prob}
=======================
In the previous section we have used the Zariski topology to show that a randomly chosen linear code over $\bar{\mathbb{F}}_{q^m}$ most likely fulfills the MRD criterion but not the generalized Gabidulin criterion. Intuitively this tells us that over a finite, but large, extension field of ${\mathbb{F}}_{q}$ a randomly chosen linear code is most likely an MRD code but not a generalized Gabidulin code. In this section we derive some bounds on the probability that this statement is true, in dependence on the field extension degree $m$.
Probability for MRD codes
-------------------------
Here we give a lower bound on the probability that a random linear rank-metric code in ${\mathbb{F}}_{q^m}^n$ is MRD. A straightforward approach gives the following result.
\[thm:probMRDrough\] Let $X\in {\mathbb{F}}_{q^m}^{k(n-k)}$ be randomly chosen. Then $$\mathrm{Pr}\big(\; {\mathrm{rs}}[I_k \mid X ] \mbox{ is an MRD code } \big)
\geq 1-\frac{k\prod_{i=0}^{k-1} (q^n-q^i)}{q^m} \geq 1-kq^{kn-m} .$$
It follows from Corollary \[prop:MRDCrit\] that ${\mathrm{rs}}[I_k \mid X
] $ is a non-MRD code if and only if $$p^*:=\mathop{\prod_{A\in {\mathbb{F}}_{q}^{n\times k}}}_{{\textnormal{rk}}(A)=k} \det([I_k \mid X ] A) = 0.$$ If we see the entries of $X$ as the variables $x_1,\dots,
x_{k(n-k)}$, then every variable $x_i$ is contained in at most one row of the matrix $$[I_k\,|\,X]A = \big( A_{ij} + \sum_{\ell=k+1}^n X_{i,\ell-k}\, A_{\ell j}\big)_{i,j}.$$ Thus $ \det([I_k \mid X ] A) \in {\mathbb{F}}_q[{x_1,\ldots,x_{k(n-k)}}]$ has degree at most $k$. The number of matrices in $ {\mathbb{F}}_{q}^{n\times k}$ of rank $k$ is $\prod_{i=0}^{k-1} (q^n-q^i) \leq q^{kn}$, hence the degree of $p^*$ is at most $k\prod_{i=0}^{k-1} (q^n-q^i)$. It follows from Lemma \[lem:SZ\] that $$\mathrm{Pr}\big( \;{\mathrm{rs}}[I_k \mid X ] \mbox{ is not an MRD code } \big) \leq \frac{\deg p^*}{q^m}$$ and hence $$\mathrm{Pr}\big( \;{\mathrm{rs}}[I_k \mid X ] \mbox{ is an MRD code }
\big) \geq 1-\frac{\deg p^*}{q^m} \geq 1-\frac{k\prod_{i=0}^{k-1}
(q^n-q^i)}{q^m} \geq 1-kq^{kn-m} .$$
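
As an illustration, the lower bound of Theorem \[thm:probMRDrough\] is easy to evaluate directly. The following Python sketch (the parameter choices are purely illustrative) computes it in exact rational arithmetic:

```python
# Exact evaluation of the lower bound of Theorem [thm:probMRDrough]:
#   1 - k * prod_{i=0}^{k-1}(q^n - q^i) / q^m
# (the value may be negative, i.e. vacuous, when m is small).
from fractions import Fraction

def rough_mrd_bound(q, n, k, m):
    prod = 1
    for i in range(k):
        prod *= q**n - q**i
    return 1 - Fraction(k * prod, q**m)

if __name__ == "__main__":
    q, n, k = 2, 4, 2          # illustrative parameters
    for m in range(8, 16):
        print(m, float(rough_mrd_bound(q, n, k, m)))
```
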
In the remainder of this subsection we want to improve the bound obtained in Theorem \[thm:probMRDrough\]. To do so we introduce the set $$\mathcal T(k,n)=\left\{E\in {\mathbb{F}}_q^{k\times n}\,|\, E \mbox{ is in reduced row echelon form and } {\textnormal{rk}}(E)=k \right\}.$$ With this notation we can formulate a variation of Corollary \[prop:MRDCrit\]:
\[prop:improvedMRD\] Let $G\in {\mathbb{F}}_{q^m}^{k\times n}$ be a generator matrix of a rank-metric code $\mathcal{C}\subseteq {\mathbb{F}}_{q^m}^n$. Then $\mathcal{C}$ is an MRD code if and only if $${\textnormal{rk}}(EG^T) =k$$ for all $E\in \mathcal T(k,n)$.
For every matrix $V\in {\mathbb{F}}_q^{k\times n}$ of rank $k$ consider its reduced row echelon form $E_V\in \mathcal T(k,n)$; i.e., there exists a matrix $R \in {\textnormal{GL}}_k(q)$ such that $V=RE_V$. Then $$\det(VG^T)=\det(RE_VG^T)=\det(R)\det(E_VG^T),$$ and since $\det(R)\neq0$ we obtain that ${\textnormal{rk}}(VG^T)=k$ if and only if ${\textnormal{rk}}(E_VG^T)=k$. By Corollary \[prop:MRDCrit\] the statement follows.
For $E\in \mathcal T(k,n)$ we define the polynomial $$f_E(x_1,\ldots,x_{k(n-k)}) := \det([I_k\,|\,X]E^T) \in {\mathbb{F}}_{q^m}[x_1,\ldots, x_{k(n-k)}] ,$$ and we furthermore define $$f^*({x_1,\ldots,x_{k(n-k)}}):= \mathrm{lcm}\left\{f_E(x_1,\ldots,x_{k(n-k)})\,|\, E \in \mathcal T(k,n) \right\},$$ where, as before, the entries of $X$ are the variables $x_1,\ldots,
x_{k(n-k)}$. We can easily observe the following.
The set of linear non-MRD codes of dimension $k$ in ${\mathbb{F}}_{q^m}^n$ is in one-to-one correspondence with the algebraic set $$V(\{f^*\})=\left\{(v_1,\ldots,v_{k(n-k)}) \in {\mathbb{F}}_{q^m}^{k(n-k)}\,|\,f^*(v_1,\ldots,v_{k(n-k)})=0\right\} .$$
It follows from Proposition \[prop:improvedMRD\] that the set of linear non-MRD codes of dimension $k$ in ${\mathbb{F}}_{q^m}^n$ is in one-to-one correspondence with the algebraic set $$\begin{aligned}
V&=\bigcup_{E\in \mathcal T(k,n)}\left\{(v_1,\ldots,v_{k(n-k)}) \in {\mathbb{F}}_{q^m}^{k(n-k)}\mid f_E(v_1,\ldots,v_{k(n-k)})=0\right\} \\
&=\left\{(v_1,\ldots,v_{k(n-k)}) \in {\mathbb{F}}_{q^m}^{k(n-k)} \mid \prod_{E \in \mathcal T(k,n)} f_E(v_1,\ldots,v_{k(n-k)})=0\right\} \\
&=\left\{(v_1,\ldots,v_{k(n-k)}) \in {\mathbb{F}}_{q^m}^{k(n-k)}\mid
f^*(v_1,\ldots,v_{k(n-k)})=0\right\} ,
\end{aligned}$$ where the last two equalities follow from the well-known fact that $$V(\{f\})\cup V(\{g\})=V(\{fg\})=V(\{\mathrm{lcm}(f,g)\})$$ for any $ f,g \in {\mathbb{F}}_q[x_1,\ldots,x_{k(n-k)}]$.
Note that in the definition of an algebraic set, it suffices to use the square-free part of the defining polynomial(s). In the above definition of $V$ however, $f^*({x_1,\ldots,x_{k(n-k)}})$ is already square-free, as we show in the following.
For every $E\in \mathcal T(k,n)$ the polynomial $f_E({x_1,\ldots,x_{k(n-k)}})$ is square-free. In particular, the polynomial $f^*({x_1,\ldots,x_{k(n-k)}})$ is square-free.
As in the proof of Theorem \[thm:probMRDrough\], every variable $x_i$ is contained in at most one row of the matrix $[I_k\,|\,X]E^T$. Hence, in the polynomial $f_E({x_1,\ldots,x_{k(n-k)}})$ the degree with respect to every variable is at most $1$. Thus $f_E({x_1,\ldots,x_{k(n-k)}})$ cannot have multiple factors.
We now determine an upper bound on the degree of the defining polynomial $f^*$.
\[lem:deg\] Let $E\in \mathcal T(k,n)$ and let ${\mathcal{U}}_0$ be the subspace of ${\mathbb{F}}_q^n$ defined by $${\mathcal{U}}_0 := {\mathrm{rs}}[\; I_k \mid 0 \;]=\left\{(u_1,\ldots,u_n) \in {\mathbb{F}}_q^n\,|\,u_{k+1}=u_{k+2}=\ldots=u_n=0 \right\}.$$ Then $$\deg f_E=k-\dim \left({\textnormal{rs}}(E)\cap {\mathcal{U}}_0 \right) .$$
Let $r:=k-\dim \left({\textnormal{rs}}(E)\cap {\mathcal{U}}_0 \right)$ with $0\leq
r\leq k$. We can write $$E^T=\left[\begin{array}{c}
E_1 \\
\hline
E_2
\end{array}
\right],$$ where $E_1\in {\mathbb{F}}_q^{k\times k}, E_2\in {\mathbb{F}}_q^{(n-k)\times
k}$. Since $\dim \left({\textnormal{rs}}(E)\cap {\mathcal{U}}_0 \right)=k-r$, we have ${\textnormal{rk}}(E_2)=r$. Thus there exists a matrix $R\in{\textnormal{GL}}_k(q)$ such that the first $r$ columns of $E_2R$ are linearly independent and the last $k-r$ columns are zero. Then $$f_E({x_1,\ldots,x_{k(n-k)}})= \det( [\; I_k \mid X \;] E^T)=\det(R)^{-1}\det(E_1R+XE_2R).$$ The last $k-r$ columns of the matrix $XE_2R$ are zero, i.e., the last $k-r$ columns of $E_1R+XE_2R$ do not contain any of the variables $x_i$. On the other hand, the entries of the first $r$ columns are polynomials in ${\mathbb{F}}_q[{x_1,\ldots,x_{k(n-k)}}]$ of degree $1$, since $$E_1R+XE_2R = \left(\sum_{\ell=1}^{k} (E_1)_{i\ell} R_{\ell j} +
\sum_{\ell=1}^{k} \sum_{\ell'=1}^{n-k} X_{i\ell'} (E_2)_{\ell'\ell}
R_{\ell j} \right)_{i,j}.$$ Hence we have $\deg f_E\leq r$.
Now consider the matrix $E_2R$. We can write
$$E_2R=\left[\begin{array}{c|c}
\tilde{E}_2 & 0
\end{array}\right]$$ where $\tilde{E}_2$ is an $(n-k)\times r$ matrix of rank $r$. Hence
$$XE_2R=\left[\begin{array}{c|c}
X\tilde{E}_2 & 0
\end{array}\right].$$ First we prove that the entries of the matrix $X\tilde{E}_2$ are algebraically independent over ${\mathbb{F}}_q$. Fix $1\leq i \leq k$ and denote by $(X\tilde{E}_2)_i$ the $i$-th row of the matrix $X\tilde{E}_2$. Then consider the polynomials $(X\tilde{E}_2)_{ij},$ for $j=1,\ldots, r$, that only involve the variables $x_{(i-1)(n-k)+1},\ldots, x_{i(n-k)}$ . The Jacobian of these polynomials is $\tilde{E}_2^T$, whose rows are linearly independent over ${\mathbb{F}}_q$. Therefore the elements in every row are algebraically independent over ${\mathbb{F}}_q$. Moreover different rows involve different variables, hence we can conclude that the entries of the matrix $X\tilde{E}_2$ are algebraically independent over ${\mathbb{F}}_q$.
At this point consider the set of all $r\times r$ minors of $X\tilde{E}_2 $. These minors are pairwise distinct and linearly independent over ${\mathbb{F}}_q$: otherwise a non-trivial linear combination of them that gives $0$ would produce a non-trivial polynomial relation between the entries of $X\tilde{E}_2$. Now observe that the degree $r$ term of $f_E$ is a linear combination of these minors. If we write $$E_1R=\left[\begin{array}{c|c}
* & \tilde{E}_1
\end{array}\right],$$ where $\tilde{E}_1\in {\mathbb{F}}_q^{k\times (k-r)}$, then the coefficients of this linear combination are given by the $(k-r)\times (k-r)$ minors of $\tilde{E}_1$, multiplied by $\det(R)^{-1}$. Since $E^TR$ has rank $k$ and the last $k-r$ columns of $E_2R$ are $0$, it follows that the columns of $\tilde{E}_1$ are linearly independent, and hence at least one of the coefficients of the linear combination is non-zero. This proves that the degree $r$ term of $f_E$ is non-zero, and hence $\deg
f_E=r$.
We can now give the main result of this subsection, an upper bound on the probability that a random generator matrix generates an MRD code:
\[thm:probMRD\] Let $X\in {\mathbb{F}}_{q^m}^{k(n-k)}$ be randomly chosen. Then $$\mathrm{Pr}\big( \;{\mathrm{rs}}[I_k \mid X ] \mbox{ is an MRD code } \big) \geq 1-\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}q^{-m} .$$
For every $r=0,1,\ldots,k$ we define the set $$\mathcal T_r=\left\{E\in \mathcal T(k,n)\,|\,\dim \left({\mathcal{U}}_0\cap {\textnormal{rs}}(E)\right)=k-r\right\},$$ where $${\mathcal{U}}_0 := {\mathrm{rs}}[\; I_k \mid 0 \;]=\left\{(u_1,\ldots,u_n) \in {\mathbb{F}}_q^n\,|\,u_{k+1}=u_{k+2}=\ldots=u_n=0 \right\}.$$ By Lemma \[lem:intersection\] we have $$\left|\mathcal T_r \right|=\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}.$$ Moreover, by Lemma \[lem:deg\], if $E\in\mathcal T_r$, then $\deg
f_E=r$. Hence, by the definition of $f^*({x_1,\ldots,x_{k(n-k)}})$, we have $$\deg f^*\leq \sum_{E\in\mathcal T(k,n)}\deg f_E=\sum_{r=0}^k\sum_{E \in \mathcal T_r}\deg f_E=\sum_{r=0}^k r\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}.$$ With Lemma \[lem:SZ\], the statement follows.
Remember that we know how to construct MRD codes, namely as Gabidulin codes, for any set of parameters. Hence the probability that a randomly chosen generator matrix generates an MRD code is always greater than zero. However, the lower bound of Theorem \[thm:probMRD\] is not always positive. In particular, for $$m<k(n-k)+\log_qk$$ we get $$\begin{aligned}
&1-\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_q q^{r^2}q^{-m} \\
= &1-q^{-m}\left( k\binom{n-k}{k}_qq^{k^2} + \sum_{r=1}^{k-1}r\binom{k}{k-r}_q\binom{n-k}{r}_q q^{r^2} \right)\\
\leq& 1-q^{-m}\left( kq^{k(n-k)}\right)< 0,
\end{aligned}$$ i.e., the bound is not tight (and not sensible) in these cases. Figure \[plotMRD2\] depicts the lower bounds of Theorem \[thm:probMRDrough\] and Theorem \[thm:probMRD\] for small parameters. One can see that the bound of Theorem \[thm:probMRD\] really is an improvement over the bound of Theorem \[thm:probMRDrough\].
![Lower bounds on the probability that a randomly chosen generator matrix in ${\mathbb{F}}_{2^m}^{2\times 4}$ (left) and ${\mathbb{F}}_{2^m}^{2\times 5}$ (right) generates an MRD code.[]{data-label="plotMRD2"}](probs_MRD3_n4k2q2.pdf "fig:"){width="7.25cm"} ![Lower bounds on the probability that a randomly chosen generator matrix in ${\mathbb{F}}_{2^m}^{2\times 4}$ (left) and ${\mathbb{F}}_{2^m}^{2\times 5}$ (right) generates an MRD code.[]{data-label="plotMRD2"}](probs_MRD3_n5k2q2.pdf "fig:"){width="7.25cm"}
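
The two lower bounds can also be compared numerically. The following sketch evaluates both the bound of Theorem \[thm:probMRDrough\] and the bound of Theorem \[thm:probMRD\], the latter via the Gaussian binomial coefficients appearing in the sum; the parameters $q=2$, $n=4$, $k=2$ are illustrative (they correspond to the left-hand plot of Figure \[plotMRD2\]):

```python
# Comparison of the lower bounds of Theorems [thm:probMRDrough] and [thm:probMRD].
# Exact arithmetic; floats are used only for printing.
from fractions import Fraction

def gauss_binomial(a, b, q):
    """Gaussian binomial coefficient [a choose b]_q."""
    if b < 0 or b > a:
        return 0
    num = den = 1
    for i in range(b):
        num *= q**(a - i) - 1
        den *= q**(i + 1) - 1
    return num // den          # the quotient is always an integer

def rough_bound(q, n, k, m):
    prod = 1
    for i in range(k):
        prod *= q**n - q**i
    return 1 - Fraction(k * prod, q**m)

def improved_bound(q, n, k, m):
    s = sum(r * gauss_binomial(k, k - r, q) * gauss_binomial(n - k, r, q) * q**(r * r)
            for r in range(k + 1))
    return 1 - Fraction(s, q**m)

if __name__ == "__main__":
    q, n, k = 2, 4, 2          # illustrative parameters
    for m in range(8, 16):
        print(m, float(rough_bound(q, n, k, m)), float(improved_bound(q, n, k, m)))
```
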
Probability for Gabidulin codes
-------------------------------
We have seen in Theorem \[thm:topGab\] that the set of matrices in ${\mathbb{F}}_{q^m}^{k\times n}$ in systematic form that generate a generalized Gabidulin code with parameter $s$ (such that $0<s<m$ with $\gcd(s,m)=1$) is in one-to-one correspondence with a subset of the set $$\left\{X \in {\mathbb{F}}_{q^m}^{k \times (n-k)}\,| \; {\textnormal{rk}}(X^{(q^s)}-X)=1 \right\},$$ namely with the elements that represent an MRD code. By Lemma \[lem:systematic\] we furthermore know that, if $X$ has entries from ${\mathbb{F}}_q$, then ${\mathrm{rs}}[\;I_k\mid X\;]$ is not MRD. Hence the set of matrices in systematic form that generate a Gabidulin code is in one-to-one correspondence with a subset of the set $$\mathcal G(s):=\left\{X \in ({\mathbb{F}}_{q^m}\smallsetminus {\mathbb{F}}_q)^{k
\times (n-k)}\,| \; {\textnormal{rk}}(X^{(q^s)}-X)=1 \right\}.$$ For simplicity we make the following estimation of the probability that a randomly chosen generator matrix generates a generalized Gabidulin code.
\[lem:probGabunion\] Let $X\in{\mathbb{F}}_{q^m}^{k \times (n-k)}$ be randomly chosen. Then $$\mathrm{Pr}\big( \;{\mathrm{rs}}[I_k\,|\,X] \mbox{ is a gen.\ Gabidulin code } \big)\leq
\mathop{\sum_{0<s<m}}_{\gcd(s,m)=1}\mathrm{Pr}\big(X\in\mathcal
G(s)\big) =\mathop{\sum_{0<s<m}}_{\gcd(s,m)=1}\frac{|\mathcal
G(s)|}{q^{mk(n-k)}} .$$
The inequality follows from the fact that the set of generalized Gabidulin codes is in one-to-one correspondence with a subset of the set $$\mathop{\bigcup_{0<s<m}}_{\gcd(s,m)=1} \mathcal G(s) .$$ Since $|{\mathbb{F}}_{q^m}^{k(n-k)}| = q^{mk(n-k)}$, the statement follows.
For every integer $0<s<m$ with $\gcd(m,s)=1$, we now define the map $\varPhi_s$ given by $$\begin{array}{rcl}
\varPhi_s:{\mathbb{F}}_{q^m}^{k\times (n-k)} &\longrightarrow & {\mathbb{F}}_{q^m}^{k\times (n-k)} \\
X & \longmapsto & X^{(q^s)}-X.
\end{array}$$ Observe that $\varPhi_s$ is exactly the function that maps every entry $X_{ij}$ of the matrix $X$ to $\varphi_s(X_{ij})$. Moreover we define the sets $$\begin{aligned}
\mathcal R_1 &: =\left\{A\in {\mathbb{F}}_{q^m}^{k\times (n-k)}\,|\, {\textnormal{rk}}(A)=1\right\}, \\
\mathcal R_1^*&:=\left\{A\in ({\mathbb{F}}_{q^m}^*)^{k\times (n-k)}\,|\, {\textnormal{rk}}(A)=1\right\}, \\
\mathcal K
&:=\left(\ker\left(\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}\right)\right)^{k\times(n-k)}
.\end{aligned}$$
We state now the crucial results that will help us to compute an upper bound on the cardinality of the sets $\mathcal G(s)$.
\[lem:phi\]
1. Given a matrix $A \in {\mathbb{F}}_{q^m}^{k\times (n-k)}$, there exists a matrix $X \in {\mathbb{F}}_{q^m}^{k\times (n-k)}$ such that $\varPhi_s(X)=A$ if and only if $A\in\mathcal K$.
2. If $A \in \mathcal R_1$, then $$|\varPhi_s^{-1}(A)|=\begin{cases}
0 & \mbox{ if } A\notin \mathcal K \\
q^{k(n-k)} & \mbox{ if } A\in \mathcal K.
\end{cases}$$
3. For every integer $s$ coprime to $m$ $$\mathcal
G(s)=\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal K),$$ and $$|\mathcal G(1)|=|\mathcal G(s)|=q^{k(n-k)}|\mathcal R_1^*\cap \mathcal K|.$$
<!-- -->
1. Since $\varPhi_s$ is the function that maps every entry $X_{ij}$ of the matrix $X$ to $\varphi_s(X_{ij})$, we have that $A
\in \mathrm{Im}(\varPhi_s)$ if and only if every entry $A_{ij}$ of $A$ belongs to $\mathrm{Im}(\varphi_s)$. By Lemma \[lem:trace\] part $6$ this is true if and only if every $A_{ij}$ belongs to $\ker\left(\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}\right)$.
2. If $A\notin \mathcal K$, then by part $1$ this means that $\varPhi_s^{-1}(A)=\emptyset$. Otherwise, again by part $1$, $\varPhi_s^{-1}(A)\neq\emptyset$. In this case every entry $A_{ij}$ belongs to $\mathrm{Im}(\varphi_s)$, and since $\varphi_s$ is linear over ${\mathbb{F}}_q$, $|\varphi_s^{-1}(A_{ij})|=|\ker(\varphi_s)|$. Since, by Lemma \[lem:trace\], $$|\ker(\varphi_s)|=\frac{|{\mathbb{F}}_{q^m}|}{|\mathrm{Im}(\varphi_s)|}=q,$$ and $A$ has $k(n-k)$ entries, we get $|\varPhi_s^{-1}(A)|=q^{k(n-k)}$.
3. Observe that $\mathcal R_1^*=\mathcal R_1\cap
\left({\mathbb{F}}_{q^m}^*\right)^{k\times (n-k)}$. Moreover $$\varPhi_s^{-1}(\mathcal R_1)=\left\{X \in {\mathbb{F}}_{q^m}^{k \times (n-k)}\,| \; {\textnormal{rk}}(X^{(q^s)}-X)=1 \right\}$$ and, by Lemma \[lem:trace\] part $5$, $$\varPhi_s^{-1}(\left({\mathbb{F}}_{q^m}^*\right)^{k\times (n-k)})=\left({\mathbb{F}}_{q^m}\smallsetminus {\mathbb{F}}_q\right)^{k \times (n-k)}.$$ Hence $$\varPhi_s^{-1}(\mathcal R_1^*)=\varPhi_s^{-1}(\mathcal R_1\cap
\left({\mathbb{F}}_{q^m}^*\right)^{k\times (n-k)})=\varPhi_s^{-1}(\mathcal
R_1)\cap \varPhi_s^{-1}(\left({\mathbb{F}}_{q^m}^*\right)^{k\times
(n-k)})=\mathcal G(s).$$ Now we can write $$\mathcal R_1^*=(\mathcal R_1^*\cap\mathcal K)\cup(\mathcal R_1^*\cap \mathcal K^c)$$ and by part $1$ we have that $\varPhi_s^{-1}(\mathcal R_1^*\cap
\mathcal K^c)=\emptyset$. Then $$\mathcal G(s)= \varPhi_s^{-1}(\mathcal R_1^*)=
\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal
K)\cup\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal
K^c)=\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal K).$$ By part $2$ we have $\left|\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal
K)\right|=q^{k(n-k)}|\mathcal R_1^*\cap \mathcal K|$, which proves the statement.
In analogy to the previous subsection we now first derive a straightforward upper bound on the probability that a random generator matrix gives rise to a generalized Gabidulin code. Afterwards we will improve this bound.
\[thm:probGabrough\] Let $X\in {\mathbb{F}}_{q^m}^{k(n-k)}$ be randomly chosen. Denote by $\phi(m)$ the Euler-$\phi$-function. Then $$\mathrm{Pr}\big( \;{\mathrm{rs}}[I_k \mid X ] \mbox{ is a generalized Gabidulin code } \big)
\leq \phi(m) ( 2q^{1-m})^{\lfloor\frac{k}{2}\rfloor
\lfloor\frac{n-k}{2}\rfloor}$$
We want to derive the cardinality of $\mathcal G(s)$ for any valid $s$. For this, by Lemma \[lem:phi\] part $3$, we note that these cardinalities are all equal to the cardinality of $\mathcal G(1)$. Now for any $X \in ({\mathbb{F}}_{q^m}\smallsetminus {\mathbb{F}}_q)^{k \times (n-k)}$ the rank of $X^{(q)}-X$ is greater than zero. Therefore we can also write $$\mathcal G(1)=\left\{X \in ({\mathbb{F}}_{q^m}\smallsetminus {\mathbb{F}}_q)^{k \times (n-k)}\,| \; {\textnormal{rk}}(X^{(q)}-X)\leq 1 \right\}.$$ The condition ${\textnormal{rk}}(X^{(q)}-X)\leq 1$ is equivalent to the condition that every $2\times 2$-minor of $X^{(q)}-X$ is zero. Hence a necessary condition for $X\in\mathcal G(1)$ is that every minor in a fixed set of pairwise non-intersecting $2\times 2$-minors is zero. We can choose $\lfloor\frac{k}{2}\rfloor \lfloor\frac{n-k}{2}\rfloor$ many such non-intersecting minors, each of which has degree at most $2q$ if we see the entries of $X$ as the variables $x_1,\dots, x_{k(n-k)}$. With Lemma \[lem:SZ\] we get for each minor $M_{ij}$, $$\Pr(M_{ij} = 0) \leq 2q^{1-m}.$$ Since pairwise non-intersecting minors involve disjoint sets of variables, the events that they vanish are independent, and the probability that all of them are zero is at most $$( 2q^{1-m})^{\lfloor\frac{k}{2}\rfloor \lfloor\frac{n-k}{2}\rfloor}.$$ With Lemma \[lem:probGabunion\] and the fact that the number of $s$ with $0<s<m$ and $\gcd(s,m)=1$ is given by $\phi(m)$, the statement follows.
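
Again, this bound is simple to evaluate numerically; a minimal sketch (parameters chosen for illustration only):

```python
# Evaluation of the upper bound of Theorem [thm:probGabrough]:
#   phi(m) * (2 * q^(1-m))^(floor(k/2) * floor((n-k)/2))
from math import gcd

def euler_phi(m):
    # Euler phi-function, computed by direct counting (adequate for small m)
    return sum(1 for s in range(1, m + 1) if gcd(s, m) == 1)

def rough_gab_bound(q, n, k, m):
    e = (k // 2) * ((n - k) // 2)
    return euler_phi(m) * (2 * q ** (1 - m)) ** e

if __name__ == "__main__":
    q, n, k = 2, 5, 2          # illustrative parameters
    for m in range(8, 16):
        print(m, rough_gab_bound(q, n, k, m))
```
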
To improve the above bound we need the following lemma.
\[lem:R1\] The set $\mathcal R_1^*\cap \mathcal K$ is in one-to-one correspondence with the set $$\begin{aligned}
V_R :=& \left\{\left({\boldsymbol}\alpha,{\boldsymbol}\beta\right)\in {\mathbb{F}}_{q^m}^k
\times {\mathbb{F}}_{q^m}^{n-k-1}\,|\,\alpha_i, \alpha_i\beta_j \in
\ker \left(\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}\right)\smallsetminus\{0\}\right\} \\
=& \left\{\left({\boldsymbol}\alpha,{\boldsymbol}\beta\right)\in {\mathbb{F}}_{q^m}^k \times
{\mathbb{F}}_{q^m}^{n-k-1}\,|\,\alpha_i \in \ker
\left(\mathrm{Tr}_{{\mathbb{F}}_{q^m}/{\mathbb{F}}_q}\right)\smallsetminus\{0\},
\beta_j \in \bigcap_{i=1}^k\ker
\left(\mathrm{T}_{\alpha_i}\right)\smallsetminus\{0\}\right\}
\end{aligned}$$ via the map $\psi:V_R \longrightarrow \mathcal R_1^*\cap \mathcal
K$, given by $$({\boldsymbol}\alpha,{\boldsymbol}\beta)\longmapsto\left[\begin{array}{c}\alpha_1 \\
\vdots \\
\alpha_k
\end{array}\right]\left[1, \beta_1,\ldots, \beta_{n-k-1}\right],$$ and hence $$|\mathcal R_1^*\cap \mathcal K|\leq (q^{m-1}-1)^{n-1}$$
From the definition of the set $V_R$ it is clear that the map $\psi$ is well-defined, i.e., it maps every element in $V_R$ to an element in $\mathcal R_1^*\cap \mathcal K$.
Let $({\boldsymbol}\alpha,{\boldsymbol}\beta)$, $({\boldsymbol}\gamma,{\boldsymbol}\delta)$ be two elements that have the same image. Then the first column of $\psi({\boldsymbol}\alpha,{\boldsymbol}\beta)$ and the first column of $\psi({\boldsymbol}\gamma,{\boldsymbol}\delta)$ are equal, hence ${\boldsymbol}\alpha={\boldsymbol}\gamma$. Also the first rows of $\psi({\boldsymbol}\alpha,{\boldsymbol}\beta)$ and $\psi({\boldsymbol}\gamma,{\boldsymbol}\delta)$ are equal, thus $\alpha_1\beta_j=\gamma_1\delta_j$ for every $j=1,\ldots, n-k-1$, and since $\alpha_1=\gamma_1\neq 0$ we get ${\boldsymbol}\beta={\boldsymbol}\delta$ and this shows the injectivity of the map $\psi$.
In order to show the surjectivity consider a rank $1$ matrix $A\in\mathcal R_1^*\cap \mathcal K$ with entries $A_{ij}$. Consider the vectors ${\boldsymbol}\alpha=(A_{11},\ldots,A_{k1})^T$ and $${\boldsymbol}\beta=A_{11}^{-1}(A_{12},\ldots,A_{1(n-k)})^T.$$ It is clear that $({\boldsymbol}\alpha, {\boldsymbol}\beta)\in V_R$, and that $\psi({\boldsymbol}\alpha,{\boldsymbol}\beta)=A$.
At this point for every $\alpha_i$ we have $q^{m-1}-1$ possible choices, while for every $\beta_j$ the number of choices is less than or equal to $|\ker(T_{\alpha_1})\smallsetminus\{0\}|$, which is again $q^{m-1}-1$. Therefore we get $$|\mathcal R_1^*\cap \mathcal K|\leq (q^{m-1}-1)^{n-1}.$$
We can now formulate the main result concerning the probability that a random linear rank-metric code is a generalized Gabidulin code.
\[thm:mainprobGab\] Let $X\in {\mathbb{F}}_{q^m}^{k\times (n-k)}$ be randomly chosen. Then $$\Pr\big( \;{\mathrm{rs}}[I_k\,|\,X] \mbox{ is a gen.\ Gabidulin code }\big)\leq
\phi(m)q^{-(m-1)(n-k-1)(k-1)},$$ where $\phi$ denotes the Euler-$\phi$ function.
We have already seen in Lemma \[lem:probGabunion\] that $$\Pr\big( \;{\mathrm{rs}}[I_k\,|\,X] \mbox{ is a gen Gabidulin code }\big)\leq
\mathop{\sum_{0<s<m}}_{(s,m)=1}\frac{|\mathcal
G(s)|}{q^{mk(n-k)}}.$$ By Lemma \[lem:phi\] part $3$, the sets $\mathcal G(s)$ all have cardinality $q^{k(n-k)}|\mathcal R_1^*\cap \mathcal K|$, thus $$\mathop{\sum_{0<s<m}}_{(s,m)=1}\frac{|\mathcal G(s)|}{q^{mk(n-k)}}=\phi(m)\frac{q^{k(n-k)}|\mathcal R_1^*\cap \mathcal K|}{q^{mk(n-k)}}.$$ Moreover by Lemma \[lem:R1\], we know that $|\mathcal R_1^*\cap
\mathcal K|\leq (q^{m-1}-1)^{n-1}\leq q^{(m-1)(n-1)} $. Combining all the inequalities implies the statement.
We can now give the final main result of this work, which proves the existence of linear MRD codes that are not generalized Gabidulin codes for almost every set of parameters.
\[thm:main\]
- For any prime power $q$, and for any $1<k<n-1$, there exists an integer $M(q,k,n)$ such that, for every $m\geq M(q,k,n)$, there exists a $k$-dimensional linear MRD code in ${\mathbb{F}}_{q^m}^{n}$ that is not a generalized Gabidulin code.
- An integer $M(q,k,n)$ with this property can be found as the minimum integer solution of the inequality $$\label{eq:main}
1-\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}q^{-m}>(m-1)q^{-(m-1)(n-k-1)(k-1)}$$ taken over all $m\in \mathbb N$.
For fixed $q$, $k$ and $n$ consider the function $$\begin{aligned}
F(m) & =\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}q^{-m}+(m-1)q^{-(m-1)(n-k-1)(k-1)} \\
& =aq^{-m}+(m-1)q^{-c(m-1)},
\end{aligned}$$ where $$a:=\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}, \;\;\;\;c:=(n-k-1)(k-1).$$ Since $k\neq 1,n-1$, we have $c>0$. The first summand $aq^{-m}$ is decreasing in $m$, and for $m\geq 2$ the second summand is non-increasing as well, since $\frac{mq^{-cm}}{(m-1)q^{-c(m-1)}}=\frac{m}{m-1}\,q^{-c}\leq 1$. Hence $F(m)$ is non-increasing and $1-F(m)$ is non-decreasing for $m\geq 2$. Moreover it is easy to see that $$\lim_{m\rightarrow +\infty} 1-F(m)=1.$$ This means that the set of the solutions of Inequality (\[eq:main\]) is non-empty, so it has a minimum solution $M(q,k,n)$. For $m=1$ the left-hand side of (\[eq:main\]) is negative while the right-hand side is $0$, so $M(q,k,n)\geq 2$; since $1-F(m)$ is non-decreasing for $m\geq 2$, every $m\geq M(q,k,n)$ is also a solution of (\[eq:main\]). Hence, by Theorems \[thm:probMRD\] and \[thm:mainprobGab\], we have the following chain of inequalities for every $m\geq M(q,k,n)$, $$\Pr\big( {\mathrm{rs}}[I_k\,|\,X] \mbox{ is MRD}\big)\geq 1-aq^{-m}>(m-1)q^{-c(m-1)}\geq \Pr\big( {\mathrm{rs}}[I_k\,|\,X] \mbox{ is gen.\ Gabidulin}\big),$$ which concludes the proof.
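
The integer $M(q,k,n)$ can be found by a direct search over $m$. The following sketch checks Inequality (\[eq:main\]) in exact rational arithmetic; the parameter triples at the bottom are chosen purely for illustration.

```python
# Direct search for M(q,k,n): the least m for which Inequality (eq:main) holds.
from fractions import Fraction

def gauss_binomial(a, b, q):
    if b < 0 or b > a:
        return 0
    num = den = 1
    for i in range(b):
        num *= q**(a - i) - 1
        den *= q**(i + 1) - 1
    return num // den

def lhs(q, n, k, m):
    # left-hand side of (eq:main): the lower bound of Theorem [thm:probMRD]
    s = sum(r * gauss_binomial(k, k - r, q) * gauss_binomial(n - k, r, q) * q**(r * r)
            for r in range(k + 1))
    return 1 - Fraction(s, q**m)

def rhs(q, n, k, m):
    # right-hand side of (eq:main): (m-1) * q^{-(m-1)(n-k-1)(k-1)}
    c = (n - k - 1) * (k - 1)
    return Fraction(m - 1, q**(c * (m - 1)))

def M(q, k, n, m_max=200):
    for m in range(1, m_max):
        if lhs(q, n, k, m) > rhs(q, n, k, m):
            return m
    return None

if __name__ == "__main__":
    for (q, k, n) in [(2, 2, 4), (2, 2, 5), (3, 2, 4)]:   # illustrative parameters
        print((q, k, n), M(q, k, n))
```
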
In Figures \[experimental3\] and \[gabidulin2\] we compare the bounds derived in this section with experimental results, which we obtained by randomly generating over $500$ rank-metric codes. The continuous lines show the bounds, the dotted lines show the experimental probabilities. In Figure \[experimental3\] we see that Gabidulin codes are very few among all MRD codes when the extension degree $m$ is large. The probabilities for generalized Gabidulin codes decrease so quickly for increasing parameters that we show them separately, in logarithmic scale, in Figure \[gabidulin2\]. Notice that from $m=10$ on it is very difficult to generate a generalized Gabidulin code randomly, and thus experimentally we got a probability of zero. This is why the experimental results are shown only up to $m=9$.
![Bounds and experimental results for MRD and generalized Gabidulin codes in ${\mathbb{F}}_{2^m}^{2\times 4}$ and ${\mathbb{F}}_{3^m}^{2\times
4}$.[]{data-label="experimental3"}](experimental-q2n4k2.png "fig:"){width="7.25cm"} ![Bounds and experimental results for MRD and generalized Gabidulin codes in ${\mathbb{F}}_{2^m}^{2\times 4}$ and ${\mathbb{F}}_{3^m}^{2\times
4}$.[]{data-label="experimental3"}](experimental-q3n4k2.png "fig:"){width="7.25cm"}
![Bounds and experimental results for generalized Gabidulin codes in ${\mathbb{F}}_{2^m}^{2\times 5}$ and ${\mathbb{F}}_{3^m}^{2\times 4}$.[]{data-label="gabidulin2"}](gabidulin-q2n5k2-novert.png "fig:"){width="7.25cm"} ![Bounds and experimental results for generalized Gabidulin codes in ${\mathbb{F}}_{2^m}^{2\times 5}$ and ${\mathbb{F}}_{3^m}^{2\times 4}$.[]{data-label="gabidulin2"}](gabidulin-q3n4k2-novert.png "fig:"){width="7.25cm"}
Conclusion {#sec:conclusion}
==========
In this work we have shown that, over the algebraic closure of a given finite field, MRD codes and non-Gabidulin codes are generic sets among all linear rank-metric codes. For this we have used two known criteria for these two properties, which give rise to algebraic descriptions of the respective sets. Afterwards we have used the same two criteria to establish a lower bound on the probability that a randomly chosen systematic generator matrix generates an MRD code, and an upper bound on the probability that a randomly chosen systematic generator matrix generates a generalized Gabidulin code. With these two bounds we were then able to show that non-Gabidulin MRD codes exist for any length $n$ and dimension $1<k<n-1$, as long as the field extension degree $m$ is large enough.
[^1]: This work was supported by SNF grant no.149716.
---
abstract: 'Several designs for micro-devices for chemotaxis based on nano-motors are proposed. The nano- or micro-motors are the conventional Janus rods or spheres that are powered by the catalytic reaction of fuels such as hydrogen peroxide. It is shown how these can be linked to make a device that can follow a concentration gradient of the fuel. The feasibility of assembling the devices using micromanipulation or metallic deposition is discussed. A possible design principle is suggested for a device that follows the concentration gradient of an analyte other than the fuel.'
author:
- Phil Attard
date: '3 September, 2012, [email protected], arXiv:cond-mat'
title: 'Design of Chemotaxis Devices Using Nano-Motors'
---
Introduction
============
Chemotaxis is the movement of biological organisms in response to a chemical concentration gradient. Three things are required for the phenomenon: an ability to sense the gradient, a means of steering or orientation, and a means of propulsion. It is a challenge to modern micro-mechanical techniques to manufacture an artificial chemotaxis device on similar scales to those that occur in real organisms. One might imagine in the long term that such devices might play a role in targeted drug delivery or in chemical detection and localization. A first step on the long journey to such possible applications is the conceptual design of a micron sized chemotaxis device.
In recent years micro-motors have been manufactured in the form of Janus spheres or rods that differentially catalyze reactions at their surface. The solvent or solute provide the fuel, and the differential reaction is the basis of the propulsion mechanism. A common example is the catalytic decomposition of hydrogen peroxide, in which case both non-conducting (e.g. Pt/SiO$_2$, Pt/TiO$_2$) and conducting (e.g. Au/Pt) combinations have been used.[@Paxton05; @Kline05; @Ozin05; @Howse07; @Burdick08; @Gibbs09] In the conducting case, the propulsion mechanism is thought to be electrokinetic.[@Moran11]
One limitation of the artificial micro-motors that have been manufactured to date is that they have no method of sensing a concentration gradient or of aligning themselves with it. Although the propulsion force and hence the speed of the differential catalytic motors is proportional to the local concentration of fuel, this in itself does not provide directed motion because random Brownian impulses rapidly disorient the motor and there is no restoring mechanism to realign it in the preferred direction. Hence the motor follows a random walk through the solvent, with speed proportional to the local concentration of fuel, but with the velocity on average zero. Directional control has been achieved by using an external magnetic field and incorporating a ferromagnetic segment in the micro-motor,[@Kline05; @Burdick08] but obviously as a model of chemotaxis it would be preferable if the steering mechanism could be based on the chemical concentration gradient itself.
The present note proposes some relatively simple designs that link differential catalytic micro-motors to achieve directional motion due to a chemical concentration gradient. The following section gives two designs for such chemotaxis micro-devices. In the third section, the feasibility of manufacture and other practical considerations are discussed.
Chemotaxis Micro-Devices
========================
Figure \[mm-element\] is a sketch of a micro-motor. The convention used in this note is that the force on the motor is from the dark end to the light end, and hence the motion tends in the direction signified by the arrow. To be concrete, if the micro-motor was a Pt/Au Janus rod and the fuel was H$_2$O$_2$, then gold would be dark and platinum light. As mentioned in the introduction, the micro-motor has no preferred orientation and so it always travels in whatever direction its axis happens to be pointing irrespective of any concentration gradient. Random fluctuations frequently reorient the motor. The propulsive force is proportional to the local concentration. In practice the motor reaches a terminal speed in which the drag force is equal and opposite to the propulsive force. Typical dimensions of the motor are 2$\,\mu$m long and 300$\,$nm in diameter, and speeds can exceed 200$\,\mu$m$\,$s$^{-1}$.[@Moran11]
The basic concept for the chemotaxis devices is to use micro-motors to create a torque in the presence of a concentration gradient. The simplest such device is sketched in Fig. \[mm-ataxis\]. (Practical issues regarding the manufacture and performance of the chemotaxis devices will be discussed in the following section.) The device consists of two parallel micro-motors, oriented perpendicular to a neutral lever arm and rigidly attached at the ends. It makes no difference whether the attached motors are Janus cylinders or spheres. In the orientation shown, the device moves against the concentration gradient, (i.e. from high concentrations to low), which is called negative chemotaxis. The reason for this particular arrangement is that it is stable to random fluctuations. For example, in Fig. \[mm-ataxis-rotate\] the device is rotated clockwise 45$^\circ$. Both motors create a propulsive force in the SW direction, but, due to the concentration gradient, the left motor experiences a higher concentration and hence creates a larger force. The consequent net turning moment on the couple acts in the counter-clockwise direction, which tends to restore the device to the original orientation.
For simplicity, in Fig. \[mm-ataxis\] the device is sketched in the plane of the page, and the discussion focussed on angular fluctuations within the plane of the page. In reality one has to be concerned with the orientation of the device in three dimensional space. The device as drawn in Fig. \[mm-ataxis\] would experience random twists about the long axis that would provide a stochastic contribution to its motion superimposed upon the deterministic negative chemotaxis. For full stability, one ought attach to the middle of the device in Fig. \[mm-ataxis\] an identical device oriented perpendicular to the page (all four motors parallel). Such an X-shaped device is stable with respect to all angular fluctuations.
A limitation of the simple device sketched in Fig. \[mm-ataxis\] (and its three dimensional, cross-shaped analogue) is that it only performs negative chemotaxis. It is desirable to have the possibility of positive chemotaxis. This can be accomplished by using steering elements of the form shown in Fig. \[mm-steering\]. A steering element consists of two antiparallel motors rigidly attached at the ends of a rod with which they are aligned. In the orientation shown, the elements move either with (positive chemotaxis, left element) or against (negative chemotaxis, right element) the concentration gradient. Even though the motors on each element oppose each other, due to the concentration gradient the motor at the higher concentration provides the larger propulsive force and hence the direction of motion. Neither steering element on its own has any reorientation ability and so the configuration shown is not stable.
Joining the two types of steering elements with a rigid rod, as in Fig. \[mm-stable\], provides the necessary orientational stability. For example, in Fig. \[mm-stable-rotate\] a counter-clockwise 45$^\circ$ rotation creates a NE pointing force on the upper steering element, and a SW pointing force on the lower steering element, due to the way these elements have been designed to move in a concentration gradient. The coupling of these opposing forces gives a restoring clockwise torque on the device. As in Fig. \[mm-ataxis\], only the simplest planar configuration is shown explicitly here; full three dimensional stability can be obtained by attaching identical steering elements perpendicular to the page. To obtain motion (positive or negative chemotaxis), one can attach a motor with the desired orientation to the central rod.
Practical Considerations
========================
An important consideration is the size of the device. In general terms the larger the lever arm, the greater the torque, and hence the greater the orientation stability of the chemotaxis devices described above. However, a larger lever arm also increases the drag, which reduces the terminal velocity.
The devices have been sketched above with micro-motors attached to the arms. Micro-manipulation techniques in the field of atomic force microscopy currently allow the routine gluing of colloidal spheres on the order of 10$\,\mu$m in diameter to cantilevers on the order of 100$\,\mu$m long (see, for example, Ref. [@Zhu] and references therein). Micro-motors have currently been made in the form of rods 2$\,\mu$m long and 300$\,$nm in diameter,[@Paxton06] and in the form of 2$\,\mu$m colloidal spheres.[@Gibbs10] Larger motors are likely easier to manufacture, and such small motors have only been pursued to date for reasons of buoyancy (see next) and for the kudos associated with achieving the smallest working micro-motor.
Since the motion of the micro-motors and of the chemotaxis devices occurs in three-dimensional liquid, it is important that the effects of gravity be minimised in order to avoid either sedimentation or flotation of the device during its motion. For the 2$\,\mu$m motors mentioned above, the effects of gravity are small compared to the thermal fluctuations in the liquid, and buoyancy effects can be neglected over experimental time scales. In the case of the chemotaxis devices proposed here, micro-manipulation techniques demand somewhat larger micro-motors and lever arms, and one may possibly have to consider buoyancy effects. One solution is to choose connecting rods with positive buoyancy (e.g. polymer), such that in combination with the negatively buoyant attached motors the chemotaxis device is overall neutrally buoyant. One could also possibly attach negatively (e.g. tungsten) or positively (e.g. polymer) buoyant micro-spheres to the lever arms to achieve neutral buoyancy.
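
To make the buoyancy consideration concrete, one can compare the Stokes settling speed of a small sphere in water with the propulsion speeds quoted earlier. The following sketch is only an order-of-magnitude estimate; all parameter values in it (a 2$\,\mu$m diameter sphere with an assumed excess density of $10^3\,$kg$\,$m$^{-3}$, roughly silica-like) are illustrative assumptions, not measured properties of any particular motor.

```python
# Order-of-magnitude estimate of the Stokes settling speed of a small sphere in water.
# All numbers below are illustrative assumptions, not measured motor properties.

g = 9.81            # m s^-2, gravitational acceleration
eta = 1.0e-3        # Pa s, viscosity of water
radius = 1.0e-6     # m, radius of a 2 um diameter sphere
delta_rho = 1.0e3   # kg m^-3, assumed excess density over water (silica-like)

v_sed = 2.0 * delta_rho * g * radius**2 / (9.0 * eta)   # Stokes settling speed
print(f"settling speed ~ {v_sed * 1e6:.1f} um/s")        # ~2 um/s for these values
```

For these assumed values the settling speed is of order a few $\mu$m$\,$s$^{-1}$, small compared with the propulsion speeds quoted above but not negligible over long experiments; denser metallic segments increase it further, which is why the neutral-buoyancy strategies mentioned above, or smaller motors, are attractive.
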
Alternatively, it might be possible to avoid the attachment step and to manufacture a smaller chemotaxis device by making the lever arms themselves the motors. This is illustrated in Fig. \[mm-evap\], where three elements for chemotaxis devices are shown. The elements could be made by metal evaporation and deposition, as has already been used to manufacture 2$\,\mu$m spherical micromotors.[@Gibbs10] For the case of negative chemotaxis (left element, Fig. \[mm-evap\]), the element is shown rotated from its stable position in order to indicate the restoring torque. It may be seen that it makes no conceptual difference whether the turning moment arises from a pair of motors attached at the ends of the lever arm, or from a continuum of motors distributed along its length. In the case of the steering elements it is imagined that the second metal can be deposited on top of the first at the ends of the rod, either by appropriate orientation of the rod, or by the use of shadow masks. It is not necessary that the entire rod be coated with the first metal.
Finally, although the devices proposed above can be expected to display chemotaxis, they suffer from a significant limitation, namely that the chemical used to power the motor is the same as that used to fix the orientation. In the real world, organisms that display chemotaxis respond to a given chemical via its concentration gradient, and their means of locomotion is quite independent of this stimulus. The next step in chemotaxis devices would be to separate these functions so that the motor fuel is different from the chemical gradient being followed.
Figure \[mm-bio\] sketches one possible design for a chemotaxis device that separates the functions of detection and locomotion. The idea is that a micro-motor attached to the center of a rod moves the device according to the concentration of the chosen propellent and is insensitive to any concentration gradients. Along the rod are distributed binding sites specific for the analyte of interest. It is assumed that this analyte binds reversibly to these sites, such that the fraction of occupied sites at any time depends on the local concentration of the analyte. It is further assumed that bound analytes increase the drag force on the rod, which may be reasonable for a polymer or biological macromolecule. These two design goals are undeniably challenging, but to the extent that they are satisfied, and with the addition of an identical arm perpendicular to the page, the device would have orientation stability, and it will display positive chemotaxis for the chosen analyte. Obviously there are many challenges in tuning the parameters to obtain a working device, but at least conceptually it illustrates one possible way that locomotion and steering can be separated in an artificial chemotaxis device.
[99]{}
W. F. Paxton, A. Sen, and T. E. Mallouk, Chem.-Euro. J. [**11**]{}, 6462 (2005).
T. R. Kline, W. F. Paxton, T. E.Mallouk, and A. Sen, Angew. Chem. Intl Edn [**44**]{}, 744 (2005).
G. A. Ozin, I. Manners, S. Fournier-Bidoz, and A. Arsenault, Adv. Mater. (Weinheim, Ger.) [**17**]{}, 3011 (2005).
J. R. Howse, R. A. L. Jones, A. J. Ryan, T. Gough, R. Vafabakhsh, and R. Golestanian, Phys. Rev. Lett. [**99**]{}, 048102 (2007).
J. Burdick, R. Laocharoensuk, P. M. Wheat, J. D. Posner, and J. Wang, J. Amer. Chem. Soc. [**130**]{}, 8164 (2008).
J. G. Gibbs and Y. P. Zhao, Appl. Phys. Let. [**94**]{}, 163104 (2009).
J. L. Moran and J. D. Posner, J. Fluid Mech. [**680**]{}, 31 (2011).
L. Zhu, P. Attard, and C. Neto, Langmuir, [**27**]{}, 6712 (2011).
W. F. Paxton, P. T. Baker, T. R. Kline, Y. Wang, T. E. Mallouk, and A. Sen J. Am. Chem. Soc. [**128**]{}, 14881 (2006).
J. G. Gibbs, N. A. Fragnito, and Y. Zhao, Appl. Phys. Lett. [**97**]{}, 253107 (2010).
---
abstract: 'We review some aspects of recurrence in topological dynamics and focus on two open problems. The first is an old one concerning the relation between Poincaré and Birkhoff recurrence; the second, due to Boshernitzan, is about moving recurrence. We provide a partial answer to a topological version of the moving recurrence problem.'
address:
- |
Department of Mathematics\
Rice University\
Huston, Tx 77005\
USA
- |
Department of Mathematics\
Tel Aviv University\
Ramat Aviv\
Israel
author:
- Michael Boshernitzan and Eli Glasner
date: 'September 28, 2008'
title: On two recurrence problems
---
[^1]
Introduction {#introduction .unnumbered}
============
Poincaré’s recurrence theorem is the first and most basic theorem of ergodic theory. It asserts that given a measure preserving (invertible) dynamical system $(X,\mu,{\mathcal{X}},T)$ and $A\in {\mathcal{X}}$ with $\mu(A)>0$, the set $N(A,A)=\{n\in {\mathbb{Z}}:\mu(T^n A\cap A)>0\}$ meets every set of the form $(L - L)\setminus\{0\}
=\{n - m: n, m \in L, n \ne m\}$ with infinite $L\subset {\mathbb{Z}}$. The proof of this surprising fact is straightforward: The sets $T^n A, n\in L$, having the same (positive) measure, can not be all disjoint (mod $\mu$). If $\mu(T^n A \cap T^m A)>0$ then $\mu(T^{n -m}A\cap A)>0$ hence $n -m \in N(A,A)$.
This basic measure theoretic recurrence theorem has a topological counterpart due to G. D. Birkhoff. If $(X,T)$ is a topological dynamical system ($X$ is a compact metric space and $T\colon X \to X$ is a homeomorphism of $X$ onto itself), then there is a [*recurrent point*]{} in $X$; i.e. there is a point $x \in X$ such that for every ${\epsilon}>0$ there is some $n \ge 1$ with $d(x, T^nx) < {\epsilon}$. A purely topological proof of this theorem (i.e. one which does not use the fact that such a system always admits an invariant probability measure and then applies Poincaré’s theorem) follows from the fact that minimal subsystems always exist. One first applies Zorn’s lemma to show that every compact topological system admits a minimal subset and then uses the characterization of a point whose orbit closure is minimal as a [*uniformly recurrent point*]{} (see Lemma \[ur\] below).
Poincaré’s and Birkhoff’s recurrence theorems obtained more recently a deep and far reaching generalization in the form of Furstenberg’s [*multiple recurrence theorem*]{}, from which Furstenberg was able to deduce the famous theorem of Szemerédi: a subset $A \subset {\mathbb{N}}$ of positive upper Banach density contains arbitrarily long arithmetical progressions (see Furstenberg [@Fur]).
In the present work we review some aspects of recurrence in topological dynamics as developed by Gottschalk and Hedlund [@GH], and Furstenberg [@Fur], including several “folklore" theorems, and then focus on two particular open problems. The first is an old one concerning the relation between Poincaré and Birkhoff recurrence (see Problems ($A$), ($A'$) and ($A''$), or \[prob:A\], \[prob:A1\] and \[prob:A2\], respectively) and the second, due to Boshernitzan, is about ‘moving recurrence’ (see Section \[sec:mr\]). While the original “moving recurrence" problem remains open, we provide here a partial answer to a topological version of the problem (Theorem \[mr-thm\]). The paper also contains other new results on topological recurrence. In particular, in Section \[sec:tr\] we introduce the notion of $r$-Birkhoff sets (approximating that of Birkhoff sets) and present some preliminary results concerning these sets. Finally in Section \[Sec-UR\] we show that “absolute moving recurrence" is equivalent to uniform rigidity. For related works see [@Fur], [@Ru], [@A], [@G], [@W], [@AG], [@HY], [@G1] and [@BFW]. A comprehensive review by Frantzikinakis and McCutcheon on the subject of recurrence in dynamics is forthcoming [@FM].
This work was done while both authors participated in the program “Ergodic Theory and Additive Combinatorics" at MSRI in the summer of 2008. We thank MSRI for its support. We also thank Benjy Weiss for very helpful discussions.
A reminder of some preliminary definitions and basic results
============================================================
Let $(X,T)$ be a dynamical system, where $X$ is a compact metric space and $T$ is a homeomorphism of $X$ onto itself. For $A$ and $B$ subsets of $X$, we let $$N(A,B)=\{n \in {\mathbb{Z}}: T^n A \cap B \ne\emptyset\}.$$ When $A =\{x\}$ is a singleton we write $N(A,B)=N(x, B)$, thus $$N(x,B)=\{n \in {\mathbb{Z}}: T^n x \in B\}.$$ For a point $x \in X$ we write ${\mathcal{O}_T}(x)=\{T^n x : n \in {\mathbb{Z}}\}$ for the [*orbit of $x$*]{} and ${{\bar{\mathcal{O}}}_T}(x)$ for the closure of ${\mathcal{O}_T}(x)$. We say that the system $(X,T)$ is [*point transitive*]{} if there is a point $x\in X $ with ${\mathcal{O}_T}(x)$ dense. Such a point is called [*transitive*]{}. We say that the system $(X,T)$ is [*topologically transitive*]{} (or just [*transitive*]{}) if the set $N(U,V)$ is nonempty for every pair $U$ and $V$ of nonempty open subsets of $X$. Clearly point transitivity implies topological transitivity and using Baire’s category theorem one can show that (for metric systems) conversely, in a topologically transitive system the set $X_{tr}$ of points whose orbit is dense forms a dense $G_{\delta}$ subset of $X$. A point $x \in X$ is a [*recurrent point*]{} if the set $N(x,U)\setminus\{0\}$ is nonempty for every neighborhood $U$ of $x$. A dynamical system is called [*minimal*]{} if every point is transitive.
A dynamical system $(X,T)$ is [*equicontinuous*]{} if the collection of maps $\{T^n: n \in {\mathbb{Z}}\}$ is equicontinuous. A minimal equicontinuous system is called a [*Kronecker system*]{}. We have the following classical theorem:
\[Kro\]
1. A (metrizable) dynamical system $(X,T)$ is equicontinuous if and only if there is a compatible metric on $X$ with respect to which $T$ is an isometry.
2. A (metrizable) dynamical system is Kronecker if and only if it is isomorphic to a system of the form $(G,R_{a})$, where $G$ is a compact second countable monothetic topological group, $a \in G$ is a topological generator (meaning that the cyclic subgroup $\{a^n: n \in {\mathbb{Z}}\}$ is dense in $G$), and the transformation $R_{a}$ is defined by $R_{a}g= ga$.
There exists a largest monothetic compact topological group $b{\mathbb{Z}}$ called the [*Bohr compactification of the integers*]{}. If we let $\phi: {\mathbb{Z}}\to b{\mathbb{Z}}$ be the canonical map, then $a = \phi(1)$ is a topological generator of the group $b{\mathbb{Z}}$ and one can associate to $b{\mathbb{Z}}$ a dynamical system $(b{\mathbb{Z}},R_a)$ as above. This system is minimal and equicontinuous, but non-metrizable.
For more information on these basic notions of topological dynamics refer e.g. to chapter one of [@Gl].
Some families of subsets of ${\mathbb{Z}}$ and a famous open problem
====================================================================
In order to avoid some tedious repetitions we introduce the notation ${\mathbb{Z}}_* = {\mathbb{Z}}\setminus \{0\}$.
Let $L \subset {\mathbb{Z}}_*$.
1. $L$ is a [*Poincaré set*]{} if whenever $(X,{\mathcal{X}},\mu,T)$ is a probability preserving system and $A \subset X$ is a positive set (i.e. $A \in {\mathcal{X}}$ and $\mu(A)>0$), then $N(A,A) \cap L \not=\emptyset$. Let $\bf{Po}$ denote the collection of Poincaré subsets of ${\mathbb{Z}}_*$.
2. It is a [*Birkhoff set*]{} (or [*a set of topological recurrence*]{}) if whenever $(X,T)$ is a minimal dynamical system and $U \subset X$ a nonempty open set, then $N(U,U) \cap L\not=\emptyset$. Let $\bf{Bir}$ denote the collection of Birkhoff subsets of ${\mathbb{Z}}_*$.
3. It is a [*Bohr set*]{} if whenever $(X,T)$ is a Kronecker dynamical system and $V \subset X$ a nonempty open set, then $N(V,V) \cap L\not=\emptyset$. Let $\bf{Bo}$ denote the collection of Bohr subsets of ${\mathbb{Z}}_*$.
<!-- -->
1. A subset ${{\mathcal{F}}}$ of the power set ${{\mathcal{P}}}$ of ${\mathbb{Z}}_*$ is called a [*family*]{} when it is hereditary upwards. That is, $F_1 \subset F_2$ and $F_1 \in {{\mathcal{F}}}$ imply $F_2 \in {{\mathcal{F}}}$.
2. If ${\mathcal{E}}$ is any nonempty subset of ${\mathcal{P}}$ we let ${\mathcal{F}}({\mathcal{E}})$ be the smallest family containing ${\mathcal{E}}$.
3. If ${{\mathcal{E}}}$ is any nonempty subset of ${\mathcal{P}}$ we let its [*dual*]{} ${\mathcal{E}}^*$ be defined by $${\mathcal{E}}^* = \{F: F \cap E \neq
\emptyset \ {\text{for all}}\ E \in {{\mathcal{E}}}\}.$$ It is easy to check that ${\mathcal{E}}^*$ is a family and that ${\mathcal{F}}({\mathcal{E}})^* = {\mathcal{E}}^*$. Clearly ${{\mathcal{F}}}_1 \subset {{\mathcal{F}}}_2
\Rightarrow {{\mathcal{F}}}^*_2 \subset {{\mathcal{F}}}^*_1$ and, finally, for a family ${\mathcal{F}}$ we have ${{\mathcal{F}}} = {\mathcal{F}}^{**}$.
<!-- -->
1. Let ${\mathcal{E}}_{Po}$ be the collection of all subsets of ${\mathbb{Z}}_*$ of the form $N(A,A)$, whenever $(X,\mu,T)$ is a probability preserving system and $A \subset X$ is a positive set ($\mu(A)>0$). We then have $\bf{Po} = {\mathcal{F}}({\mathcal{E}}_{Po})^* = {\mathcal{E}}_{Po}^* $.
2. Let ${\mathcal{E}}_{Bir}$ be the collection of all subsets of ${\mathbb{Z}}_*$ of the form $N(U,U)$, whenever $(X,T)$ is a minimal system and $U \subset X$ is a nonempty open subset of $X$. We have $\bf{Bir} = {\mathcal{F}}({\mathcal{E}}_{Bir})^*= {\mathcal{E}}_{Bir}^*$.
3. Let ${\mathcal{E}}_{Bo}$ be the collection of all subsets of ${\mathbb{Z}}_*$ of the form $N(V,V)$, whenever $(X,T)$ is a Kronecker system and $V \subset X$ is a nonempty open subset of $X$. We have $\bf{Bo} = {\mathcal{F}}({\mathcal{E}}_{Bo})^*={\mathcal{E}}_{Bo}^*$.
\[lem:compare3\] $${\mathcal{E}}_{Po} \supset {\mathcal{E}}_{Bir} \supset {\mathcal{E}}_{Bo},$$ whence $$\bf{Po} \subset \bf{Bir} \subset \bf{Bo}.$$
If $(X,T)$ is a minimal system then the collection $M_T(X)$ of Borel probability measures on $X$ is never empty and if $U \subset X$ is open and nonempty, then $\mu(U) >0$ for every $\mu \in M_T(X)$. This implies ${\mathcal{E}}_{Po} \supset {\mathcal{E}}_{Bir}$. The inclusion ${\mathcal{E}}_{Bir} \supset {\mathcal{E}}_{Bo}$ follows trivially from the definitions. Finally the last two inclusions follow by duality.
A beautiful result of Kriz [@K] (see also [@McC]) shows that ${\mathcal{F}}({\mathcal{E}}_{Po}) \supsetneq {\mathcal{F}}({\mathcal{E}}_{Bir})$.
\[prob:A\] Is it also true that ${\mathcal{F}}({\mathcal{E}}_{Bir}) \supsetneq {\mathcal{F}}({\mathcal{E}}_{Bo})$?
\[remark\] Since $\bf{Po} = {\mathcal{F}}({\mathcal{E}}_{Po})^*$ and $\bf{Bir} = {\mathcal{F}}({\mathcal{E}}_{Bir})^*$, Kriz’ result is the same as the statement $ \bf{Po} \subsetneq \bf{Bir}$, and Problem ($A$) is equivalent to the question whether $ \bf{Bir} \subsetneq \bf{Bo}$.
Recall that a collection ${\mathcal{E}}$ of subsets of ${\mathbb{Z}}$ is [*divisible [@G-80]*]{} (or has the [*Ramsey property*]{} [@Fur]) if whenever $A$ is in ${\mathcal{E}}$ and $A = C \cup D$, then at least one of the sets $C$ and $D$ is in ${\mathcal{E}}$.
1. The collection ${\mathcal{E}}_{Bir}$ forms a filter base; hence ${\mathcal{F}}({\mathcal{E}}_{Bir})$ is a filter.
2. The family $\bf{Bir}$ is divisible.
1. Let $(X,T)$ and $(Y,T)$ be minimal dynamical systems, and $U \subset X$, $V\subset Y$ nonempty open sets. Let $M_0\subset X \times Y$ be a minimal subset of the product system $(X \times Y, T \times T)$. Since clearly the ${\mathbb{Z}}^2$-action defined on $X \times Y$ by the group $\{T^i \times T^j: (i,j) \in {\mathbb{Z}}^2\}$ is minimal, there is a pair $(i,j)$ such that $(T^i \times T^j )M_0 \cap U \times V\ne \emptyset$. Set $M = (T^i \times T^j )M_0$ and $W = (U\times V) \cap M$. Then the system $(M, T \times T)$ is minimal, the set $W$ is a nonempty open subset of $M$, and clearly $$N(U,U) \cap N(V,V) \supset N(W,W).$$ Thus ${\mathcal{E}}_{Bir}$ is indeed a filter base. It follows that ${\mathcal{F}}({\mathcal{E}}_{Bir})$, which is defined as the smallest family containing ${\mathcal{E}}_{Bir}$, is a filter. We leave it as an exercise to show that the dual family of a filter has the Ramsey property (and vice versa). Therefore ${\bf{Bir}} = {\mathcal{F}}({\mathcal{E}}_{Bir})^*$ has the Ramsey property.
Show that the families ${\bf {Po}}$ and ${\bf {Bo}}$ are also divisible.
Examples {#sec:exs}
========
- For every infinite $L \subset {\mathbb{Z}}$ the difference set $ \{n - m : m, n \in L,\ n > m\}$ is a Poincaré set. In fact this statement is just Poincaré’s recurrence theorem.
- Let $p(t)$ be a polynomial with real coefficients taking integer values on the integers and such that $p(0)=0$. Then the sequence $\{p(n)\}_{n\ge 1}$ is Poincaré (see [@Fur theorem 3.16]). In particular the sequence $\{n^2\}_{n \ge 1}$ is Poincaré. It is easy to see that the sequence $\{n^2+1\}_{n \ge 1}$ is not Poincaré: $n^2+1$ is never divisible by $4$, so this sequence misses $N(A,A)=4{\mathbb{Z}}$ for $A=\{0\}$ in the rotation on ${\mathbb{Z}}/4{\mathbb{Z}}$ (a numerical check appears in the sketch following these examples).
- Every thick set $L \subset {\mathbb{Z}}$ is Poincaré [@Fur page 74], (see definition \[syn\].2 below). For the reader’s convenience let us reproduce one of the proofs given in [@Fur]. Let $(X,{\mathcal{X}},\mu,T)$ be a measure preserving system and $A \in {\mathcal{X}}$ with $0 < \mu(A)$. If $A$ is not invariant (i.e. $\mu(TA {\bigtriangleup}A)>0$) then there exists an $N \ge 1$ such that $\mu(\bigcup_{j=0}^N T^j A) > \mu(\bigcup_{j=0}^\infty T^j A) -
\mu(A)$. Then, for any $M \ge 1$ $$\mu(\bigcup_{j=M}^{M+N} T^j A) > \mu(\bigcup_{j=0}^\infty T^j A) - \mu(A).$$ This implies that $\mu(\bigcup_{j=M}^{M+N} T^j A \cap A) > 0$, for otherwise $$\mu(\bigcup_{j=0}^\infty T^j A) \ge
\mu(A) + \mu(\bigcup_{j=M}^{M+N} T^j A) > \mu(\bigcup_{j=0}^\infty T^j A) .$$ Thus each sufficiently long interval of integers includes an $n$ with $\mu(T^n A \cap A)>0$, and this proves that a thick set is Poincaré.
- Recall that a sequence of integers $\{n_k\}_{k=1}^\infty$ is [*lacunary*]{} if $\liminf_{k\to\infty} n_{k+1}/n_k>1$. A theorem of Y. Katznelson (see [@W theorem 5.3]) asserts that a lacunary sequence is never Bohr (the simplest instance, $n_k=2^k$, is illustrated in the sketch following these examples). In fact, there is a stronger result (see [@Pol], [@DeM]) according to which for any lacunary sequence of integers [$\{n_k\}_{k=1}^\infty$]{}, there always exists an irrational $\alpha\in{\mathbb{R}}$ such that $\inf_k \|n_k\alpha\|>0$. (We write $\|x\|=\min_{i\in{\mathbb{Z}}} |x-i|$ for the distance of a real $x$ from ${\mathbb{Z}}$, the set of integers.) This answered a question of Erdös’s in [@E].
- The results in the above example do not extend to slower growing sequences (see [@AHK], [@Bos]). An increasing sequence of integers $\{n_k\}_{k=1}^\infty$ is called [*sublacunary*]{} if . There are various results ([@AHK], [@Bos], [@Bou]) which indicate that for a “generic” sublacunary sequence $\{n_k\}_{k=1}^\infty$ the limit $\lim_{N\to\infty} \tfrac1N\sum_{k=1}^N \exp(2\pi i\,n_k\alpha)$ exists and vanishes for all real $\alpha\notin{\mathbb{Z}}$. Such sequences are known to be Poincaré and, in particular, Bohr. (In the above three quoted papers the term “generic” has various probabilistic meanings).
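
The two elementary observations referred to above (for $\{n^2+1\}$ and for $\{2^k\}$) can also be checked numerically; a minimal Python sketch, with the ranges and the choice $\alpha=1/3$ made only for convenience:

```python
# Numerical illustration of two of the examples above.
from fractions import Fraction

# (i) n^2 + 1 is never divisible by 4.  Hence {n^2+1} misses N(A,A) = 4Z for the
#     rotation on Z/4Z with A = {0}, so it is not a Poincare set.
assert all((n * n + 1) % 4 != 0 for n in range(1, 100000))

# (ii) For the lacunary sequence {2^k} and alpha = 1/3 we have ||2^k * alpha|| = 1/3
#      for every k (2^k is never divisible by 3).  Hence {2^k} misses
#      N({0},{0}) = 3Z for the rotation on Z/3Z, so it is not a Bohr set.
alpha = Fraction(1, 3)
for k in range(1, 200):
    frac = (2**k * alpha) % 1
    assert min(frac, 1 - frac) == Fraction(1, 3)

print("both checks passed")
```
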
A second formulation of the problem
===================================
\[syn\]
1. A subset $S \subset {\mathbb{Z}}$ is called [*syndetic*]{} if there is a positive integer $N$ such that $S + \{0,1,2,\dots,N\} = {\mathbb{Z}}$.
2. A subset $R \subset {\mathbb{Z}}$ is called [*thick*]{} (or [*replete*]{}) if for every positive integer $N$ there is an $n \in {\mathbb{Z}}$ such that $\{n,n+1,n+2,\dots,n+N\} \subset R$.
3. A point $x \in X$, where $(X,T)$ is a dynamical system, is [*uniformly recurrent*]{} if $N(x,U)$ is syndetic for every neighborhood $U$ of $x$. (In [@GH] a uniformly recurrent point is called an [*almost periodic point*]{}.)
Let ${\mathcal{S}}$ and ${\mathcal{T}}$ denote the families of syndetic and thick sets respectively. Show that ${\mathcal{S}}$ and ${\mathcal{T}}$ are dual families.
We have the following important lemma (see [@GH]).
\[ur\] Let $(X,T)$ be a dynamical system and $x_0 \in X$. Then ${{\bar{\mathcal{O}}}_T}(x_0)$ is a minimal subset of $X$ if and only if $x_0$ is uniformly recurrent.
Suppose first that $x_0 \in X$ has a minimal orbit closure $Y={{\bar{\mathcal{O}}}_T}(x_0)$. Let $U$ be a neighborhood of $x_0$ in $X$. By minimality there is an $N \ge 1$ such that $Y \subset \bigcup_{j=0}^N T^jU$. Now given $n \in {\mathbb{Z}}$ there is some $0\le j \le N$ with $T^n x_0 \in T^jU$. Thus $T^{n -j}x_0 \in U$, hence $n-j \in N(x_0,U)$, hence $n = m +j$ for some $m \in N(x_0,U)$.
Conversely, suppose $x_0$ is uniformly recurrent. Set $Y={{\bar{\mathcal{O}}}_T}(x_0)$ and let $M \subset Y$ be a minimal subset of $Y$. Suppose $M \not = Y$, then $x_0 \not\in M$. Let $U$ and $V$ be open subsets of $X$ such that $x_0 \in U$, $V \supset M$ and $U \cap V=\emptyset$. Pick some $y_0 \in M$. Then the whole orbit of $y_0$ is contained in $V$ and for every $N \ge 1$ we can find $n_N$ with $T^{n_N}x_0$ sufficiently close to $y_0$ to ensure that $T^{n_N}x_0, T^{n_N+1}x_0, \dots, T^{n_N+N}x_0$ are all in $V$. This argument shows that the set $N(x_0,U)$ is not syndetic, contradicting our assumption that $x_0$ is uniformly recurrent.
\[NUU\] Let $(X,T)$ be a dynamical system, $U \subset X$ a nonempty open subset and $x \in X$. Then $N(U, U) \supset N(x,U) - N(x,U)$. If moreover, $(X,T)$ is minimal then $N(U, U)= N(x,U) - N(x,U)$
If $T^mx \in U$ and $T^nx \in U$ then $T^{n-m}T^mx \in U$, so that $N(U, U) \supset N(x,U) - N(x,U)$. Conversely, if $n \in N(U,U)$, there is some $y \in U$ with $T^n y \in U$. By minimality there is some $m \in {\mathbb{Z}}$ such that $T^m x$ is sufficiently close to $y$ to ensure that both $T^m x \in U$ and $T^nT^m x \in U$. Then, $n = (n+m) - m$ and both $n+m$ and $m$ are in $N(x,U)$.
\[symbolic\] If $S \subset {\mathbb{Z}}$ is syndetic then there is a minimal system $(Y,T)$ and an open nonempty $U\subset Y$ such that $S - S \supset N(U,U)$.
Let ${\Omega}=\{0,1\}^{\mathbb{Z}}$ and ${\sigma}: {\Omega}\to {\Omega}$ the shift transformation: $ ({\sigma}{\omega})_n= {\omega}_{n+1}$. Set $Y' = {\bar{\mathcal{O}}}_{{\sigma}}({\mathbf{1}}_S)$ and $U'=\{{\omega}\in {\Omega}:
{\omega}_0 = 1\}$. It is not hard to check that $Y'$ contains a minimal subset $Y \subset Y'$ such that $U=Y \cap U'$ is not empty. If $n \in N(U,U)
=\{n: {\sigma}^n U \cap U \ne\emptyset\}$ then there is a point $y_0 \in U$ with ${\sigma}^n y_0 \in U$. There exists an $m \in {\mathbb{Z}}$ such that ${\sigma}^m {\mathbf{1}}_S$ is sufficiently close to $y_0$ to ensure that both ${\sigma}^m {\mathbf{1}}_S \in U'$ and ${\sigma}^n{\sigma}^m {\mathbf{1}}_S\in U'$. Thus both $m$ and $n+m$ are in $S$ and $n = n+m - m$ is in $S - S$.
The next lemma follows easily from the characterization of Kronecker systems given in theorem \[Kro\]; we leave the details to the reader.
\[Bo\] Let $(X,T)$ be a Kronecker system and $V \subset X$ a nonempty open subset. Then for every point $x_0\in V$ there exists an open neighborhood $x_0 \in V_0 \subset V$ such that $$N(V,V) \supset N(x_0, V) \supset N(V_0,V_0).$$ Thus, denoting by ${\mathcal{E}}'_{Bo}$ the collection of subsets of the form $N(x,V)$, where $(X,T)$ is Kronecker, $x \in X$ and $V$ is an open neighborhood of $x$, we have ${\mathcal{F}}({\mathcal{E}}_{Bo})={\mathcal{F}}({\mathcal{E}}'_{Bo})$, hence $\bf{Bo} = {\mathcal{F}}({\mathcal{E}}_{Bo})^* = {\mathcal{E}}^*_{Bo}= {{\mathcal{E}}'}_{Bo}^*$.
Let ${\alpha}= ({\alpha}_1,{\alpha}_2,\dots,{\alpha}_k)$ be a finite sequence of real numbers and let ${\epsilon}>0$ be given. Set $$B({\alpha}_1,{\alpha}_2,\dots,{\alpha}_k; {\epsilon})=\{n \in {\mathbb{Z}}: \|n{\alpha}\| < {\epsilon}\}.$$ Here ${\alpha}$ is considered as an element of the $k$-torus, ${\mathbb{T}}^k = ({\mathbb{R}}/{\mathbb{Z}})^k$, and for $x \in {\mathbb{R}}^k$, $\|x\|$ denotes the Euclidean distance of $x$ from ${\mathbb{Z}}^k$. We say that a subset $B$ of ${\mathbb{Z}}$ is a [*Bohr neighborhood of zero*]{} if it contains some $B({\alpha}_1,{\alpha}_2,\dots,{\alpha}_k; {\epsilon})$.
Since by Kronecker’s theorem the equicontinuous dynamical system $({\mathbb{T}}^k, T)$, where $Tx= x +{\alpha}\pmod 1$ with $\{1,{\alpha}_1,{\alpha}_2,\dots,{\alpha}_k\}$ independent over the rational numbers, is a minimal system, it follows that every Bohr neighborhood of zero is in ${\mathcal{F}}({\mathcal{E}}_{Bo})$. (Take $V=B_{\epsilon}(0) \subset {\mathbb{T}}^k$, so that $B({\alpha}_1,{\alpha}_2,\dots,{\alpha}_k; {\epsilon})= N(0,V)$.) With a little more effort one can prove the following characterizations of Bohr neighborhoods of zero.
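For concreteness, here is a small numerical illustration (with the arbitrary choices $\alpha=\sqrt 2$, ${\epsilon}=0.1$ and a finite window) of the familiar fact that a Bohr neighborhood of zero is syndetic: the gaps between its consecutive elements stay bounded.

```python
import math

def bohr(alpha, eps, N):
    """Elements of B(alpha; eps) = {n : ||n*alpha|| < eps} with |n| <= N."""
    return [n for n in range(-N, N + 1)
            if abs(n * alpha - round(n * alpha)) < eps]

B = bohr(math.sqrt(2), 0.1, 200)
gaps = [b - a for a, b in zip(B, B[1:])]
print(B[:10])
print("largest gap in the window:", max(gaps))  # stays bounded: B is syndetic
```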
\[Bo=BN\] The following conditions on a subset $B \subset {\mathbb{Z}}$ are equivalent:
1. $B$ is a Bohr neighborhood of zero.
2. $B$ is in ${\mathcal{F}}({\mathcal{E}}_{Bo})$; i.e. $B$ contains a subset of the form $N(V,V)$ where $(X,T)$ is a Kronecker system and $V$ a nonempty open subset of $X$.
3. ${{\rm{cls\,}}}{\phi(B)}$ is a neighborhood of the zero element in the compact monothetic group $b{\mathbb{Z}}$. Here $b{\mathbb{Z}}$ is the Bohr compactification of the integers and $\phi: {\mathbb{Z}}\to b{\mathbb{Z}}$ is the natural embedding.
\[prob:A1\] Given a syndetic subset $S \subset {\mathbb{Z}}$, is $S - S$ a Bohr neighborhood of zero? That is, is there a set $B = B({\alpha}_1,{\alpha}_2,\dots,{\alpha}_k; {\epsilon})$ with $S - S \supset B$?
[**Claim.**]{} Problem ($A'$) is a reformulation of Problem ($A$).
To see this assume first that the answer to Problem ($A'$) is in the affirmative. Let $(X,T)$ be a minimal system and $U \subset X$ a nonempty open subset. Then, by Lemma \[NUU\], $N(U,U) = S - S$, where $S = N(x_0,U)$ for some (any) $x_0 \in X$. By Lemma \[ur\], $S$ is syndetic and by our assumption $S - S$ and therefore also $N(U,U)$ contain a Bohr neighborhood of zero. By Proposition \[Bo=BN\] we conclude that every $N(U,U)$, i.e. every member of ${\mathcal{E}}_{Bir}$, contains a member of ${\mathcal{E}}_{Bo}$, whence ${\mathcal{F}}({\mathcal{E}}_{Bir}) = {\mathcal{F}}({\mathcal{E}}_{Bo})$, and $\bf{Bir} = \bf{Bo}$ (see Remark \[remark\] above).
Conversely, assume now that $\bf{Bir} = \bf{Bo}$, and let $S \subset {\mathbb{Z}}$ be a syndetic subset. By Lemma \[symbolic\], $S - S$ contains a set of the form $N(U,U)$ for some minimal system $(X,T)$ and an open nonempty $U \subset X$. If the set $S - S$ is not a Bohr neighborhood of zero then, for every Kronecker system $(Y,T)$ and nonempty open $V \subset Y$, $N(V,V) \cap N(U,U)^c \ne\emptyset$, and therefore $N(U,U)^c$ is in $\bf{Bo}$. This contradicts our assumption since $N(U,U) \cap N(U,U)^c = \emptyset$ implies that $N(U,U)^c$ is not in $\bf{Bir}$.
We do have the following facts:
Let $S \subset {\mathbb{Z}}$ be a syndetic subset.
1. (Veech [@Veech]) There exists a Bohr neighborhood of zero $B$ such that $(S - S) {\bigtriangleup}B$ is a subset of upper Banach density zero.
2. (Ellis and Keynes [@EK]) There exists a Bohr neighborhood of zero $B$ with $S - S + S - s\supset B$ for some $s \in S$.
Recall that a topological group $G$ is called [*minimally almost periodic*]{} (MAP) if it admits no nontrivial continuous homomorphism into a compact group. Or, equivalently, if it admits no nontrivial minimal equicontinuous action on a compact space. There are many examples of MAP monothetic Polish groups (see e.g. [@AHK]). A topological group $G$ has the [*fixed point on compacta*]{} property (FPC) if every compact $G$ dynamical system has a fixed point; see [@GrM] and [@G]. Some authors call this property [*extreme amenability*]{}. Recently the theory of Polish groups with the fixed point on compacta property received a lot of attention and new and exciting connections with other branches of mathematics (like Ramsey theory, Gromov’s theory of mm-spaces, and concentration of measure phenomena) were discovered; see V. Pestov’s book [@P]. In [@G] it is shown that the Polish group $G$ of all measurable functions $f$ from a nonatomic Lebesgue measure space $({\Omega},{\mathcal{B}},m)$ into the circle $\{z \in {\mathbb{C}}: |z|=1\}$, with pointwise product and the topology of convergence in measure, is monothetic and has the FPC property. Of course every topological group with the FPC property is also MAP. The following problem is posed in [@G].
\[prob:A2\] Is there a Polish monothetic group which is MAP but does not have the fixed point on compacta property?
It is shown there that a positive answer to problem $(A'')$ would provide a negative answer to problem $(A')$.
More on topological recurrence {#sec:tr}
==============================
Let $(X,T)$ be a dynamical system, where $X$ is a compact metric space and $T : X \to X$ is a homeomorphism of $X$ onto itself. We fix a compatible metric $d$ on $X$. Recall the following familiar definition. A point $x \in X$ is [*recurrent*]{} if for every ${\epsilon}>0$ there is a $n\in {\mathbb{Z}}\setminus \{0\}$ with $d(T^nx,x) <{\epsilon}$. Equivalently, setting $$\phi(x)=\inf \{d(T^nx,x): n\in {\mathbb{Z}}\setminus \{0\}\},$$ we see that $x$ is recurrent iff $\phi(x)=0$. More generally, given an [*infinite*]{} subset $L \subset {\mathbb{Z}}\setminus\{0\}$, set $$\phi_L(x)=\inf \{d(T^nx,x): n\in L\},$$ and call a point $x \in X$, [*$L$-recurrent*]{} when $\phi_L(x)=0$. Let us remark that the role of the metric $d$ in these definitions is not essential. It is not hard to show that although the functions $\phi_L$ usually depend on the choice of a compatible metric $d$, the sets of $L$-recurrent points do not. We say that a subset $A \subset X$ is [*wandering*]{} if there is an infinite set $J \subset {\mathbb{Z}}$ such that the sets $T^j A; j \in J$, are pairwise disjoint. We say that the system $(X,T)$ is [*non-wandering*]{} if $X$ contains no nonempty wandering open subsets. Following Furstenberg, [@Fur Theorem 1.27] , we have:
\[five\]
1. The function $\phi_L$ is upper-semi-continuous.
2. The set of $L$-recurrent points is a $G_{\delta}$ subset of $X$.
3. If $(X,T)$ is non-wandering then the set of recurrent points is a dense $G_{\delta}$ subset of $X$.
4. If there is a $T$-invariant probability measure $\mu$ on $X$ with full support (i.e. $\mu(U)>0$ for every nonempty open $U$) and $L$ is a Poincaré set then the set of $L$-recurrent points is a dense $G_{\delta}$ subset of $X$.
5. If $(X,T)$ is minimal and $L$ is a Birkhoff set then the set of $L$-recurrent points is a dense $G_{\delta}$ subset of $X$.
We leave the proofs of the claims (1) and (2) as an exercise. For (3) see Furstenberg, [@Fur Theorem 1.27] (or adapt the following proof). For the proof of claim (4) we first recall that an upper-semicontinuous function on $X$ has a dense $G_{\delta}$ set of continuity points. Let $X_L \subset X$ be the dense $G_{\delta}$ set of continuity points of $\phi_L$. Suppose $\phi_L(x_0)=a >0$ for some $x_0\in X_L$. Then, by continuity, there is a $0< \delta < a/4$ such that $\phi_L(x) > a/2$ for every $x$ in an open ball $U$ of radius ${\delta}$ around $x_0$. Since $\mu(U)>0$ and $L$ is Poincaré we have $L \cap N(U,U) \ne \emptyset$. For $n$ in this intersection there are $u_1,u_2 \in U$ with $T^{n} u_1 = u_2$, hence $d(u_1,T^{n} u_1)< a/2$. In particular $\phi_L(u_1) < a/2$. This contradicts our choice of $U$ and we conclude that $\phi_L(x)=0$ for every $x \in X_L$. This completes the proof of claim (4). A similar argument proves claim (5).
In the next two theorems we establish several characterizations of Birkhoff sets. We will use the following lemma which is valid for every minimal system.
\[M\] Let $(X,T)$ be a minimal system and let $\eta>0$ be given. Then there exists a positive integer $M \ge 1$ such that for every $x \in X$ the set $\{T^j x \}_{j=0}^M$ is $\eta$-dense in $X$; i.e. for every $x'\in X$ there is some $0 \le j \le M$ with $d(x',T^jx)< \eta$.
Assuming the contrary we would have for each $n$, points $x_n, y_n \in X$ such that $B_\eta(y_n)\cap
\{T^jx_n\}_{j=0}^n =\emptyset$. By compactness there are convergent subsequences, say $x_{n_j} \to x$ and $y_{n_j} \to y$. By minimality there is a positive $m \ge 1$ such that $d(T^m x, y) < \eta/3$. We now choose $j$ so large that: (i) $n_j > m$, (ii) $d(y,y_{n_j}) < \eta/3$, and (iii) $x_{n_j}$ is sufficiently close to $x$ to ensure that $d(T^m x_{n_j}, T^m x) < \eta/3$. With this choice of $j$ we now have: $$d(T^m x_{n_j}, y_{n_j}) \le d(T^m x_{n_j}, T^m x) + d(T^m x, y)
+ d(y, y_{n_j}) < \eta/3+\eta/3+\eta/3=\eta.$$ Since $n_j > m$, this contradicts the choice of $x_{n_j}$ and $y_{n_j}$.
\[rec-thm\] The following conditions on a subset $L \subset {\mathbb{Z}}_*$ are equivalent.
1. $L$ is Birkhoff.
2. $L \cap (S - S) \ne \emptyset$ for every syndetic subset $S \subset {\mathbb{Z}}$.
3. For every minimal dynamical system $(X,T)$, the set of $L$-recurrent points is dense and $G_{\delta}$.
4. For every dynamical system $(X,T)$ and ${\epsilon}>0$ there are $x \in X$ and $m \in L$ with $d(T^mx,x) < {\epsilon}$.
From Lemmas \[NUU\] and \[symbolic\] we easily deduce the equivalence of properties (1) and (2). The implication (1) ${\Rightarrow}$ (3) is proven in Theorem \[five\].5. Next assume (3). Given a minimal system $(X,T)$ and a nonempty open subset $U \subset X$ we clearly have $L \cap N(U,U)\ne\emptyset$, whence $L$ is Birkhoff. Thus we have (3) ${\Rightarrow}$ (1).
As every dynamical system has a minimal subsystem we clearly have (3) ${\Rightarrow}$ (4). Finally we show that (4) implies (3). Let $(X,T)$ be a minimal system. For ${\epsilon}>0$ set $$V_L({\epsilon})=\{x \in X: \exists\ m \in L \ {\text{with}}\ d(T^mx,x)< {\epsilon}\}.$$ Clearly $V_L({\epsilon})$ is open and assuming (4) we know that it is nonempty. Given $\eta >0$ there is, by Lemma \[M\], $M \ge 1$ such that for every $x \in X$ the set $\{T^i x\}_{i=0}^M$ is $\eta$-dense. Let ${\delta}>0$ be such that $d(x,x') < {\delta}$ implies $d(T^ix,T^ix')< {\epsilon}$ for every $0 \le i \le M$. It now follows that for every $x \in V_L({\delta})$, we have $\{T^i x\}_{i=0}^M\subset V_L({\epsilon})$, and consequently, that $V_L({\epsilon})$ is $\eta$-dense. Since $\eta$ is arbitrary, we conclude that $V_L({\epsilon})$ is dense. By Baire’s theorem we conclude that $X_0 = \bigcap_{{\epsilon}>0} V_L({\epsilon})$ is a dense $G_{\delta}$ subset of $X$. Clearly every $x \in X_0$ is $L$-recurrent.
In order to achieve additional characterizations for Birkhoff sets we introduce the following definition. For $r\in{\mathbb{N}}$ we denote ${\mathbb{N}}_r=\{1,2,\ldots,r\}$.
\[def:color\] Let $r\in {\mathbb{N}}$. A subset $L\subset{\mathbb{Z}}_*$ is said to be [*$r$-Birkhoff*]{} (notation: $L\in{\bf Bir}_r$) if the following two equivalent conditions hold:
1. For every sequence $\{z_i\}_{i\in{\mathbb{Z}}}$ over ${\mathbb{N}}_r$, there are $m\in L$ and $i\in{\mathbb{Z}}$ such that $z_i=z_{i+m}$.
2. For every coloring $c\colon {\mathbb{Z}}\to{\mathbb{N}}_r$ there are $i,j\in {\mathbb{Z}}$, with $c(i)=c(j)$ and $i-j\in L$.
In the above definition one can replace ${\mathbb{Z}}$ by ${\mathbb{N}}$.
\[rec-thm2\] The following conditions on a subset $L \subset {\mathbb{Z}}_*$ are equivalent:
1. $L$ is Birkhoff.
2. For any compact metric space $Z$, every sequence $\{z_i\}_{i \in {\mathbb{Z}}}$, with $z_i \in Z$, and every ${\epsilon}>0$, there are $m \in L$ and $i \in {\mathbb{Z}}$ such that $d(z_i, z_{i +m}) < {\epsilon}$.
3. $L$ is $r$-Birkhoff for all $r\in{\mathbb{N}}$.
Thus, by the above theorem, [**Bir**]{} $=\bigcap_{r\in{\mathbb{N}}}\, $[**Bir**]{}$_r$.
\(1) ${\Rightarrow}$ (2): Suppose $L$ is Birkhoff and let $Y = {{\bar{\mathcal{O}}}_T}(\zeta)$, where ${\Omega}=Z^{\mathbb{Z}}$, $T:{\Omega}\to {\Omega}$ is the shift and the element $\zeta\in {\Omega}$ is defined by $\zeta(i)=z_i$. Let $M \subset Y$ be a minimal subset. Applying Theorem \[rec-thm\] we see that there is a point $x \in M \subset Y$ which is $L$-recurrent. Fix a compatible metric $d$ on ${\Omega}$ and let $0 < {\delta}$ be such that $d({\omega},{\omega}')< {\delta}$ implies $d({\omega}(0),{\omega}'(0))<{\epsilon}$. Let $m \in L$ be such that $d(T^m x,x) < {\delta}$. Let $i \in {\mathbb{Z}}$ be chosen so that $T^i\zeta$ is sufficiently close to $x$ to ensure that also $d(T^m T^i \zeta, T^i\zeta)< {\delta}$. By our choice of ${\delta}$ we have $d(z_{m+i},z_i)=d(\zeta(m + i),\zeta(i))< {\epsilon}$.
\(2) ${\Rightarrow}$ (3): Take $Z ={\mathbb{N}}_r=\{1,2,\dots,r\}$. Let $d(i,j)={\delta}_{ij}$ for $i,j\in {\mathbb{N}}_r$ and take ${\epsilon}=1/2$. Then $d(z_i, z_{i +m}) < {\epsilon}$ implies $z_i = z_{i +m}$.

\(3) ${\Rightarrow}$ (1): We will show that condition (4) in Theorem \[rec-thm\] is satisfied. So let $(X,T)$, a minimal system, and ${\epsilon}>0$ be given (this suffices, since every dynamical system contains a minimal subsystem). Let $\{U_i\}_{i=1}^r$ be an open cover of $X$ by balls of radius ${\epsilon}/2$. Fix $x_0 \in X$ and choose a sequence $z_i \in {\mathbb{N}}_r$ such that $T^i x_0 \in U_{z_i}$ for every $i \in {\mathbb{Z}}$. By (3) we have $m \in L$ and $i\in {\mathbb{Z}}$ such that $z_{m+i}=z_i=j$, whence $T^i x_0$ and $T^{m+i} x_0$ are both in $U_j$. Thus $d(T^m T^i x_0, T^i x_0) < {\epsilon}$, and taking $x = T^i x_0$ we have the required $x$.
The last condition in Theorem \[rec-thm2\] can be formulated as a coloring property: For every $r$ and every coloring $c: {\mathbb{Z}}\to \{1,2,\dots,r\}$ there are $i, j \in {\mathbb{Z}}$, with $c(i) = c(j)$ and $i -j \in L$. See [@W] for a graph theoretical interpretation of this coloring property.
We will now consider some basic properties of $r$-Birkhoff sets. The first statement we leave as an easy exercise.
For $r\geq1$, every $r$-Birkhoff set contains a finite $r$-Birkhoff subset.
For any $r\in {\mathbb{N}}$, each of the sets $k{\mathbb{N}}_r=\{k,2k,\ldots,rk\}$, $k\in{\mathbb{N}}$, is $r$-Birkhoff. Indeed, let $(z_i)$ be an arbitrary sequence in ${\mathbb{N}}_r$. Since card$(k{\mathbb{N}}_{r+1})=r+1>r
=$ card$({\mathbb{N}}_r)$, there are $i, j\in k{\mathbb{N}}_{r+1}$, $i\neq j$, such that $z_i=z_j$. Assuming, with no loss of generality, that $m=j-i>0$, we get $z_i=z_{i+m}$, with some $m\in k{\mathbb{N}}_r$, completing the proof (see Definition \[def:color\], first condition).
On the other hand, for finite subsets $M\subset {\mathbb{Z}}_*$ the following implication holds: $$\text{card}(M)=r\geq1\quad \implies \quad M\notin \text{\bf Bir}_{r+1}.$$ With no loss of generality we may assume that $M\subset {\mathbb{N}}$ (by replacing $M$ by the set $(M\cup(-M))\cap{\mathbb{N}}$). Construct a sequence $\{z_i\}_{i\in{\mathbb{Z}}}$ over the set ${\mathbb{N}}_{r+1}=\{1,2,\ldots,r+1\}$ as follows. For $i\leq0$, set $z_i=1$; for $i\geq1$, set inductively: $$z_i=\min X_i, \quad \text{where }\ X_i=\{x\in{\mathbb{N}}_{r+1}\mid x\neq z_{i-m}, \text{ for all } m\in M\}.$$ (Clearly, $X_i\neq\emptyset$ for $i\geq1$, because card$(M)=r<r+1=$card$({\mathbb{N}}_{r+1})$). The above construction implies that $z_i=z_{i+m}$ has no solutions in $i\in{\mathbb{N}}$ and $m\in M$. It follows that $M\notin$ [**Bir**]{}$_{r+1}$ (see Definition \[def:color\] and the subsequent remark).
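The greedy construction above is easy to run; here is a minimal Python sketch of it (the test set $M$ and the range $N$ are arbitrary illustrative choices):

```python
def avoiding_colouring(M, N):
    """Greedy colouring of 1..N with card(M)+1 colours such that
    c(i) != c(i+m) for every m in M, mimicking the construction above
    (indices <= 0 are implicitly given the colour 1)."""
    r = len(M)
    c = {}
    for i in range(1, N + 1):
        forbidden = {c.get(i - m, 1) for m in M}           # colours z_{i-m}
        c[i] = min(x for x in range(1, r + 2) if x not in forbidden)
    return c

M = [3, 5, 11]                                             # card(M) = 3, arbitrary
N = 500
c = avoiding_colouring(M, N)
assert all(c[i] != c[i + m] for m in M for i in range(1, N - m + 1))
print("no i with c(i) = c(i+m) for m in", M, "- so this M is not 4-Birkhoff")
```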
We conclude that the sets $k{\mathbb{N}}_r=\{k,2k,\ldots,rk\}$, $k,r\in{\mathbb{N}}$, provide examples of sets which are $r$-Birkhoff but not $(r+1)$-Birkhoff. More refined examples are provided next.
A subset $M\subset {\mathbb{Z}}_*$ is called [*stably*]{} $r$-Birkhoff (notation: $M\in \text{\bf Bir}'_r$) if for every finite subset $F\subset {\mathbb{Z}}$, the set $M\setminus F$ is $r$-Birkhoff.
Define the sets $$\label{eq:lr}
L_r=\Big\{n(r+2)^k\mid n\in\{1,2,\ldots,r\}, k\geq0\Big\}\subset{\mathbb{N}}, \quad\text{for }\ r\in{\mathbb{N}}.$$
We claim that, for every $r \ge 2$, the set $L_r$
1. is lacunary;
2. is stably $r$-Birkhoff;
3. is not $(r+1)$-Birkhoff.
The fact that $L_r$ is lacunary is clear. In fact, if $\{x_1<x_2<\ldots\}$ is the linear ordering of $L_r$, then $\min_{k\geq1} \frac{x_{k+1}}{x_k}=\frac r{r-1}$, for $r\geq2$.
The set $L_r$ is stably $r$-Birkhoff because $L_r$ can be represented as a disjoint infinite union $L_r=\bigcup_{k\geq0} L_{r,k}$ where each $L_{r,k}=(r+2)^k {\mathbb{N}}_r$ is $r$-Birkhoff (as proved earlier).
Finally, to prove that $L_r$ is not $(r+1)$-Birkhoff, define a sequence $\{z_i\}_{i\in{\mathbb{Z}}}$ over ${\mathbb{N}}_{r+1}$ by the condition $z_i\equiv i$ (mod$(r+1)$). We claim that $z_i=z_{i+m}$ has no solution in $m\in L_r$ and $i\in {\mathbb{Z}}$. Indeed, otherwise $$i\equiv i+m\text{ (mod }(r+1)) \implies m\equiv 0\text{ (mod }(r+1)) \implies
\tfrac m{r+1}\in{\mathbb{Z}},$$ which is impossible: since $r+2\equiv 1$ (mod$(r+1)$), every $m=n(r+2)^k\in L_r$ satisfies $m\equiv n$ (mod$(r+1)$) with $1\leq n\leq r$ (see (\[eq:lr\])). This completes the proof that $L_r\notin$ [**Bir**]{}$_{r+1}$ (see the first condition in Definition \[def:color\]).
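Both claims about $L_r$ are easy to confirm numerically for small $r$; the sketch below (with an arbitrary truncation $k\le k_{\max}$) checks the lacunarity ratio and the residue-class argument used in the proof:

```python
def L(r, k_max=6):
    """Finite truncation of L_r = { n*(r+2)^k : 1 <= n <= r, k >= 0 }."""
    return sorted({n * (r + 2)**k for n in range(1, r + 1)
                   for k in range(k_max + 1)})

for r in (2, 3, 4):
    xs = L(r)
    ratios = [b / a for a, b in zip(xs, xs[1:])]
    assert min(ratios) >= r / (r - 1) - 1e-12   # lacunary, minimal ratio r/(r-1)
    assert all(m % (r + 1) != 0 for m in xs)    # m is never 0 mod (r+1), so the
                                                # colouring z_i = i mod (r+1) works
    print(r, min(ratios), r / (r - 1))
```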
The fact that the sets $L_r\in\text{\bf Bir}'_r$ are lacunary should be compared with the fact that no set in $\text{\bf Bo}$ (which by Lemma \[lem:compare3\] contains $\text{\bf Bir}$) is lacunary (see the examples in Section \[sec:exs\]).
The moving recurrence problem {#sec:mr}
=============================
The following question was recently posed by Boshernitzan, and is still open.
Let $(X,T)$ be a dynamical system, $\mu \in M_T(X)$ a $T$-invariant probability measure on $X$ and $(n_k)$ an infinite sequence of nonzero integers. Define $$\psi_{(n_k)}(x) = \inf_{k\ge 1} d(T^{n_k}x, T^{n_k +k}x).$$ Is it true that $\psi_{(n_k)}(x)=0$, $\mu$-a.e.?
In this section we prove a topological analogue using the tools developed in the previous sections.
For a sequence $(n_k)$ of elements of ${\mathbb{Z}}$, let $$\psi_{(n_k)}(x) = \inf_{k\ge 1} d(T^{n_k+k} x,T^{n_k} x).$$ More generally, given two sequences $(n_k)$ and $(r_k)$ of elements of ${\mathbb{Z}}$, let $$\psi_{(n_k,r_k)}(x) = \inf_{k\ge 1} d(T^{n_k+r_k} x,T^{n_k} x).$$ We say that a point $x\in X$ is [*$(n_k)$-moving recurrent*]{} if $\psi_{(n_k)}(x) =0$. It is [*$(n_k,r_k)$-moving recurrent*]{} when $\psi_{(n_k,r_k)}(x) =0$. Note that $\psi_{(n_k)}=\psi_{(n_k,k)}$.
Again we have:
\[usc\] The function $\psi_{(n_k,r_k)}$ is upper-semi-continuous and the set of $(n_k,r_k)$-moving recurrent points is a $G_{\delta}$ subset of $X$.
\[mr-thm\] Let $(r_k)$ be a Birkhoff set. Then for every sequence $(n_k)$, and every minimal dynamical system $(X,T)$ the set of $(n_k,r_k)$-moving recurrent points is dense and $G_{\delta}$. In particular, taking $r_k =k$ we see that for every minimal dynamical system $(X,T)$ the set of $(n_k)$-moving recurrent points is dense and $G_{\delta}$.
[**Step 1:**]{} Let $X_0\subset X$ denote the dense $G_{\delta}$ set of continuity points of $\psi_{(n_k,r_k)}$ (Lemma \[usc\]). Let $x_0 \in X_0$ and assume that $\psi_{(n_k,r_k)}(x_0)=2{\epsilon}> 0$. Since $x_0$ is a continuity point we can find a ball $U$ around $x_0$ such that $\psi_{(n_k,r_k)}(x)> {\epsilon}$ for every $x \in U$.
[**Step 2:**]{} We will show that the set $$V({\epsilon})=\{x \in X: \psi_{(n_k,r_k)}(x) < {\epsilon}\}$$ is dense.
Fix $\eta > 0$ and use Lemma \[M\] to find $M \ge 1$ such that for every $x \in X$ the set $\{T^j x \}_{j=0}^M$ is $\eta$-dense in $X$. Next choose ${\delta}> 0$ such that $d(x,x') < {\delta}$ implies $d(T^jx, T^jx') < {\epsilon}$ for every $0 \le j \le M$.
Next observe that there exist $x \in X$ and $k \ge 1$ with $d(T^{n_k + r_k}x,T^{n_k}x) < {\delta}$. In fact, since $(r_k)$ is Birkhoff, we can (applying Theorem \[rec-thm\]) pick an $(r_k)$-recurrent point $x' \in X$ and then find $k \ge 1$ with $d(T^{r_k} x', x') < {\delta}$. Set $x = T^{-n_k}x'$, so that $T^{n_k}x = x'$ and $$d(T^{n_k + r_k}x,T^{n_k}x) = d(T^{r_k}x', x') < {\delta}.$$
Now, by the choice of ${\delta}$, we have: $$d(T^{n_k + r_k}T^j x,T^{n_k}T^j x) < {\epsilon},
\ {\text{for all}} \ 0 \le j \le M.$$ Thus $\psi_{(n_k,r_k)}(T^j x) < {\epsilon}$ for all $0 \le j \le M$ and we conclude that $V({\epsilon})$ is $\eta$-dense. As this holds for every $\eta> 0$ we conclude that $V({\epsilon})$ is dense.
[**Step 3:**]{} We now have $U \cap V({\epsilon}) \ne \emptyset$ and we arrive at the contradiction ${\epsilon}< \psi_{(n_k,r_k)}(x) < {\epsilon}$ for any point $x$ in this intersection. Since the assumption $\psi_{(n_k,r_k)}(x_0) > 0$ leads to a contradiction, we conclude that $\psi_{(n_k,r_k)}(x_0) = 0$ for every $x_0 \in X_0$, as required.
Recall the following definition from [@GW].
A dynamical system $(X,T)$ is called an [*$M$-system*]{} if (i) it is topologically transitive and (ii) the union of the minimal subsystems of $X$ is dense in $X$.
The class of $M$-systems is very large, e.g. it contains every topologically transitive system with a dense set of periodic points. (The latter systems are called [*chaotic in the sense of Devaney*]{}, or [*$P$-systems*]{}.)
Let $(X,T)$ be an $M$-system and $(n_k)$ an infinite sequence in ${\mathbb{Z}}$. Then there is a dense $G_{\delta}$ subset $X_0 \subset X$ such that $\psi_{(n_k)}(x) = 0$ for every $x \in X_0$. In particular the set $X_{tr} \cap X_0$ of $(n_k)$-moving recurrent transitive points is a dense $G_{\delta}$ subset of $X$.
The set $X_0 = \{x \in X : \psi_{(n_k)}(x)=0\}$ is a $G_{\delta}$ subset of $X$ (Lemma \[usc\]). By Theorem \[mr-thm\], for every minimal subset $M \subset X$ the set $M_0 = M \cap X_0$ is a dense $G_{\delta}$ subset of $M$. Thus $\bigcup\{M_0: M \ {\text{is a minimal subset of }}\ X\}\subset X_0$ is dense in $\bigcup\{M: M \ {\text{is a minimal subset of }}\ X\}$. In turn, the latter is dense in $X$ and it follows that $X_0$ is dense in $X$. Finally, as the set $X_{tr}$ of transitive points in an $M$-system is always dense and $G_{\delta}$ we conclude that so is $X_{tr} \cap X_0$.
Absolute moving recurrence {#Sec-UR}
==========================
We will say that a system $(X,T)$ is [*absolutely moving recurrent*]{} if for every infinite sequence $(n_k)\subset {\mathbb{Z}}$, $\psi_{(n_k)} \equiv 0$ (i.e. every point of $X$ is $(n_k)$-moving recurrent).
Recall the following definition from [@GM]:
A dynamical system $(X,T)$ is called [*uniformly rigid*]{} if there exists a sequence $m_i \nearrow \infty$ in ${\mathbb{Z}}$ such that $$\lim_{i \to \infty} \sup_{x \in X} d(x, T^{m_i}x) =0.$$
For any dynamical system $(X,T)$ let $${\Lambda}(X,T)={\rm {unif{\text{-}}}} {{\rm{cls\,}}}\{T^n: n\in {\mathbb{Z}}\} \subset
{{\rm{Homeo\,}}}(X)$$ be the uniform closure of the powers of $T$ in the Polish group ${{\rm{Homeo\,}}}(X)$. Of course ${\Lambda}$ is a Polish monothetic group, and the system $(X,T)$ is uniformly rigid iff ${\Lambda}(X,T)$ is not discrete.
A topologically transitive dynamical system $(X,T)$ is absolutely moving recurrent if and only if it is uniformly rigid.
Suppose first that $(X,T)$ is uniformly rigid with $T^{m_i} \overset{\text{unif}}{\longrightarrow} {{\rm{Id}}}$. Let $(n_k)\subset {\mathbb{Z}}$ be an arbitrary infinite sequence. Then for every ${\epsilon}>0$ there exists an $i_0$ such that for $i > i_0$, $d(T^{m_i}x,x) < {\epsilon}$ for every $x \in X$. In particular then, $$d(T^{n_{m_i}+ m_i}x,T^{n_{m_i}}x) =
d(T^{m_i}(T^{n_{m_i}}x),T^{n_{m_i}}x)< {\epsilon}.$$ Thus, $\liminf_{k}d(T^{n_k+ k}x,T^{n_k}x)=0$ for every $x \in X$, and we have shown that $(X,T)$ is absolutely moving recurrent. (Note that topological transitivity is not needed in this direction.)
Conversely, suppose $(X,T)$ is not uniformly rigid, that is: $${\text{there exists}}\ {\epsilon}_0 > 0, \ {\text{such that}}\ \forall k, \exists x_k \in X,
\ {\text{with}} \ d(T^kx_k,x_k) > {\epsilon}_0.$$ Fix $x_0 \in X_{tr}$ and for each $k$ choose $n_k\in{\mathbb{N}}$ such that $T^{n_k}x_0$ is sufficiently close to $x_k$ to ensure that also $d(T^{n_k+k}x_0,T^{n_k}x_0) =
d(T^kT^{n_k}x_0,T^{n_k}x_0)> {\epsilon}_0$. For the sequence $(n_k)$ we have $\psi_{(n_k)}(x_0) \ge {\epsilon}_0$, hence $(X,T)$ is not absolutely moving recurrent.
M. Ajtai, I. Havas and J. Komlos, [*Every group admits a bad topology*]{}, in [*Studies in pure mathematics*]{} (P. Erdos, ed.) Birkhäuser, Basel (1983).
E. Akin, [*Recurrence in topological dynamics, Furstenberg families and Ellis actions*]{}, Plenum Press, New York and London, 1997.
E. Akin and E. Glasner, [*Residual properties and almost equicontinuity*]{}, J. d’Analyse Math., (2001), 243-286.
V. Bergelson, H. Furstenberg and B. Weiss, [*Piecewise-Bohr Sets of Integers and Combinatorial Number Theory*]{}, in Topics in Discrete Mathematics Dedicated to Jarik Nesetril on the Occasion of his 60th Birthday ed. by M. Klazar, J. Kratochvil, M. Loebel, J. Matousek, R. Thomas and P. Valtr, pp.13-37, Springer, Berlin Heidelberg New York, 2006.
M. Boshernitzan, [*Homogeneously distributed sequences and Poincare sequences of integers*]{}, Monatsh. Math. [**96**]{}, (1983), 173-181.
J. Bourgain, [*On the maximal ergodic theorem for certain subsets of the integers*]{}, Israel J. Math. [**61**]{}, No. 1 (1988), 39-72.
R. Ellis and H. Keynes, [*Bohr compactifications and a result of F[ø]{}lner*]{}, Israel J. Math. [**12**]{}, (1972), 314-330.
P. Erdös, [*Repartition modulo 1*]{}, Lecture Notes in Mathematics, Vol. [**[475]{}**]{}, Springer-Verlag, New York, 1975.
N. Frantzikinakis and R. McCutcheon, [*Ergodic Theory: Recurrence*]{}. To appear in the Encyclopedia of Complexity and System Science, arXiv:0705.0033 (math.DS)
H. Furstenberg, [*Recurrence in ergodic theory and combinatorial number theory*]{}, Princeton university press, Princeton, N.J., 1981.
E. Glasner, [*Divisible properties and the Stone-Čech compactification*]{}, Canad. J. Math. [**32**]{}, (1980), 993-1007.
E. Glasner, [*On minimal actions of Polish groups*]{}, Topology and Its Applications, [**85**]{}, (1998), 119-125.
E. Glasner, [*Ergodic theory via joinings*]{}, AMS, Surveys and Monographs, [**[101]{}**]{}, 2003.
E. Glasner, [*Classifying dynamical systems by their recurrence properties*]{}, Topol. Methods Nonlinear Anal., [**[24]{}**]{}, (2004), 21-40.
E. Glasner and D. Maon, [*Rigidity in topological dynamics*]{}, Ergod. Th. Dynam. Sys. [**9**]{}, (1989), 309-320.
E. Glasner and B. Weiss, [*Sensitive dependence on initial conditions*]{}, Nonlinearity [**6**]{}, (1993), 1067-1075.
W. H. Gottschalk and G. A. Hedlund, [*Topological Dynamics*]{}, AMS Colloquium Publications, Vol. 36, 1955.
M. Gromov and V. D. Milman, [*A topological application of the isoperimetric inequality*]{}, Amer. J. Math. [**105**]{}, (1983), 843-854.
W. Huang, and X. Ye, [*An explicit scattering, non-weakly mixing example and weak disjointness*]{}, Nonlinearity [**15**]{}, (2002), 849-862.
I. Kriz, [*Large independent sets in shift-invariant graphs. Solution of Bergelson’s problem*]{}, Graphs and combinatorics [**3**]{}, (1987), 145-158.
B. De Mathan, [*Numbers contravening a condition in density modulo 1*]{}, Acta Math. Acad. Sci. Hungar. [**36**]{}, (1980), 237-241.
R. McCutcheon, [*Three results in recurrence*]{}, Proceedings of the 1993 Alexandria conference, Ergodic theory and its connections with harmonic analysis, Editors: K. E. Petersen and I. A. Salama, LMS Lecture note Series [**205**]{}, Cambridge University Press, Cambridge, 1995, 349-358.
V. Pestov, [*Dynamics of infinite dimensional groups. The Ramsey-Dvoretzky-Milman phenomenon*]{}, Amer. Math. Soc. University Lecture Series [**40**]{}, 2006.
A. D. Pollington, [*On density of sequence $\{n_k\xi\}$*]{}, Illinois J. Math. [**23**]{}, No. 4, (1979).
I. Z. Ruzsa, [*Uniform distribution, positive trigonometric polynomials and difference sets*]{}, Seminar on Number Theory, 1981/1982, Exp. No. 18, 18 pp., Univ. Bordeaux I, Talence, 1982.
B. Weiss, [*Single orbit dynamics*]{}, CBMS, Regional Conference Series in Math. [**95**]{}, Amer. Math. Soc. Providence RI, 2000.
W. A. Veech, [*The equicontinuous structure relation for minimal Abelian transformation groups*]{}, Amer. J. of Math. [**90**]{}, (1968), 723-732.
[^1]: [*2000 Mathematics Subject Classification.*]{} Primary 37B20, 54H20
---
address: |
$^{a}$ Physics Department, Lomonosov Moscow State University, Leninskie gory, Moscow 119991, Russia\
$^{b}$ [Moscow Institute of Physics and Technology, State University, Dolgoprudniy, Moscow region, Russia]{}\
$^{c}$[Faculty of Science and Technology and MESA+, Institute for Nanotechnology, University of Twente, Enschede, The Netherlands]{}\
$^{d}$ Skobeltsyn Institute of Nuclear Physics, Moscow, Lomonosov Moscow State University, Leninskie gory, Moscow 119991, Russia\
$^f$ Institute of Physics, Kazan (Volga region) Federal University, Kremlevskaya ul. 18, Kazan, 420008 Russia
author:
- 'S.V. Bakurskiy$^{\,a, \,b}$, A.A. Golubov$^{\,c, \,b}$, N.V. Klenov$^{\,a}$, M.Yu. Kupriyanov[^1]$^{\,d, \,b, \,f}$, I.I. Soloviev$^{\,d}$'
title: Josephson effect in SIFS tunnel junctions with domain walls in weak link region
---
It is well known that the properties of Josephson structures with a ferromagnetic (F) material in the weak link region depend on the relation between the complex decay length $\xi$ ($\xi^{-1} =\xi _{1}^{-1}+i\xi _{2}^{-1}$) and the geometrical parameters of these junctions [@RevG]-[@RevV]. If the F metal is in the dirty limit and the exchange energy, $H,$ sufficiently exceeds the critical temperature of the superconducting (S) electrodes, $\pi T_{C},$ then from the Usadel equations it follows that $\xi _{1}\approx \xi _{2}.$ However, it was demonstrated experimentally [@Kontos]-[@Blum] that there can be a noticeable difference between $\xi _{1}$ and $\xi _{2}.$ Previously the difference was attributed either to the presence of strong paramagnetic scattering in the F layer [@ryazanov1], or to a violation of the dirty limit conditions in the ferromagnetic material [@Blum], [@Pugach]. However, using the first of these mechanisms to interpret the experimental data requires the existence of unreasonably strong paramagnetic scattering in the weak link material [@ryazanov1]. The relation between the electron mean free path, $\ell ,$ and $\xi _{1},$ $\xi _{2}$ in a typical experimental situation is also closer to the dirty limit conditions, $\ell \lesssim \xi _{1},$ $\xi _{2},$ rather than to the clean one.
In this article we prove that the existence of ferromagnetic domain walls in the F layer can also lead to the appearance of a substantial difference between $\xi _{1}$ and $\xi _{2}$ even in the absence of strong scattering by paramagnetic impurities, and under the fulfilment of the dirty limit conditions in the F material.
![Fig.\[fig:fig1\]. Geometry of the considered SIFS Josephson junction and its enlarged part, which includes two halves of domains and domain wall separating them. The insulating barrier I has a small transparency (shown by a blue line). []{data-label="fig:fig1"}](Fig1.eps){width="50.00000%"}
**Model.** Consider the multilayered SIFS structure presented in Fig.1. It consists of a superconducting electrode (S), an insulator (I) and an FS bilayer as the upper electrode. We assume that the F film has a thickness $d_{F}$ and that it subdivides into a domain structure with antiparallel directions of the magnetization vector in neighboring domains. The width of the domains is $W$ and they are separated by atomically sharp domain walls oriented perpendicular to the SF interfaces. Due to the periodicity of the structure we can, without any loss of generality, perform our analysis within one half of the period, that is, from $-W/2$ to $W/2.$ This element is enlarged in Fig.1. It consists of two halves of domains and the domain wall separating them.
We will suppose that the dirty limit condition is fulfilled for all metals and that the effective electron-phonon coupling constant is zero in the F material. We will further assume that either the temperature $T$ is close to the critical temperature of the superconducting electrodes $T_{C}$ or the suppression parameter $\gamma _{BS}=R_{BS}\mathcal{A}_{BN}/\rho _{F}\xi _{F}$ at the SF interface is large enough to permit the use of the linearized Usadel equations in the F film of the structure. We will characterize the FF interface (domain wall) by the suppression parameter $\gamma=1,$ and the suppression parameter $\gamma _{BF}=R_{BF}\mathcal{A}_{BF}/\rho _{F}\xi _{F},$ which can take any value. Here $R_{BS},R_{BF}$ and $\mathcal{A}_{BN},\mathcal{A}_{BF}$ are the resistances and areas of the SF and FF interfaces, $\xi _{S}$ and $\xi _{F}=(D_{F}/2\pi T_{C})^{1/2}$ are the decay lengths of the S and F materials, $\rho _{S}$ and $\rho _{F}$ are their resistivities, and $D_{F}$ is the diffusion coefficient in the F metal.
Under the above conditions the proximity problem in the SF part of SIFS junction $(0\leq x\leq d_{F})$ reduces to solution of the set of linearized Usadel equations [@RevG]-[@RevV], [@Usadel]$$\begin{aligned}
\left\{ \frac{\partial ^{2}}{\partial x^{2}}+\frac{\partial ^{2}}{\partial
y^{2}}\right\} F_{F}-\widetilde{\Omega }_{+}F_{F} &=&0,~0\leq y\leq \frac{W}{2}, \label{EqFfp1} \\
\left\{ \frac{\partial ^{2}}{\partial x^{2}}+\frac{\partial ^{2}}{\partial
y^{2}}\right\} F_{F}-\widetilde{\Omega }_{-}F_{F} &=&0,~-\frac{W}{2}\leq
y\leq 0, \label{EqFfp2}\end{aligned}$$where $\Omega =\omega /\pi T_{C},\widetilde{\Omega }_{\pm }=|\Omega |\pm ih\operatorname{sgn}(\omega )$, $h=H/\pi T_{C},$ $H,$ is exchange energy of ferromagnetic material, $\omega =\pi T(2n+1)$ are Matsubara frequencies. The spatial coordinates in (\[EqFfp1\]), (\[EqFfp2\]) are normalized on decay length $\xi _{F}$. To write these equations we have chosen the $x$ and $y$ axis in the directions perpendicular and parallel to the SF plane and put the origin in the middle of SF interface to the point, which belongs to the domain wall (see Fig.1).
Equations (\[EqFfp1\]), (\[EqFfp2\]) must be supplemented by the boundary conditions [@KL]. They have the form $$\begin{aligned}
\gamma _{BS}\frac{\partial }{\partial x}F_{F} &=&-G_{0}\frac{\Delta }{\omega
},\ x=0,~-\frac{W}{2}\leq y\leq \frac{W}{2}, \notag \\
\frac{\partial }{\partial x}F_{F} &=&0,\ x=d_{F},~-\frac{W}{2}\leq y\leq
\frac{W}{2}. \label{BCN(0)}\end{aligned}$$At FF interface $(y=0,~0\leq x\leq d_{F})$ and in the middle of the domains $(y=\pm W/2, ~0\leq x\leq d_{F})$ we also have $$\gamma _{BF}\frac{\partial }{\partial y}F_{F}(x,+0)=F_{F}(x,+0)-F_{F}(x,-0),\label{BCF(0)}$$$$\frac{\partial }{\partial y}F_{F}(x,+0)=\frac{\partial }{\partial y}F_{F}(x,-0),$$$$\frac{\partial }{\partial y}F_{F}(x,\frac{W}{2})=\frac{\partial }{\partial y}F_{F}(x,-\frac{W}{2})=0. \label{BCW}$$Here $W$ is the width of the domains, $G_{0}=\omega /\sqrt{\omega
^{2}+\Delta ^{2}},$ $\Delta $ is the modulus of the order parameter of superconducting electrodes. The critical current density, $J_{C},$ of SIFS Josephson junction is determined by s-wave superconducting correlations at IF interface, which is even function of the Matsubara frequencies$$\frac{eJ_{C}R_{N}}{2\pi T_{C}}=\frac{T}{WT_{C}}\sum_{\omega >0}\frac{G_{0}\Delta }{\omega }\Phi(y), \label{currentD}$$where $\Phi(y)= (F_{F,+\omega }(d_{F},y)+F_{F,-\omega
}(d_{F},y))/2,$ while the full critical current, $I_{C},$ is the result of integration of $J_{C}(y)$ over width of the junction.$$\frac{eI_{C}R_{N}}{2\pi T_{C}}=\frac{T}{WT_{C}}\sum_{\omega >0}\frac{G_{0}\Delta }{\omega }\int_{-W/2}^{W/2}\Phi(y) dy. \label{FullC}$$Here, $R_{N},$ is the normal junction resistance.
**Solution of Usadel equations in FS electrode.** Solution of two-dimensional boundary value problem (\[EqFfp1\])-(\[BCW\]) in the F layer $(0\leq x\leq d_{F})$ is convenient to find in the form of the Fourier series expansion$$F_{F}(x,y)=\sum_{n=-\infty }^{\infty }A_{n}(y)\cos \frac{\pi nx}{d_{F}},~0\leq y\leq \frac{W}{2}, \label{Fpl}$$$$F_{F}=\sum_{n=-\infty }^{\infty }B_{n}(y)\cos \frac{\pi nx}{d_{F}},~-\frac{W}{2}\leq y\leq 0, \label{Fmi}$$where $$\begin{aligned}
A_{n}(y) &=&\frac{Z}{q_{+}^{2}}+a_{n}\cosh (q_{+}\left( y-\frac{W}{2}\right)
), \label{A} \\
B_{n}(y) &=&\frac{Z}{q_{-}^{2}}+b_{n}\cosh (q_{-}\left( y+\frac{W}{2}\right)
), \label{B}\end{aligned}$$and coefficients $a_{n}$ and $b_{n}$ $$a_{n}=-\left[ \frac{1}{q_{+}^{2}}-\frac{1}{q_{-}^{2}}\right] \frac{Zq_{-}S_{-}}{\delta}, ~q_{\pm }=\sqrt{\widetilde{\Omega }_{\pm }+\left( \frac{\pi n}{d_{F}}\right) ^{2}}, \label{an}$$$$b_{n}=\left[ \frac{1}{q_{+}^{2}}-\frac{1}{q_{-}^{2}}\right] \frac{Zq_{+}S_{+}}{\delta},~ Z=\frac{\Delta G_{0}}{\gamma _{BS}d_{F}\omega }
\label{bn}$$are determined from boundary conditions (\[BCF(0)\]). Here the coefficients $\delta,$ $C_{\pm }$ and $S_{\pm }$ are defined by expressions $$\delta=q_{-}q_{+}\gamma _{BF}S_{+}S_{-}+q_{-}C_{+}S_{-}+q_{+}S_{+}C_{-},$$$$C_{\pm }=\cosh (\frac{q_{\pm }W}{2}),~S_{\pm }=\sinh (\frac{q_{\pm }W}{2}).\label{koeff}$$Taking into account the symmetry relation $q_{-}(-\omega )=q_{+}(\omega )$ for s-wave superconducting component in the F layer at $x=d_{F}$ it is easy to get $$\Phi(y\geq0)=\frac{Z}{2}\sum_{n=-\infty }^{\infty }(-1)^{n}\left[ \frac{1}{q_{+}^{2}}+\frac{1}{q_{-}^{2}}-
\left[ \frac{1}{q_{+}^{2}}-\frac{1}{q_{-}^{2}}\right]
\frac{\delta_{+}}{\delta}\right], \label{splus}$$$$\Phi(y\leq0)=\frac{Z}{2}\sum_{n=-\infty }^{\infty }(-1)^{n}\left[ \frac{1}{q_{+}^{2}}+\frac{1}{q_{-}^{2}}-\left[ \frac{1}{q_{+}^{2}}-\frac{1}{q_{-}^{2}}\right]
\frac{\delta_{-}}{\delta}\right], \label{sminus}$$$$\delta _{\pm }=q_{-}S_{-}\cosh (q_{+}\frac{2y\mp W}{2})-q_{+}S_{+}\cosh
(q_{-}\frac{2y\mp W}{2}).$$
Finally, for the critical current from (\[FullC\]), (\[splus\]) and (\[sminus\]) we have$$\frac{eI_{C}R_{N}}{2\pi T_{C}}=\frac{T}{2WT_{C}}\sum_{\omega >0}\frac{ZG_{0}\Delta }{\omega }S(\omega ),\label{CurrF}$$$$S(\omega )=\sum_{n=-\infty }^{\infty }(-1)^{n}\left[ \frac{W}{q_{+}^{2}}+\frac{W}{q_{-}^{2}}-\frac{2S_{-}S_{+}\left( q_{-}^{2}-q_{+}^{2}\right) ^{2}}{\delta q_{+}^{3}q_{-}^{3}}\right].$$It is seen that the critical current can be represented as the sum of two terms. The first is the contribution from individual domains separated by a fully opaque FF wall $$\frac{eI_{C1}R_{N}}{2\pi T_{C}}=\frac{T}{T_{C}}\sum_{\omega >0}\frac{G_{0}^{2}\Delta ^{2}}{\gamma _{BS}\omega ^{2}}\Real\frac{1}{\sqrt{\widetilde{\Omega }_{+}}\sinh \left( d_{F}\sqrt{\widetilde{\Omega }_{+}}\right) },
\label{Ic1}$$while the second$$\frac{eI_{C2}R_{N}}{2\pi T_{C}}=\frac{4h^{2}T}{Wd_{F}T_{C}}\sum_{\omega >0}\frac{G_{0}^{2}\Delta ^{2}}{\gamma _{BS}\omega ^{2}}\sum_{n=-\infty
}^{\infty }\frac{(-1)^{n}S_{-}S_{+}}{q_{+}^{3}q_{-}^{3}\delta }
\label{IcD}$$gives the contribution from the domain wall. Here $\Real(a)$ denotes the real part of $a.$ Expression (\[Ic1\]) reproduces the well-known result previously obtained for single-domain SIFS structures [@Baladie]-[@Vasenko] thereby demonstrating the independence of the critical current on the orientation of the domains magnetization vectors, if they are collinear oriented and the FF interface is fully opaque for electrons.
**Limit of large $\gamma _{BF}.$** For large values of suppression parameter $\gamma _{BF}\gg \max \left\{
1,(Wq_{\pm })^{-1}\right\} $ expression (\[IcD\]) transforms to$$\frac{eI_{C2}R_{N}}{2\pi T_{C}}=\frac{4h^{2}T}{Wd_{F}T_{C}}\sum_{\omega >0}\frac{G_{0}^{2}\Delta ^{2}}{\gamma _{BF}\gamma _{BS}\omega ^{2}}\sum_{n=-\infty }^{\infty }\frac{(-1)^{n}}{q_{+}^{4}q_{-}^{4}}. \label{IcDL}$$The sum over $n$ in Eq. (\[IcDL\]) can be calculated analytically using the theory of residues$$\frac{eI_{C2}R_{N}}{2\pi T_{C}}=\frac{2hT}{WT_{C}}\sum_{\omega >0}\frac{G_{0}^{2}\Delta ^{2}}{\gamma _{BF}\gamma _{BS}\omega ^{2}}S_{1},
\label{IcDLA}$$$$S_{1}=\Real\left[ \frac{i}{\widetilde{\Omega }_{+}^{3/2}}\left(
\frac{1}{\cosh \left( d_{F}\sqrt{\widetilde{\Omega }_{+}}\right) }+\frac{d_{F}\sqrt{\widetilde{\Omega }_{+}}}{\sinh \left( d_{F}\sqrt{\widetilde{\Omega }_{+}}\right) }\right) \right]$$ It is seen that $I_{C2}$ vanishes as $(\gamma _{BF}W)^{-1}$ with increase of the product $\gamma _{BF}W$ and scales with the same characteristic lengths $\xi_{1},$ $\xi_{2}$ as the critical current of single-domain SIFS structures (\[Ic1\]).
**Limit of small $\gamma _{BF}.$** In the opposite limit, $\gamma _{BF}\ll \max \left\{ 1,(Wq_{\pm
})^{-1}\right\} $ we have$$\frac{eI_{C2}R_{N}}{2\pi T_{C}}=\frac{8h^{2}T}{Wd_{F}T_{C}}\sum_{\omega >0}\frac{G_{0}^{2}\Delta ^{2}}{\gamma _{BS}\omega ^{2}}S_{2}, \label{IcSmG}$$$$S_{2}=\sum_{n=-\infty }^{\infty }\frac{(-1)^{n}S_{-}S_{+}}{q_{+}^{3}q_{-}^{3}\left( q_{-}C_{+}S_{-}+q_{+}S_{+}C_{-}\right) }.$$ It is seen that in full agreement with the result obtained in [@Buzdin] in the considered limit of large domain width, $W\gg \Real(q_{\pm }),$ $$\frac{eI_{C2}R_{N}}{2\pi T_{C}}=\frac{4h^{2}T}{Wd_{F}T_{C}}\sum_{\omega >0}\frac{G_{0}^{2}\Delta ^{2}}{\gamma _{BS}\omega ^{2}}\sum_{n=-\infty
}^{\infty }\frac{(-1)^{n}}{q_{+}^{3}q_{-}^{3}\left( q_{-}+q_{+}\right) }.
\label{Ic2LW}$$the contribution to the critical current from the domain wall region falls as $W^{-1}$ and decays on the scale of $\xi _{1}.$
**Limit of small domain width.** In the opposite case, $W\Real (q_{\pm })\ll 1,$ the representation of the critical current as a sum of $I_{C1}$ and $I_{C2}$ is not physically reasonable, and for $I_{C}$ from (\[CurrF\]) we get$$\frac{eI_{C}R_{N}}{2\pi T_{C}}=\frac{T}{2T_{C}}\sum_{\omega >0}\frac{G_{0}^{2}\Delta ^{2}}{\gamma _{BS}d_{F}\omega ^{2}}S_{3}, \label{Ic2AG}$$$$S_{3}=\sum_{n=-\infty }^{\infty }(-1)^{n}\left[ \frac{\left(
q_{-}^{2}+q_{+}^{2}\right) \gamma _{BW}+4}{\left( q_{-}^{2}q_{+}^{2}\gamma
_{BW}+q_{-}^{2}+q_{+}^{2}\right) }\right] ,$$where $\gamma _{BW}=\gamma _{BF}W/2.$ It is seen that for $\gamma _{BW}\gg 1$ expression (\[Ic2AG\]) transforms to (\[Ic1\]) and $I_{C}=I_{C1},$ while in the limit $\gamma _{BW}\rightarrow 0$ from (\[Ic2AG\]) it follows that the critical current $$\frac{eI_{C}R_{N}}{2\pi T_{C}}=\frac{T}{T_{C}}\sum_{\omega >0}\frac{G_{0}^{2}\Delta ^{2}}{\gamma _{BS}\omega ^{2}\sqrt{\Omega }\sinh \left( d_{F}\sqrt{\Omega }\right) } \label{SINS}$$ is independent of the exchange energy and falls with increase of $d_{F}$ on the same scale as for SINS devices. Previously it was found that such a transformation of the decay length takes place in the vicinity of a domain wall [@Chtchelkatchev] - [@Crouzy]. In particular, it was shown that if a sharp domain wall is parallel [@Maleki], [@Volkov2008] or perpendicular to the SF interface [@Crouzy] and the thickness of the ferromagnetic layers is $d_{f}$ $\lesssim \xi _{F},$ then for antiparallel directions of magnetization the exchange field effectively averages out, and the decay length of the superconducting correlations becomes close to that of a single nonmagnetic N metal, $\xi _{F}=\sqrt{D_{F}/2\pi T_{C}}.$ The same effect may also take place in S-FNF-S variable thickness bridges [@Karminskaya2], [@Karminskaya4].
For arbitrary values of $\gamma _{BW}$ the sum over $n$ in (\[Ic2AG\]) can be also calculated analytically. The denominator in (\[Ic2AG\]) has the poles at $$n=\pm i\frac{d_{F}}{\pi }\sqrt{\Omega +\frac{1\pm \sqrt{1-\gamma
_{BW}^{2}h^{2}}}{\gamma _{BW}}}.$$Application of the residue theorem to the summation of the series in $n$ in the expression (\[Ic2AG\]) leads to$$\frac{eI_{C}R_{N}}{2\pi T_{C}}=\frac{T}{2T_{C}}\sum_{\omega >0}\frac{G_{0}^{2}\Delta ^{2}}{\gamma _{BS}\omega ^{2}}\frac{\gamma _{BW}}{\sqrt{1-\gamma _{BW}^{2}h^{2}}}S_{4}, \label{IcAW}$$$$S_{4}=\frac{q}{\sqrt{\Omega +p}\sinh \left( d_{F}\sqrt{\Omega +p}\right) }-\frac{p}{\sqrt{\Omega +q}\sinh \left( d_{F}\sqrt{\Omega +q}\right) },$$$$p=\frac{1-\sqrt{1-\gamma _{BW}^{2}h^{2}}}{\gamma _{BW}},~q=\frac{1+\sqrt{1-\gamma _{BW}^{2}h^{2}}}{\gamma _{BW}}. \label{pq}$$It is seen that for $\gamma _{BW}h\leq 1$ the s-wave superconducting correlations decay exponentially into the F metal without any oscillations, with two characteristic scales, $\xi _{11}=\xi_{F}(\Omega+p)^{-1/2}$ and $\xi _{12}=\xi_{F}(\Omega+q)^{-1/2}.$ If $\gamma _{BW}$ tends to zero then one of the characteristic damping scales, $\xi _{11},$ goes to the value $\xi _{F}\Omega ^{-1/2}$ of SINS junctions (see (\[SINS\])), while the other, $\xi _{12},$ goes to zero. As $\gamma _{BW}$ increases, $\xi _{11}$ decreases, whereas $\xi _{12}$ increases, so that at $\gamma
_{BW}h=1$ they become equal to each other $\xi _{11}=\xi _{12}=\xi
_{F}\left( \Omega +h\right) ^{-1/2}.$ Further increase of $\gamma _{BW}h$ leads to appearance of the damped oscillations in $I_{C}(d_{F})$ dependence with the ratio $$\frac{\xi _{1}}{\xi _{2}}=\frac{\sqrt{\gamma _{BW}^{2}h^{2}-1}}{\sqrt{\left(
\gamma _{BW}\Omega +1\right) ^{2}+\gamma _{BW}^{2}h^{2}-1}+\Omega \gamma
_{BW}+1}, \label{ratio}$$which monotonically increases from zero at $\gamma _{BW}h=1$ up to the single-domain SIFS value $$\frac{\xi _{1}}{\xi _{2}}=\frac{h}{\left( \sqrt{\Omega ^{2}+h^{2}}+\Omega
\right) }, \label{rdirty}$$in the limit $\gamma _{BW}\rightarrow \infty .$ From (\[ratio\]) and (\[rdirty\]) we can conclude that the existence of a domain structure in the F layer of SIFS devices can significantly modify the relation between $\xi _{1}$ and $\xi _{2}$ extracted from experimental studies of the $I_{C}(d_{F})$ dependence in SIFS tunnel junctions.
This conclusion is valid not only in the limit of small domain width.
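A quick way to see how the domain structure modifies the ratio $\xi _{1}/\xi _{2}$ is to evaluate Eqs. (\[ratio\]) and (\[rdirty\]) directly. In the sketch below $h=10$ and $\Omega =0.5$ (the lowest Matsubara frequency at $T=0.5T_{C}$) are illustrative choices:

```python
import numpy as np

def xi_ratio(gbw, h=10.0, Omega=0.5):
    """xi_1/xi_2 from Eq. (ratio), valid for gamma_BW*h >= 1."""
    s = np.sqrt(gbw**2 * h**2 - 1.0)
    return s / (np.sqrt((gbw * Omega + 1.0)**2 + gbw**2 * h**2 - 1.0)
                + Omega * gbw + 1.0)

def xi_ratio_single_domain(h=10.0, Omega=0.5):
    """Single-domain limit, Eq. (rdirty); about 0.95 for h=10, Omega=0.5."""
    return h / (np.sqrt(Omega**2 + h**2) + Omega)

for gbw in (0.1, 0.2, 0.5, 1.0, 5.0, 50.0):
    print(gbw, xi_ratio(gbw))                 # grows from 0 at gamma_BW*h = 1 ...
print("single-domain limit:", xi_ratio_single_domain())   # ... towards this value
```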
**Arbitrary values of the domain width.** For arbitrary values of the width of the magnetic domains, to calculate the dependence $I_{C}(d_{F})$ it is necessary to use the general expression (\[CurrF\]). Figure 2 gives the $I_{C}(d_{F})$ curves calculated for $H=10 \pi T_{C},$ $\gamma_{BF}=0$ and for a set of widths $W/ \xi_{F}.$ It is seen that, in full accordance with the analytical analysis given above, for $W$ smaller than $0.78\xi_{F}$ the current $I_{C}$ falls monotonically with $d_{F}$ increase. At $W\gtrsim0.78\,\xi_{F}$ there is a transformation from a monotonic dependence of $I_{C}(d_{F})$ to a damped oscillatory one. It is interesting to note that in the vicinity of the transition the critical current decays even faster than for large $W.$
![Fig.\[fig:fig2\]. Dependence of the critical current of SIFS Josephson junction as a function of thickness of F layer $d_{F}$ calculated numerically from (\[CurrF\]) for $T=0.5T_{C},$ $H=10 \pi T_{C},$ $\gamma_{BF}=0$ and for a set of widths $W/ \xi_{F}=0.3; 0.5; 0.7; 0.8; 1; 1.2.$ []{data-label="fig:fig2"}](Fig2.eps){width="50.00000%"}
To illustrate this result, we make a fit of the calculated curves by the simple expression $$I_{C}(d_{F})=A\exp (-d_{F}/\xi _{1})\cos (d_{F}/\xi _{2}+\varphi ),$$ which is ordinarily used for estimation of the decay lengths $\xi_{1}$ and $\xi_{2}$ from experimental data [@BK], [@BR]. At the first step we define $\xi _{2}\,$ $$\xi _{2}=(d_{F2}-d_{F1})/\pi$$ from the positions of the first, $d_{F1},$ and the second, $d_{F2},$ $0$-$\pi $ transitions in the $I_{C}(d_{F})$ dependence and put $$\varphi =\pi /2-d_{F1}/\xi _{2}$$ in order to get $I_{C}(d_{F1})=0.$ The decay length $\xi_{1}$ is determined from the ratio of the magnitudes of the critical current taken at two points having equal phase of oscillation:$$\xi _{1}=\frac{\pi \xi _{2}}{\ln \left[ I_{C}(d_{F1}+\xi _{2}\pi /2)/I_{C}(d_{F2}+\xi _{2}\pi /2)\right]},$$ and the normalization constant $$A=\frac{I_{C}(d_{F1}+\xi _{2}\pi /2)}{\exp (-(d_{F1}+\xi _{2}\pi /2)/\xi _{1})\cos ((d_{F1}+\xi _{2}\pi /2)/\xi _{2}+\varphi )}$$ has been determined by direct calculation of the magnitude at this point between the $0$-$\pi $ transitions. If the position of the second $0$-$\pi $ transition exceeds $10~\xi _{F}$, we suppose that $\xi _{2}$ is infinite and the $I_{C}(d_{F})$ dependence can be fitted by the function $$I_{C}(d_{F})=A\exp (-d_{F}/\xi _{1}).$$The results of the fitting procedure are presented in Fig.3-Fig.5, which give the decay lengths $\xi_{1}$ and $\xi_{2}$ as well as their ratio $\xi_{1}/ \xi_{2}$ calculated at $T=0.5T_{C},$ $H=10 \pi T_{C}$ for a set of suppression parameters $\gamma_{BF}=0; 0.3; 1.$ Thin vertical lines in Fig.3, Fig.4 mark the values on the x-axis at which there is a transition from a monotonic exponential decay of $I_{C}(d_{F})$ to the damped oscillation law. Thin horizontal lines in Fig.3 - Fig.5 provide the asymptotic values of $\xi_{1},$ $\xi_{2}$ and $\xi_{1}/\xi_{2}$ in the limit $W \gg \xi_{F},$ which coincide with the magnitudes calculated for a single-domain SIFS junction for the given temperature $T=0.5T_{C}$ and exchange energy $H=10\pi T_{C}.$
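The fitting recipe just described is easy to automate; the sketch below applies it to a synthetic curve with made-up parameters, so it only checks the internal consistency of the procedure rather than reproducing Figs. 3-5:

```python
import numpy as np

# synthetic test curve I_C(d_F) = A*exp(-d/xi1)*cos(d/xi2 + phi)
xi1_true, xi2_true, phi_true, A_true = 0.6, 0.4, 0.3, 1.0
d = np.linspace(0.05, 6.0, 4000)
Ic = A_true * np.exp(-d / xi1_true) * np.cos(d / xi2_true + phi_true)
I = lambda x: np.interp(x, d, Ic)

# first two 0-pi transitions = sign changes of I_C
zeros = d[:-1][np.sign(Ic[:-1]) != np.sign(Ic[1:])]
dF1, dF2 = zeros[0], zeros[1]

xi2 = (dF2 - dF1) / np.pi
phi = np.pi / 2 - dF1 / xi2
# ratio of |I_C| at two points with equal oscillation phase
xi1 = np.pi * xi2 / np.log(abs(I(dF1 + xi2 * np.pi / 2) /
                               I(dF2 + xi2 * np.pi / 2)))
x0 = dF1 + xi2 * np.pi / 2
A = I(x0) / (np.exp(-x0 / xi1) * np.cos(x0 / xi2 + phi))
print(xi1, xi2, phi, A)       # recovers approximately 0.6, 0.4, 0.3, 1.0
```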
![Fig.\[fig:fig3\]. Dependence of decay length $\xi_{1}$ as a function of domain width $W$ calculated at $T=0.5T_{C},$ $H=10 \pi T_{C}$ and $\gamma_{BF}=0; 0.3; 1.$ []{data-label="fig:fig3"}](Fig3.eps){width="50.00000%"}
![Fig.\[fig:fig4\]. Dependence of decay length $\xi_{2}$ as a function of domain width $W$ calculated at $T=0.5T_{C},$ $H=10 \pi T_{C}$ and $\gamma_{BF}=0; 0.3; 1.$ []{data-label="fig:fig4"}](Fig4.eps){width="50.00000%"}
It is seen that the transition point, at which the monotonic decay of the $I_{C}(d_{F})$ dependence transforms into a damped oscillatory behavior, becomes smaller as the suppression parameter $\gamma_{BF}$ increases. Interestingly, in the vicinity of this transition the decay length $\xi_{1}$ is even smaller compared to its magnitude in the limit of large $W.$
![Fig.\[fig:fig5\]. The ratio of decay lengths $\xi_{1}$ and $\xi_{2}$ as a function of domain width $W$ calculated at $T=0.5T_{C},$ $H=10 \pi T_{C}$ and $\gamma_{BF}=0; 0.3; 1.$ []{data-label="fig:fig5"}](Fig5.eps){width="50.00000%"}
It is also necessary to note that, despite the fact that the transition takes place at $W<\xi_{F},$ the difference between $\xi_{1}$ and $\xi_{2},$ as follows from Fig.5, persists even for large domain widths: the ratio $\xi_{1}/\xi_{2}$ is only around $0.8$ at $W=4\xi_{F}$ and tends very slowly, with $W$ increase, to the single-domain value $0.95$ following from (\[rdirty\]). This fact permits us to conclude that the difference between $\xi_{1}$ and $\xi_{2}$ experimentally observed in SFS Josephson structures based on dilute magnetic alloys can also be a consequence of the existence of magnetic domains in the F layer.
This work was supported by RFBR grants 14-02-90018-bel$\_$a, 14-02-31002-mol$\_$a, 15-32-20362-mol$\_$a$\_$ved, by the Ministry of Education and Science of the Russian Federation in the framework of Grant No. 14.587.21.0006 (RFMEFI58714X0006) and the Program for the Promotion of Competitiveness of the Kazan Federal University among the World-Leading Scientific Educational Centers, by Russian President grant MK-1841.2014.2, the Dynasty Foundation, a Scholarship of the President of the Russian Federation, and the Dutch FOM. A.A. Golubov also acknowledges the EU COST program MP1201.
[99]{} A. A. Golubov, M. Yu. Kupriyanov, E. Il’ichev, Rev. Mod. Phys. **76**, 411 (2004).
A. I. Buzdin, Rev. Mod. Phys. **77**, 935 (2005).
F. S. Bergeret, A. F. Volkov, K. B. Efetov, Rev. Mod. Phys. **77**, 1321 (2005).
T. Kontos, M. Aprili, J. Lesueur, F. Genet, B. Stephanidis, and R. Boursier, Phys. Rev. Lett. **89**, 137007 (2002).
C. Bell, R. Loloee, G. Burnell, and M. G. Blamire, Phys. Rev. B **71**, 180501 (R) (2005).
V. Shelukhin, A. Tsukernik, M. Karpovski, *et al.*, Phys. Rev. B **73**, 174506 (2006).
V. A. Oboznov, V. V. Bol’ginov, A. K. Feofanov, V. V. Ryazanov, and A. Buzdin, Phys. Rev. Lett. **96**, 197003 (2006).
J. W. A. Robinson, S. Piano, G. Burnell, C. Bell, and M. G. Blamire, Phys. Rev. Lett. **97**, 177003 (2006).
A. A. Bannykh, J. Pfeiffer, V. S. Stolyarov, I. E. Batov, V. V. Ryazanov, and M. Weides, Phys. Rev. B **79**, 054501 (2009).
F. Born, M. Siegel, E.K. Hollmann, H. Braak, A.A. Golubov, D.Yu Gusakova, and M.Yu Kupriyanov, Phys. Rev. B **74**, 140501 (2006).
J. W. A. Robinson, F. Chiodi, M. Egilmez, G. B. Halasz, M. G. Blamire, Sci. Rep. **2**, 00699 (2012)
Y. Blum, A. Tsukernik, M. Karpovski, and A. Palevski, Phys. Rev. B **70**, 214501 (2004).
N.G. Pugach, M.Yu Kupriyanov, E. Goldobin, R. Kleiner, and D. Koelle, Phys. Rev. B **84**, 144513 (2011).
K. D. Usadel, Phys. Rev. Lett. **25**, 507 (1970).
M.Yu. Kuprianov and V.F. Lukichev, Zh. Eksp. Teor.Fiz. **94**, 139 (1988) \[Sov. Phys. JETP **67**, 1163 (1988)\].
A. Buzdin and I. Baladie, Phys. Rev. B **67**, 184519 (2003).
M. Faure, A. I. Buzdin, A. A. Golubov, and M. Yu. Kupriyanov, Phys. Rev. B 73, 064505 (2006).
A.S. Vasenko, A.A. Golubov, M.Yu Kupriyanov, and M. Weides, Phys. Rev. B **77**, 134507 (2008).
A. I. Buzdin, A. S. Mel’nikov, and N. G. Pugach, Phys. Rev. B **83**, 144515 (2011).
N. M. Chtchelkatchev and I. S. Burmistrov, Phys. Rev. B **68**, 140501(R) (2003).
M. Houzet and A. I. Buzdin, Phys. Rev. B **74**, 214507 (2006).
M. A. Maleki and M. Zareyan, Physical Review B **74**, 144512 (2006).
I. S. Burmistrov and N. M. Chtchelkatchev, Phys. Rev. B **72**, 144520 (2005).
A.F. Volkov, K.B. Efetov, Phys Rev B **78**, 024519 (2008).
I.I. Soloviev, N.V. Klenov, S.V. Bakursky, M.Yu Kupriyanov, A.A. Golubov, Pis’ma Zh. Eksp. Teor. Fiz. **101**, 258 (2015) [\[]{}JETP Lett.**101**, 240 (2015)[\]]{}.
B. Crouzy, S. Tollis, D. A. Ivanov, Phys. Rev. B **75**, 054503 (2007).
I. B. Sperstad, J. Linder, and A. Sudbo, Phys. Rev. B **78**, 104509 (2008).
J. Linder and K. Halterman, Phys. Rev. B **90**, 104502 (2014).
T. Baker, A. Richie-Halford, and A. Bill, New J. Phys. **16**, 093048 (2014).
Ya. M. Blanter and F. W. J. Hekking, Phys. Rev. B **69**, 024525 (2004).
T. Champel and M. Eschrig, PRB **72**, 054523 (2005).
Ya. V. Fominov, A. F. Volkov, and K. B. Efetov, Phys. Rev. B **75**, 104509 (2007).
B. Crouzy, S. Tollis, D. A. Ivanov, Phys. Rev. B **76**, 134502 (2007).
T. Yu. Karminskaya and M. Yu. Kupriyanov, Pis’ma Zh. Eksp. Teor. Fiz. **85**, 343 (2007) [\[]{}JETP Lett. **86**, 61 (2007)[\]]{}.
T. Yu. Karminskaya, A. A. Golubov, M. Yu. Kupriyanov, and A. S. Sidorenko, Phys. Rev. B **79**, 214509 (2009).
A. I. Buzdin, and M. Yu. Kupriyanov, Pis’ma Zh. Eksp. Teor. Fiz. **53**, 308 (1991) [\[]{}JETP Lett. **53**, 321 (1991)[\]]{}.
A. I. Buzdin, V. V. Ryazanov, Physica C **460**, 238 (2007).
[^1]: [email protected]
---
author:
- Ludovic Dan Lemle
date: version 23 May 2008
title: '**Domains of uniqueness for $C_0$-semigroups on the dual of a Banach space**'
---
[[^1] Let $({\cal X},\|\:.\:\|)$ be a Banach space. In general, for a $C_0$-semigroup [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}on $({\cal X},\|\:.\:\|)$, its adjoint semigroup [$\left\{T^{*}(t)\right\}_{t\geq 0}\;\;$]{}is no longer strongly continuous on the dual space $({\cal X}^{*},\|\:.\:\|^{*})$. Consider on ${\cal X}^{*}$ the topology of uniform convergence on compact subsets of $({\cal X},\|\:.\:\|)$, denoted by ${\cal C}({\cal X}^{*},{\cal X})$, for which the usual semigroups in the literature become $C_0$-semigroups.\
The main purpose of this paper is to prove that only a core can be the domain of uniqueness for a $C_0$-semigroup on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$. As an application, we show that the generalized Schrödinger operator ${\cal A}^Vf=\frac{1}{2}\Delta f+b\cdot\nabla f-Vf$, $f\in C_0^\infty(\R^d)$, is $L^\infty\left(\R^d,dx\right)$-unique. Moreover, we prove the $L^1(\R^d,dx)$-uniqueness of the weak solution for the Fokker-Planck equation associated with ${\cal A}^V$.]{}
Preliminaries
=============
Complete information on the general theory of strongly continuous semigroups of linear operators can be found in the books of [Yosida]{} [@yosida'71], [Davies]{} [@davies'80], [Pazy]{} [@pazy'83] or [Goldstein]{} [@goldstein'85].\
In general, for a $C_0$-semigroup [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}on a Banach space $({\cal X},\|\:.\:\|)$, it is well known that its adjoint semigroup [$\left\{T^{*}(t)\right\}_{t\geq 0}\;\;$]{}is no longer strongly continuous on the dual space $({\cal X}^{*},\|\:.\:\|^{*})$ with respect to the strong topology of ${\cal X}^{*}$. Without that strong continuity, the theory of semigroups becomes quite complicated and the Hille-Yosida theorem becomes very difficult (see [Feller]{} [@feller'52], [@feller'53], [Dynkin]{} [@dynkin'65], [Jefferies]{} [@jefferies'86], [@jefferies'87] or [Cerrai]{} [@cerrai'94]).\
Recently, [Wu]{} and [Zhang]{} [@wu-zhang'06] introduced on ${\cal X}^{*}$ a topology for which the usual semigroups in the literature become $C_0$-semigroups, namely [*the topology of uniform convergence on compact subsets of $({\cal X},\|\:.\:\|)$*]{}, denoted by ${\cal C}({\cal X}^{*},{\cal X})$.\
It is not difficult to prove (see [@wu-zhang'06 Lemma 1.10, p. 567])
\[28\] Let $({\cal X},\|\:.\:\|)$ be a Banach space. Then $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$ is a locally convex space and:\
i) the dual space $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))^{*}$ of $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$ is $\cal X$;\
ii) any bounded subset of $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$ is $\|\:.\:\|^{*}$-bounded, and the restriction of ${\cal C}({\cal X}^{*},{\cal X})$ to a $\|\:.\:\|^{*}$-bounded subset of ${\cal X}^{*}$ coincides with $\sigma({\cal X}^{*},{\cal X})$;\
iii) $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$ is complete;\
iv) the topology ${\cal C}({\cal X},{\cal X}^{*}_{\cal C})$, where ${\cal X}^{*}_{\cal C}=\left({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X})\right)$, coincides with the $\|\:.\:\|$-topology of $\cal X$.
Moreover, if [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}is a $C_0$-semigroup on $({\cal X},\|\:.\:\|)$ with generator $\cal L$, then [$\left\{T^{*}(t)\right\}_{t\geq 0}\;\;$]{}is a $C_0$-semigroup on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$ with generator ${\cal L}^{*}$ (see [@wu-zhang'06 Theorem 1.4, p.564]). This is a satisfactory variant of Phillips' theorem concerning the adjoint of a $C_0$-semigroup.\
Therefore we have all the ingredients to consider $C_0$-semigroups on the locally convex space $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$. In accordance with [@yosida'71 Definition, p.234], we say that a family [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}of linear continuous operators on $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$ is a [*$C_0$-semigroup*]{} on $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$ if the following properties hold:\
(i) $T(0)=I$;\
(ii) $T(t+s)=T(t)T(s)$, for all $t,s\geq 0$;\
(iii) $\lim_{t\searrow 0}T(t)x=x$, for all $x\in({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$;\
(iv) there exists a number $\omega_0\in\R$ such that the family $\left\{e^{-\omega_0 t}T(t)\right\}_{t\geq 0}$ is equicontinuous.\
The [*infinitesimal generator*]{} of the $C_0$-semigroup [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}is a linear operator $\cal L$ defined on the domain $${\cal D}({\cal L})=\left\{x\in{\cal X}^{*}\:\left|\:\lim_{t\searrow 0}\frac{T(t)x-x}{t}\mbox{ exists in }({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))\right.\right\}$$ by $${\cal L}x=\lim_{t\searrow 0}\frac{T(t)x-x}{t}\quad,\quad\forall
x\in{\cal D}({\cal L}).$$ We can see that $\cal L$ is a densely defined and closed operator on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$ and the resolvent $R(\lambda;{\cal L})=(\lambda I-{\cal L})^{-1}$, for any $\lambda\in\rho({\cal L})$ (the resolvent set of $\cal L$) satisfies the equality $$R(\lambda;{\cal L})x=\int\limits_{0}^{\infty}\!e^{-\lambda
t}T(t)x\:dt\quad,\quad\forall \lambda>\omega_0\mbox{ and }\forall
x\in{\cal X}^{*}.$$ Unfortunately, in applications it is difficult to characterize completely the domain of the generator ${\cal L}$. For this reason, sometimes we need to work on a subspace ${\cal D}\subset{\cal D}({\cal L})$, dense in $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$, which is called a [*core*]{} of the generator (see [@davies'80 p.7]). More precisely,
*We say that ${\cal D}\subset{\cal D}({\cal L})$ is a core of generator ${\cal L}$ if ${\cal D}$ is dense in ${\cal D}({\cal L})$ with respect to the graph topology ${\cal C}_{\cal L}({\cal X}^{*},{\cal X})$ of ${\cal L}$ induced by the topology ${\cal C}({\cal X}^{*},{\cal X})$.*
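To fix ideas, here is a simple illustration of this framework (it is only an example and uses nothing beyond the result of [@wu-zhang'06] quoted above): take ${\cal X}=L^1(\R^d,dx)$ and let [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}be the heat semigroup generated by $\frac{1}{2}\Delta$, $$T(t)f(x)=\int\limits_{\R^d}\!(2\pi t)^{-d/2}e^{-\frac{|x-y|^2}{2t}}f(y)\:dy\quad,\quad t>0,$$ which is a $C_0$-semigroup on $(L^1(\R^d,dx),\|\:.\:\|_1)$. By the variant of Phillips' theorem recalled above, its adjoint [$\left\{T^{*}(t)\right\}_{t\geq 0}\;\;$]{}, given by the same Gaussian kernel acting on $L^\infty(\R^d,dx)$, is a $C_0$-semigroup on $(L^\infty(\R^d,dx),{\cal C}(L^\infty,L^1))$, although it is not strongly continuous for the norm $\|\:.\:\|_\infty$. This is exactly the setting of Section 3 below with $b=0$ and $V=0$.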
This paper is organized as follows: in the next section, by using a Desch-Schappacher perturbation of the generator, we prove that only a core can be the domain of uniqueness for a $C_0$-semigroup on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$. This property is well known in the case of $C_0$-semigroups on Banach spaces (see [@arendt'86 Theorem 1.33, p.46]), but here we prove it for a $C_0$-semigroup on the dual of a Banach space. In a forthcoming paper [@lemle-wu'08] we extend this property to the more difficult case of the dual of a locally convex space.
Section 3 is devoted to the study of the $L^\infty\left(\R^d,dx\right)$-uniqueness of the generalized Schrödinger operator. Note that the natural topology for studying this problem is the topology of uniform convergence on compact subsets of $\left(L^1\left(\R^d,dx\right),\|\:.\:\|_1\right)$, which is denoted by ${\cal C}\left(L^\infty,L^1\right)$.
In the first main result of Section 3 we find necessary and sufficient conditions under which the one-dimensional operator ${\cal A}_1^Vf=a(x)f^{''}+b(x)f^{'}-V(x)f$, $f\in C_0^\infty(x_0,y_0)$, where $-\infty\leq x_0<y_0\leq\infty$, is $L^\infty(x_0,y_0)$-unique.\
In the second important result, by comparison with the one-dimensional case, we prove that the multidimensional generalized Schrödinger operator ${\cal A}^Vf=\frac{1}{2}\Delta f+b\cdot\nabla f-Vf$, $f\in C_0^\infty(\R^d)$ (where $\cdot$ is the inner product in $\R^d$), is $L^\infty\left(\R^d,dx\right)$-unique with respect to the topology ${\cal C}\left(L^\infty,L^1\right)$. As a consequence, we obtain the $L^1\left(\R^d,dx\right)$-uniqueness of the weak solution for the Fokker-Planck equation associated with ${\cal A}^V$. This result was reported at the conference EQUADIFF 2007, held in August 2007 in Vienna.
Uniqueness of pre-generators on the dual of a Banach space
==========================================================
One of the main results of this paper concerns the uniqueness of pre-generators on the dual of a Banach space. Recall that a linear operator ${\cal A}:{{\cal D}}\longrightarrow{\cal X}^{*}$ with the domain $\D$ dense in $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$ is said to be [*a pre-generator*]{} in $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$, if there exists some $C_0$-semigroup on $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$ such that its generator $\cal L$ extends $\cal A$.\
The main result of this section is
\[1\] Let ${\cal A}:{{\cal D}}\longrightarrow{\cal X}^{*}$ be a linear operator with domain $\D$ dense in $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$. Suppose that there exists a $C_0$-semigroup [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$ such that its generator $\cal L$ extends $\cal A$ (i.e. $\cal A$ is a pre-generator).\
If $\cal D$ is not a core of $\cal L$, then there exists an infinite number of extensions of $\cal A$ which are generators.
For the proof of Theorem \[1\] we need a perturbation result. Perturbation theory has long been a very useful tool in the hands of the analyst and the physicist. A very elegant brief introduction to one-parameter semigroups is given in the treatise of [Kato]{} [@kato'84], where one can find all results on perturbation theory. The perturbation by bounded operators is due to [Phillips]{} [@phillips'53], who also investigated the permanence of smoothness properties under this kind of perturbation. The perturbation by operators which are continuous with respect to the graph norm of the generator is due to [Desch]{} and [Schappacher]{} [@desch-schappacher'84].\
The next lemma (communicated by Professor Liming Wu), which presents a Desch-Schappacher perturbation result for $C_0$-semigroups on $({\cal X}^{*}, {\cal C}({\cal X}^{*},{\cal X}))$, plays a key role in the proof of Theorem \[1\]:
\[desch-schappacher\] Let $({\cal X},\|\:.\:\|)$ be a Banach space, ${\cal L}$ the generator of a $C_0$-semigroup [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$ and $C$ a linear operator on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$ with domain ${\cal D}({C})\supset{\cal D}({\cal L})$.\
(i) If $C$ is ${\cal C}({\cal X}^{*},{\cal X})$-continuous, then ${\cal L}+C$ with domain ${\cal D}({\cal L}+C)={\cal D}({\cal L})$ is the generator of some $C_0$-semigroup on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$.\
(ii) If $C:{\cal D}({\cal L})\rightarrow{\cal D}({\cal L})$ is continuous with respect to the graph topology of $\cal L$ induced by the topology ${\cal C}({\cal X}^{*},{\cal X})$, then ${\cal L}+C$ with domain ${\cal D}({\cal L}+C)={\cal D}({\cal L})$ is the generator of some $C_0$-semigroup on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$.
[[*Proof. *]{}]{} (i) By the [@wu-zhang'06 Theorem 1.4, p.564] and using Lemma \[28\], ${\cal L}^{*}$ is the generator of the $C_0$-semigroup [$\left\{T^{*}(t)\right\}_{t\geq 0}\;\;$]{}on $({\cal X},{\cal C}({\cal X},{\cal X}^{*}_{\cal C}))=({\cal X},\|\:.\:\|)$. Under the condition on $C$, by [@wu-zhang'06 Lemma 1.12, p.568] it follows that the operator ${C}^{*}$ is bounded on $({\cal X},\|\:.\:\|)$. By a well known perturbation result (see [@davies'80 Theorem 1, p.68]), we find that ${\cal L}^{*}+{C}^{*}=({\cal L}+C)^{*}$ is the generator of some $C_0$-semigroup on $({\cal X},\|\:.\:\|)$. By using again [@wu-zhang'06 Theorem 1.4, p.564], we obtain that $({\cal L}+C)^{**}$ is the generator of some $C_0$-semigroup on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$. Moreover, ${\cal D}(({\cal L}+C)^{*})$ is dense in $({\cal X},\|\:.\:\|)$. Hence ${\cal D}(({\cal L}+C)^{*})$ is dense in $({\cal X},\sigma({\cal X},{\cal X}^{*}))$. Then by [@schaefer'71 Theorem 7.1, p.155] it follows that $$({\cal L}+C)^{**}=\overline{({\cal L}+C)}^{\sigma({\cal X}^{*},{\cal X})}$$ Since $C$ is ${\cal C}({\cal X}^{*},{\cal X})$-continuous, by [@wu-zhang'06 Lemma 1.5, p.564] it follows that $C$ is $\sigma({\cal X}^{*},{\cal X})$-continuous hence $\sigma({\cal X}^{*},{\cal X})$-closed. Consequently $${\cal L}+C=\overline{({\cal L}+C)}^{\sigma({\cal X}^{*},{\cal X})}$$ from where it follows that $({\cal L}+C)^{**}={\cal L}+C$. Hence ${\cal L}+C$ is the generator of some $C_0$-semigroup on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$.\
(ii) We follow closely the proof of [Arendt]{} [@arendt'86 Theorem 1.31, p.45]. Remark that $C:{\cal D}({\cal L})\rightarrow{\cal D}({\cal L})$ is continuous with respect to the graph topology of $\cal L$ induced by the topology ${\cal C}({\cal X}^{*},{\cal X})$ if and only if for all $\lambda>\omega_0$ (where $\omega_0$ is the real constant in the definition of the $C_0$-semigroup [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}) the operator $$\tilde{C}:=(\lambda I-{\cal L})CR(\lambda;{\cal L})$$ is continuous on ${\cal X}^{*}$ with respect to the topology ${\cal C}({\cal X}^{*},{\cal X})$. Consequently, by (i) we find that ${\cal L}+\tilde{C}$ is the generator of some $C_0$-semigroup on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$. We shall prove that ${\cal L}+\tilde{C}$ is similar to ${\cal L}+C$. Remark that $C$ is continuous with respect to the graph norm $\|\:.\:\|^{*}+\|{\cal L}.\:\|^{*}$. By the proof of [@arendt'86 Theorem 1.31, p.45], there exists some $\lambda>\omega_0$ such that the operators $$U:=I-CR(\lambda;{\cal L})\quad\mbox{and}\quad U^{-1}$$ are bounded on $({\cal X}^{*},\|\:.\:\|^{*})$. Moreover $$U({\cal L}+\tilde{C})U^{-1}=U({\cal L}-\lambda I+\tilde{C})U^{-1}+\lambda I=$$ $$=U[{\cal L}-\lambda I+(\lambda I-{\cal L})CR(\lambda;{\cal L})]U^{-1}+\lambda I=$$ $$=U({\cal L}-\lambda I)[I-CR(\lambda;{\cal L})]U^{-1}+\lambda I=$$ $$=U({\cal L}-\lambda I)+\lambda I=[I-CR(\lambda;{\cal L})]({\cal L}-\lambda I)+\lambda I=$$ $$={\cal L}-\lambda I+C+\lambda I={\cal L}+C.$$ Now we have only to prove that $U$ and $U^{-1}$ are continuous with respect to the topology ${\cal C}({\cal X}^{*},{\cal X})$. Since $CR(\lambda;{\cal L})=R(\lambda;{\cal L})\tilde{C}$ is continuous with respect to the topology ${\cal C}({\cal X}^{*},{\cal X})$, it follows that $U=I-CR(\lambda;{\cal L})$ is continuous with respect to the topology ${\cal C}({\cal X}^{*},{\cal X})$. On the other hand, by [@wu-zhang'06 Lemma 1.5, p.564], $U^{*}$ and $[CR(\lambda;{\cal L})]^{*}$ are continuous on $({\cal X},\|\:.\:\|)$. By Phillips' theorem [@komatsu'64 Proposition 5.9, p.246], $1\in\rho([CR(\lambda;{\cal L})]^{*})$ if and only if $1\in\rho([CR(\lambda;{\cal L})]^{**})$ and $$\left[(I-[CR(\lambda;{\cal L})]^{*})^{-1}\right]^{*}=(I-[CR(\lambda;{\cal L})]^{**})^{-1}$$ But by [@schaefer'71 Theorem 1.1, p.155] we have $[CR(\lambda;{\cal L})]^{**}=CR(\lambda;{\cal L})$ and the right hand side above becomes $U^{-1}$. Hence $U^{-1}$, being the dual of some bounded operator on $({\cal X},\|\:.\:\|)$, is continuous on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$ by [@wu-zhang'06 Lemma 1.5, p.564] and the proof of the lemma is complete. [$\square$]{}\
Now we are able to give\
[[*Proof of Theorem \[1\].* ]{}]{}We follow closely the proof of [Arendt]{} [@arendt'86 Theorem 1.33, p.46]. Endow ${\cal D}({\cal L})$ with the graph topology ${\cal C}_{\cal L}({\cal X}^{*},{\cal X})$ of ${\cal L}$ induced by the topology ${\cal C}({\cal X}^{*},{\cal X})$. Since $\D$ is not a core of $\cal L$, $\D$ is not dense in ${\cal D}({\cal L})$ with respect to the graph topology ${\cal C}_{\cal L}({\cal X}^{*},{\cal X})$ of $\cal L$. By the Hahn-Banach theorem there exists a non-zero linear functional $\phi$, continuous on ${\cal D}({\cal L})$ with respect to the graph topology ${\cal C}_{\cal L}({\cal X}^{*},{\cal X})$ of $\cal L$, such that $\phi(x)=0$ for all $x\in{\D}$. Fix some $u\in{\cal D}({\cal L})$, $u\neq 0$, and consider the linear operator $$C:{\cal D}({\cal L})\longrightarrow{\cal D}({\cal L})$$ $$Cx=\phi(x)u\quad,\quad\forall x\in{\cal D}({\cal L}).$$ Then $C$ is continuous with respect to the graph topology ${\cal C}_{\cal L}({\cal X}^{*},{\cal X})$ of $\cal L$ on ${\cal D}({\cal L})$. By the Desch-Schappacher perturbation Lemma \[desch-schappacher\] it follows that ${\cal L}+C$ is the generator of some $C_0$-semigroup on $({\cal X}^{*},{\cal C}({\cal X}^{*},{\cal X}))$ and $$({\cal L}+C)/_{\cal D}={\cal L}/_{\cal D}={\cal A}.$$ It is obvious that an infinite number of generators can be constructed in this way. [$\square$]{}
$L^\infty(\R^d,dx)$-uniqueness of generalized Schrödinger operators
===================================================================
In this section we consider the generalized Schrödinger operator $${\cal A}^Vf:=\frac{1}{2}\Delta f+b\cdot\nabla f-Vf\quad,\quad\forall f\in C_0^\infty(\R^d)$$ where $b:\R^d\rightarrow\R^d$ is a measurable locally bounded vector field and $V:\R^d\rightarrow\R$ is a locally bounded potential. The study of this operator has attracted much attention both from the people working on Nelson's stochastic mechanics ([Carmona]{} [@carmona'85], [Meyer]{} and [Zheng]{} [@meyer-zheng'84], etc.) and from those working on the theory of Dirichlet forms ([Albeverio]{}, [Brasche]{} and [Röckner]{} [@albeverio-brasche-rockner'89]). In the case where $V=0$, the essential self-adjointness of ${\cal A}:=\frac{1}{2}\Delta+b\cdot\nabla$ in $L^2$ has been completely characterized in the works of [Wielens]{} [@wielens'85] and [Liskevitch]{} [@liskevitch'99]. $L^1$-uniqueness of this operator has been introduced and studied by [Wu]{} [@wu'99]; its $L^p$-uniqueness has been studied by [Eberle]{} [@eberle'00] for $p\in[1,\infty)$ and by [Wu]{} and [Zhang]{} [@wu-zhang'06] for $p=\infty$.
In accordance with Theorem \[1\], we can introduce the $L^\infty\left(\R^d,dx\right)$-uniqueness of pre-generators in a very natural form:
*We say that a pre-generator $\cal A$ is $\left(L^\infty\left(\R^d,dx\right),{\cal C}\left(L^\infty,L^1\right)\right)$-unique, if there exists only one $C_0$-semigroup [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}on $\left(L^\infty\left(\R^d,dx\right),{\cal C}\left(L^\infty,L^1\right)\right)$ such that its generator $\cal L$ is an extension of $\cal A$.*
This uniqueness notion has been used by [Arendt]{} [@arendt'86], [Röckner]{} [@rockner'98], [Wu]{} [@wu'98] and [@wu'99], [Eberle]{} [@eberle'00], [Arendt]{}, [Metafune]{} and [Pallara]{} [@arendt-metafune-pallara'06], [Wu]{} and [Zhang]{} [@wu-zhang'06], [Lemle]{} [@lemle'07] and others in different contexts. The next characterisation of $\left(L^\infty\left(\R^d,dx\right),{\cal C}\left(L^\infty,L^1\right)\right)$-uniqueness of pre-generators is very useful in applications (for other characterisations of the uniqueness of pre-generators we strongly recommend to the reader the excellent article of [Wu]{} and [Zhang]{} [@wu-zhang'06]):
\[11\] Let $\cal A$ be a linear operator on $\left(L^\infty\left(\R^d,dx\right),{\cal C}\left(L^\infty,L^1\right)\right)$ with domain $\D$ (the test-function space) which is assumed to be dense in $\left(L^\infty\left(\R^d,dx\right),{\cal C}\left(L^\infty,L^1\right)\right)$. Assume that there is a $C_0$-semigroup [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}on $\left(L^\infty\left(\R^d,dx\right),{\cal C}\left(L^\infty,L^1\right)\right)$ such that its generator $\cal L$ is an extension of $\cal A$ (i.e., $\cal A$ is a pre-generator). The following assertions are equivalent:\
(i) $\cal A$ is $\left(L^\infty\left(\R^d,dx\right),{\cal C}\left(L^\infty,L^1\right)\right)$-unique;\
(ii) $\D$ is a core of $\cal L$;\
(iii) for some $\lambda>\omega_0$ (where $\omega_0\in\R$ is the constant in the definition of $C_0$-semigroup [$\left\{T(t)\right\}_{t\geq 0}\;\;$]{}), the range $(\lambda I-{\cal A})(\D)$ is dense in $\left(L^\infty\left(\R^d,dx\right),{\cal C}\left(L^\infty,L^1\right)\right)$;\
(iv) (Liouville property) for some $\lambda>\omega_0$, if $h\in{\cal D}({\cal A}^{*})$ satisfies $(\lambda I-{\cal A}^{*})h=0$, then $h=0$;\
(v) (uniqueness of weak solutions for the dual Cauchy problem) for every $f\in\left(L^1\left(\R^d,dx\right),\|\:.\:\|_1\right)$, the dual Cauchy problem $$\left\lbrace\begin{array}{l}
\partial_tu(t,x)={\cal A}^{*}u(t,x)\\
u(0,x)=f(x)
\end{array}
\right.$$ has a $\left(L^1\left(\R^d,dx\right),\|\:.\:\|_1\right)$-unique weak solution $u(t,x)=T^{*}(t)f(x)$.
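For orientation, it may be useful to see the Liouville property (iv) verified in the simplest case (this is only an illustration and is not used in the proofs below): take ${\cal A}=\frac{1}{2}\Delta$ on ${\cal D}=C_0^\infty(\R^d)$, so that ${\cal A}^{*}$ acts as $\frac{1}{2}\Delta$ in the sense of distributions on $L^1(\R^d,dx)$. If $h\in L^1(\R^d,dx)$ satisfies $$\frac{1}{2}\Delta h=\lambda h\quad\mbox{in }{\cal D}^{'}(\R^d)$$ for some $\lambda>0$, then taking Fourier transforms gives $\left(\lambda+\frac{1}{2}|\xi|^2\right)\hat{h}(\xi)=0$, and since $\hat{h}$ is continuous this forces $\hat{h}\equiv 0$, i.e. $h=0$. Thus condition (iv) holds for the Laplacian; this corresponds to the case $b=0$, $V=0$ of the results of this section.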
Our main purpose in this section is to find sufficient conditions ensuring the $L^\infty(\R^d,dx)$-uniqueness of $({\cal A}^V,C_0^\infty(\R^d))$ with respect to the topology ${\cal C}\left(L^\infty,L^1\right)$ in the case where $V\geq 0$.
First, we remark that the generalized Schrödinger operator $({\cal A}^V,C_0^\infty(\R^d))$ is a pre-generator on $\left(L^\infty(\R^d,dx),{\cal C}\left(L^\infty,L^1\right)\right)$. Indeed, if we consider the Feynman-Kac semigroup [$\left\{P^V_t\right\}_{t\geq 0}$]{}given by $$P_t^Vf(x):=\E^x1_{[t<\tau_e]}f(X_t)e^{-\int\limits_0^t\!V(X_s)\:ds}$$ where $(X_t)_{0\leq t<\tau_e}$ is the diffusion generated by $\cal A$ and $\tau_e$ is the explosion time, then by [@wu-zhang'06 Theorem 1.4] [$\left\{P^V_t\right\}_{t\geq 0}$]{}is a $C_0$-semigroup on $L^\infty(\R^d,dx)$ with respect to the topology ${\cal C}\left(L^\infty,L^1\right)$. Let $\partial$ be the point at infinity of $\R^d$. If we put $X_t=\partial$ after the explosion time $t\geq\tau_e$, then by Itô's formula it follows for any $f\in C_0^\infty(\R^d)$ that $$f(X_t)e^{-\int\limits_0^t\!V(X_s)\:ds}-f(x)-\int\limits_0^t\!e^{-\int\limits_0^s\!V(X_r)\:dr}{\cal A}^Vf(X_s)\:ds$$ is a local martingale. As it is bounded over bounded time intervals, it is a true martingale. Thus by taking the expectation under $\P_x$, we get $$P_t^Vf(x)-f(x)=\int\limits_0^t\!P_s^V{\cal A}^Vf(x)\:ds\quad,\quad\forall t\geq 0.$$ Therefore $f$ belongs to the domain of the generator ${\cal L}^V_{(\infty)}$ of the $C_0$-semigroup [$\left\{P^V_t\right\}_{t\geq 0}$]{}on $(L^\infty(\R^d,dx),{\cal C}\left(L^\infty,L^1\right))$. Consequently, $({\cal A}^V,C_0^\infty(\R^d))$ is a pre-generator on $L^\infty(\R^d,dx)$ with respect to the topology ${\cal C}\left(L^\infty,L^1\right)$ and we can apply Theorem \[11\] to study the $(L^\infty(\R^d,dx),{\cal C}\left(L^\infty,L^1\right))$-uniqueness of this operator.
The one-dimensional case
------------------------
The purpose of this subsection is to study the $L^\infty$-uniqueness of the one-dimensional operator $${\cal A}_1^Vf=a(x)f^{''}+b(x)f^{'}-V(x)f\quad,\quad f\in C_0^\infty(x_0,y_0)$$ where $-\infty\leq x_0<y_0\leq\infty$ and the coefficients $a$, $b$ and $V$ satisfy the following properties $$a(x),\:b(x)\in L_{loc}^\infty(x_0,y_0;dx)$$ $$V(x)\in L_{loc}^\infty(x_0,y_0;dx),\:\:V(x)\geq 0$$ and the following very weak ellipticity condition $$a(x)>0\quad dx-\mbox{a.e.}$$ $$\frac{1}{a(x)},\quad\frac{b(x)}{a(x)}\in L_{loc}^1(x_0,y_0;dx)$$ where $L_{loc}^\infty(x_0,y_0;dx)$, respectively $L_{loc}^1(x_0,y_0;dx)$, denotes the space of real Lebesgue measurable functions which are essentially bounded, respectively integrable, with respect to Lebesgue measure on any compact sub-interval of $(x_0,y_0)$.\
Fix a point $c\in(x_0,y_0)$ and let $$\rho(x)=\frac{1}{a(x)}e^{\int\limits_c^x\!\frac{b(t)}{a(t)}\:dt}$$ be [*the speed measure of Feller*]{} and let $$\alpha(x)=e^{\int\limits_c^x\!\frac{b(t)}{a(t)}\:dt}$$ be [*the scale function of Feller*]{}. It is easy to see that $$\left\langle {\cal A}_1^Vf,g\right\rangle_\rho=\left\langle f,{\cal A}_1^Vg\right\rangle_\rho\quad,\quad\forall f,g\in C_0^\infty(x_0,y_0)$$ where $$\langle f,g\rangle_\rho=\int\limits_{x_0}^{y_0}\!f(x)g(x)\rho(x)\:dx\quad.$$ For $f\in C_0^\infty(x_0,y_0)$, we can write ${\cal A}_1^V$ in the Feller form: $${\cal A}_1^Vf=a(x)f^{''}+b(x)f^{'}-V(x)f=\frac{\alpha(x)}{\rho(x)}f^{''}+\frac{a(x)\alpha^{'}(x)}{\alpha(x)}f^{'}-V(x)f=$$ $$=\frac{\alpha(x)}{\rho(x)}f^{''}+\frac{\alpha^{'}(x)}{\rho(x)}f^{'}-V(x)f=\frac{1}{\rho(x)}\left[\alpha(x)f^{'}\right]^{'}-V(x)f$$ and the assumptions concerning the coefficients $a(x)$ and $b(x)$ can be written as follows (a concrete example is given right after the list):
- $\rho(x)>0$, $dx$-a.e. and $\rho\in L_{loc}^1(x_0,y_0;dx)$
- $\alpha(x)>0$ everywhere and $\alpha$ is absolutely continuous
- $\alpha/\rho,\quad\alpha^{'}/\rho\in L_{loc}^\infty(x_0,y_0;dx)$.
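As a simple illustration of the speed measure and the scale function (it is only an example and plays no role in the proofs): take $(x_0,y_0)=(-\infty,\infty)$, $a(x)=1$, $b(x)=-2x$, $V=0$ and $c=0$. Then $$\alpha(x)=e^{\int\limits_0^x\!(-2t)\:dt}=e^{-x^2}\quad,\quad\rho(x)=\frac{1}{a(x)}e^{\int\limits_0^x\!(-2t)\:dt}=e^{-x^2},$$ and indeed, for $f\in C_0^\infty(\R)$, $${\cal A}_1f=f^{''}-2xf^{'}=e^{x^2}\left[e^{-x^2}f^{'}\right]^{'}=\frac{1}{\rho(x)}\left[\alpha(x)f^{'}\right]^{'},$$ which is symmetric with respect to the Gaussian measure $\rho(x)\,dx$.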
Now consider the operator $({\cal A}_1^V,C_0^\infty(x_0,y_0))$ as an operator on $L^\infty(x_0,y_0;\rho dx)$ which is endowed with the topology ${\cal C}(L^\infty(x_0,y_0,\rho dx),L^1(x_0,y_0,\rho dx))$. We begin with a series of lemmas.
\[21\] Let $({\cal A}_1^V)^{*}:{\cal D}(({\cal A}_1^V)^{*})\subset L^1(x_0,y_0;\rho dx)\rightarrow L^1(x_0,y_0;\rho dx)$ be the adjoint operator of ${\cal A}_1^V$. Let $\lambda>0$ and let $u\in L^1(x_0,y_0;\rho dx)$ be in ${\cal D}(({\cal A}_1^V)^{*})$ such that $$({\cal A}_1^V)^{*}u=\lambda u.$$ Then $u$ solves the ordinary differential equation $$\left(\alpha u^{'}\right)^{'}=\lambda u\rho+Vu\rho$$ in the following sense: $u$ has an absolutely continuous $dx$-version $\hat{u}$ such that $\hat{u}^{'}$ is absolutely continuous and $$\left(\alpha\hat{u}^{'}\right)^{'}=\lambda\hat{u}\rho+V\hat{u}\rho.$$
[[*Proof. *]{}]{}The sufficiency follows easily by integration by parts.\
Below we prove the necessity. Let $x_0<x_1<y_1<y_0$. The space of distributions on $(x_1,y_1)$ is denoted by ${\cal D}'(x_1,y_1)$.\
[**(I)**]{} We recall that if $k\geq 1$ and $T_1,T_2\in{\cal D}'(x_1,y_1)$ satisfy $T_1^{(k)}=T_2^{(k)}$ i.e. $$\int\limits_{x_1}^{y_1}\!T_1f^{(k)}(x)\:dx=\int\limits_{x_1}^{y_1}\!T_2f^{(k)}(x)\:dx$$ for any $f\in C_0^\infty(x_1,y_1)$, then there exists a polynomial $w$ such that $T_1=T_2+w$.\
[**(II)**]{} Let $u\in L^1(x_0,y_0;\rho dx)$ be in ${\cal D}(({\cal A}_1^V)^{*})$ such that $$({\cal A}_1^V)^{*}u=\lambda u.$$ Then for $f\in C_0^\infty(x_1,y_1)$ we have: $$\begin{aligned}
& &\int\limits_{x_1}^{y_1}\!u\left(\alpha f^{'}\right)^{'}\:dx=\int\limits_{x_1}^{y_1}\!u{\cal A}_1^Vf\rho\:dx+\int\limits_{x_1}^{y_1}\!uVf\rho\:dx=\\
&=&\left\langle u,{\cal A}_1^Vf\right\rangle_\rho+\left\langle u,Vf\right\rangle_\rho=\left\langle ({\cal A}_1^V)^{*}u,f\right\rangle_\rho+\left\langle u,Vf\right\rangle_\rho=\\
&=&\left\langle \lambda u,f\right\rangle_\rho+\left\langle u,Vf\right\rangle_\rho=\lambda\int\limits_{x_1}^{y_1}\!uf\rho\:dx+\int\limits_{x_1}^{y_1}\!uVf\rho\:dx.\end{aligned}$$ From $$|f(x)|=\left|\int\limits_{x_1}^{x}\!f^{'}(t)\:dt\right|
\leq\int\limits_{x_1}^{x}\!|f^{'}(t)|\:dt\leq\int\limits_{x_1}^{y_1}\!|f^{'}(t)|\:dt$$ it follows that $$\|f\|_{L^\infty(x_1,y_1;dx)}\leq\|f^{'}\|_{L^1(x_1,y_1;dx)}$$ and we have $$\begin{aligned}
& &\left|\int\limits_{x_1}^{y_1}\!u\left[\alpha f^{''}+
\alpha^{'}f^{'}\right]\:dx\right|=\left|\int\limits_{x_1}^{y_1}\!u
\left(\alpha f^{'}\right)^{'}\:dx\right|\leq\\
&\leq&\lambda\left|\int\limits_{x_1}^{y_1}\!uf\rho\:dx\right|+\left|\int
\limits_{x_1}^{y_1}\!uVf\rho\:dx\right|\leq\\
&\leq&\left[\lambda\left\|u\rho\right\|_{L^1(x_0,y_0;dx)}+\left\|uV\rho\right
\|_{L^1(x_1,y_1;dx)}\right]\|f\|_{L^\infty(x_1,y_1;dx)}\leq\\
&\leq&C\left\|f^{'}\right\|_{L^1(x_1,y_1;dx)}\end{aligned}$$ where $$C=\lambda\left\|u\rho\right\|_{L^1(x_0,y_0;dx)}+\left\|uV\rho\right
\|_{L^1(x_1,y_1;dx)}$$ is independent of $f$. The above inequality means that the linear functional $$l_u(\eta):=\int\limits_{x_1}^{y_1}\!u\left(\alpha\eta^{'}+
\alpha^{'}\eta\right)\:dx$$ where $\eta\in\left\{f^{'}\:\left|\:f\in C_0^\infty(x_1,y_1)\right.\right\}\subset L^1(x_1,y_1;dx)$, is continuous with respect to the $L^1(x_1,y_1;dx)$-norm. Thus by the Hahn-Banach theorem and the fact that the dual of $L^1(x_1,y_1;dx)$ is $L^\infty(x_1,y_1;dx)$, there exists $v\in L^\infty(x_1,y_1;dx)$ such that $$l_u(\eta)=\int\limits_{x_1}^{y_1}\!u\left(\alpha\eta^{'}+
\alpha^{'}\eta\right)\:dx=\int\limits_{x_1}^{y_1}\!v\eta\:dx$$ which implies $$\int\limits_{x_1}^{y_1}\!u\alpha\eta^{'}\:dx=\int\limits_{x_1}^{y_1}\!
\left(v-u\alpha^{'}\right)\eta\:dx=\int\limits_{x_1}^{y_1}\!h\eta^{'}\:dx$$ where $$h(x)=-\int\limits_{x_1}^{x}\!\left[v(t)-u(t)\alpha^{'}(t)\right]\:dt$$ is an absolutely continuous function on $(x_1,y_1)$. It follows from [**(I)**]{} that there exists a polynomial $w$ such that $$u\alpha=h+w$$ on $(x_1,y_1)$ in the sense of distributions, hence $u\alpha=h+w$ a.e. on $(x_1,y_1)$.\
[**(III)**]{} Since $\alpha>0$ is absolutely continuous, the equality $$u=\alpha^{-1}(h+w)\quad\mbox{a.e.}$$ shows that $u$ also has an absolutely continuous version $$\tilde{u}:=\alpha^{-1}(h+w).$$ [**(IV)**]{} Now we have $$\begin{aligned}
\lambda\int\limits_{x_1}^{y_1}\!\tilde{u}f\rho\:dx&=&\int\limits_{x_1}^{y_1}\!\tilde{u}\left(
\alpha f^{'}\right)^{'}\:dx-\int\limits_{x_1}^{y_1}\!\tilde{u}Vf\rho\:dx=\\
&=&-\int\limits_{x_1}^{y_1}\!\tilde{u}^{'}\alpha f^{'}\:dx-\int\limits_{x_1}^{y_1}
\!\tilde{u}Vf\rho\:dx.\end{aligned}$$ so that $$\int\limits_{x_1}^{y_1}\!\left(\lambda\tilde{u}\rho+\tilde{u}V\rho\right)f\:dx=
-\int\limits_{x_1}^{y_1}\!\tilde{u}^{'}\alpha f^{'}\:dx.$$ Hence $$\left(\alpha\tilde{u}^{'}\right)^{'}=\lambda\tilde{u}\rho+\tilde{u}V\rho\in L^1(x_1,y_1;dx)$$ in the sense of distributions. Then $\alpha\tilde{u}^{'}$ has an absolutely continuous version $\tilde{\tilde{u}}$ (a primitive of $\lambda\tilde{u}\rho+\tilde{u}V\rho$) on $(x_1,y_1)$ and $$\tilde{\tilde{u}}^{'}=\lambda\tilde{u}\rho+\tilde{u}V\rho\quad\mbox{a.e.}$$ [**(V)**]{} From the above discussion we have $$\alpha\tilde{u}^{'}=\tilde{\tilde{u}}\quad\mbox{a.e.}$$ which implies that $$\tilde{u}^{'}=\alpha^{-1}\tilde{\tilde{u}}\quad\mbox{a.e.}$$ Since $\alpha^{-1}\tilde{\tilde{u}}$ is absolutely continuous, we get that $\tilde{u}$, hence $u$ has a version $\hat{u}$ (a primitive of $\alpha^{-1}\tilde{\tilde{u}}$) such that $$\hat{u}^{'}=\alpha^{-1}\tilde{\tilde{u}}$$ is absolutely continuous. We then go back to [**(IV)**]{}, using $\hat{u}$ in place of $\tilde{u}$, to obtain $$\left(\alpha\hat{u}^{'}\right)^{'}=\lambda\hat{u}\rho+V\hat{u}\rho.$$ The lemma is thus proved since $(x_1,y_1)$ is an arbitrary relatively compact subinterval of $(x_0,y_0)$. [$\square$]{}
\[22\] Let $\lambda>0$ and let $u\in L^1(x_0,y_0;\rho dx)$ be such that $$({\cal A}_1^V)^{*}u=\lambda u$$ in the sense of Lemma \[21\]. We may suppose that $u$ itself is absolutely continuous and that $u^{'}$ is absolutely continuous. Let $c_1\in(x_0,y_0)$ be such that $u(c_1)>0$.\
(i) if $u^{'}(c_1)>0$, then $u^{'}(y)>0$ for all $y\in(c_1,y_0)$;\
(ii) if $u^{'}(c_1)<0$, then $u^{'}(x)<0$ for all $x\in(x_0,c_1)$.
[[*Proof. *]{}]{}(i) Suppose $u^{'}(c_1)>0$. Let $$\hat{y}=\sup\left\{y\geq c_1\:\left|\:u^{'}(z)>0,\:\:\forall z\in[c_1,y)\right.\right\}\quad.$$ It is clear that $\hat{y}>c_1$ and $$u(t)\geq u(c_1)>0\quad,\quad\forall t\in[c_1,\hat{y}].$$ From the hypothesis $$({\cal A}_1^V)^{*}u=\lambda u$$ it follows that $$\left(\alpha u^{'}\right)^{'}=\lambda u\rho+uV\rho.$$ Then for any $y\in(c_1,y_0)$ we have $$\alpha(y)u^{'}(y)-\alpha(c_1)u^{'}(c_1)=\int\limits_{c_1}^{y}\!
\rho(t)[\lambda+V(t)]u(t)\:dt\quad.$$ If $\hat{y}<y_0$, then $$\alpha(\hat{y})u^{'}(\hat{y})-\alpha(c_1)u^{'}(c_1)=\int\limits_{c_1}^{\hat{y}}\!
\rho(t)[\lambda+V(t)]u(t)\:dt$$ from where it follows that $$\alpha(\hat{y})u^{'}(\hat{y})=\alpha(c_1)u^{'}(c_1)+\int\limits_{c_1}^{\hat{y}}\!
\rho(t)[\lambda+V(t)]u(t)\:dt>\alpha(c_1)u^{'}(c_1)>0.$$ Then $u^{'}(\hat{y})>0$. Hence $u^{'}(t)>0$ for all $t\in[\hat{y},\hat{y}+\varepsilon]$ for small $\varepsilon>0$, which contradicts the definition of $\hat{y}$.\
(ii) In the same way one can prove that if $u^{'}(c_1)<0$, then $u^{'}(x)<0$ for all $x\in(x_0,c_1)$. [$\square$]{}
\[23\] There exist two strictly positive functions $u_k$, $k=1,2$, on $(x_0,y_0)$ such that\
(i) for $k=1,2$, $u_k^{'}$ is absolutely continuous and $$\left(\alpha u_k^{'}\right)^{'}=\lambda u_k\rho+u_kV\rho\quad\mbox{a.e.}$$ where $\lambda>0$;\
(ii) $u_1^{'}>0$ and $u_2^{'}<0$ over $(x_0,y_0)$.
[[*Proof. *]{}]{}The function $u_2$ was constructed by Feller [@feller'52 Lemma 1.9] in the case where $a=1$ and $V=0$, but his proof works in the present general framework; the increasing solution $u_1$ is obtained in the same way. [$\square$]{}\
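In the simplest case the functions of Lemma \[23\] can be written down explicitly (this is only meant as an illustration): on $(x_0,y_0)=(-\infty,\infty)$ with $a=\frac{1}{2}$, $b=0$, $V=0$ (so that $\alpha=1$ and $\rho=2$) and $\lambda>0$, the functions $$u_1(x)=e^{\sqrt{2\lambda}\,x}\quad,\quad u_2(x)=e^{-\sqrt{2\lambda}\,x}$$ are strictly positive, satisfy $\left(\alpha u_k^{'}\right)^{'}=u_k^{''}=2\lambda u_k=\lambda u_k\rho$, and $u_1^{'}>0$, $u_2^{'}<0$ on the whole line.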
The main result of this subsection is
\[31\] The one-dimensional operator $({\cal A}_1^V,C_0^\infty(x_0,y_0))$ is $L^\infty(x_0,y_0;\rho dx)$-unique with respect to the topology ${\cal C}(L^\infty(x_0,y_0;\rho dx),L^1(x_0,y_0;\rho dx))$ if and only if both $$(*)\quad\quad\int\limits_c^{y_0}\!\rho(y)\sum\limits_{n=0}^\infty\phi_n(y)\:dy=+\infty$$ and $$(**)\quad\quad\int\limits_{x_0}^c\!\rho(x)\sum\limits_{n=0}^\infty\psi_n(x)\:dx=+\infty$$ hold, where $c\in(x_0,y_0)$, $\lambda>0$ and $$\phi_n(y)=\int\limits_{c}^{y}\!\frac{1}{\alpha(r_n)}\:dr_n\int\limits_{c}^{r_n}\!\rho(t_n)[\lambda+V(t_n)]\phi_{n-1}(t_n)\:dt_n,\quad n\geq 1,\quad\phi_0(y)=1$$ and $$\psi_n(x)=\int\limits_{x}^{c}\!\frac{1}{\alpha(r_n)}\:dr_n\int\limits_{r_n}^{c}\!\rho(t_n)[\lambda+V(t_n)]\psi_{n-1}(t_n)\:dt_n,\quad n\geq 1,\quad\psi_0(x)=1.$$
[[*Proof. *]{}]{}$\Rightarrow$ Let $({\cal A}_1^V,C_0^\infty(x_0,y_0))$ be $L^\infty(x_0,y_0;\rho dx)$-unique with respect to the topology ${\cal C}(L^\infty(x_0,y_0;\rho dx),L^1(x_0,y_0;\rho dx))$ and assume that (\*\*) does not hold (the case of (\*) is similar), that is, $$\int\limits_{x_0}^c\!\rho(x)\sum\limits_{n=0}^\infty\psi_n(x)\:dx<+\infty$$ where $c\in(x_0,y_0)$ is fixed and $\lambda>0$. We prove that there exists $u\in L^1(x_0,y_0;\rho dx)$, $u\neq 0$, such that $$\left[\lambda I-({\cal A}_1^V)^{*}\right]u=0\quad\mbox{\it in the sense of distributions}$$ which is in contradiction with the $L^\infty(x_0,y_0;\rho dx)$-uniqueness of $({\cal A}_1^V,C_0^\infty(x_0,y_0))$.\
Indeed, by Lemma \[23\] there exists a function $u$ strictly positive on $(x_0,y_0)$ such that $u^{'}$ is absolutely continuous, $u^{'}<0$ over $(x_0,y_0)$ and $$\left(\alpha u^{'}\right)^{'}=\rho(\lambda+V)u.$$ Below we shall prove that $u\in L^1(x_0,y_0;\rho dx)$.\
([**I**]{}) [*integrability near $y_0$*]{}\
For $y\in(c,y_0)$ we have $$\alpha(y)u^{'}(y)-\alpha(c)u^{'}(c)=\int\limits_c^y\!\rho(t)[\lambda+V(t)]u(t)\:dt.$$ Then $$0\geq\alpha(y)u^{'}(y)=\alpha(c)u^{'}(c)+\int\limits_c^y\!\rho(t)[\lambda+V(t)]u(t)\:dt$$ which implies that $$\lambda\int\limits_c^y\!u(t)\rho(t)\:dt\leq\int\limits_c^y\!\rho(t)[\lambda+V(t)]u(t)\:dt\leq-\alpha(c)u^{'}(c)<+\infty.$$ ([**II**]{}) [*integrability near $x_0$*]{}\
For $x\in(x_0,c)$ we have $$\alpha(c)u^{'}(c)-\alpha(x)u^{'}(x)=\int\limits_x^c\!\rho(t)[\lambda+V(t)]u(t)\:dt$$ so that $$\alpha(x)u^{'}(x)=\alpha(c)u^{'}(c)-\int\limits_x^c\!\rho(t)[\lambda+V(t)]u(t)\:dt.$$ Moreover for $c_0\in(x,c)$ we have: $$u(x)=u(c)-\int\limits_x^c\!u^{'}(r)\:dr=\\$$ $$=u(c)-\int\limits_x^c\!\left\{\frac{\alpha(c)u^{'}(c)}{\alpha(r)}-
\frac{1}{\alpha(r)}\int\limits_r^c\!\rho(t)[\lambda+V(t)]u(t)\:dt\right\}\:dr=\\$$ $$=u(c)-\alpha(c)u^{'}(c)\int\limits_x^c\!\frac{1}{\alpha(r)}\:dr+
\int\limits_x^c\frac{1}{\alpha(r)}\:dr\int\limits_r^c\!\rho(t)[\lambda+V(t)]u(t)\:dt=$$ $$=u(c)-\alpha(c)u^{'}(c)\left[\int\limits_x^{c_0}\frac{1}{\alpha(r)}\:dr+\int\limits_{c_0}^{c}\frac{1}{\alpha(r)}\:dr\right]+$$ $$+\int\limits_x^c\frac{1}{\alpha(r)}\:dr\int\limits_r^c\!\rho(t)[\lambda+V(t)]u(t)\:dt=$$ $$=u(c)-\alpha(c)u^{'}(c)\int\limits_x^{c_0}\frac{1}{\alpha(r)}\cdot
\frac{\int\limits_{c_0}^c\!\rho(t)[\lambda+V(t)]\:dt}{\int\limits_{c_0}^c\!\rho(t)[\lambda+V(t)]\:dt}\:dr-$$ $$-\alpha(c)u^{'}(c)\int\limits_{c_0}^{c}\frac{1}{\alpha(r)}\:dr+
\int\limits_x^c\frac{1}{\alpha(r)}\:dr\int\limits_r^c\!\rho(t)[\lambda+V(t)]u(t)\:dt=$$ $$=u(c)-\frac{\alpha(c)u^{'}(c)}{\int\limits_{c_0}^c\!\rho(t)[\lambda+V(t)]\:dt}\int\limits_x^{c_0}\frac{1}{\alpha(r)}\:dr
\int\limits_{c_0}^c\!\rho(t)[\lambda+V(t)]\:dt-$$ $$-\alpha(c)u^{'}(c)\int\limits_{c_0}^{c}\frac{1}{\alpha(r)}\:dr+
\int\limits_x^c\frac{1}{\alpha(r)}\:dr\int\limits_r^c\!\rho(t)[\lambda+V(t)]u(t)\:dt\leq$$ $$\leq u(c)-\frac{\alpha(c)u^{'}(c)}{\int\limits_{c_0}^c\!\rho(t)[\lambda+V(t)]\:dt}\int\limits_x^{c_0}\frac{1}{\alpha(r)}\:dr
\int\limits_{r}^c\!\rho(t)[\lambda+V(t)]\:dt-$$ $$-\alpha(c)u^{'}(c)\int\limits_{c_0}^{c}\frac{1}{\alpha(r)}\:dr+
\int\limits_x^c\frac{1}{\alpha(r)}\:dr\int\limits_r^c\!\rho(t)[\lambda+V(t)]u(t)\:dt\leq$$ $$\leq u(c)-\frac{\alpha(c)u^{'}(c)}{\int\limits_{c_0}^c\!\rho(t)[\lambda+V(t)]\:dt}\int\limits_x^{c}\frac{1}{\alpha(r)}\:dr
\int\limits_{r}^c\!\rho(t)[\lambda+V(t)]\:dt-$$ $$-\alpha(c)u^{'}(c)\int\limits_{c_0}^{c}\frac{1}{\alpha(r)}\:dr+
\int\limits_x^c\frac{1}{\alpha(r)}\:dr\int\limits_r^c\!\rho(t)[\lambda+V(t)]u(t)\:dt.$$ Thus: $$u(x)\leq u(c)-\alpha(c)u^{'}(c)\int\limits_{c_0}^{c}\frac{1}{\alpha(r)}\:dr-$$ $$-\frac{\alpha(c)u^{'}(c)}{\int\limits_{c_0}^c\!\rho(t)[\lambda+V(t)]\:dt}\int\limits_x^{c}\frac{1}{\alpha(r)}\:dr
\int\limits_{r}^c\!\rho(t)[\lambda+V(t)]\:dt+$$ $$+\int\limits_x^c\frac{1}{\alpha(r)}\:dr\int\limits_r^c\!\rho(t)[\lambda+V(t)]u(t)\:dt.$$ If we denote $$M=u(c)-\alpha(c)u^{'}(c)\int\limits_{c_0}^{c}\frac{1}{\alpha(r)}\:dr,$$ $$N=-\frac{\alpha(c)u^{'}(c)}{\int\limits_{c_0}^c\!\rho(t)[\lambda+V(t)]\:dt}$$ and $$\psi_n(x)=\int\limits_{x}^{c}\!\frac{1}{\alpha(r_n)}\:dr_n\int\limits_{r_n}^{c}\!\rho(t_n)[\lambda+V(t_n)]\psi_{n-1}(t_n)\:dt_n,\quad n\geq 1,\quad\psi_0(x)=1$$ then $$u(x)\leq M+N\psi_1(x)+\int\limits_x^c\frac{1}{\alpha(r_1)}\:dr_1\int\limits_{r_1}^c\!\rho(t_1)[\lambda+V(t_1)]u(t_1)\:dt_1.$$ But $$u(t_1)\leq M+N\psi_1(t_1)+
\int\limits_{t_1}^c\frac{1}{\alpha(r_2)}\:dr_2\int\limits_{r_2}^c\!\rho(t_2)[\lambda+V(t_2)]u(t_2)\:dt_2.$$ By iteration we obtain: $$u(x)\leq M+N\psi_1(x)+
M\int\limits_x^c\frac{1}{\alpha(r_1)}\:dr_1\int\limits_{r_1}^c\!\rho(t_1)[\lambda+V(t_1)]\:dt_1+$$ $$+N\int\limits_x^c\frac{1}{\alpha(r_1)}\:dr_1\int\limits_{r_1}^c\!\rho(t_1)[\lambda+V(t_1)]\psi_1(t_1)\:dt_1+$$ $$+\int\limits_x^c\frac{1}{\alpha(r_1)}\:dr_1\int\limits_{r_1}^c\!\rho(t_1)[\lambda+V(t_1)]\:dt_1
\int\limits_{t_1}^c\frac{1}{\alpha(r_2)}\:dr_2\int\limits_{r_2}^c\!\rho(t_2)[\lambda+V(t_2)]u(t_2)\:dt_2\leq$$ $$\leq(M+N)\psi_0(x)+(M+N)\psi_1(x)+N\psi_2(x)+$$ $$+\int\limits_x^c\frac{1}{\alpha(r_1)}\:dr_1\int\limits_{r_1}^c\!\rho(t_1)[\lambda+V(t_1)]\:dt_1
\int\limits_{t_1}^c\frac{1}{\alpha(r_2)}\:dr_2\int\limits_{r_2}^c\!\rho(t_2)[\lambda+V(t_2)]u(t_2)\:dt_2\leq\cdots$$ $$\cdots\leq(M+N)\sum\limits_{n=0}^\infty\psi_n(x).$$ Hence $$\int\limits_{x_0}^c\!u(x)\rho(x)\:dx\leq(M+N)\int\limits_{x_0}^c\!\rho(x)\sum\limits_{n=0}^\infty\psi_n(x)\:dx<+\infty.$$ This shows the $\rho$-integrability of $u$ near $x_0$.\
$\Leftarrow$ Assume that (\*) and (\*\*) hold. Suppose, on the contrary, that $({\cal A}_1^V,C_0^\infty(x_0,y_0))$ is not $L^\infty(x_0,y_0;\rho dx)$-unique. Then there exists $h\in L^1(x_0,y_0;\rho dx)$, $h\neq 0$, which satisfies $$\left(\lambda I-({\cal A}_1^V)^{*}\right)h=0$$ for some $\lambda>0$. We can assume that $h\in C^1(x_0,y_0)$ and $h>0$ on some interval $[x_1,y_1]\subset(x_0,y_0)$, where $x_1<y_1$. Notice that $h^{'}$ cannot vanish identically on $(x_1,y_1)$.\
Choose $c_1\in(x_1,y_1)$ such that $h^{'}(c_1)\neq 0$.\
([**I**]{}) [*case*]{} $h^{'}(c_1)>0$.\
By Lemma \[22\] it follows that $$h^{'}(y)>0\quad,\quad\forall y\in(c_1,y_0).$$ Hence $$h(y)\geq h(c_1)>0\quad,\quad\forall y\in[c_1,y_0).$$ Then we have: $$h(y)=h(c_1)+\int\limits_{c_1}^y\!h^{'}(r)\:dr=\\$$ $$=h(c_1)+\int\limits_{c_1}^y\!\left\{\frac{\alpha(c_1)h^{'}(c_1)}{\alpha(r)}+\frac{1}{\alpha(r)}
\int\limits_{c_1}^r\!\rho(t)[\lambda+V(t)]h(t)\:dt\right\}\:dr>\\$$ $$>h(c_1)+\int\limits_{c_1}^y\!\frac{1}{\alpha(r)}\:dr\int\limits_{c_1}^r\!\rho(t)[\lambda+V(t)]h(t)\:dt.$$ Using inductively this inequality we get $$h(y)>h(c_1)+\int\limits_{c_1}^y\!\frac{1}{\alpha(r_1)}\:dr_1\int\limits_{c_1}^{r_1}\!\rho(t_1)[\lambda+V(t_1)]h(t_1)\:dt_1>$$ $$>h(c_1)+h(c_1)\int\limits_{c_1}^y\!\frac{1}{\alpha(r_1)}\:dr_1\int\limits_{c_1}^{r_1}\!\rho(t_1)[\lambda+V(t_1)]\:dt_1+$$ $$+\int\limits_{c_1}^y\!\frac{1}{\alpha(r_1)}\:dr_1\int\limits_{c_1}^{r_1}\!\rho(t_1)[\lambda+V(t_1)]\:dt_1
\int\limits_{c_1}^{t_1}\!\frac{1}{\alpha(r_2)}\:dr_2\int\limits_{c_1}^{r_2}\!\rho(t_2)[\lambda+V(t_2)]h(t_2)\:dt_2>\cdots$$ $$\cdots>h(c_1)\sum\limits_{n=0}^{\infty}\phi_n(y).$$ Consequently $$\int\limits_{x_0}^{y_0}\!h(y)\rho(y)\:dy\geq\int\limits_{c_1}^{y_0}\!h(y)\rho(y)\:dy >h(c_1)\int\limits_{c_1}^{y_0}\!\rho(y)\sum\limits_{n=0}^{\infty}\phi_n(y)\:dy=+\infty$$ which is a contradiction with the assumption $h\in L^1(x_0,y_0;\rho dx)$.\
([**II**]{}) [*case*]{} $h^{'}(c_1)<0$.\
We prove in a similar way that $$\int\limits_{x_0}^{y_0}\!h(x)\rho(x)\:dx=+\infty.\quad \square$$ In particular, for $V=0$, the one-dimensional operator $${\cal A}_1f=a(x)f^{''}+b(x)f^{'}$$ is $L^\infty(x_0,y_0;\rho dx)$-unique with respect to the topology ${\cal C}(L^\infty(x_0,y_0;\rho dx),L^1(x_0,y_0;\rho dx))$ if and only if both $$(\circ)\quad\quad\int\limits_c^{y_0}\!\rho(y)\:dy\int\limits_{c}^{y}\!\frac{1}{\alpha(r)}\:dr
\int\limits_{c}^{r}\!\rho(t)\:dt=+\infty$$ and $$(\circ\circ)\quad\quad\int\limits_{x_0}^c\!\rho(x)\:dx\int\limits_{x}^{c}\!\frac{1}{\alpha(r)}\:dr
\int\limits_{r}^{c}\!\rho(t)\:dt=+\infty$$ hold. In the terminology of Feller this means that $y_0$ and $x_0$, respectively, are [*not entrance boundaries*]{} (see [@wu-zhang'06 Theorem 4.1, p.590]).
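To see how these conditions work in practice, here is an elementary illustration (it is only an example): take $(x_0,y_0)=(-\infty,\infty)$, $a=\frac{1}{2}$, $b=0$ and $c=0$, so that $\rho=2$ and $\alpha=1$. Then $$\int\limits_0^{y_0}\!\rho(y)\:dy\int\limits_{0}^{y}\!\frac{1}{\alpha(r)}\:dr\int\limits_{0}^{r}\!\rho(t)\:dt=\int\limits_0^{\infty}\!2y^2\:dy=+\infty,$$ and the same computation holds at $x_0=-\infty$, so $(\circ)$ and $(\circ\circ)$ are satisfied and $\frac{1}{2}f^{''}$ is $L^\infty(\R;\rho dx)$-unique, in agreement with the Fourier transform argument sketched after the statement of Theorem \[11\]. On the other hand, for a strongly inward drift such as $b(x)=-x^3$ (still with $a=\frac{1}{2}$), one has $\rho(x)=2e^{-x^4/2}$ and $\alpha(x)=e^{-x^4/2}$, and a Laplace-type estimate shows that the triple integral in $(\circ)$ is finite; in this case $+\infty$ is an entrance boundary and $L^\infty$-uniqueness fails.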
The multidimensional case
-------------------------
In this subsection we consider the multidimensional generalized Schrödinger operator $${\cal A}^Vf:=\frac{1}{2}\Delta f+b\cdot\nabla f-Vf\quad,\quad\forall f\in C_0^\infty(\R^d)$$ where $d\geq 2$ and $V$ is non-negative. Denote the Euclidean norm in $\R^d$ by $|x|=\sqrt{x\cdot x}$. If there is some measurable locally bounded function $$\beta:\R^{+}\rightarrow\R$$ such that $$b(x)\cdot\frac{x}{|x|}\geq\beta(|x|)\quad,\quad\forall x\in\R^d,\:x\neq 0,$$ then for any initial point $x\neq 0$ we have $$|X_t|-|x|\geq\int\limits_0^t\!\left[\beta(|X_s|)+\frac{d-1}{2|X_s|}\right]\:ds+\mbox{\it a real Brownian motion},\quad\forall t<\tau_e.$$ In other words, $|X_t|$ goes to infinity at least as fast as the one-dimensional diffusion generated by $${\cal A}_1=\frac{1}{2}\frac{d^2}{dr^2}+\left[\beta(r)+\frac{d-1}{2r}\right]\frac{d}{dr}.$$ This is standard in probability (see [Ikeda]{}, [Watanabe]{} [@ikeda-watanabe'81]). Remark that for the one-dimensional operator $${\cal A}_1^V=\frac{1}{2}\frac{d^2}{dr^2}+\left[\beta(r)+\frac{d-1}{2r}\right]\frac{d}{dr}-V(r)$$ the speed measure of Feller is given by $$\rho(r)=2e^{\int\limits_1^r\!2\left[\beta(t)+\frac{d-1}{2t}\right]\:dt}=
2e^{\int\limits_1^r\!2\beta(t)\:dt}e^{\int\limits_1^r\!\frac{d-1}{t}\:dt}=
2r^{d-1}e^{\int\limits_1^r\!2\beta(t)\:dt}$$ and the scale function of Feller is $$\alpha(r)=r^{d-1}e^{\int\limits_1^r\!2\beta(t)\:dt}.$$ Now we can formulate the main result of this subsection:
\[32\] Suppose that there is some measurable locally bounded function $$\beta:\R^{+}\rightarrow\R$$ such that $$b(x)\cdot\frac{x}{|x|}\geq\beta(|x|)\quad,\quad\forall x\in\R^d,\:x\neq 0.$$ If the one-dimensional diffusion operator $${\cal A}_1^V=\frac{1}{2}\frac{d^2}{dr^2}+\left[\beta(r)+\frac{d-1}{2r}\right]\frac{d}{dr}-V(r)$$ is $L^\infty(0,\infty;\rho dx)$-unique with respect to the topology ${\cal C}(L^\infty(0,\infty;\rho dx),L^1(0,\infty;\rho dx))$, then the generalized Schrödinger operator $\left({\cal A}^V,C_0^\infty(\R^d)\right)$ is $L^\infty(\R^d,dx)$-unique with respect to the topology ${\cal C}(L^\infty,L^1)$.
[[*Proof. *]{}]{}By Theorem \[11\], for the $L^\infty(\R^d,dx)$-uniqueness of $\left({\cal A}^V,C_0^\infty(\R^d)\right)$ it is enough to show that if for some $\lambda>0$, $u\in L^1(\R^d,dx)$ satisfies $$\left(({\cal A}^V)^{*}-\lambda I\right)u=0\quad\mbox{\it in the sense of distributions}$$ then $u=0$.\
Let $\lambda>0$ and $u\in L^1(\R^d,dx)$ such that $$\left\langle u,\left({\cal A}^V-\lambda I\right)f\right\rangle=0\quad,\quad\forall f\in C_0^\infty(\R^d)$$ where $$\langle f,g\rangle:=\int\limits_{\R^d}\!fg\:dx.$$ The above equality becomes $$\frac{1}{2}\int\limits_{\R^d}\!u(x)\Delta f(x)\:dx+\int\limits_{\R^d}\!u(x)b\cdot\nabla f(x)\:dx=\int\limits_{\R^d}\!u(x)(\lambda+V)f(x)\:dx\quad,\quad\forall f\in C_0^\infty(\R^d).$$ By the elliptic regularity result in [@eberle'00 Lemma 2, p.341], $u\in L_{loc}^\infty(\R^d)$ and $\nabla u\in L_{loc}^d(\R^d)\subset L_{loc}^2(\R^d)$. By the fact that $C_0^\infty(\R^d)$ is dense in $$\left\{\left.f\in L^2\:\right|\:\nabla f\in L^2\mbox{ and the support of }f\mbox{ is compact}\right\}$$ an integration by parts yields $$-\frac{1}{2}\int\limits_{\R^d}\!\nabla u(x)\cdot\nabla f(x)\:dx+\int\limits_{\R^d}\!u(x)b\cdot\nabla f(x)\:dx=\int\limits_{\R^d}\!u(x)(\lambda+V)f(x)\:dx$$ for all $f\in H^{1,2}(\R^d)$ with compact support. Now one can follow [Eberle]{} [@eberle'00 proof of Theorem 1, p. 335] to show the following inequality of Kato's type $$-\frac{1}{2}\int\limits_{\R^d}\!\nabla|u(x)|\cdot\nabla f(x)\:dx+\int\limits_{\R^d}\!|u(x)|b\cdot\nabla f(x)\:dx\geq\int\limits_{\R^d}\!|u(x)|(\lambda+V)f(x)\:dx$$ for all nonnegative $f\in H^{1,2}(\R^d)$ with compact support.\
Let $$G(r)=\int\limits_{B(r)}\!|u(x)|\:dx$$ where $B(r)=\left\{\left.x\in \R^d\:\right|\:|x|\leq r\right\}$. $G$ is absolutely continuous and $$G^{'}(r)=\int\limits_{\partial B(r)}\!|u(x)|\:d_\sigma x\quad,\quad\mbox{dr-a.e.}$$ where $d_\sigma r$ is the surface measure on the sphere $\partial B(r)$ (the boundary of $B(r)$). Now for every $0<r_1<r_2$ we consider $$f=\min\left\{r_2-r_1,(r_2-|x|)^{+}\right\}$$ and $$\gamma(x)=\frac{x}{|x|}=\nabla |x|\quad.$$ Then we have $$-\frac{1}{2}\int\limits_{B(r_2)-B(r_1)}\!\nabla|u(x)|\cdot\nabla(r_2-|x|)\:dx+
\int\limits_{B(r_2)-B(r_1)}\!|u(x)|b(x)\cdot\nabla(r_2-|x|)\:dx\geq$$ $$\geq\int\limits_{B(r_2)-B(r_1)}\!|u(x)|(\lambda+V)(r_2-|x|)\:dx$$ from where it follows that $$\frac{1}{2}\int\limits_{B(r_2)-B(r_1)}\!\nabla|u(x)|\cdot\gamma(x)\:dx-
\int\limits_{B(r_2)-B(r_1)}\!|u(x)|b(x)\cdot\gamma(x)\:dx\geq$$ $$\geq\int\limits_{B(r_2)-B(r_1)}\!|u(x)|(\lambda+V)(r_2-|x|)\:dx\quad.$$ Since $$\nabla|u|\gamma=div(|u|\gamma)-|u|div(\gamma)=div(|u|\gamma)-|u|\frac{d-1}{|x|},$$ by the Gauss-Green formula we have $$\int\limits_{B(r_2)-B(r_1)}\!\nabla|u(x)|\cdot\gamma(x)\:dx=G^{'}(r_2)-G^{'}(r_1)-
(d-1)\int\limits_{r_1}^{r_2}\!\frac{1}{r}G^{'}(r)\:dr$$ for $dr_1\otimes dr_2$-a.e. $0<r_1<r_2$.\
On the other hand, using the hypothesis $$b(x)\cdot\gamma(x)=b(x)\cdot\frac{x}{|x|}\geq\beta(|x|)$$ and Fubini's theorem, we get $$-\int\limits_{B(r_2)-B(r_1)}\!|u(x)|b(x)\cdot\gamma(x)\:dx\leq
-\int\limits_{r_1}^{r_2}\!G^{'}(r)\beta(r)\:dr$$ and $$\int\limits_{B(r_2)-B(r_1)}\!|u(x)|(\lambda+V)(r_2-|x|)\:dx=
\int\limits_{r_1}^{r_2}\![\lambda+V(r)](r_2-r)G^{'}(r)\:dr=$$ $$=\int\limits_{r_1}^{r_2}\![\lambda+V(r)]G^{'}(r)\int\limits_{r}^{r_2}\!dt\:dr=
\int\limits_{r_1}^{r_2}\!dr\int\limits_{r_1}^{r}\![\lambda+V(t)]G^{'}(t)\:dt.$$ Consequently $$\frac{1}{2}\left[G^{'}(r_2)-G^{'}(r_1)\right]-
\int\limits_{r_1}^{r_2}\!\left[\beta(r)+\frac{d-1}{2r}\right]G^{'}(r)\:dr\geq$$ $$\geq\int\limits_{r_1}^{r_2}\!dr\int\limits_{r_1}^{r}\![\lambda+V(t)]G^{'}(t)\:dt$$ for $dr_1\otimes dr_2$-a.e. $0<r_1<r_2$.\
Consider the differential expression $${\cal A}_1^{-}:=\frac{1}{2}G^{''}(r)-\left[\beta(r)+\frac{d-1}{2r}\right]G^{'}(r)$$ in the sense of distributions on $(0,\infty)$. Notice that the sign of $\beta(r)+\frac{d-1}{2r}$ in ${\cal A}^{-}_1$ is negative, opposite to the sign in the operator ${\cal A}^V_1$. Since here $\alpha(r)=\rho(r)/2$, we can write ${\cal A}^{-}_1$ in the Feller form $${\cal A}^{-}_1=\frac{1}{2}G^{''}-\left[\beta(r)+\frac{d-1}{2r}\right]G^{'}=\frac{1}{2}G^{''}-\frac{\alpha^{'}}{\rho}G^{'}=$$ $$=\frac{1}{2}G^{''}-\frac{\rho^{'}}{2\rho}G^{'}=\frac{\rho}{2}\frac{\rho G^{''}-\rho^{'}G^{'}}{\rho^2}=\alpha\left(\frac{G^{'}}{\rho}\right)^{'}.$$ Then we have $$\left(\frac{G^{'}}{\rho}\right)^{'}\geq\frac{1}{\alpha}\int\limits_{r_1}^{r_2}\![\lambda+V(t)]G^{'}(t)\:dt$$ in the sense of distributions on $(0,\infty)$.\
Assume now, on the contrary, that $u\neq 0$. Then there exists $c\in(r_1,r_2)$ such that $G^{'}(c)>0$. Then for $dy$-a.e. $y>c$ we have $$\frac{G^{'}}{\rho}(y)\geq\frac{G^{'}}{\rho}(c)+\int\limits_{c}^{y}\!\frac{1}{\alpha(r)}\:dr
\int\limits_{c}^{r}\![\lambda+V(t)]G^{'}(t)\:dt=$$ $$=\frac{G^{'}}{\rho}(c)+\int\limits_{c}^{y}\!\frac{1}{\alpha(r)}\:dr
\int\limits_{c}^{r}\!\rho(t)[\lambda+V(t)]\frac{G^{'}}{\rho}(t)\:dt.$$ Using the above inequality inductively we get $$\frac{G^{'}}{\rho}(y)\geq\frac{G^{'}}{\rho}(c)\sum\limits_{n=0}^{\infty}\phi_n(y)$$ where $\phi_0(y)=1$ and for any $n\in\N^{*}$, $$\phi_n(y)=\int\limits_{c}^{y}\!\frac{1}{\alpha(r_n)}\:dr_n\int\limits_{c}^{r_n}\!\rho(t_n)[\lambda+V(t_n)]\phi_{n-1}(t_n)\:dt_n.$$ By Theorem \[31\] it follows that $$\int\limits_{\R^d}\!|u(x)|\:dx=G(\infty)\geq\frac{G^{'}}{\rho}(c)\int\limits_{c}^{\infty}\!\rho(y)\sum\limits_{n=0}^{\infty}\phi_n(y)\:dy=+\infty$$ because ${\cal A}^V_1$ is supposed to be $L^\infty(0,\infty;\rho dx)$-unique. This is in contradiction with the assumption that $u\in L^1(\R^d,dx)$. [$\square$]{}\
Remark that if $\cal A$ is a second order elliptic differential operator with ${\cal D}=C_0^\infty(\R^d)$, then the weak solutions of the dual Cauchy problem in Theorem \[11\](v) correspond exactly to weak solutions in the distributional sense of the theory of partial differential equations, and the dual Cauchy problem becomes the Fokker-Planck equation for heat diffusion. Then we can formulate
Under the hypotheses of Theorem \[32\], for any $f\in L^1(\R^d,dx)$ the Fokker-Planck equation $$\left\lbrace\begin{array}{l}
\partial_tu(t,x)=\frac{1}{2}\Delta u(t,x)-div\left(bu(t,x)\right)-Vu(t,x)\\
u(0,x)=f(x)
\end{array}
\right.$$ has an $L^1(\R^d,dx)$-unique weak solution.
[[*Proof. *]{}]{}The assertion follows from Theorem \[11\] and Theorem \[32\]. [$\square$]{}
[99]{}
Schrödinger Operators (H. Holden et A. Jensen, Eds.), Lect. Notes Math., Springer-Verlag, New York-Berlin, 1989.
One Parameter Semigroups of Positive Operators (R. Nagel, Eds.), Lect. Notes in Math., 1184, Springer, Berlin, 1986.
J. Operators Theory, 55(2006), 185-211.
Probabilistic Methods in Math. Phys. (K. Itô and N. Ikeda, Eds.), Proc. of the Taniguchi International Symp., Katata and Kyoto, 1985, 55-82.
Semigroups Forum, 49(1994), 349-367.
Academic Press, London, New York, Toronto, Sydney, San Francisco, 1980.
Ann. Scuola Norm. Sup. Pisa, 11(1984), 327-341.
Grundlehren der mathematischen Wissenschaften 121,122, Springer-Verlag, Berlin-Göttingen-Heidelberg, 1965.
J. Funct. Anal., 173(2000), 328-342.
Ann. Math., 55(1952), 468-519.
Ann. Math., 57(1953), 287-308.
Oxford University Press, 1985.
North-Holland, Amsterdam, Kodansha, Tokyo, 1981.
J. Funct. Anal., 66(1986), 347-364.
J. Funct. Anal., 73(1987), 195-215.
Springer Verlag, Berlin, Heidelberg, New York, 1984.
J. Math. Soc. Japan, 16(1964), 230-262.
Doctor-thesis, Blaise Pascal University of Clermont-Ferrand, 2007.
In preparation
J. Diff. Geom., 20(1984), 447-457.
J. Funct. Anal., 162(1999), 1-13.
Lect. Notes in Math., 1123(1984), 12-26.
Springer Verlag, New York, Berlin, 1983.
Trans. Amer. Math. Soc., 74(1953), 199-221.
Lect. Notes in Math., 1715(1998), 65-116.
Springer-Verlag, Berlin-Heidelberg-New York-Tokyo, 1971.
J. Funct. Anal., 61(1985), 98-115.
J. Funct. Anal., 153(1998), 276-319.
Probab. Theory Relat. Fields, 114(1999), 549-585.
J. Funct. Anal., 241(2006), 557-610.
Springer Verlag, New York, 1971.
Engineering Faculty of Hunedoara,\
“Politehnica” University of Timişoara,\
331128 Hunedoara, Romania\
and\
Institut Camille Jordan (CNRS-UMR 5208),\
Université Claude Bernard Lyon1,\
69622 Villeurbanne, France.\
e-mail: [[email protected]]{}
[^1]: [**Key Words:**]{} uniqueness of $C_0$-semigroups; $L^\infty$-uniqueness of generalized Schrödinger operator; $L^1$-uniqueness of weak solution for the Fokker-Planck equation.\
[**2000 AMS Subject Classification:**]{} 47D03, 47A55, 35J10, 60J60, 82C31
There has been interest recently in the ground state of a disordered two-dimensional (2D) carrier system. Twenty years ago, scaling arguments and supporting experimental data indicated that at temperature $T$ = 0 K such a system must be insulating [@Abrahamsscaling; @BishopTsui]. However, prompted by new data in Si 2D electrons [@KravMIminus] and subsequently multiple different 2D carrier systems [@MIpapers], revealing a metallic-like behavior, this question is being revisited both experimentally and theoretically [@MItheory; @Pudalov97; @DasSarma99; @DasSarmaC99].
One specific area of interest has been the spin degree of freedom [@Pudalov97; @Murzin98; @Papadakis99; @Papadakis99c; @Yaish99]. Measurements have shown that increasing the spin-orbit induced zero-magnetic-field spin-splitting leads to a more pronounced metallic behavior in GaAs 2D holes [@Papadakis99; @Papadakis99c]. References [@Murzin98; @Yaish99] also report that the metallic behavior in this system is related to transport by two spin-subbands. Experiments with an in-plane magnetic field ($B$) similarly suggest that the effects of spin are important [@Pudalov97b; @Simonian97; @Mertes99; @Okamoto99; @Yoon99b]. On the other hand, for 2D systems with finite layer thickness recent calculations predict an anisotropic positive magnetoresistance (MR) caused by the coupling of the orbital motion to $B$ [@DasSarmaC99; @Chen-more]. The MR is calculated to be larger for $B \perp I$ than for $B \parallel I$, where $I$ is the current in the sample. Motivated by this prediction, we measure the MR of a high-mobility 2D hole system (2DHS) in a GaAs (311)A quantum well, in a density range such that the $B = 0$ sample resistivity ($\rho$) shows metallic $T$-dependence. We apply an in-plane $B$ parallel to the $[\bar233]$ and $[01\bar1]$ crystal axes, and measure the MR with $I \parallel B$ and $I \perp B$ for each case. Some characteristics of our data, such as an overall positive MR, are consistent with the predictions of Ref. [@DasSarmaC99]. However, we observe a striking dependence of the MR, and in particular of the [*onset of insulating behavior*]{}, on the orientation of $B$ relative to the [*crystal axes*]{}. We show that this dependence is linked to the anisotropy of the 2DHS band structure, and a re-population of the spin-subbands with increasing $B$.
The samples are Si-modulation doped GaAs quantum wells grown on (311)A GaAs substrates. Even at $B = 0$, these samples exhibit a mobility anisotropy believed to be due to an anisotropic surface morphology (see [@Heremans94; @Wassermeier95] and references therein). The interfaces between the GaAs quantum well and the AlGaAs barriers are believed to be corrugated, with ridges along the $[\bar233]$ direction which reduce the mobility for $I
\parallel [01\bar1]$. While the metallic behavior has been studied in this system extensively, little attention has been paid so far to the differences between transport along $[01\bar1]$ and $[\bar233]$. Our sample is patterned with an L-shaped Hall bar aligned along $[01\bar1]$ and $[\bar233]$ to allow simultaneous measurement of the resistivities along the two directions. The sample has metal front and back gates to control both the 2DHS density ($p$) and the perpendicular electric field ($E_{\perp}$) applied to the well [@Papadakis99; @Papadakis99c]. Measurements are done in dilution and pumped $^3$He refrigerators with $B$ up to 16 T. In the $^3$He refrigerator, the sample is mounted on a single-axis tilting stage that can be rotated [*in-situ*]{} to change the plane of the 2DHS from perpendicular to parallel to the applied $B$.
Figure \[aniso\](a) demonstrates the high quality of the 2DHS in our sample.
The data of Fig. \[aniso\] also reveal the mobility anisotropy observed in this sample: at 30 mK and $p = 6.3 \times 10^{10}$ cm$^{-2}$, we have $\mu_{[01\bar1]} = 425,000$ cm$^2$/Vs and $\mu_{[\bar233]} = 530,000$ cm$^2$/Vs. As illustrated in Fig. \[aniso\](b), the $T$-dependence of $\rho$ is also significantly different along the two directions in the density range where the behavior is metallic. The $[01\bar1]$ direction typically shows a larger fractional change in $\rho$, ${\rho}(T)/{\rho}(30$ mK), than the $[\bar233]$ direction, as $T$ is increased [@Papadakis99; @Papadakis99c]. This suggests that the scattering mechanisms associated with the two mobility directions have different $T$ dependencies, and that the orientation of $I$ relative to the crystal axes is an important parameter in understanding the data.
Figure \[intro\] shows $\rho$ at $T = 0.3$ K as a function of $B$ applied in the plane of the 2DHS. The left (right) column shows data for $I \parallel [01\bar1]$ ($I
\parallel [\bar233]$), with the in-plane $B$ both parallel and perpendicular to $I$. To obtain these data, on separate cooldowns the sample was mounted with the $[01\bar1]$ or the $[\bar233]$ crystal axis parallel to the tilt axis. The density $p$ was deduced from the Hall coefficient by measuring the transverse MR in a $B$ perpendicular to the plane of the 2DHS. The stage was then tilted to make the 2DHS plane parallel to the applied $B$, and the MR was measured. The front and back gates were used to change $p$. For $I \parallel [01\bar1]$, $\rho$ is always larger when $I \perp B$. However, the $I \parallel [\bar233]$ data are qualitatively different: at low $B$ and $p$, the $I \perp B$ traces have lower $\rho$, and cross the $I \parallel B$ traces at higher $B$.
In Fig. \[intro\], there are pronounced qualitative similarities between the dashed traces on the right and the solid traces on the left, and vice versa. To highlight these similarities, in Fig. \[density\] we show the fractional change in $\rho$, ${\rho}(B)/{\rho}(B = 0)$, for $B$ along the $[\bar233]$ (left column) and $[01\bar1]$ (right column) directions. Plotting this way, a striking similarity is evident in the qualitative features of the traces with the same $B$ orientation relative to the crystal axes, even though the $I$ orientations are different. All traces start with a small slope and curve upwards. Then there is an inflection point followed by a reduction in slope, followed by another inflection point beyond which the traces curve upwards again. To highlight this behavior, the arrows in Fig. \[density\] are placed between the two inflection points, at a $B$ we will refer to as $B^*$. Surprisingly, for each $p$, $B^*$ for the $B \parallel [\bar233]$ traces is about 4 T smaller than for the $B \parallel [01\bar1]$ traces, regardless of the $I$ direction. Also, $B^*$ becomes smaller as $p$ is reduced. Figure \[density\] reveals that the relative orientations of $B$ and the crystal axes play an important role in the MR features.
The existence of the MR features around $B^*$ is intriguing. Similar, though sharper, features have been observed in in-plane $B$ measurements in systems with multiple confinement subbands when a subband is de-populated [@Jo93]. We propose that the MR features observed in our data are related to the changes in the relative populations of the spin-subbands and the resulting changes in subband mobility and inter-subband scattering as the in-plane $B$ is increased. To test this hypothesis, we have done self-consistent subband calculations with no adjustable parameters [@KPtheory] that give us spin-subband densities as a function of in-plane $B$ (Fig. \[Sim\]). Figure \[Sim\] shows that the upper spin-subband de-populates more quickly for $B \parallel
[\bar233]$ than for $B
\parallel [01\bar1]$, which can be traced back to the strong anisotropy of the 2DHS band structure in our system [@SpinSp]. This is consistent with the experimental observation that $B^*$ is smaller for $B \parallel [\bar233]$. Also, the $B$ at which the subband completely de-populates changes with $p$ in much the same way as $B^*$ does. However, $B^*$ is significantly smaller than the field at which the calculations show the upper spin-subband to reach zero density. We believe that the spin-subband de-population occurs at a lower $B$ than the band calculations predict because hole-hole interaction enhances the effective mass $m^*$ and effective $g$-factor $g^*$ in a dilute 2D system like ours. The average hole spacing in units of effective Bohr radius, $r_s$, for our experiment ranges from $r_s = 6.9$ to 15.6 for $p = 6.6 \times 10^{10}$ cm$^{-2}$ to $1.3 \times
10^{10}$ cm$^{-2}$ [@Rs]. Okamoto [*et al.*]{} [@Okamoto99] conclude that for Si 2D electrons with $r_s$ in this range, $g^*m^*$ is enhanced by a factor of 2.7 to 5.5. Assuming similar enhancement in our samples, we would expect a reduction by the same factor of the $B$ required to de-populate a subband. Using these numbers to adjust the de-population field given by the band calculations would put it near $B^*$, strongly suggesting that the observed MR features are due to spin-subband de-population.
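As a rough numerical check (ours, not from the paper), $r_s = 1/(a_B^*\sqrt{\pi p})$ with the unenhanced $m^* = 0.2m_e$ of Ref. [@Rs] and an assumed GaAs dielectric constant of about 12.9 approximately reproduces the quoted range:

```python
# Rough estimate of r_s = 1/(a_B* sqrt(pi*p)) for a 2D system, using the
# unenhanced m* = 0.2 m_e quoted in the text and an assumed GaAs dielectric
# constant kappa ~ 12.9 (the value of kappa is our assumption).
import numpy as np

hbar = 1.054571817e-34   # J s
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m
m_e  = 9.1093837015e-31  # kg

kappa  = 12.9
m_star = 0.2 * m_e

a_B = 4 * np.pi * eps0 * kappa * hbar**2 / (m_star * e**2)   # effective Bohr radius (m)

for p_cm2 in (6.6e10, 1.3e10):                # densities quoted in the text (cm^-2)
    p = p_cm2 * 1e4                           # convert to m^-2
    r_s = 1.0 / (a_B * np.sqrt(np.pi * p))
    print(f"p = {p_cm2:.1e} cm^-2  ->  r_s ~ {r_s:.1f}")
# Output: roughly 6.4 and 14.5, close to the quoted 6.9 and 15.6; the residual
# difference mainly reflects the assumed dielectric constant and rounding.
```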
Further evidence linking the MR features to spin-subband de-population is provided by our data at constant $p$ with changing $E_{\perp}$. The degree of asymmetry in the potential that confines the carriers to 2D controls the spin-splitting, and plays an important role in the magnitude of the $B = 0$ temperature-dependence of the resistivity [@Papadakis99; @Papadakis99c]. For the data in Figs. \[intro\] and \[density\], $E_{\perp}$ is kept within 1 kV/cm of 5 kV/cm. This $E_{\perp}$ is included in the calculations plotted in Fig. \[Sim\]. Measurements at a constant $p = 3.9 \times 10^{10}$ cm$^{-2}$ as $E_{\perp}$ is increased from 4.5 kV/cm to 12.5 kV/cm reveal that $B^*$ shifts to higher $B$ by about 2 T. This observation is in agreement with the spin-subband de-population calculations done at fixed $p$ for varying $E_{\perp}$.
At higher in-plane $B$, beyond the MR features around $B^*$, the data in Fig. \[density\] are qualitatively similar. The traces for $B \perp I$ have greater slope than the corresponding traces with $B
\parallel I$, regardless of crystal axes. In this regime the magnetic confinement can become comparable to the electric confinement, and the effects due to the finite-thickness of the 2DHS may be dominant. Indeed, Ref. [@DasSarmaC99] predicts that MR with in-plane $B$ should be significantly larger for $B
\perp I$ than for $B \parallel I$, in agreement with our highest $B$ data. The data in which $E_{\perp}$ is changed at constant $p$ support this interpretation as well. As $E_{\perp}$ is increased, the confining potential becomes narrower, and the thickness of the 2DHS decreases. This should increase the $B$ required for finite-thickness effects to become important, and the data show that the MR anisotropy at $B = 16$ T is smaller for larger $E_{\perp}$.
We now turn to the $T$-dependence of MR to investigate the metallic phase in our 2D system. Figure \[temperature\] shows the $T$-dependence of MR at $p = 3.9 \times 10^{10}$ cm$^{-2}$, for the four measured relative orientations of $B$, $I$, and crystal axes. For each panel, the resistivity is nearly $T$-independent at a magnetic field $B_T$, which occurs near the trace’s first inflection point. This is consistent with the data of Ref. [@Yoon99b]. For $B < B_T$, the data show metallic behavior, and for $B > B_T$, insulating behavior. $B_T$ is different in each panel and, similar to $B^*$, it changes much more for a rotation of the crystal axes relative to $B$ than it does for a rotation of $I$ relative to $B$. Our experiments indicate that $B^*$ and $B_T$ depend very similarly on the parameters of our system ($p$, $E_{\perp}$, direction of $B$). Our observation, which is in agreement with the in-plane MR data of Ref. [@Okamoto99], strongly suggests that the metallic behavior is linked to the presence of two populated spin-subbands [@Papadakis99; @Papadakis99c; @Murzin98; @Yaish99].
In the data of Ref. [@Okamoto99], and likely in ours, the spin-subband de-population is linked to $B^*$ which is somewhat larger than $B_T$, so it appears that the metallic behavior changes to insulating before the upper spin-subband is fully de-populated. This may be because the low-density spin-subband stops playing a role in transport when its mobility $\mu$ becomes sufficiently low, before it is fully de-populated.
Finally, we note that Das Sarma and Hwang have recently reported calculations aiming to explain the $T$-dependence of the resistivity [@DasSarma99] and the in-plane MR [@DasSarmaC99] of 2D systems that exhibit metallic behavior at finite $T$. Their calculations, which include only charged impurity scattering and the orbital motion, qualitatively reproduce some of the experimental data. We wish to point out that our results reveal the importance of the spin degree of freedom, and suggest that for an understanding of the experimental data it is important to also consider a scattering mechanism involving the spin-subbands, perhaps intersubband scattering [@Murzin98; @Yaish99]. Also important for (311)A GaAs 2D holes is the inclusion of interface roughness scattering: both the $T$-dependence of $\rho$ at $B = 0$ (Fig. \[aniso\]b) and the in-plane MR data (Figs. \[intro\] and \[density\]) depend on the direction of the current in the crystal.
To summarize, our data reveal a surprising anisotropy of the in-plane magnetoresistance and its temperature dependence for GaAs (311)A 2D holes. The results show that the rate of the upper spin-subband’s de-population with in-plane $B$ critically depends on the relative orientation of $B$ and the crystal axes. This points to the anisotropic nature of the $g$-factor and the spin-subband structure of GaAs (311)A 2D holes. Furthermore, we observe that the $B$ = 0 metallic behavior turns insulating near the $B$ at which the upper spin-subband de-populates. This observation, in agreement with the data for 2D electrons in Si [@Okamoto99], suggests that two spin-subbands are necessary for the expression of metallic behavior. These results also complement those of previous experiments [@Papadakis99; @Papadakis99c; @Murzin98; @Yaish99] which revealed that the presence of two spin-subbands with different populations appears to be linked to the metallic behavior.
This work was supported by the NSF and ARO. We thank M. Hofmann for stimulating discussions.
[10]{}
E. Abrahams, P. W. Anderson, D. C. Licciardello, and T. V. Ramakrishnan, , 673 (1979).
D. J. Bishop, D. C. Tsui, and R. C. Dynes, , 5737 (1980).
S. V. Kravchenko, G. V. Kravchenko, and J. E. Furneaux, Phys. Rev. B [**50**]{}, 8039 (1994); S. V. Kravchenko, D. Simonian, M. P. Sarachik, W. Mason, and J. E. Furneaux, Phys. Rev. Lett. [**77**]{}, 4938 (1996).
D. Popović, A. B. Fowler, and S. Washburn, [**79**]{}, 1543 (1997); P. T. Coleridge, R. L. Williams, Y. Feng, and P. Zawadzki, [**56**]{}, R12764 (1997); J. Lam, M. D’Iorio, D. Brown, and H. Lafontaine, [ **56**]{}, R12741 (1997); Y. Hanein [*et al.*]{}, [**80**]{}, 1288 (1998); M. Y. Simmons [*et al.*]{}, [**80**]{}, 1292 (1998); S. J. Papadakis and M. Shayegan, [**57**]{}, R15068 (1998).
V. Dobrosavljević, E. Abrahams, E. Miranda, and S. Chakravarty, , 455 (1997); B. L. Altshuler and D. Maslov, [**82**]{}, 145 (1999).
V. M. Pudalov, Pis’ma Zh. Éksp. Teor. Fiz. [**66**]{}, 168 (1997) \[JETP Lett. [**66**]{}, 175 (1997)\].
S. [Das Sarma]{} and E. H. Hwang, , 164 (1999).
S. [Das Sarma]{} and E. H. Hwang, cond-mat/9909452.
S. S. Murzin, S. I. Dorozhkin, G. Landwehr, and A. C. Gossard, JETP Lett. [ **67**]{}, 113 (1998).
S. J. Papadakis [*et al.*]{}, Science [**283**]{}, 2056 (1999).
S. J. Papadakis [*et al.*]{}, Physica E [**6**]{}, 284 (2000).
Y. Yaish [*et al.*]{}, cond-mat/9904324.
V. M. Pudalov, G. Brunthaler, A. Prinz, and G. Bauer, Pis’ma Zh. Éksp. Teor. Fiz. [**65**]{}, 168 (1997) \[JETP Lett. [**65**]{}, 932 (1997)\].
D. Simonian, S. V. Kravchenko, M. P. Sarachik, and V. M. Pudalov, , 2304 (1997).
K. M. Mertes [*et al.*]{}, , R5093 (1999).
T. Okamoto, K. Hosoya, S. Kawaji, and A. Yagi, , 3875 (1999).
J. Yoon [*et al.*]{}, cond-mat/9907128.
Another theoretical study predicts a qualitatively similar MR anisotropy due to spin-orbit coupling \[G. H. Chen, M. E. Raikh, Y. S. Wu, cond-mat/9904451\].
J. J. Heremans, M. B. Santos, K. Hirakawa, and M. Shayegan, J. Appl. Phys. [ **76**]{}, 1980 (1994).
M. Wassermeier [*et al.*]{}, , 14721 (1995).
J. Jo [*et al.*]{}, , 4056 (1993).
R. Winkler and U. Rössler, [**48**]{}, 8918 (1993); G. Goldoni and A. Fasolino, [**48**]{}, 4948 (1993).
For $B = 0$ the upper spin-subband density is smaller than half the $p$ because of the inversion-asymmetry-induced spin splitting of the subband states \[9\].
To calculate $r_s$ we use an unenhanced $m^* = 0.2m_e$ \[B. E. Cole [*et al.*]{}, , 2503 (1997)\]. In our calculations we obtain a density-of-states $m^*$ at $B = 0$ with approximately the same value.
---
abstract: 'We show that short-range interactions between the fundamental particles in the universe can drive a period of accelerated expansion. This description fits the early universe. In the present-day universe, if one postulates short-range interactions or a sort of “shielded gravity”, the picture may repeat.'
address: 'Dpto. de Física Teórica, Universidad del País Vasco, Apdo. 644, 48080, Bilbao, Spain.'
author:
- 'Alberto Díez-Tejedor[^1] and Alexander Feinstein[^2]'
title: 'Accelerating Universes from Short-Range Interactions.'
---
There is no doubt that the inflationary paradigm has played a central role during the last several decades in our understanding of the early universe [@Guth]. While most authors rarely question the initial acceleration itself, the underlying cause of the acceleration remains rather obscure. It is often assumed that the initial accelerated expansion is driven by a sort of self-interacting scalar field [@Belinsky], the so-called *inflaton*. Much work has been done during the last twenty years studying the possible form of the potential for the scalar field which best fits within the cosmological model [@Potentials]. Recently, other proposals have been put forward where the inflaton takes the form of a non-canonical scalar field, also known as K-field [@K-field]. Needless to say, the inflaton is rather an effective field which most probably does not correspond to any fundamental particle, but could be looked at as an effective description of an underlying physical theory [@Vega].
A related issue is the state of the matter at the very high densities which prevail in the early universe. Presumably, fundamental theories such as Superstrings, M-theory, etc. [@string] might guide one as to what symmetries and laws to expect when the densities, velocities and energies approach those present near the Big Bang. Certain clues about the behavior of ultradense matter can also be obtained in the framework of Quark-Gluon Plasma theory [@QGP], which could be handy to model the description of the primordial matter in the early universe. This area of research is now under intensive theoretical and experimental study, mainly at RHIC and at CERN, with the surprising new observation that in this extreme state the matter seems to behave almost as a non-viscous perfect fluid [@RHIC]. At this stage, however, there is little one can say with a certain degree of rigor about the properties of matter near the initial singularity.
This brings us to the following question: Is there a way to obtain inflationary solutions in a highly dense universe within, so to say, “conventional physics”, that is, physics which one might label as fairly realistic and, to some extent, generic? Answering this question is the main purpose of this Letter, and the answer turns out to be affirmative.
We will be considering a universe filled with an ultradense matter and modeling the interaction between the particles by a short-range attractive force. For computational purposes we will be using Yukawa-type potentials, yet our discussion is generic and applies to *any* attractive short-range interaction, as is explained below. Moreover, our findings hold as well in the case of dilute matter, but in this case one would need to postulate some unknown short-range interaction, or assume that the Newtonian gravity in the expanding universe is somehow “shielded”. We will comment on these issues before closing. Our main result is that, when the matter is described as a fluid of particles interacting via short-range attractive forces, the cosmological models undergo a phase of accelerated expansion.
The introduction of short-range forces between the particles leads to the following equation of state for the matter: $$\rho=m_{0}n-An^{2},\quad p=-An^{2},\label{eq:1}$$ or equivalently, in a $p(\rho)$-form, $$p=\rho-\frac{m_{0}^{2}}{2A}\left[1-\sqrt{1-\frac{4A}{m_{0}^{2}}\rho}\right].$$ Here $A$ is a term depending on the interaction, $m_{0}$ is the rest mass of the fundamental particle and $n$ is the particle number density. The conventions and units we use are $c=8\pi G=\hbar=1$ and the metric has a signature $\left(-,+,+,+\right)$.
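As a quick consistency check (our own sketch, not part of the original text), the $p(\rho)$ form indeed follows from eliminating $n$ between the two relations in (\[eq:1\]), keeping the root that reduces to $n=\rho/m_{0}$ when $A\rightarrow0$:

```python
# Symbolic check that eliminating n from rho = m0*n - A*n**2 and p = -A*n**2
# reproduces the quoted p(rho) relation (keeping the root with n -> rho/m0
# in the non-interacting limit A -> 0).
import sympy as sp

rho, m0, A = sp.symbols('rho m_0 A', positive=True)

n_small = (m0 - sp.sqrt(m0**2 - 4*A*rho)) / (2*A)          # root of A n^2 - m0 n + rho = 0
assert sp.simplify(m0*n_small - A*n_small**2 - rho) == 0   # it indeed solves rho = m0 n - A n^2

p_from_n = -A * n_small**2
p_quoted = rho - m0**2/(2*A) * (1 - sp.sqrt(1 - 4*A*rho/m0**2))

delta = sp.simplify(p_from_n - p_quoted)
print(delta)                                               # expected: 0
# Numerical spot check, in case simplify leaves the difference in radical form:
print(sp.simplify(delta.subs({m0: 2, A: sp.Rational(1, 4), rho: 1})))   # 0
```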
This equation of state, somewhat unusual in cosmology, is the one which could drive the accelerated expansion in the early universe. We see that if one neglects the interaction between particles ($A=0$), or considers very dilute matter ($n\rightarrow0$), the equation of state (\[eq:1\]) reduces to the standard dust equation $\rho=m_{0}n$, $p=0$. We also impose the positivity of the energy density ($n<m_{0}/A$).
To obtain the above equation of state (see for example [@Bludman]), we consider a system of $N$ identical interacting particles placed at points $\mathbf{l}_{i}$ ($i=1,2,...,N$). The interaction between the particles is modeled by a potential $V\left(l_{ij}\right)$, where we have defined $l_{ij}\equiv\left|\mathbf{l}{}_{i}-\mathbf{l}{}_{j}\right|$. We further neglect the temperature effects by assuming that the particle masses and the strength of the interaction are much larger than the effects of the temperature, and write the energy of the system as $$U=m_{0}N+\frac{1}{2}\sum_{i=1}^{N}\sum_{j\neq i}V(l_{ij}).$$ Dividing by the volume and assuming that the system is homogeneous and isotropic, at least at the scale of the interaction, one readily obtains the energy density as a function of $n$ as given in the equation (\[eq:1\]). The pressure, then, is computed using the thermodynamic relation $p=n\left(\partial\rho/\partial n\right)-\rho$. The interaction term $A$ can be evaluated by passing, as usual, to the continuum limit and replacing the sum by an integral: $$A=-2\pi\int_{0}^{\infty}V(l)l^{2}dl.\label{eq:a}$$ Note that the sign of the parameter $A$ defines the sign of the pressure, and therefore, in order to obtain an accelerated expansion, the interaction should be attractive.
A typical way of modeling a short-range interaction is via the Yukawa potential $V(l)=-g^{2}e^{-\mu l}/l$. Here $g$ is the coupling constant of the theory and $\mu$ the mass of the boson mediating the force, whose Compton length $l_{0}=1/\mu$ defines the range of the interaction. The integral in equation (\[eq:a\]) therefore converges at its upper limit at infinity. Performing the integral, we obtain: $$A=\frac{2\pi g^{2}}{\mu^{2}}.$$
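As a sanity check (our own sketch; the values of $g$ and $\mu$ are arbitrary), the integral (\[eq:a\]) can be evaluated numerically and compared with this closed form:

```python
# Numerical check of A = -2*pi * int_0^inf V(l) l^2 dl for the Yukawa potential
# V(l) = -g^2 exp(-mu*l)/l, against the closed form A = 2*pi*g^2/mu^2.
# The values of g and mu are arbitrary illustrative numbers in natural units.
import numpy as np
from scipy.integrate import quad

g, mu = 0.7, 2.3

V = lambda l: -g**2 * np.exp(-mu * l) / l
A_numeric, _ = quad(lambda l: -2.0 * np.pi * V(l) * l**2, 0.0, np.inf)
A_closed = 2.0 * np.pi * g**2 / mu**2

print(A_numeric, A_closed)   # both ~ 0.582
```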
It is important to say that the convergence of the above integral essentially determines whether the interaction is short- or long-range, and whether the system of particles may be treated by conventional thermo- and hydro-dynamics. If the potential energy due to the interaction between the particles falls faster than $r^{-3}$, the force is short-range, the above integral converges and the usual thermodynamics apply. Such is the case for the Yukawa potential or, for example, the Van der Waals forces, etc. If, however, the potential energy does not fall as fast as $r^{-3}$, as, for example, in the case of Newtonian gravity, one deals with long-range forces, and the dynamics of the systems governed by those may be quite involved [@Padma]. There is little doubt, however, that the forces between the fundamental particles in the early universe are short-range, therefore providing a broad motivation for our further analysis.
We have found it useful to express the pressure, using the thermodynamic relations, in terms of the enthalpy per particle $h$: $$p(h)=-\frac{1}{4A}\left(h-m_{0}\right)^{2},\label{eq:2}$$ where the enthalpy is given by $$h=\frac{\partial\rho}{\partial n}=m_{0}-2An.\label{eq:4}$$
We further assume that the ultradense matter is described by an isentropic irrotational perfect fluid, and to proceed, we give the action from which the equations of the fluid motion are derived [@Schutz; @nosotros]: $$S=\int d^{4}x\sqrt{-g}\left\{ p\left(\left|V\right|\right)-\left(\frac{\partial p}{\partial h}\right)\left[\left|V\right|-\frac{V^{\mu}\phi_{,\mu}}{\left|V\right|}\right]\right\} .$$ Here the current $V^{\mu}$, usually known as the Taub current, is given by $V^{\mu}=hu^{\mu}$ ($\left|V\right|=h$), where $u^{\mu}$, the 4-velocity of the fluid, verifies $u_{\mu}u^{\mu}=-1$. The dynamical variables are $g^{\mu\nu}$, $V^{\mu}$ and $\phi$, and the following equations of motion result: $$u_{\mu}=-h^{-1}\phi_{,\mu},\label{eq:din1}$$ $$\left(nu^{\mu}\right)_{;\mu}=0,\label{eq:din2}$$ with the stress-energy tensor given by $$T^{\mu\nu}=\frac{2}{\sqrt{-g}}\frac{\delta S}{\delta g_{\mu\nu}}=\left(\frac{\partial p}{\partial h}\right)hu^{\mu}u^{\nu}+pg^{\mu\nu}.\label{eq:momento}$$ The usual thermodynamic relation between the density and the pressure, $\rho=nh-p$ (with $n=\left(\partial p/\partial h\right)$), allows one to cast the energy-momentum tensor into the standard perfect fluid form: $$T^{\mu\nu}=\left(\rho+p\right)u^{\mu}u^{\nu}+pg^{\mu\nu}.$$ The equation (\[eq:din1\]) is the Clebsch decomposition of the 4-velocity for an irrotational flow, whereas the equation (\[eq:din2\]) is the conservation law for the particle number. Equation (\[eq:din1\]) shows that the scalar field $\phi$ plays the role of a velocity potential.
The identity $u_{\mu}u^{\mu}=-1$ and the equation (\[eq:din1\]) lead to the following expression for the enthalpy in terms of the derivatives of the velocity potential: $$h=\sqrt{-g^{\mu\nu}\phi_{,\mu}\phi_{,\nu}}.\label{eq:entalpia}$$ Taking into account the Clebsch decomposition of the 4-velocity (\[eq:din1\]), and introducing the new variable $X=-\frac{1}{2}g^{\mu\nu}\phi_{,\mu}\phi_{,\nu}=h^{2}/2$, we obtain the on-shell expression for the action: $$S_{on-shell}=\int d^{4}x\sqrt{-g}F(X),\label{eq:action}$$ where in the final expression we have defined$$p(h)=p\left(\sqrt{2X}\right)\equiv F(X).\label{eq:def2}$$
Expression (\[eq:action\]) gives the action for an irrotational perfect fluid. This functional takes the form of a non-canonical scalar field action and is analogous to the one often used in K-essence cosmology (known as purely kinetic K-essence) [@K-field; @nosotros; @Armendariz; @Scherrer]. The Lagrangian density is obtained from the equation of state which relates the pressure with the enthalpy (\[eq:def2\]), and the scalar field is the velocity potential of the fluid, which plays the role of the inflaton field. For completeness, the density and particle number in terms of the variable $X$ are given: $$\rho=2XF'(X)-F(X),\quad n=\pm\sqrt{2X}F'(X),\label{eq:presion}$$ where $F'(X)$ denotes the derivative of the function with respect to its variable.
The above formalism, together with the equation of state in the form (\[eq:2\]) and the assumption that one deals with an irrotational perfect fluid, gives the following matter Lagrangian: $$F_{\pm}(X)=-aX\pm b\sqrt{X}-\frac{b^{2}}{4a},\label{eq:3a}$$ where the parameters $a$ and $b$ are given by: $$a=\frac{1}{2A},\quad b=\frac{m_{0}}{\sqrt{2}A}.$$ The $\pm$ sign in the equations appears due to the definition of the enthalpy ($h=\pm\sqrt{2X}$) and separates the matter into ordinary matter, in the case of positive enthalpy, and the so-called *phantom matter* ($\rho+p<0$) [@phantom], in the case where the enthalpy is negative. It is interesting that in the case of the perfect fluid, the phantom and the ordinary matter have a simple physical separation depending on whether the enthalpy per particle is negative or positive, respectively.
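As a concrete check (our own sketch, with arbitrary values of $m_{0}$ and $A$), one can verify numerically that this Lagrangian, fed through (\[eq:presion\]), reproduces the fluid relations (\[eq:1\]) on the ordinary branch:

```python
# Numerical consistency check: with F(X) = -a*X + b*sqrt(X) - b**2/(4*a) on the
# ordinary branch, the relations rho = 2*X*F'(X) - F(X) and n = sqrt(2*X)*F'(X)
# should reproduce rho = m0*n - A*n**2 and p = F = -A*n**2.
# m0 and A are arbitrary illustrative values.
import numpy as np

m0, A = 1.0, 0.1
a = 1.0 / (2.0 * A)
b = m0 / (np.sqrt(2.0) * A)

F  = lambda X: -a * X + b * np.sqrt(X) - b**2 / (4.0 * a)
Fp = lambda X: -a + b / (2.0 * np.sqrt(X))     # dF/dX

X = np.linspace(0.01, 0.45, 5)                 # stays below X_s = m0**2/2
rho = 2.0 * X * Fp(X) - F(X)
n   = np.sqrt(2.0 * X) * Fp(X)

print(np.allclose(rho, m0 * n - A * n**2))     # True
print(np.allclose(F(X), -A * n**2))            # True
```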
The behavior of the function $F(X)$ is depicted in figure 1. Note that the function $F(X)$ is defined by the physical parameters of the system, $m_{0}$ and $A$, and is not assumed *ad hoc* as is usually done in the context of K-essence cosmology. The positivity of the energy and of the number of particles imposes $X\leq X_{s}$ for both the ordinary and the phantom branch. The condition for accelerated expansion, $\rho+3p\leq0$, is always verified in the phantom case, whereas in the case of ordinary matter one needs $X\leq X_{r}$ for inflation.
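Although $X_{s}$ and $X_{r}$ are only located graphically in figure 1, both points follow in closed form from the stated conditions; the short symbolic sketch below (our own derivation, not spelled out in the text) gives $X_{s}=m_{0}^{2}/2$ and $X_{r}=m_{0}^{2}/8$ on the ordinary branch:

```python
# Locating X_s and X_r on the ordinary branch (h = +sqrt(2X)): n = 0 at X_s and
# rho + 3p = 0 at X_r.  This is our own derivation from the stated formulas.
import sympy as sp

X, m0, A = sp.symbols('X m_0 A', positive=True)

h   = sp.sqrt(2 * X)
n   = (m0 - h) / (2 * A)         # from h = m0 - 2*A*n
p   = -(h - m0)**2 / (4 * A)     # p(h) = -(h - m0)^2 / (4A)
rho = m0 * n - A * n**2          # rho = m0*n - A*n^2

X_s = sp.solve(sp.Eq(n, 0), X)[0]          # particle density vanishes
X_r = sp.solve(sp.Eq(rho + 3 * p, 0), X)   # acceleration boundary rho + 3p = 0

print(X_s)   # m_0**2/2
print(X_r)   # roots m_0**2/8 and m_0**2/2; the relevant boundary is X_r = m_0**2/8
             # (the second root is X_s itself, where the density vanishes)
```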
We now consider an isotropic and homogeneous universe, modeled for simplicity by the spatially flat Friedmann-Robertson-Walker metric: $$ds^{2}=-dt^{2}+R^{2}(t)\left[dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right)\right].$$ $R(t)$ is the scale factor and ($t,r,\theta$,$\varphi$) the comoving coordinates. The physical distances in such a universe are given by $l=R(t)r$. The dynamical equation for the field $X$ can be obtained either from the particle conservation equation (\[eq:din2\]) or by the direct variation of the on-shell action and results in: $$\left[F'(X)+2XF''(X)\right]\dot{X}+6HF'(X)X=0.\label{eq:dinamic}$$ Here, $H=\dot{R}/R$ is the Hubble parameter. Solving this expression for $\dot{X}$, we obtain: $$\dot{X}=-\frac{6HF'(X)X}{F'(X)+2XF''(X)}.\label{eq:dotX}$$ The equation (\[eq:dotX\]) basically determines the behavior of the cosmological model [@Scherrer]. We assume we deal with an expanding universe by fixing the sign of the Hubble parameter to be positive. Since the field $X$ is positive by definition, the sign of $\dot{X}$ is determined by the signs of $F'(X)$ and $F'(X)+2XF''(X)$. We can check that $F'(X)+2XF''(X)=-a$ for both branches, whereas $F'(X)$ is positive on the positive branch of our picture, and negative on the negative one. This means that, whatever the initial value of $X$ is, one always finishes with the dust behavior, i.e. the pressure vanishes. If the initial value of the field $X$ falls into the phantom branch, $X$ decreases to zero, passes to the standard matter branch and then increases, till finally the model becomes pure dust near $X_{s}$. The interesting point is, however, that the short-range interaction between the particles leads to a period of accelerated expansion that naturally ends at $X_{r}$, which is an exit point towards the dust-like universe (graceful exit).
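A minimal numerical illustration of this evolution (our own sketch; the parameter values and the initial condition are arbitrary, and we specialize to the ordinary branch of a flat FRW model with $H=\sqrt{\rho/3}$ in the units used here) shows $X$ relaxing towards $X_{s}$ with the pressure tending to zero. With these choices the accelerating stage ($X<X_{r}$) lasts only a fraction of an e-fold, consistent with the remark below on the limited amount of inflation in the bare model.

```python
# Illustrative integration of the ordinary-branch evolution, using
# dX/dt = 6*H*F'(X)*X/a (since F' + 2*X*F'' = -a) together with the flat-FRW
# Friedmann constraint H = sqrt(rho/3) in units 8*pi*G = c = 1.
# m0, A and the initial value of X are arbitrary illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

m0, A = 1.0, 0.1
a = 1.0 / (2.0 * A)
b = m0 / (np.sqrt(2.0) * A)

Fp  = lambda X: -a + b / (2.0 * np.sqrt(X))        # F'(X) on the ordinary branch
rho = lambda X: (m0**2 - 2.0 * X) / (4.0 * A)      # = 2*X*F'(X) - F(X)
p   = lambda X: -(np.sqrt(2.0 * X) - m0)**2 / (4.0 * A)

def rhs(t, y):
    X, lnR = y
    H = np.sqrt(max(rho(X), 0.0) / 3.0)            # guard against tiny overshoot past X_s
    return [6.0 * H * Fp(X) * X / a, H]            # dX/dt and d(ln R)/dt

X0 = 1.0e-3 * m0**2                                # start deep inside the accelerating regime
sol = solve_ivp(rhs, (0.0, 60.0), [X0, 0.0], rtol=1e-8, atol=1e-10)

X_end = sol.y[0, -1]
print(f"X -> {X_end:.4f}  (X_s = {m0**2 / 2}),  p -> {p(X_end):.1e}")
print(f"total e-folds of expansion: {sol.y[1, -1]:.2f}")
```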
The picture that emerges is, therefore, as follows. The short-range attractive forces in the early universe introduce a measure of negative pressure (tension) and could be responsible for the early-universe inflation. The inflation stops naturally at the exit point defined by the parameters of the theory.
One should not expect from this simple model, as it stands, a reasonable amount of inflation. Indeed, our estimates do not seem to produce anything close to the 60-70 e-foldings one would like to have for an effective inflation. The point is that the model is too simple to accommodate the interplay between the scales of the particle horizon and the interaction range. Moreover, in the case of the very early universe there will be instants when the horizon size is even smaller than the range of the interaction. Then, the interaction term $A$ will be time dependent, due to the fact that the upper limit of the integral in the equation (\[eq:a\]) will be bounded by the size of the time-dependent particle horizon. This introduces interesting physics and enhances the inflation on one hand, but complicates the equations of the dynamical evolution on the other. We hope to be able to report the results of the numerical studies of these equations in the near future.
It would be extremely interesting if a similar mechanism could also explain the present-day acceleration [@Acc.]. We see here two possible alternatives. The first would be to postulate the existence of fundamental particles which dominate the universe and interact via short-range forces [@Fischbach], in the sense that the expression (\[eq:a\]) converges. We will not speculate about this possibility here; however, it should be pointed out that interesting models have recently been proposed in which the late-time acceleration of the universe is obtained with the use of a *Van der Waals* equation of state [@Capozziello], and, as is known from statistical mechanics, this kind of equation of state appears when one takes into account the interaction between the particles.
Another possibility, which looks more appealing to us and is connected again to the interplay between the horizon size and the range of the interaction, is to consider the Newtonian gravity between the galaxies, or clusters, as the dominant contribution to the matter density and the pressure in the present-day universe. One can think of the Newtonian gravity as the interaction between the fundamental particles in the universe. “Averaging” on a scale comparable with the cluster scale, one can consider this term as an effective part of the energy-momentum tensor which drives the expansion. Now, the Newtonian gravity is a long-range force. Nevertheless, in an expanding, past-singular universe, one may apply a natural *cut-off* to the integral (\[eq:a\]) to evaluate the interaction term $A$. Since no interaction (including gravity) acts beyond the particle horizon, the latter may serve as the interaction range scale for gravity. Effectively, then, gravity would become shielded by the horizon in a past-singular expanding universe and could produce the necessary negative pressure to accelerate the universe. We leave, however, this and the related problem of the time-dependent interaction term for a future report.
A.D.T.'s work is supported by the Basque Government predoctoral fellowship BFI03.134. This work is supported by the Spanish Science Ministry Grant 1/MCYT 00172.310-15787/2004, and the University of the Basque Country Grant 9/UPV00172.310-14456/2002.
[10]{} A.H. Guth, Phys. Rev. D **23**, 357 (1981); A. Linde, “Particle Physics and Inflationary Cosmology”, Harwood, 1990; A.R. Liddle and D.H. Lyth, “Cosmological Inflation and Large-Scale Structure”, Cambridge University Press, 2000. V.A. Belinsky, L.P. Grishchuk, I.M. Khalatnikov and Ya.B Zeldovich, Phys. Lett. B **155**, 232 (1985). J.E. Lidsey, A.R. Liddle, E.W. Kolb, E.J. Copeland, T. Barreiro and M. Abney, Rev. Mod. Phys. **69**, 373 (1997). C. Armendariz-Picon, T. Damour and V. Mukhanov, Phys. Lett. B **458**, 209 (1999). D. Cirigliano, H.J. de Vega and N.G. Sanchez, Phys. Rev. D **71**, 103518 (2005); D. Cirigliano, H.J. de Vega and N.G. Sanchez, arXiv astro-ph/0507595. J.E. Lidsey, D. Wands and E.J. Copeland, Phys. Rep. **337**, 343 (2000); M. Gasperini and G. Veneziano, Phys. Rep. **373**, 1 (2003). J. Harris and B. Müller, Ann. Rev. Nucl. Part. Sci. **46**, 71 (1996). Results from the first 3 years at RHIC, www.bnl.gov/bnlweb/prubaf/pr/docs/Hunting-the-QGP.pdf (to appear in Nucl. Phys. A); E. Shuryak, Prog. Part. Nucl. Phys. **53**, 273 (2004). S.A. Bludman and M.A. Ruderman, Phys. Rev. **170**, 1176 (1968); Ya.B. Zeldovich, JETP **14**, 1143 (1962); H.Dehnen and H. Hönl, Astrophys. Space Sci. **33**, 49 (1975). Th. Dauxois, S. Ruffo, E. Arrimondo and M. Wilkens, “Dynamics and Thermodynamics of Systems with Long Range Interaction”, Lecture Notes in Physics **602**, 1 (2002); T. Padmanabhan, Phys. Rep. **188**, 285 (1990). B.F. Schutz, Phys. Rev. D **2**, 2762 (1970); B.F. Schutz and R. Sorkin, Ann. Phys. **107**, 1 (1977); J.D. Brown, Class. Quant. Grav. **10**, 1579 (1993). A. Diez-Tejedor and A. Feinstein, Int. Jour. Mod. Phys. D **14**, 1561 (2005), arXiv gr-qc/0501101 C. Armendariz-Picon, V. Mukhanov and P.J. Steinhardt, Phys. Rev. Lett. **85**, 4438 (2000); T. Chiba, T. Okabe and M. Yamaguchi, Phys. Rev. D **62**, 023511 (2000). L.P. Chimento, Phys. Rev. D **69**, 123517 (2004); R.J. Scherrer, Phys. Rev. Lett. **93**, 011301 (2004) R.R. Caldwell, Phys. Lett. B **545**, 23 (2002); S.M. Carroll, M. Hoffman and M. Trodden, Phys. Rev. D **68**, 023509 (2003). A.G. Riess *et al*, Astrom. J. **116**, 1009 (1998); S. Perlmutter *et al*, Astrophys. J. **517**, 565 (1999). E. Fischbach, D. Sudarsky, A. Szafer and C. Talmadge, Phys. Rev. Lett. **56**, 3 (1986). S. Capozziello, S. De Martino and M. Falanga, Phys. Lett. A **299**, 494 (2002).
[^1]: [email protected]
[^2]: [email protected]